Compare commits

...

106 Commits

Author SHA1 Message Date
Yaron Schneider 22536a9640
Merge pull request #1438 from KentHsu/fix-perfsprint-linter-error
fix perfsprint linter error
2025-07-02 10:56:08 -07:00
KentHsu fff4c9158f fix linter error
Signed-off-by: KentHsu <chiahaohsu9@gmail.com>
2025-05-13 22:09:22 +08:00
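For context on what the perfsprint linter enforces (the specific findings fixed in this branch are not shown in this log), here is a minimal sketch of the kind of rewrite it asks for: fmt-based formatting of a single value replaced with a cheaper standard-library call.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

func main() {
	n := 42

	// Typical perfsprint findings: formatting a single value through the fmt
	// package has a cheaper, allocation-friendly equivalent.
	slow := fmt.Sprintf("%d", n) // perfsprint: suggest strconv.Itoa(n)
	fast := strconv.Itoa(n)      // preferred form

	_ = errors.New("listen failed") // preferred over fmt.Errorf with no verbs

	fmt.Println(slow, fast)
}
```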
Kent (Chia-Hao), Hsu 3198d9ca7e
Merge branch 'master' into fix-perfsprint-linter-error 2025-05-13 22:07:20 +08:00
Yaron Schneider b51eab0d84
Merge pull request #1509 from twinguy/master
Add detection for incompatible flags with --run-file
2025-05-05 18:59:17 -07:00
twinguy b31a9f2c56
chore: added go mod tidy to clear up pipeline issues
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-04-14 21:37:58 -05:00
twinguy c939814420
Use compatibleflags approach instead of incompatible
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-04-11 21:43:37 -05:00
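A minimal sketch of the compatible-flags allowlist idea described in this commit, assuming a cobra/pflag command; the flag names, allowlist contents, and warning text below are illustrative and may differ from the dapr CLI's actual code.

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
	"github.com/spf13/pflag"
)

// Illustrative allowlist: only these flags are considered compatible with --run-file.
var runFileCompatibleFlags = map[string]bool{
	"run-file":   true,
	"kubernetes": true,
	"log-level":  true,
}

func newRunCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use: "run",
		Run: func(cmd *cobra.Command, args []string) {
			if !cmd.Flags().Changed("run-file") {
				return
			}
			// Visit walks only the flags the user actually set; warn about
			// anything that is not on the allowlist.
			cmd.Flags().Visit(func(f *pflag.Flag) {
				if !runFileCompatibleFlags[f.Name] {
					fmt.Fprintf(os.Stderr, "warning: flag --%s is ignored when --run-file is set\n", f.Name)
				}
			})
		},
	}
	cmd.Flags().StringP("run-file", "f", "", "path to the run template file")
	cmd.Flags().BoolP("kubernetes", "k", false, "run against Kubernetes")
	cmd.Flags().String("log-level", "info", "log level")
	cmd.Flags().String("app-id", "", "application id")
	return cmd
}

func main() {
	if err := newRunCmd().Execute(); err != nil {
		os.Exit(1)
	}
}
```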
Yaron Schneider a85b8132db
Merge branch 'master' into fix-perfsprint-linter-error 2025-03-29 00:40:15 +03:00
twinguy 5da3528524
Remove deprecated tests
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-03-24 16:56:21 -05:00
twinguy e7c1a322d7
Refactor warning message for incompatible flags in --run-file
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-03-24 16:32:13 -05:00
twinguy ce0b9fb4d9
Add detection for incompatible flags with --run-file
Signed-off-by: twinguy <twinguy17@gmail.com>
2025-03-23 23:39:44 -05:00
Yaron Schneider 29f8962111
Merge release 1.15 into master (#1499)
* use non-deprecated flags in List operation (#1478)

Signed-off-by: yaron2 <schneider.yaron@live.com>

* Scheduler: set broadcast address to localhost:50006 in selfhosted (#1480)

* Scheduler: set broadcast address to localhost:50006 in selfhosted

Signed-off-by: joshvanl <me@joshvanl.dev>

* Set scheduler override flag for edge and dev

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix scheduler broadcast address for windows (#1481)

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Remove deprecated flags (#1482)

* remove deprecated flags

Signed-off-by: yaron2 <schneider.yaron@live.com>

* update Dapr version in tests

Signed-off-by: yaron2 <schneider.yaron@live.com>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>

* Fix daprsystem configuration retrieval when renewing certificates (#1486)

The issue was found when similar resources that use the name "configurations" were installed in k8s.
In this case knative's "configurations.serving.knative.dev/v1" was the last in the list and the command returned the error
`Error from server (NotFound): configurations.serving.knative.dev "daprsystem" not found`

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix: arguments accept units (#1490)

* fix: arguments accept units
`max-body-size` and `read-buffer-size` now accept units as defined in the docs.

Fixes #1489

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: modify logic to comply with vetting

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt -w .

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: set defaults
`max-body-size` defaults to 4Mi
`read-buffer-size` defaults to 4Ki

This is in line with the runtime.

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: set defaults in run and annotate

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: exit with error rather than panic

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>

* Fix scheduler pod count for 1.15 version when testing master and latest (#1492)

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix podman CI (#1493)

* Fix podman CI
Update to podman 5.4.0

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix --cpus flag

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix dapr upgrade command incorrectly detecting HA mode for new version 1.15 (#1494)

* Fix dapr upgrade command detecting HA mode for new version 1.15
The issue is that the scheduler uses 3 replicas by default, which caused a non-HA install to be incorrectly identified as HA.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix e2e

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix scheduler address for dapr run with file on Windows (#1497)

Signed-off-by: Anton Troshin <anton@diagrid.io>

* release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner (#1491)

* release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: version skews

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

* Update tests/e2e/upgrade/upgrade_test.go

Accepted

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Update tests/e2e/upgrade/upgrade_test.go

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Fix downgrade issue from 1.15 by deleting previous version scheduler pods
Update 1.15 RC to latest RC.18

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix downgrade 1.15 to 1.13 scenario with 0 scheduler pods

Signed-off-by: Anton Troshin <anton@diagrid.io>

* increase update test timeout to 60m and update latest version to 1.15

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix httpendpoint tests cleanup and checks

Signed-off-by: Anton Troshin <anton@diagrid.io>

* make sure each matrix leg runs the appropriate tests; previously every leg ran the same tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* update to latest dapr 1.15.2

Signed-off-by: Anton Troshin <anton@diagrid.io>

* add logs when waiting for pod deletion

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>

* Fix dapr init test latest version retrieval (#1500)

Lint

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix downgrade stuck (#1501)

* Fix goroutine channel leaks and ensure proper cleanup in tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add artificial delay before deleting scheduler pods during downgrade

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add timeout to helm upgrade tests; they sometimes get stuck for 5+ minutes

Signed-off-by: Anton Troshin <anton@diagrid.io>

* bump helm.sh/helm/v3 to v3.17.1

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>
Signed-off-by: joshvanl <me@joshvanl.dev>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Signed-off-by: Mike Nguyen <hey@mike.ee>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
Co-authored-by: Mike Nguyen <hey@mike.ee>
2025-03-13 18:36:13 -07:00
Anton Troshin 16aeac5701
Merge branch 'master' into merge-release-1.15-into-master 2025-03-13 18:01:03 -05:00
Anton Troshin 16cc1d1b59
Fix downgrade stuck (#1501)
* Fix goroutine channel leaks and ensure proper cleanup in tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add artificial delay before deleting scheduler pods during downgrade

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add timeout to helm upgrade tests; they sometimes get stuck for 5+ minutes

Signed-off-by: Anton Troshin <anton@diagrid.io>

* bump helm.sh/helm/v3 to v3.17.1

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-03-12 18:27:40 -07:00
Anton Troshin ecc4ea4953
Fix dapr init test latest version retrieval (#1500)
Lint

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-03-07 13:42:23 -08:00
Mike Nguyen 4c1c26f2b6
release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner (#1491)
* release: test upgrade/downgrade for 1.13/1.14/1.15 + mariner

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: version skews

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

* Update tests/e2e/upgrade/upgrade_test.go

Accepted

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Update tests/e2e/upgrade/upgrade_test.go

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

* Fix downgrade issue from 1.15 by deleting previous version scheduler pods
Update 1.15 RC to latest RC.18

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix downgrade 1.15 to 1.13 scenario with 0 scheduler pods

Signed-off-by: Anton Troshin <anton@diagrid.io>

* increase update test timeout to 60m and update latest version to 1.15

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix httpendpoint tests cleanup and checks

Signed-off-by: Anton Troshin <anton@diagrid.io>

* make sure each matrix leg runs the appropriate tests; previously every leg ran the same tests

Signed-off-by: Anton Troshin <anton@diagrid.io>

* skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix skip TestKubernetesRunFile on HA

Signed-off-by: Anton Troshin <anton@diagrid.io>

* update to latest dapr 1.15.2

Signed-off-by: Anton Troshin <anton@diagrid.io>

* add logs when waiting for pod deletion

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>
2025-03-05 17:19:56 -08:00
Anton Troshin a0921c7820
Fix scheduler address for dapr run with file on Windows (#1497)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-03-05 17:18:23 -08:00
Anton Troshin 6c9bcc6dcf
Fix dapr upgrade command incorrectly detecting HA mode for new version 1.15 (#1494)
* Fix dapr upgrade command detecting HA mode for new version 1.15
The issue is that the scheduler uses 3 replicas by default, which caused a non-HA install to be incorrectly identified as HA.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix e2e

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-26 10:08:12 -08:00
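A minimal sketch of the detection change described here, assuming HA mode is inferred from control-plane replica counts; the component names and the scheduler-exclusion rule below are illustrative, not the CLI's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// haDetect decides HA mode from control-plane replica counts, but ignores
// dapr-scheduler-server, which runs 3 replicas by default even in a non-HA
// install (the assumption this sketch makes about the 1.15 fix).
func haDetect(replicas map[string]int) bool {
	for name, n := range replicas {
		if strings.HasPrefix(name, "dapr-scheduler") {
			continue // scheduler replica count is not an HA signal in 1.15+
		}
		if n > 1 {
			return true
		}
	}
	return false
}

func main() {
	nonHA := map[string]int{
		"dapr-operator":         1,
		"dapr-sentry":           1,
		"dapr-placement-server": 1,
		"dapr-scheduler-server": 3, // default, even without HA
	}
	fmt.Println("HA detected:", haDetect(nonHA)) // false once the scheduler is excluded
}
```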
Anton Troshin 42b93eaa7a
Merge branch 'master' into fix-perfsprint-linter-error 2025-02-24 11:13:45 -06:00
Anton Troshin 98b9da9699
Fix scheduler pod count for 1.15 version when testing master and latest (#1488)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-24 09:11:00 -08:00
Anton Troshin bd09c94b77
Fix podman CI (#1493)
* Fix podman CI
Update to podman 5.4.0

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix --cpus flag

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-24 09:07:42 -08:00
Anton Troshin 06f38ed9bc
Fix scheduler pod count for 1.15 version when testing master and latest (#1492)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-21 19:11:41 -08:00
Mike Nguyen 0cd0585b64
fix: arguments accept units (#1490)
* fix: arguments accept units
`max-body-size` and `read-buffer-size` now accept units as defined in the docs.

Fixes #1489

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: modify logic to comply with vetting

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt -w .

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: set defaults
`max-body-size` defaults to 4Mi
`read-buffer-size` defaults to 4Ki

This is in line with the runtime.

Signed-off-by: Mike Nguyen <hey@mike.ee>

* fix: set defaults in run and annotate

Signed-off-by: Mike Nguyen <hey@mike.ee>

* chore: gofumpt

Signed-off-by: Mike Nguyen <hey@mike.ee>

* refactor: exit with error rather than panic

Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>

---------

Signed-off-by: Mike Nguyen <hey@mike.ee>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>
2025-02-21 08:11:11 -08:00
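A small sketch of unit-aware size flags as described in this commit, using k8s.io/apimachinery's resource.Quantity as one possible parser; whether the CLI uses this library or its own parsing is not shown in this log, and the 4Mi / 4Ki defaults simply mirror the commit message.

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/apimachinery/pkg/api/resource"
)

// parseSize accepts values with binary suffixes (4Mi, 8Ki, ...) and falls back
// to a default when the flag was left empty.
func parseSize(flagValue, defaultValue string) (int64, error) {
	if flagValue == "" {
		flagValue = defaultValue
	}
	q, err := resource.ParseQuantity(flagValue)
	if err != nil {
		return 0, fmt.Errorf("invalid size %q: %w", flagValue, err)
	}
	return q.Value(), nil
}

func main() {
	maxBody, err := parseSize("", "4Mi") // default taken from the commit message
	if err != nil {
		log.Fatal(err)
	}
	readBuf, err := parseSize("8Ki", "4Ki")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("max-body-size bytes:", maxBody, "read-buffer-size bytes:", readBuf)
}
```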
Anton Troshin 728b329ce5
Merge branch 'master' into fix-perfsprint-linter-error 2025-02-18 10:28:56 -06:00
Anton Troshin a968b18f08
Fix daprsystem configuration retrieval when renewing certificates (#1486)
The issue was found when similar resources that use the name "configurations" were installed in k8s.
In this case knative's "configurations.serving.knative.dev/v1" was the last in the list and the command returned the error
`Error from server (NotFound): configurations.serving.knative.dev "daprsystem" not found`

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-02-14 17:49:29 -08:00
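A sketch of the underlying idea, assuming client-go's dynamic client: address Dapr's Configuration CRD by its full group/version/resource instead of the ambiguous short name "configurations", so an unrelated CRD such as configurations.serving.knative.dev can never shadow it. The dapr-system namespace and the retrieval code are assumptions, not the CLI's actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Fully qualify group/version/resource so another CRD that also calls
	// itself "configurations" cannot be picked up instead of Dapr's own.
	gvr := schema.GroupVersionResource{Group: "dapr.io", Version: "v1alpha1", Resource: "configurations"}

	obj, err := client.Resource(gvr).Namespace("dapr-system").Get(context.TODO(), "daprsystem", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found", obj.GetName())
}
```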
Yaron Schneider ac6822ea69
Remove deprecated flags (#1482)
* remove deprecated flags

Signed-off-by: yaron2 <schneider.yaron@live.com>

* update Dapr version in tests

Signed-off-by: yaron2 <schneider.yaron@live.com>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-02-03 13:27:22 -08:00
Anton Troshin f8ee63c8f4
Fix scheduler broadcast address for windows (#1481)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-01-31 12:34:07 -08:00
Josh van Leeuwen 8ce6b9fed9
Scheduler: set broadcast address to localhost:50006 in selfhosted (#1480)
* Scheduler: set broadcast address to localhost:50006 in selfhosted

Signed-off-by: joshvanl <me@joshvanl.dev>

* Set scheduler override flag for edge and dev

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
2025-01-27 10:16:37 -08:00
Yaron Schneider 953c4a2a3f
use non-deprecated flags in List operation (#1478)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-01-21 22:52:26 -08:00
Mike Nguyen c3f0fb2472
release: pin go to 1.23.5 (#1477)
Signed-off-by: mikeee <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-01-20 16:50:10 -08:00
Mike Nguyen 3acfa7d7e4
fix: population of the schedulerhostaddress in self-hosted mode (#1475)
* fix: population of the schedulerhostaddress in self-hosted mode

The scheduler host address is pre-populated for multi-app run in self-hosted mode, similarly to single-app run.
Kubernetes multi-app run is not affected; you will still need to specify a scheduler host address.

Signed-off-by: mikeee <hey@mike.ee>

* chore: lint

Signed-off-by: mikeee <hey@mike.ee>

---------

Signed-off-by: mikeee <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-01-20 14:18:31 -08:00
Mike Nguyen 104849bc74
chore: remove gopkg / cleanup repo (#1415)
* chore: remove gopkg

Signed-off-by: mikeee <hey@mike.ee>

* chore: upgrade actions versions and remove explicit caching steps

Signed-off-by: mikeee <hey@mike.ee>

---------

Signed-off-by: mikeee <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-01-15 05:28:22 -08:00
Anton Troshin 17f4785906
Update dependencies, Go version and address CVEs (#1474)
* Update dependencies, Go version and address CVEs

Signed-off-by: Anton Troshin <anton@diagrid.io>

* update golangci-lint version and list of disabled linters from dapr/dapr

Signed-off-by: Anton Troshin <anton@diagrid.io>

* adjust golangci-lint settings and fix lint issues

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix test

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-01-09 14:16:44 -08:00
Filinto Duran efe1d6c1e2
add image pull policy (#1462)
* add image pull policy

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* add allowed values

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* feedback refactor allowed values name

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* add unit tests

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* more lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

* more lint

Signed-off-by: Filinto Duran <filinto@diagrid.io>

---------

Signed-off-by: Filinto Duran <filinto@diagrid.io>
Co-authored-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Mike Nguyen <hey@mike.ee>
2025-01-07 09:09:00 -08:00
Mike Nguyen dbbe022a8a
fix: allow the scheduler client to initialise for edge builds (#1467)
Signed-off-by: mikeee <hey@mike.ee>
2024-11-26 13:09:01 -08:00
Anton Troshin 25d9ece42f
Add version parsing check skip malformed versions and avoid panic (#1469)
* Add version parsing check skip malformed versions and avoid panic

Signed-off-by: Anton Troshin <anton@diagrid.io>

* lint

Signed-off-by: Anton Troshin <anton@diagrid.io>

* do not return error on nil, skip bad versions

Signed-off-by: Anton Troshin <anton@diagrid.io>

* simplify condition to skip prerelease and versions with metadata
print warning on error and non-semver version tag

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-22 13:59:51 -08:00
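A sketch of the skip-instead-of-panic behaviour described here, using Masterminds/semver as one possible parser; the library the CLI actually relies on is not shown in this log.

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

// latestStable skips malformed tags with a warning instead of panicking, and
// ignores prerelease/metadata versions when picking the latest stable release.
func latestStable(tags []string) string {
	var best *semver.Version
	for _, t := range tags {
		v, err := semver.NewVersion(t)
		if err != nil {
			fmt.Printf("warning: skipping non-semver tag %q: %v\n", t, err)
			continue
		}
		if v.Prerelease() != "" || v.Metadata() != "" {
			continue // skip release candidates and builds with metadata
		}
		if best == nil || v.GreaterThan(best) {
			best = v
		}
	}
	if best == nil {
		return ""
	}
	return best.Original()
}

func main() {
	fmt.Println(latestStable([]string{"v1.14.4", "v1.15.0-rc.18", "not-a-version", "v1.15.2"}))
}
```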
Anton Troshin 6d5e64d964
Fix tests running master in CI with specific dapr version (#1461)
* Fix tests running master in CI with specific dapr version

Signed-off-by: Anton Troshin <anton@diagrid.io>

* move env version load into common

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix k8s test files

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Revert "fix k8s test files"

This reverts commit 344867d19ca4b38e5a83a82a2a00bb04c1775bab.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Revert "move env version load into common"

This reverts commit 39e8c8caf54a157464bb44dffe448fc75727487f.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Revert "Fix tests running master in CI with specific dapr version"

This reverts commit a02c81f7e25a6bbdb8e3b172a8e215dae60d321f.

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Add GetRuntimeVersion to be able to compare semver dapr versions for conditional tests
Use GetRuntimeVersion in test

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-08 10:33:08 -08:00
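A short sketch of how a GetRuntimeVersion-style helper could gate a test on the installed runtime version; the helper name, package, and threshold below are illustrative rather than the repo's actual test code.

```go
package e2e

import (
	"testing"

	"github.com/Masterminds/semver/v3"
)

// skipBelow skips the test when the installed runtime is older than min,
// in the spirit of the GetRuntimeVersion helper mentioned above.
func skipBelow(t *testing.T, runtimeVersion, min string) {
	t.Helper()
	if semver.MustParse(runtimeVersion).LessThan(semver.MustParse(min)) {
		t.Skipf("requires dapr >= %s, found %s", min, runtimeVersion)
	}
}

func TestSchedulerStatus(t *testing.T) {
	skipBelow(t, "1.13.0", "1.14.0") // scheduler only ships from 1.14 on
	// ... rest of the test
}
```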
Yaron Schneider 8cd81b0477
Merge pull request #1460 from antontroshin/merge-release-1.14-to-master
Merge release 1.14 to master
2024-11-05 17:29:12 -08:00
Anton Troshin 5446171840
Add versioned pod number validation
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 17:38:38 -06:00
Anton Troshin 1152a1ef55
Add placement and scheduler in slim mode self-hosted tests
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 16:22:01 -06:00
Anton Troshin d781b03002
Change podman mount to home
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 03:07:52 -06:00
Anton Troshin 9bc96c2fa0
Fix test
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 02:11:25 -06:00
Anton Troshin 376690b43e
Fix number of HA mode pods to wait in tests
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 01:35:53 -06:00
Anton Troshin 086a3b9adb
Fix number of pods to wait in tests
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 01:09:47 -06:00
Anton Troshin 1f080952a5
Add logs
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-11-05 00:56:45 -06:00
Anton Troshin 002a223864
Merge branch 'master' into merge-release-1.14-to-master 2024-11-04 20:14:52 -06:00
Anton Troshin db712e7eed
Fixing e2e tests, podman and scheduler fail with mariner images (#1450)
* Fix standalone e2e tests
Fix podman e2e install
Fix scheduler start failure on standalone mariner image variant

Signed-off-by: Anton Troshin <anton@diagrid.io>

* fix test

Signed-off-by: Anton Troshin <anton@diagrid.io>

* revert

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix podman machine settings
Add cleanups
Remove parallel tests
Fix mariner volume mount location
Remove old build tags

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-10-01 09:19:35 -07:00
Yetkin Timocin a3571c8464
Fix a typo (#1453)
Signed-off-by: ytimocin <ytimocin@microsoft.com>
2024-10-01 09:17:12 -07:00
Josh van Leeuwen e08443b9b3
Remove Docker client dependency from standalone run (#1443)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-08-15 13:53:26 -07:00
Yaron Schneider aefcca1899
Merge branch 'master' into fix-perfsprint-linter-error 2024-08-13 18:21:22 -07:00
Rishab Kumar aa0436ebe0
Added new holopin cli badge (#1399)
Signed-off-by: Rishab Kumar <rishabkumar7@gmail.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
Co-authored-by: Mike Nguyen <hey@mike.ee>
2024-08-06 14:08:58 -07:00
Yaron Schneider 027f5da3e1
update redis version (#1439)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-07-25 09:58:14 -07:00
KentHsu 22059c399e fix perfsprint linter error
Signed-off-by: KentHsu <chiahaohsu9@gmail.com>
2024-07-24 11:12:02 +08:00
Yaron Schneider fecf47d752
pin bitnami chart version for redis (#1437)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-07-23 15:49:14 -07:00
Anton Troshin ad67ce58c4
Add dapr scheduler server to the status command (#1434)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2024-07-22 16:38:06 -07:00
Cassie Coyle e72f95393c
Fix Scheduler Data Dir Permissions Issue (#1432)
* fix w/ @joshvanl & anton

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* add a .

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

---------

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>
2024-07-19 11:49:08 -07:00
Josh van Leeuwen 16a513ba7a
Change scheduler container data dir from `/var/run/...` to `/var/lib`. (#1429)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-07-18 16:29:43 -07:00
Josh van Leeuwen 30d888f4e6
Fix dapr init scheduler (#1428)
* Add E2E to validate scheduler after init.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Adds dapr_scheduler verify container

Signed-off-by: joshvanl <me@joshvanl.dev>

* Assert eventually TCP connect

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix eventually t check

Signed-off-by: joshvanl <me@joshvanl.dev>

* Write etcd-data-dir to custom path with volume

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds Helpers to test funcs

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds container name to TCP check

Signed-off-by: joshvanl <me@joshvanl.dev>

* Use rc.3 for scheduler

Signed-off-by: joshvanl <me@joshvanl.dev>

* Print container logs on failed TCP conn

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix params

Signed-off-by: joshvanl <me@joshvanl.dev>

* Use b

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds `dev` flag to rc init

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix version check

Signed-off-by: joshvanl <me@joshvanl.dev>

* Skip TCP check on slim mode

Signed-off-by: joshvanl <me@joshvanl.dev>

* Remove debug test code

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Artur Souza <asouza.pro@gmail.com>
2024-07-18 12:50:16 -07:00
Josh van Leeuwen 29d29ab549
Give scheduler a default volume, making it resilient to restarts by (#1423)
* Give scheduler a default volume, making it resilient to restarts by
default

Signed-off-by: joshvanl <me@joshvanl.dev>

* Remove dapr_scheduler volume on uninstall, gated by `--all`

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix containerErrs in standalone.go

Signed-off-by: joshvanl <me@joshvanl.dev>

* Do not attempt to delete scheduler volume if no container runtime

Signed-off-by: joshvanl <me@joshvanl.dev>

* Increase upgrade test timeout to 40m

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
2024-07-17 12:13:51 -07:00
Yaron Schneider ddf43a5f55
update link (#1426)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-07-16 23:42:12 -07:00
Josh van Leeuwen ad3442b252
dapr_scheduler: Adds scheduler-volume flag (#1422)
* dapr_scheduler: pre-create data dir

Signed-off-by: joshvanl <me@joshvanl.dev>

* Adds --scheduler-volume to specify volume for data directory

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
2024-07-12 08:22:34 -07:00
Mike Nguyen ed0d3af2d0
fix: scheduler host address passed to runtime (#1421)
* fix: scheduler host address passed to runtime

Signed-off-by: mikeee <hey@mike.ee>

* fix: scheduler client stream initialised for 1.14<

Signed-off-by: mikeee <hey@mike.ee>

* fix: modify scheduler host address validation

if the scheduler container is not active, the scheduler flag will
not be passed to the runtime

Signed-off-by: mikeee <hey@mike.ee>

* fix: lint and refactor

Signed-off-by: mikeee <hey@mike.ee>

---------

Signed-off-by: mikeee <hey@mike.ee>
2024-07-10 09:37:34 -07:00
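A sketch of the validation described in the last bullet, assuming a TCP reachability check and daprd's --scheduler-host-address flag; the real check in the CLI may differ.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// schedulerArgs only hands the scheduler host address to the runtime if
// something is actually listening on it, so slim or older installs without a
// scheduler container keep working.
func schedulerArgs(addr string) []string {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return nil // scheduler not running: don't pass the flag at all
	}
	conn.Close()
	return []string{"--scheduler-host-address", addr}
}

func main() {
	fmt.Println(schedulerArgs("localhost:50006"))
}
```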
Artur Souza 762e2bb4ac
Fix check for version to contain scheduler. (#1417)
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
2024-07-05 20:16:40 -07:00
Cassie Coyle fd1d8e85bf
Distributed Scheduler CLI Changes (#1405)
* wip

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* rm scheduleJob logic. keep only init/uninstall logic

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* Fixes install and uninstall of scheduler in standalone mode.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fixing path for Go tools in Darwin.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fix Go bin location for MacOS.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fix min scheduler version to be 1.14.x

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Use env var to pass scheduler host.

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

* Fix CLI build to work with latest MacOS runners from GH

Signed-off-by: Artur Souza <asouza.pro@gmail.com>

---------

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Co-authored-by: Artur Souza <asouza.pro@gmail.com>
2024-07-03 14:58:55 -07:00
dependabot[bot] 4881ca11d7
Bump golang.org/x/net from 0.21.0 to 0.23.0 (#1401)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.21.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 20:01:32 +05:30
Josh van Leeuwen 619cd9cd84
Update dependencies for v1.13.0-rc.6 release (#1383)
* Update dependencies for v1.13.0-rc.1 release

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update tests/e2e/upgrade/upgrade_test.go

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* Update github.com/dapr/go-sdk to v1.9.1

Signed-off-by: joshvanl <me@joshvanl.dev>

* Pin go-sdk to joshvanl main fork

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update and fix e2e tests in standalone mode

Signed-off-by: joshvanl <me@joshvanl.dev>

* Set correct rc version

Signed-off-by: joshvanl <me@joshvanl.dev>

* Revert go.mod to keep requires for kube & docker auth

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update dapr/dapr to v1.13.0-rc.2

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update to use rc to 2

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update github.com/dapr/go-sdk to main

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update to 1.13.0-rc.6

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix func name call

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2024-02-16 11:37:37 -08:00
dependabot[bot] 3ea5f9c6b2
Bump github.com/opencontainers/runc from 1.1.5 to 1.1.12 (#1380)
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.5 to 1.1.12.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.12/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.5...v1.1.12)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-07 13:20:32 +05:30
dependabot[bot] 1da8760f11
Bump github.com/lestrrat-go/jwx/v2 from 2.0.12 to 2.0.19 (#1377)
Bumps [github.com/lestrrat-go/jwx/v2](https://github.com/lestrrat-go/jwx) from 2.0.12 to 2.0.19.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/develop/v2/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v2.0.12...v2.0.19)

---
updated-dependencies:
- dependency-name: github.com/lestrrat-go/jwx/v2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 20:33:22 -08:00
twinguy 2380446870
fix default installation directory permissions (#1375)
Signed-off-by: Kenny Meador <kenny.meador@outlook.com>
Signed-off-by: Kenny Meador <kmeador@dollargeneral.com>
Co-authored-by: Kenny Meador <kmeador@dollargeneral.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2024-01-23 16:00:06 +05:30
Luis Rascão 50e1dffec0
Fix flaky upgrade test (#1368)
* Fix flaky upgrade test

Signed-off-by: Luis Rascao <luis.rascao@gmail.com>

* fixup! Fix flaky upgrade test

Signed-off-by: Luis Rascao <luis.rascao@gmail.com>

---------

Signed-off-by: Luis Rascao <luis.rascao@gmail.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2024-01-23 15:25:30 +05:30
Mukundan Sundararajan f5eb4fda6c
fix upload artifact name conflict (#1373)
* fix upload test report name

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix linter errors

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix upload artifacts version and name conflicts

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix typo

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix typo

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

---------

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2024-01-18 02:07:09 -08:00
Pravin Pushkar c7e8612b11
Upgrade go to 1.21 (#1331)
Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>
2023-12-18 23:36:54 -08:00
Stuart Leeks 668bab451d
Add --run-file support for stdin (#1364)
If the run-file argument is specified as '-', read the config file from stdin

Signed-off-by: Stuart Leeks <stuartle@microsoft.com>
2023-11-22 07:04:02 -08:00
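A minimal sketch of the stdin behaviour this commit describes: "-" selects stdin, any other value is read as a file path.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

// readRunFile returns the run template contents, reading from stdin when the
// argument is "-".
func readRunFile(arg string) ([]byte, error) {
	if arg == "-" {
		return io.ReadAll(os.Stdin)
	}
	return os.ReadFile(arg)
}

func main() {
	data, err := readRunFile("-") // e.g. `cat dapr.yaml | dapr run -f -`
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes of run configuration\n", len(data))
}
```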
dependabot[bot] a339bdf491
Bump google.golang.org/grpc from 1.57.0 to 1.57.1 (#1362)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.57.0 to 1.57.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.57.0...v1.57.1)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-26 12:48:03 +05:30
dependabot[bot] ac0a3cb06f
Bump golang.org/x/net from 0.15.0 to 0.17.0 (#1356)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-12 23:35:55 +05:30
Mukundan Sundararajan d55944ccec
Merge pull request #1358 from mukundansundar/merge_rel_1.12_master
Merge rel 1.12 master
2023-10-12 04:54:22 -07:00
Mukundan Sundararajan 02eec8bbc8
Merge branch 'master' into merge_rel_1.12_master 2023-10-12 04:15:14 -07:00
Mukundan Sundararajan 1d60280de4
update dapr runtime to v1.12.0 (#1357)
* update dapr runtime to v1.12.0

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix dashboard version

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

---------

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-10-12 07:45:22 +05:30
Shubham Sharma 0c1c275972
Upgrade runtime and go-sdk versions (#1355)
Signed-off-by: Shubham Sharma <shubhash@microsoft.com>
2023-10-11 13:15:09 +05:30
Mukundan Sundararajan bb2c3a7f66
Merge pull request #1350 from mukundansundar/merge_rel_1.12_master
Merge release 1.12 master
2023-10-04 00:15:03 -07:00
Pravin Pushkar 7a09fd6b9c
Merge branch 'master' into merge_rel_1.12_master 2023-10-04 09:30:30 +05:30
hdget a08eebb5db
create dapr install dir before copy (#1112)
Signed-off-by: hdget <hdget@qq.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
Co-authored-by: Pravin Pushkar <ppushkar@microsoft.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2023-10-03 13:17:49 -07:00
Pravin Pushkar a38a44d27b
update runtime RC to RC.5 (#1353)
Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>
2023-10-03 09:37:51 -07:00
Mukundan Sundararajan 6a85f487e7 Merge branch 'release-1.12' into merge_rel_1.12_master 2023-10-03 17:51:00 +05:30
Mukundan Sundararajan df8f242fc8
add appconfig default config for dev mode multi-app run k8s (#1352)
Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-10-03 17:34:40 +05:30
Mukundan Sundararajan 3fe6d8fffd
Merge branch 'master' into merge_rel_1.12_master 2023-09-29 10:44:21 -07:00
Mukundan Sundararajan 0ebded24f7
upgrade k8s client (#1349)
Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-09-29 18:06:56 +05:30
Pravin Pushkar 504d4eadba
Removing preview feature for dapr run -f (#1348)
Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>
2023-09-27 09:03:24 -07:00
Marc Duiker 235f40cf81
Add holopin.yml config (#1344)
Signed-off-by: Marc Duiker <marcduiker@users.noreply.github.com>
2023-09-21 10:41:53 +05:30
Josh van Leeuwen 0c99afb4db
Updates dapr/dapr to v1.12 (#1342)
* Updates the Dapr version to v1.12.0-rc.1

Signed-off-by: joshvanl <me@joshvanl.dev>

* Use correct rc tag name

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix string match on `HTTP server is running on port x`

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix error string check for `internal gRPC server`

Signed-off-by: joshvanl <me@joshvanl.dev>

* Update dapr/dapr to 1.12.0-rc.3

Signed-off-by: joshvanl <me@joshvanl.dev>

* Rolling restart the sidecar injector during a restart

Signed-off-by: joshvanl <me@joshvanl.dev>

* Linting

Signed-off-by: joshvanl <me@joshvanl.dev>

* Fix string matching on dapr app output logs

Signed-off-by: joshvanl <me@joshvanl.dev>

* Increase e2e tests `20m` -> `25m`

Signed-off-by: joshvanl <me@joshvanl.dev>

---------

Signed-off-by: joshvanl <me@joshvanl.dev>
2023-09-20 18:19:32 +05:30
Mukundan Sundararajan 41f324016e
adding e2e test for multi app run k8s (#1336)
Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-09-04 20:28:08 +05:30
Mukundan Sundararajan a15a3eb856
Initial implementation of multi app run for Kubernetes Dev (#1333)
* initial commit for multi-app run k8s impl

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix import

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* move runfileconfig

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* add protobuf conflict warn env

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* Add pubsub component. Check before component creation.

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix e2e

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix e2e

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* fix e2e

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

* address review comments.

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>

---------

Signed-off-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-09-04 08:42:58 +05:30
Pravin Pushkar 6738eefe2b
Multiapp run and stop implementation for windows (#1315)
* windows impl for multiapp run

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Uncommenting run on windows

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* fix static checks

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Kill children and grandchildren forcefully

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* omitting tests for windows

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* rename method

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* shut down all processes

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Use job handle and named events together to kill the processes

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* lint fix

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* revert wait to kill

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Adding E2E for windows template file run

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* rename job name

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* review comments

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* build failure

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

---------

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-08-30 11:06:40 +05:30
Pravin Pushkar 4d586752bc
Self Hosted E2E - fail-fast set to false (#1334)
* fail-fast set to false

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* add TODO

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

---------

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>
2023-08-30 10:39:15 +05:30
Anton Troshin 60f50e3497
Add health and metrics port mapping for placement stand-alone mode (#1323)
* Add health and metrics port mapping for placement stand-alone mode

Signed-off-by: Anton Troshin <anton@diagrid.io>

* move ports declaration to const

Signed-off-by: Anton Troshin <anton@diagrid.io>

* Fix linter check: appendCombine: can combine chain of 3 appends into one (gocritic)

Signed-off-by: Anton Troshin <anton@diagrid.io>

---------

Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
2023-08-09 00:17:49 +05:30
Alessandro (Ale) Segala 650fd6fe8d
Add APP_PROTOCOL env (#1318)
Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>
Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
2023-08-07 12:37:34 +05:30
Pravin Pushkar 31b9ea27ae
add release doc (#1325)
* add release doc

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Apply suggestions from code review

Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Update release.md

Sample PR

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

* Update docs/development/release.md

Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>

---------

Signed-off-by: Pravin Pushkar <ppushkar@microsoft.com>
Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
2023-07-05 07:42:24 -07:00
Mohit Pal Singh 2751f5fa74
Fix for Quote in username errors dapr init #972 (#1322)
* Fix for Quote in username errors dapr init #972

Signed-off-by: mrmtheboss <starktony239@gmail.com>

* addressed a GitHub Actions Check failure

Signed-off-by: mrmtheboss <starktony239@gmail.com>

* Addressed Shubham's comments

Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>

* Addressed Shubham's new comments

Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>

* Addressed a GitHub actions failure

Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>

* Update utils/utils.go

Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>

* Added Unit test for SanitizeDir() in utils/utils_test.go

Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>

* Addressed a GitHub actions failure

Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>

* Update comment.

Signed-off-by: Shubham Sharma <shubhash@microsoft.com>

---------

Signed-off-by: mrmtheboss <starktony239@gmail.com>
Signed-off-by: Mohit Pal Singh <mohit.pal.singh@outlook.com>
Signed-off-by: Shubham Sharma <shubhash@microsoft.com>
Co-authored-by: mrmtheboss <starktony239@gmail.com>
Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
2023-06-26 11:50:24 -07:00
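An illustrative take on the SanitizeDir helper mentioned above: strip quote characters from a user-supplied directory so a username containing quotes cannot break dapr init path handling. The real implementation in utils/utils.go may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeDir removes quote characters that would otherwise corrupt
// shell/path handling when the install directory is derived from a username.
func sanitizeDir(dir string) string {
	r := strings.NewReplacer(`"`, "", `'`, "", "`", "")
	return r.Replace(dir)
}

func main() {
	fmt.Println(sanitizeDir(`C:\Users\John "JD" Doe\.dapr`))
}
```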
Vishal Yadav a4f924f49d
Adding warning for uninstall all while container runtime not running (#1310)
Signed-off-by: Vishal Yadav <vishalydv.me@gmail.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
Co-authored-by: Shubham Sharma <shubhash@microsoft.com>
2023-06-20 10:10:57 +05:30
Shivam Kumar Singh 7da8bd7b15
shivam-51: Add log file paths to dapr list 1228 (#1296)
* shivam-51: Add log file paths to dapr list 1228

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Minor type fix.

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Add condition for log file in dapr list

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Add tests

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Complete tests

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Fix failing e2e tests

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Use constants

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Add success tests

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Add proper assert in test

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

* Fix typo

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>

---------

Signed-off-by: Shivam Kumar Singh <shivamhere247@gmail.com>
Co-authored-by: Mukundan Sundararajan <65565396+mukundansundar@users.noreply.github.com>
2023-06-13 18:34:03 +05:30
Alessandro (Ale) Segala fa2c99d8b0
Update dev container (#1314) 2023-06-13 01:26:54 -07:00
Mukundan Sundararajan c41ea8a4d3
Merge pull request #1312 from shubham1172/shubham1172/merge-release-1.11
Merge release-1.11 into master
2023-06-11 22:50:29 -07:00
Shubham Sharma 35f324a09e
Merge branch 'master' into shubham1172/merge-release-1.11 2023-06-10 18:24:23 +05:30
Shubham Sharma 48cd76517f Merge branch 'release-1.11' into shubham1172/merge-release-1.11
Signed-off-by: Shubham Sharma <shubhash@microsoft.com>
2023-06-10 16:17:06 +05:30
Shubham Sharma 77fbc192ea
Fix checks for container runtime installation (#1307)
* Check container runtime while installing

Signed-off-by: Shubham Sharma <shubhash@microsoft.com>

* Fix lint

Signed-off-by: Shubham Sharma <shubhash@microsoft.com>

* Fix lint (another attempt)

Signed-off-by: Shubham Sharma <shubhash@microsoft.com>

---------

Signed-off-by: Shubham Sharma <shubhash@microsoft.com>
2023-06-09 11:14:23 +05:30
Sam 96e225432a
feat(httpendpoint): add tests for cli (#1280)
* feat(httpendpoint): add tests for cli

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* feat(httpendpoints): add httpendpoints in another place in tests

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(e2e): adjust checks for httpendpoints

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* tests(e2e): more adjustments for httpendpoints

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(e2e): keep in mind dapr versions for tests

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(e2e): try workaround until 1.11 release

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(e2e): only test httpendpoints on new tests

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): check correct resource output

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* style: make lint

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): only check httpendpoints if opt enabled

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): only check httpendpoints if opt enabled for uninstall

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(e2e): use latest rc

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* style: make lint again

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* test: try this

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(test): correct expected output string

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): master -> rc version

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): acct for ns already being deleted

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* style: make linter happy

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): try a modification

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* build: retrigger build with random comment update

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): rm extra test case

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix: address pr feedback

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix: update based on pr feedback again

Signed-off-by: Samantha Coyle <sam@diagrid.io>

---------

Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2023-05-30 11:25:40 -05:00
Tom Meadows 36a13bc55a
fixed small typo in variables and function call (#1297)
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
2023-05-29 18:08:06 -05:00
129 changed files with 5319 additions and 3335 deletions

@ -15,7 +15,9 @@
"features": {
"ghcr.io/devcontainers/features/sshd:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {},
"ghcr.io/devcontainers/features/node": { "version": "lts"}
"ghcr.io/devcontainers/features/node": {
"version": "lts"
}
},
"mounts": [
// Mount docker-in-docker library volume
@ -61,13 +63,16 @@
"ms-azuretools.vscode-dapr",
"ms-azuretools.vscode-docker",
"ms-kubernetes-tools.vscode-kubernetes-tools"
],
],
"settings": {
"go.toolsManagement.checkForUpdates": "local",
"go.useLanguageServer": true,
"go.gopath": "/go",
"go.buildTags": "e2e,perf,conftests,unit,integration_test,certtests",
"git.alwaysSignOff": true
"go.buildTags": "e2e,perf,conftests,unit,integration_test,certtests,allcomponents",
"git.alwaysSignOff": true,
"terminal.integrated.env.linux": {
"GOLANG_PROTOBUF_REGISTRATION_CONFLICT": "ignore"
}
}
}
},

.github/holopin.yml (new file, +6)
@ -0,0 +1,6 @@
organization: dapr
defaultSticker: clutq4bgp107990fl1h4m7jp3b
stickers:
-
id: clutq4bgp107990fl1h4m7jp3b
alias: cli-badge

@ -29,7 +29,7 @@ jobs:
name: Build ${{ matrix.target_os }}_${{ matrix.target_arch }} binaries
runs-on: ${{ matrix.os }}
env:
GOLANG_CI_LINT_VER: v1.51.2
GOLANG_CI_LINT_VER: v1.61.0
GOOS: ${{ matrix.target_os }}
GOARCH: ${{ matrix.target_arch }}
GOPROXY: https://proxy.golang.org
@ -39,7 +39,7 @@ jobs:
WIX_BIN_PATH: 'C:/Program Files (x86)/WiX Toolset v3.11/bin'
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]
os: [ubuntu-latest, windows-latest, macOS-latest, macOS-latest-large]
target_arch: [arm, arm64, amd64]
include:
- os: ubuntu-latest
@ -48,6 +48,8 @@ jobs:
target_os: windows
- os: macOS-latest
target_os: darwin
- os: macOS-latest-large
target_os: darwin
exclude:
- os: windows-latest
target_arch: arm
@ -55,44 +57,28 @@ jobs:
target_arch: arm64
- os: macOS-latest
target_arch: arm
- os: macOS-latest
target_arch: amd64
- os: macOS-latest-large
target_arch: arm
- os: macOS-latest-large
target_arch: arm64
steps:
- name: Prepare Go's bin location - MacOS
if: matrix.target_os == 'darwin'
run: |
export PATH=$HOME/bin:$PATH
echo "$HOME/bin" >> $GITHUB_PATH
echo "GOBIN=$HOME/bin" >> $GITHUB_ENV
mkdir -p $HOME/bin
- name: Check out code into the Go module directory
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: 'go.mod'
- name: Cache Go modules (Linux)
if: matrix.target_os == 'linux'
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (Windows)
if: matrix.target_os == 'windows'
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (macOS)
if: matrix.target_os == 'darwin'
uses: actions/cache@v3
with:
path: |
~/Library/Caches/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Run golangci-lint
if: matrix.target_arch == 'amd64' && matrix.target_os == 'linux'
uses: golangci/golangci-lint-action@v3.2.0
@ -132,14 +118,14 @@ jobs:
if: matrix.target_arch == 'amd64' && matrix.target_os == 'linux'
run: |
[ ! -z "${{ env.REL_VERSION }}" ] && echo "${{ env.REL_VERSION }}" > "${{ env.ARCHIVE_OUTDIR }}/release_version.txt"
- name: upload artifacts
uses: actions/upload-artifact@master
- name: upload artifacts ## Following migration guide in https://github.com/actions/upload-artifact/blob/main/docs/MIGRATION.md
uses: actions/upload-artifact@v4
with:
name: cli_drop
name: cli_drop-${{ matrix.target_os }}_${{ matrix.target_arch }}
path: ${{ env.ARCHIVE_OUTDIR }}
- name: Upload test results
if: always()
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target_os }}_${{ matrix.target_arch }}_test_unit.json
path: ${{ env.TEST_OUTPUT_FILE }}
@ -152,9 +138,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: download artifacts
uses: actions/download-artifact@master
uses: actions/download-artifact@v4
with:
name: cli_drop
pattern: cli_drop-*
merge-multiple: true
path: ${{ env.ARTIFACT_DIR }}
- name: Set Release Version
run: |
@ -186,7 +173,7 @@ jobs:
runs-on: windows-latest
steps:
- name: Check out code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Parse release version and set REL_VERSION
run: python ./.github/scripts/get_release_version.py
- name: Update winget manifests

@ -11,7 +11,7 @@ jobs:
pull-requests: write
packages: write
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Publish Features"
uses: devcontainers/action@v1

@ -22,7 +22,7 @@ jobs:
- ubuntu:latest
- mcr.microsoft.com/devcontainers/base:ubuntu
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Install latest devcontainer CLI"
run: npm install -g @devcontainers/cli
@ -39,7 +39,7 @@ jobs:
features:
- dapr-cli
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Install latest devcontainer CLI"
run: npm install -g @devcontainers/cli
@ -52,7 +52,7 @@ jobs:
runs-on: ubuntu-latest
continue-on-error: true
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Install latest devcontainer CLI"
run: npm install -g @devcontainers/cli

@ -9,7 +9,7 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: "Validate devcontainer-feature.json files"
uses: devcontainers/action@v1

@ -34,7 +34,7 @@ jobs:
FOSSA_API_KEY: b88e1f4287c3108c8751bf106fb46db6 # This is a push-only token that is safe to be exposed.
steps:
- name: "Checkout code"
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: "Run FOSSA Scan"
uses: fossas/fossa-action@v1.3.1 # Use a specific version if locking is preferred

@ -38,7 +38,7 @@ jobs:
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
- uses: actions/checkout@v4
# Install Dapr
- name: Install DAPR CLI

@ -50,11 +50,10 @@ jobs:
name: E2E tests for K8s (KinD)
runs-on: ubuntu-latest
env:
DAPR_RUNTIME_PINNED_VERSION: 1.11.0
DAPR_DASHBOARD_PINNED_VERSION: 0.13.0
DAPR_RUNTIME_PINNED_VERSION: 1.14.4
DAPR_DASHBOARD_PINNED_VERSION: 0.14.0
DAPR_RUNTIME_LATEST_STABLE_VERSION:
DAPR_DASHBOARD_LATEST_STABLE_VERSION:
DAPR_TGZ: dapr-1.11.0.tgz
strategy:
fail-fast: false # Keep running if one leg fails.
matrix:
@ -80,23 +79,14 @@ jobs:
kind-image-sha: sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace
steps:
- name: Check out code onto GOPATH
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ./src/github.com/dapr/cli
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: './src/github.com/dapr/cli/go.mod'
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Configure KinD
# Generate a KinD configuration file that uses:
@ -183,6 +173,7 @@ jobs:
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-k8s
shell: bash
- name: Run tests with Docker hub
@ -191,11 +182,12 @@ jobs:
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export GITHUB_TOKEN=${{ secrets.GITHUB_TOKEN }}
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-k8s
shell: bash
- name: Upload test results
if: always()
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.k8s-version }}_${{ matrix.mode }}_e2e_k8s.json
path: ${{ env.TEST_OUTPUT_FILE }}

@ -38,20 +38,24 @@ jobs:
GOARCH: ${{ matrix.target_arch }}
GOPROXY: https://proxy.golang.org
ARCHIVE_OUTDIR: dist/archives
DAPR_RUNTIME_PINNED_VERSION: "1.11.0"
DAPR_DASHBOARD_PINNED_VERSION: 0.13.0
DAPR_RUNTIME_LATEST_STABLE_VERSION:
DAPR_DASHBOARD_LATEST_STABLE_VERSION:
PODMAN_VERSION: 4.4.4
DAPR_RUNTIME_PINNED_VERSION: "1.14.4"
DAPR_DASHBOARD_PINNED_VERSION: 0.14.0
DAPR_RUNTIME_LATEST_STABLE_VERSION: ""
DAPR_DASHBOARD_LATEST_STABLE_VERSION: ""
GOLANG_PROTOBUF_REGISTRATION_CONFLICT: warn
PODMAN_VERSION: 5.4.0
strategy:
# TODO: Remove this when our E2E tests are stable for podman on MacOS.
fail-fast: false # Keep running if one leg fails.
matrix:
os: [macos-latest, ubuntu-latest, windows-latest]
# See https://github.com/actions/runner-images
os: [macos-latest-large, ubuntu-latest, windows-latest]
target_arch: [amd64]
dapr_install_mode: [slim, complete]
include:
- os: ubuntu-latest
target_os: linux
- os: macOS-latest
- os: macos-latest-large
target_os: darwin
- os: windows-latest
target_os: windows
@ -59,46 +63,24 @@ jobs:
- os: windows-latest
dapr_install_mode: complete
steps:
- name: Prepare Go's bin location - MacOS
if: matrix.os == 'macos-latest-large'
run: |
export PATH=$HOME/bin:$PATH
echo "$HOME/bin" >> $GITHUB_PATH
echo "GOBIN=$HOME/bin" >> $GITHUB_ENV
mkdir -p $HOME/bin
- name: Check out code into the Go module directory
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: "go.mod"
- name: Cache Go modules (Linux)
if: matrix.target_os == 'linux'
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (Windows)
if: matrix.target_os == 'windows'
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Cache Go modules (macOS)
if: matrix.target_os == 'darwin'
uses: actions/cache@v3
with:
path: |
~/Library/Caches/go-build
~/go/pkg/mod
key: ${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.target_os }}-${{ matrix.target_arch }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Install podman - MacOS
timeout-minutes: 15
if: matrix.os == 'macos-latest' && matrix.dapr_install_mode == 'complete'
if: matrix.os == 'macos-latest-large' && matrix.dapr_install_mode == 'complete'
run: |
# Install podman
curl -sL -o podman.pkg https://github.com/containers/podman/releases/download/v${{ env.PODMAN_VERSION }}/podman-installer-macos-amd64.pkg
@ -108,8 +90,10 @@ jobs:
# Start podman machine
sudo podman-mac-helper install
podman machine init
podman machine init -v $HOME:$HOME --memory 16384 --cpus 12
podman machine start --log-level debug
podman machine ssh sudo sysctl -w kernel.keys.maxkeys=20000
podman info
echo "CONTAINER_RUNTIME=podman" >> $GITHUB_ENV
- name: Determine latest Dapr Runtime version including Pre-releases
if: github.base_ref == 'master'
@ -146,7 +130,7 @@ jobs:
echo "DAPR_DASHBOARD_LATEST_STABLE_VERSION=$LATEST_STABLE_DASHBOARD_VERSION" >> $GITHUB_ENV
shell: bash
- name: Set the test timeout - MacOS
if: matrix.os == 'macos-latest'
if: matrix.os == 'macos-latest-large'
run: echo "E2E_SH_TEST_TIMEOUT=30m" >> $GITHUB_ENV
- name: Run E2E tests with GHCR
# runs every 6hrs
@ -176,7 +160,7 @@ jobs:
shell: bash
- name: Upload test results
if: always()
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target_os }}_${{ matrix.target_arch }}_e2e_standalone.json
name: ${{ matrix.target_os }}_${{ matrix.target_arch }}_${{ matrix.dapr_install_mode }}_e2e_standalone.json
path: ${{ env.TEST_OUTPUT_FILE }}


@ -74,24 +74,14 @@ jobs:
kind-image-sha: sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace
steps:
- name: Check out code onto GOPATH
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ./src/github.com/dapr/cli
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
id: setup-go
with:
go-version-file: './src/github.com/dapr/cli/go.mod'
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ matrix.k8s-version }}-${{ matrix.kind-version }}-go-${{ steps.setup-go.outputs.go-version }}-
- name: Configure KinD
# Generate a KinD configuration file that uses:
@ -147,6 +137,7 @@ jobs:
run: |
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-upgrade-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-upgrade
- name: Run tests with Docker hub
@ -154,11 +145,12 @@ jobs:
run: |
export TEST_OUTPUT_FILE=$GITHUB_WORKSPACE/test-e2e-upgrade-kind.json
echo "TEST_OUTPUT_FILE=$TEST_OUTPUT_FILE" >> $GITHUB_ENV
export TEST_DAPR_HA_MODE=${{ matrix.mode }}
make e2e-build-run-upgrade
- name: Upload test results
if: always()
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.k8s-version }}_${{ matrix.mode }}_e2e_upgrade_k8s.json
path: ${{ env.TEST_OUTPUT_FILE }}

.gitignore

@ -6,6 +6,9 @@
*.dylib
cli
# Handy directory to keep local scripts and help files.
.local/
# Mac's metadata folder
.DS_Store
@ -24,6 +27,8 @@ cli
# CLI's auto-generated components directory
**/components
# Auto generated deploy dir inside .dapr directory
**/.dapr/deploy
# Auto generated logs dir inside .dapr directory
**/.dapr/logs
@ -35,4 +40,4 @@ go.work
#Wix files
*.wixobj
*.wixpdb
*.msi
*.msi


@ -4,7 +4,7 @@ run:
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 10m
timeout: 10m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
@ -16,28 +16,22 @@ run:
#build-tags:
# - mytag
issues:
# which dirs to skip: they won't be analyzed;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but next dirs are always skipped independently
# from this option's value:
# third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs:
exclude-dirs:
- ^pkg.*client.*clientset.*versioned.*
- ^pkg.*client.*informers.*externalversions.*
- pkg.*mod.*k8s.io.*
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
skip-files: []
# - ".*\\.my\\.go$"
# - lib/bad.go
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
format: tab
formats:
- format: tab
# print lines of code with issue, default is true
print-issued-lines: true
@ -71,9 +65,6 @@ linters-settings:
statements: 40
govet:
# report about shadowed variables
check-shadowing: true
# settings per analyzer
settings:
printf: # analyzer name, run `go tool vet help` to see all analyzers
@ -82,13 +73,18 @@ linters-settings:
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
- github.com/dapr/cli/pkg/print.FailureStatusEvent
- github.com/dapr/cli/pkg/print.SuccessStatusEvent
- github.com/dapr/cli/pkg/print.WarningStatusEvent
- github.com/dapr/cli/pkg/print.InfoStatusEvent
- github.com/dapr/cli/pkg/print.StatusEvent
- github.com/dapr/cli/pkg/print.Spinner
# enable or disable analyzers by name
enable:
- atomicalign
enable-all: false
disable:
- shadow
enable-all: false
disable-all: false
revive:
# linting errors below this confidence will be ignored, default is 0.8
@ -106,9 +102,6 @@ linters-settings:
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
@ -118,13 +111,11 @@ linters-settings:
# minimal occurrences count to trigger, 3 by default
min-occurrences: 5
depguard:
list-type: blacklist
include-go-root: false
packages:
- github.com/Sirupsen/logrus
packages-with-error-messages:
# specify an error message to output when a blacklisted package is used
github.com/Sirupsen/logrus: "must use github.com/sirupsen/logrus"
rules:
main:
deny:
- pkg: "github.com/Sirupsen/logrus"
desc: "must use github.com/sirupsen/logrus"
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
@ -143,7 +134,7 @@ linters-settings:
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
exported-fields-are-used: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
@ -216,15 +207,19 @@ linters-settings:
# Allow multiline assignments to be cuddled. Default is true.
allow-multiline-assign: true
# Allow case blocks to end with a whitespace.
allow-case-traling-whitespace: true
# Allow declarations (var) to be cuddled.
allow-cuddle-declarations: false
testifylint:
disable:
- require-error
linters:
fast: false
enable-all: true
disable:
# TODO Enforce the below linters later
- musttag
- dupl
- errcheck
- funlen
@ -233,33 +228,48 @@ linters:
- gocyclo
- gocognit
- godox
- interfacer
- lll
- maligned
- scopelint
- unparam
- wsl
- gomnd
- testpackage
- nestif
- goerr113
- nlreturn
- exhaustive
- gci
- noctx
- exhaustivestruct
- exhaustruct
- gomoddirectives
- paralleltest
- noctx
- gci
- tparallel
- wastedassign
- cyclop
- forbidigo
- tagliatelle
- thelper
- paralleltest
- wrapcheck
- varnamelen
- forcetypeassert
- tagliatelle
- ireturn
- golint
- nosnakecase
- errchkjson
- contextcheck
- gomoddirectives
- godot
- cyclop
- varnamelen
- errorlint
- forcetypeassert
- maintidx
- nilnil
- predeclared
- tenv
- thelper
- wastedassign
- containedctx
- gosimple
- nonamedreturns
- asasalint
- rowserrcheck
- sqlclosecheck
- inamedparam
- tagalign
- mnd
- canonicalheader
- exportloopref
- execinquery
- err113
- fatcontext
- forbidigo


@ -40,7 +40,7 @@ Before you file an issue, make sure you've checked the following:
- 👎 down-vote
1. For bugs
- Check it's not an environment issue. For example, if running on Kubernetes, make sure prerequisites are in place. (state stores, bindings, etc.)
- You have as much data as possible. This usually comes in the form of logs and/or stacktrace. If running on Kubernetes or other environment, look at the logs of the Dapr services (runtime, operator, placement service). More details on how to get logs can be found [here](https://docs.dapr.io/operations/troubleshooting/logs-troubleshooting/).
- You have as much data as possible. This usually comes in the form of logs and/or a stacktrace. If running on Kubernetes or another environment, look at the logs of the Dapr services (runtime, operator, placement, and scheduler). More details on how to get logs can be found [here](https://docs.dapr.io/operations/troubleshooting/logs-troubleshooting/).
1. For proposals
- Many changes to the Dapr runtime may require changes to the API. In that case, the best place to discuss the potential feature is the main [Dapr repo](https://github.com/dapr/dapr).
- Other examples could include bindings, state stores or entirely new components.

Gopkg.lock (generated)

@ -1,991 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
digest = "1:80004fcc5cf64e591486b3e11b406f1e0d17bf85d475d64203c8494f5da4fcd1"
name = "cloud.google.com/go"
packages = ["compute/metadata"]
pruneopts = "UT"
revision = "8c41231e01b2085512d98153bcffb847ff9b4b9f"
version = "v0.38.0"
[[projects]]
digest = "1:6b1426cad7057b717351eacf5b6fe70f053f11aac1ce254bbf2fd72c031719eb"
name = "contrib.go.opencensus.io/exporter/ocagent"
packages = ["."]
pruneopts = "UT"
revision = "dcb33c7f3b7cfe67e8a2cea10207ede1b7c40764"
version = "v0.4.12"
[[projects]]
digest = "1:b88fe174accff6609eee9dc7e4ec9f828cbda83e3646111538dbcc7f762f1a56"
name = "github.com/Azure/go-autorest"
packages = [
"autorest",
"autorest/adal",
"autorest/azure",
"autorest/date",
"logger",
"tracing",
]
pruneopts = "UT"
revision = "f29a2eccaa178b367df0405778cd85e0af7b4225"
version = "v12.1.0"
[[projects]]
digest = "1:d5e752c67b445baa5b6cb6f8aa706775c2aa8e41aca95a0c651520ff2c80361a"
name = "github.com/Microsoft/go-winio"
packages = [
".",
"pkg/guid",
]
pruneopts = "UT"
revision = "6c72808b55902eae4c5943626030429ff20f3b63"
version = "v0.4.14"
[[projects]]
branch = "master"
digest = "1:174cfc45f3f3e0f24f4bc3d2c80d8bcd4c02e274f9a0c4e7fcb6ff3273c0eeee"
name = "github.com/Pallinder/sillyname-go"
packages = ["."]
pruneopts = "UT"
revision = "97aeae9e6ba11ec62a40cf8b6b4bc42116c0a303"
[[projects]]
digest = "1:04457f9f6f3ffc5fea48e71d62f2ca256637dee0a04d710288e27e05c8b41976"
name = "github.com/Sirupsen/logrus"
packages = ["."]
pruneopts = "UT"
revision = "839c75faf7f98a33d445d181f3018b5c3409a45e"
version = "v1.4.2"
[[projects]]
digest = "1:937ce6e0cd5ccfec205f444a0d9c74f5680cbb68cd0a992b000559bf964ea20b"
name = "github.com/briandowns/spinner"
packages = ["."]
pruneopts = "UT"
revision = "e3fb08e7443c496a847cb2eef48e3883f3e12c38"
version = "v1.6.1"
[[projects]]
digest = "1:c1100fc71e23b6a32b2c68a5202a848fd13811d5a10b12edb8019c3667d1cd9a"
name = "github.com/cenkalti/backoff"
packages = ["."]
pruneopts = "UT"
revision = "4b4cebaf850ec58f1bb1fec5bdebdf8501c2bc3f"
version = "v3.0.0"
[[projects]]
digest = "1:fdb4ed936abeecb46a8c27dcac83f75c05c87a46d9ec7711411eb785c213fa02"
name = "github.com/census-instrumentation/opencensus-proto"
packages = [
"gen-go/agent/common/v1",
"gen-go/agent/metrics/v1",
"gen-go/agent/trace/v1",
"gen-go/metrics/v1",
"gen-go/resource/v1",
"gen-go/trace/v1",
]
pruneopts = "UT"
revision = "a105b96453fe85139acc07b68de48f2cbdd71249"
version = "v0.2.0"
[[projects]]
digest = "1:95ea6524ccf5526a5f57fa634f2789266684ae1c15ce1a0cab3ae68e7ea3c4d0"
name = "github.com/dapr/dapr"
packages = [
"pkg/apis/components",
"pkg/apis/components/v1alpha1",
"pkg/components",
"pkg/config/modes",
]
pruneopts = "UT"
revision = "c75b111b7d2258ce7339f6b40c6d1af5b0b6de22"
version = "v0.2.0"
[[projects]]
digest = "1:ffe9824d294da03b391f44e1ae8281281b4afc1bdaa9588c9097785e3af10cec"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = "UT"
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:76dc72490af7174349349838f2fe118996381b31ea83243812a97e5a0fd5ed55"
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
pruneopts = "UT"
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
digest = "1:4ddc17aeaa82cb18c5f0a25d7c253a10682f518f4b2558a82869506eec223d76"
name = "github.com/docker/distribution"
packages = [
"digestset",
"reference",
]
pruneopts = "UT"
revision = "2461543d988979529609e8cb6fca9ca190dc48da"
version = "v2.7.1"
[[projects]]
digest = "1:c4c7064c2c67a0a00815918bae489dd62cd88d859d24c95115d69b00b3d33334"
name = "github.com/docker/docker"
packages = [
"api/types",
"api/types/blkiodev",
"api/types/container",
"api/types/events",
"api/types/filters",
"api/types/mount",
"api/types/network",
"api/types/reference",
"api/types/registry",
"api/types/strslice",
"api/types/swarm",
"api/types/time",
"api/types/versions",
"api/types/volume",
"client",
"pkg/tlsconfig",
]
pruneopts = "UT"
revision = "092cba3727bb9b4a2f0e922cd6c0f93ea270e363"
version = "v1.13.1"
[[projects]]
digest = "1:811c86996b1ca46729bad2724d4499014c4b9effd05ef8c71b852aad90deb0ce"
name = "github.com/docker/go-connections"
packages = [
"nat",
"sockets",
"tlsconfig",
]
pruneopts = "UT"
revision = "7395e3f8aa162843a74ed6d48e79627d9792ac55"
version = "v0.4.0"
[[projects]]
digest = "1:e95ef557dc3120984bb66b385ae01b4bb8ff56bcde28e7b0d1beed0cccc4d69f"
name = "github.com/docker/go-units"
packages = ["."]
pruneopts = "UT"
revision = "519db1ee28dcc9fd2474ae59fca29a810482bfb1"
version = "v0.4.0"
[[projects]]
digest = "1:865079840386857c809b72ce300be7580cb50d3d3129ce11bf9aa6ca2bc1934a"
name = "github.com/fatih/color"
packages = ["."]
pruneopts = "UT"
revision = "5b77d2a35fb0ede96d138fc9a99f5c9b6aef11b4"
version = "v1.7.0"
[[projects]]
digest = "1:abeb38ade3f32a92943e5be54f55ed6d6e3b6602761d74b4aab4c9dd45c18abd"
name = "github.com/fsnotify/fsnotify"
packages = ["."]
pruneopts = "UT"
revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
version = "v1.4.7"
[[projects]]
digest = "1:2cd7915ab26ede7d95b8749e6b1f933f1c6d5398030684e6505940a10f31cfda"
name = "github.com/ghodss/yaml"
packages = ["."]
pruneopts = "UT"
revision = "0ca9ea5df5451ffdf184b4428c902747c2c11cd7"
version = "v1.0.0"
[[projects]]
branch = "master"
digest = "1:adddf11eb27039a3afcc74c5e3d13da84e189012ec37acfc2c70385f25edbe0f"
name = "github.com/gocarina/gocsv"
packages = ["."]
pruneopts = "UT"
revision = "2fc85fcf0c07e8bb9123b2104e84cfc2a5b53724"
[[projects]]
digest = "1:4d02824a56d268f74a6b6fdd944b20b58a77c3d70e81008b3ee0c4f1a6777340"
name = "github.com/gogo/protobuf"
packages = [
"proto",
"sortkeys",
]
pruneopts = "UT"
revision = "ba06b47c162d49f2af050fb4c75bcbc86a159d5c"
version = "v1.2.1"
[[projects]]
digest = "1:489a99067cd08971bd9c1ee0055119ba8febc1429f9200ab0bec68d35e8c4833"
name = "github.com/golang/protobuf"
packages = [
"jsonpb",
"proto",
"protoc-gen-go/descriptor",
"protoc-gen-go/generator",
"protoc-gen-go/generator/internal/remap",
"protoc-gen-go/plugin",
"ptypes",
"ptypes/any",
"ptypes/duration",
"ptypes/struct",
"ptypes/timestamp",
"ptypes/wrappers",
]
pruneopts = "UT"
revision = "b5d812f8a3706043e23a9cd5babf2e5423744d30"
version = "v1.3.1"
[[projects]]
digest = "1:a6181aca1fd5e27103f9a920876f29ac72854df7345a39f3b01e61c8c94cc8af"
name = "github.com/google/gofuzz"
packages = ["."]
pruneopts = "UT"
revision = "f140a6486e521aad38f5917de355cbf147cc0496"
version = "v1.0.0"
[[projects]]
digest = "1:582b704bebaa06b48c29b0cec224a6058a09c86883aaddabde889cd1a5f73e1b"
name = "github.com/google/uuid"
packages = ["."]
pruneopts = "UT"
revision = "0cd6bf5da1e1c83f8b45653022c74f71af0538a4"
version = "v1.1.1"
[[projects]]
digest = "1:65c4414eeb350c47b8de71110150d0ea8a281835b1f386eacaa3ad7325929c21"
name = "github.com/googleapis/gnostic"
packages = [
"OpenAPIv2",
"compiler",
"extensions",
]
pruneopts = "UT"
revision = "7c663266750e7d82587642f65e60bc4083f1f84e"
version = "v0.2.0"
[[projects]]
digest = "1:c7810b83a74c6ec1d14d16d4b950c09abce6fbe9cc660ac2cde5b57efa8cc12e"
name = "github.com/gophercloud/gophercloud"
packages = [
".",
"openstack",
"openstack/identity/v2/tenants",
"openstack/identity/v2/tokens",
"openstack/identity/v3/tokens",
"openstack/utils",
"pagination",
]
pruneopts = "UT"
revision = "c2d73b246b48e239d3f03c455905e06fe26e33c3"
version = "v0.1.0"
[[projects]]
digest = "1:4f30fff718a459f9be272e7aa87463cdf4ba27bb8bd7f586ac34c36d670aada4"
name = "github.com/grpc-ecosystem/grpc-gateway"
packages = [
"internal",
"runtime",
"utilities",
]
pruneopts = "UT"
revision = "cddead4ec1d10cc62f08e1fd6f8591fbe71cfff9"
version = "v1.9.1"
[[projects]]
digest = "1:67474f760e9ac3799f740db2c489e6423a4cde45520673ec123ac831ad849cb8"
name = "github.com/hashicorp/golang-lru"
packages = ["simplelru"]
pruneopts = "UT"
revision = "7087cb70de9f7a8bc0a10c375cb0d2280a8edf9c"
version = "v0.5.1"
[[projects]]
digest = "1:c0d19ab64b32ce9fe5cf4ddceba78d5bc9807f0016db6b1183599da3dcc24d10"
name = "github.com/hashicorp/hcl"
packages = [
".",
"hcl/ast",
"hcl/parser",
"hcl/printer",
"hcl/scanner",
"hcl/strconv",
"hcl/token",
"json/parser",
"json/scanner",
"json/token",
]
pruneopts = "UT"
revision = "8cb6e5b959231cc1119e43259c4a608f9c51a241"
version = "v1.0.0"
[[projects]]
digest = "1:a0cefd27d12712af4b5018dc7046f245e1e3b5760e2e848c30b171b570708f9b"
name = "github.com/imdario/mergo"
packages = ["."]
pruneopts = "UT"
revision = "7c29201646fa3de8506f701213473dd407f19646"
version = "v0.3.7"
[[projects]]
digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
pruneopts = "UT"
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
digest = "1:f5a2051c55d05548d2d4fd23d244027b59fbd943217df8aa3b5e170ac2fd6e1b"
name = "github.com/json-iterator/go"
packages = ["."]
pruneopts = "UT"
revision = "0ff49de124c6f76f8494e194af75bde0f1a49a29"
version = "v1.1.6"
[[projects]]
digest = "1:fa4b4bfa0edfb83c4690d8746ea25bc2447ad0c20063ba72adcb5725f54acde0"
name = "github.com/klauspost/compress"
packages = [
"flate",
"gzip",
"zlib",
]
pruneopts = "UT"
revision = "4e96aec082898e4dad17d8aca1a7e2d01362ff6c"
version = "v1.9.2"
[[projects]]
digest = "1:31e761d97c76151dde79e9d28964a812c46efc5baee4085b86f68f0c654450de"
name = "github.com/konsorten/go-windows-terminal-sequences"
packages = ["."]
pruneopts = "UT"
revision = "f55edac94c9bbba5d6182a4be46d86a2c9b5b50e"
version = "v1.0.2"
[[projects]]
digest = "1:5a0ef768465592efca0412f7e838cdc0826712f8447e70e6ccc52eb441e9ab13"
name = "github.com/magiconair/properties"
packages = ["."]
pruneopts = "UT"
revision = "de8848e004dd33dc07a2947b3d76f618a7fc7ef1"
version = "v1.8.1"
[[projects]]
digest = "1:c658e84ad3916da105a761660dcaeb01e63416c8ec7bc62256a9b411a05fcd67"
name = "github.com/mattn/go-colorable"
packages = ["."]
pruneopts = "UT"
revision = "167de6bfdfba052fa6b2d3664c8f5272e23c9072"
version = "v0.0.9"
[[projects]]
digest = "1:e150b5fafbd7607e2d638e4e5cf43aa4100124e5593385147b0a74e2733d8b0d"
name = "github.com/mattn/go-isatty"
packages = ["."]
pruneopts = "UT"
revision = "c2a7a6ca930a4cd0bc33a3f298eb71960732a3a7"
version = "v0.0.7"
[[projects]]
digest = "1:0356f3312c9bd1cbeda81505b7fd437501d8e778ab66998ef69f00d7f9b3a0d7"
name = "github.com/mattn/go-runewidth"
packages = ["."]
pruneopts = "UT"
revision = "3ee7d812e62a0804a7d0a324e0249ca2db3476d3"
version = "v0.0.4"
[[projects]]
branch = "master"
digest = "1:56aff9bb73896906956fee6927207393212bfaa732c1aab4feaf29de4b1418e9"
name = "github.com/mitchellh/go-ps"
packages = ["."]
pruneopts = "UT"
revision = "621e5597135b1d14a7d9c2bfc7bc312e7c58463c"
[[projects]]
digest = "1:53bc4cd4914cd7cd52139990d5170d6dc99067ae31c56530621b18b35fc30318"
name = "github.com/mitchellh/mapstructure"
packages = ["."]
pruneopts = "UT"
revision = "3536a929edddb9a5b34bd6861dc4a9647cb459fe"
version = "v1.1.2"
[[projects]]
digest = "1:33422d238f147d247752996a26574ac48dcf472976eda7f5134015f06bf16563"
name = "github.com/modern-go/concurrent"
packages = ["."]
pruneopts = "UT"
revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94"
version = "1.0.3"
[[projects]]
digest = "1:e32bdbdb7c377a07a9a46378290059822efdce5c8d96fe71940d87cb4f918855"
name = "github.com/modern-go/reflect2"
packages = ["."]
pruneopts = "UT"
revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd"
version = "1.0.1"
[[projects]]
branch = "master"
digest = "1:6491080aa184f88c2bb8e2f6056e5e0e9a578b2d8666efbd6e97bc37a0c41e72"
name = "github.com/nightlyone/lockfile"
packages = ["."]
pruneopts = "UT"
revision = "0ad87eef1443f64d3d8c50da647e2b1552851124"
[[projects]]
digest = "1:abcdbf03ca6ca13d3697e2186edc1f33863bbdac2b3a44dfa39015e8903f7409"
name = "github.com/olekukonko/tablewriter"
packages = ["."]
pruneopts = "UT"
revision = "e6d60cf7ba1f42d86d54cdf5508611c4aafb3970"
version = "v0.0.1"
[[projects]]
digest = "1:ee4d4af67d93cc7644157882329023ce9a7bcfce956a079069a9405521c7cc8d"
name = "github.com/opencontainers/go-digest"
packages = ["."]
pruneopts = "UT"
revision = "279bed98673dd5bef374d3b6e4b09e2af76183bf"
version = "v1.0.0-rc1"
[[projects]]
digest = "1:6eea828983c70075ca297bb915ffbcfd3e34c5a50affd94428a65df955c0ff9c"
name = "github.com/pelletier/go-toml"
packages = ["."]
pruneopts = "UT"
revision = "903d9455db9ff1d7ac1ab199062eca7266dd11a3"
version = "v1.6.0"
[[projects]]
digest = "1:7413525ee648f20b4181be7fe8103d0cb98be9e141926a03ee082dc207061e4e"
name = "github.com/phayes/freeport"
packages = ["."]
pruneopts = "UT"
revision = "b8543db493a5ed890c5499e935e2cad7504f3a04"
version = "1.0.2"
[[projects]]
digest = "1:cf31692c14422fa27c83a05292eb5cbe0fb2775972e8f1f8446a71549bd8980b"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = "UT"
revision = "ba968bfe8b2f7e042a574c888954fccecfa385b4"
version = "v0.8.1"
[[projects]]
digest = "1:bb495ec276ab82d3dd08504bbc0594a65de8c3b22c6f2aaa92d05b73fbf3a82e"
name = "github.com/spf13/afero"
packages = [
".",
"mem",
]
pruneopts = "UT"
revision = "588a75ec4f32903aa5e39a2619ba6a4631e28424"
version = "v1.2.2"
[[projects]]
digest = "1:08d65904057412fc0270fc4812a1c90c594186819243160dc779a402d4b6d0bc"
name = "github.com/spf13/cast"
packages = ["."]
pruneopts = "UT"
revision = "8c9545af88b134710ab1cd196795e7f2388358d7"
version = "v1.3.0"
[[projects]]
digest = "1:645cabccbb4fa8aab25a956cbcbdf6a6845ca736b2c64e197ca7cbb9d210b939"
name = "github.com/spf13/cobra"
packages = ["."]
pruneopts = "UT"
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
digest = "1:1b753ec16506f5864d26a28b43703c58831255059644351bbcb019b843950900"
name = "github.com/spf13/jwalterweatherman"
packages = ["."]
pruneopts = "UT"
revision = "94f6ae3ed3bceceafa716478c5fbf8d29ca601a1"
version = "v1.1.0"
[[projects]]
digest = "1:c1b1102241e7f645bc8e0c22ae352e8f0dc6484b6cb4d132fa9f24174e0119e2"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = "UT"
revision = "298182f68c66c05229eb03ac171abe6e309ee79a"
version = "v1.0.3"
[[projects]]
digest = "1:0b60fc944fb6a7b6c985832bd341bdb7ed8fe894fea330414e7774bb24652962"
name = "github.com/spf13/viper"
packages = ["."]
pruneopts = "UT"
revision = "72b022eb357a56469725dcd03918449e2278d02e"
version = "v1.5.0"
[[projects]]
digest = "1:f4b32291cad5efac2bfdba89ccde6aa04618b62ce06c1a571da2dc4f3f2677fb"
name = "github.com/subosito/gotenv"
packages = ["."]
pruneopts = "UT"
revision = "2ef7124db659d49edac6aa459693a15ae36c671a"
version = "v1.2.0"
[[projects]]
digest = "1:c468422f334a6b46a19448ad59aaffdfc0a36b08fdcc1c749a0b29b6453d7e59"
name = "github.com/valyala/bytebufferpool"
packages = ["."]
pruneopts = "UT"
revision = "e746df99fe4a3986f4d4f79e13c1e0117ce9c2f7"
version = "v1.0.0"
[[projects]]
digest = "1:15ad8a80098fcc7a194b9db6b26d74072a852e4faa957848c8118193d3c69230"
name = "github.com/valyala/fasthttp"
packages = [
".",
"fasthttputil",
"stackless",
]
pruneopts = "UT"
revision = "e5f51c11919d4f66400334047b897ef0a94c6f3c"
version = "v20180529"
[[projects]]
digest = "1:4c93890bbbb5016505e856cb06b5c5a2ff5b7217584d33f2a9071ebef4b5d473"
name = "go.opencensus.io"
packages = [
".",
"internal",
"internal/tagencoding",
"metric/metricdata",
"metric/metricproducer",
"plugin/ocgrpc",
"plugin/ochttp",
"plugin/ochttp/propagation/b3",
"plugin/ochttp/propagation/tracecontext",
"resource",
"stats",
"stats/internal",
"stats/view",
"tag",
"trace",
"trace/internal",
"trace/propagation",
"trace/tracestate",
]
pruneopts = "UT"
revision = "43463a80402d8447b7fce0d2c58edf1687ff0b58"
version = "v0.19.3"
[[projects]]
branch = "master"
digest = "1:bbe51412d9915d64ffaa96b51d409e070665efc5194fcf145c4a27d4133107a4"
name = "golang.org/x/crypto"
packages = ["ssh/terminal"]
pruneopts = "UT"
revision = "e1dfcc566284e143ba8f9afbb3fa563f2a0d212b"
[[projects]]
branch = "master"
digest = "1:01a2f697724170f98739b5261ec830eafc626f56b6786c730578dabcce47649c"
name = "golang.org/x/net"
packages = [
"context",
"context/ctxhttp",
"http/httpguts",
"http2",
"http2/hpack",
"idna",
"internal/socks",
"internal/timeseries",
"proxy",
"trace",
]
pruneopts = "UT"
revision = "a4d6f7feada510cc50e69a37b484cb0fdc6b7876"
[[projects]]
branch = "master"
digest = "1:645cb780e4f3177111b40588f0a7f5950efcfb473e7ff41d8d81b2ba5eaa6ed5"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
"jwt",
]
pruneopts = "UT"
revision = "9f3314589c9a9136388751d9adae6b0ed400978a"
[[projects]]
branch = "master"
digest = "1:382bb5a7fb4034db3b6a2d19e5a4a6bcf52f4750530603c01ca18a172fa3089b"
name = "golang.org/x/sync"
packages = ["semaphore"]
pruneopts = "UT"
revision = "112230192c580c3556b8cee6403af37a4fc5f28c"
[[projects]]
branch = "master"
digest = "1:656ce95b4cdf841c00825f4cb94f6e0e1422d7d6faaf3094e94cd18884a32251"
name = "golang.org/x/sys"
packages = [
"unix",
"windows",
]
pruneopts = "UT"
revision = "a129542de9ae0895210abff9c95d67a1f33cb93d"
[[projects]]
digest = "1:8d8faad6b12a3a4c819a3f9618cb6ee1fa1cfc33253abeeea8b55336721e3405"
name = "golang.org/x/text"
packages = [
"collate",
"collate/build",
"internal/colltab",
"internal/gen",
"internal/language",
"internal/language/compact",
"internal/tag",
"internal/triegen",
"internal/ucd",
"language",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable",
]
pruneopts = "UT"
revision = "342b2e1fbaa52c93f31447ad2c6abc048c63e475"
version = "v0.3.2"
[[projects]]
branch = "master"
digest = "1:9fdc2b55e8e0fafe4b41884091e51e77344f7dc511c5acedcfd98200003bff90"
name = "golang.org/x/time"
packages = ["rate"]
pruneopts = "UT"
revision = "9d24e82272b4f38b78bc8cff74fa936d31ccd8ef"
[[projects]]
digest = "1:5f003878aabe31d7f6b842d4de32b41c46c214bb629bb485387dbcce1edf5643"
name = "google.golang.org/api"
packages = ["support/bundler"]
pruneopts = "UT"
revision = "aac82e61c0c8fe133c297b4b59316b9f481e1f0a"
version = "v0.6.0"
[[projects]]
digest = "1:04f2ff15fc59e1ddaf9900ad0e19e5b19586b31f9dafd4d592b617642b239d8f"
name = "google.golang.org/appengine"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"urlfetch",
]
pruneopts = "UT"
revision = "54a98f90d1c46b7731eb8fb305d2a321c30ef610"
version = "v1.5.0"
[[projects]]
branch = "master"
digest = "1:3565a93b7692277a5dea355bc47bd6315754f3246ed07a224be6aec28972a805"
name = "google.golang.org/genproto"
packages = [
"googleapis/api/httpbody",
"googleapis/rpc/status",
"protobuf/field_mask",
]
pruneopts = "UT"
revision = "a7e196e89fd3a3c4d103ca540bd5dac3a736e375"
[[projects]]
digest = "1:e8800ddadd6bce3bc0c5ffd7bc55dbdddc6e750956c10cc10271cade542fccbe"
name = "google.golang.org/grpc"
packages = [
".",
"balancer",
"balancer/base",
"balancer/roundrobin",
"binarylog/grpc_binarylog_v1",
"codes",
"connectivity",
"credentials",
"credentials/internal",
"encoding",
"encoding/proto",
"grpclog",
"internal",
"internal/backoff",
"internal/balancerload",
"internal/binarylog",
"internal/channelz",
"internal/envconfig",
"internal/grpcrand",
"internal/grpcsync",
"internal/syscall",
"internal/transport",
"keepalive",
"metadata",
"naming",
"peer",
"resolver",
"resolver/dns",
"resolver/passthrough",
"stats",
"status",
"tap",
]
pruneopts = "UT"
revision = "501c41df7f472c740d0674ff27122f3f48c80ce7"
version = "v1.21.1"
[[projects]]
digest = "1:2d1fbdc6777e5408cabeb02bf336305e724b925ff4546ded0fa8715a7267922a"
name = "gopkg.in/inf.v0"
packages = ["."]
pruneopts = "UT"
revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf"
version = "v0.9.1"
[[projects]]
digest = "1:4d2e5a73dc1500038e504a8d78b986630e3626dc027bc030ba5c75da257cdb96"
name = "gopkg.in/yaml.v2"
packages = ["."]
pruneopts = "UT"
revision = "51d6538a90f86fe93ac480b35f37b2be17fef232"
version = "v2.2.2"
[[projects]]
digest = "1:86ad5797d1189de342ed6988fbb76b92dc0429a4d677ad69888d6137efa5712e"
name = "k8s.io/api"
packages = [
"admissionregistration/v1beta1",
"apps/v1",
"apps/v1beta1",
"apps/v1beta2",
"auditregistration/v1alpha1",
"authentication/v1",
"authentication/v1beta1",
"authorization/v1",
"authorization/v1beta1",
"autoscaling/v1",
"autoscaling/v2beta1",
"autoscaling/v2beta2",
"batch/v1",
"batch/v1beta1",
"batch/v2alpha1",
"certificates/v1beta1",
"coordination/v1",
"coordination/v1beta1",
"core/v1",
"events/v1beta1",
"extensions/v1beta1",
"networking/v1",
"networking/v1beta1",
"node/v1alpha1",
"node/v1beta1",
"policy/v1beta1",
"rbac/v1",
"rbac/v1alpha1",
"rbac/v1beta1",
"scheduling/v1",
"scheduling/v1alpha1",
"scheduling/v1beta1",
"settings/v1alpha1",
"storage/v1",
"storage/v1alpha1",
"storage/v1beta1",
]
pruneopts = "UT"
revision = "6e4e0e4f393bf5e8bbff570acd13217aa5a770cd"
version = "kubernetes-1.14.1"
[[projects]]
digest = "1:78f6a824d205c6cb0d011cce241407646b773cb57ee27e8c7e027753b4111075"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/errors",
"pkg/api/meta",
"pkg/api/resource",
"pkg/apis/meta/v1",
"pkg/apis/meta/v1/unstructured",
"pkg/apis/meta/v1beta1",
"pkg/conversion",
"pkg/conversion/queryparams",
"pkg/fields",
"pkg/labels",
"pkg/runtime",
"pkg/runtime/schema",
"pkg/runtime/serializer",
"pkg/runtime/serializer/json",
"pkg/runtime/serializer/protobuf",
"pkg/runtime/serializer/recognizer",
"pkg/runtime/serializer/streaming",
"pkg/runtime/serializer/versioning",
"pkg/selection",
"pkg/types",
"pkg/util/clock",
"pkg/util/errors",
"pkg/util/framer",
"pkg/util/intstr",
"pkg/util/json",
"pkg/util/naming",
"pkg/util/net",
"pkg/util/runtime",
"pkg/util/sets",
"pkg/util/validation",
"pkg/util/validation/field",
"pkg/util/yaml",
"pkg/version",
"pkg/watch",
"third_party/forked/golang/reflect",
]
pruneopts = "UT"
revision = "6a84e37a896db9780c75367af8d2ed2bb944022e"
version = "kubernetes-1.14.1"
[[projects]]
digest = "1:37f699391265222af7da4bf8e443ca03dd834ce362fbb4b19b4d67492ff06781"
name = "k8s.io/client-go"
packages = [
"discovery",
"kubernetes",
"kubernetes/scheme",
"kubernetes/typed/admissionregistration/v1beta1",
"kubernetes/typed/apps/v1",
"kubernetes/typed/apps/v1beta1",
"kubernetes/typed/apps/v1beta2",
"kubernetes/typed/auditregistration/v1alpha1",
"kubernetes/typed/authentication/v1",
"kubernetes/typed/authentication/v1beta1",
"kubernetes/typed/authorization/v1",
"kubernetes/typed/authorization/v1beta1",
"kubernetes/typed/autoscaling/v1",
"kubernetes/typed/autoscaling/v2beta1",
"kubernetes/typed/autoscaling/v2beta2",
"kubernetes/typed/batch/v1",
"kubernetes/typed/batch/v1beta1",
"kubernetes/typed/batch/v2alpha1",
"kubernetes/typed/certificates/v1beta1",
"kubernetes/typed/coordination/v1",
"kubernetes/typed/coordination/v1beta1",
"kubernetes/typed/core/v1",
"kubernetes/typed/events/v1beta1",
"kubernetes/typed/extensions/v1beta1",
"kubernetes/typed/networking/v1",
"kubernetes/typed/networking/v1beta1",
"kubernetes/typed/node/v1alpha1",
"kubernetes/typed/node/v1beta1",
"kubernetes/typed/policy/v1beta1",
"kubernetes/typed/rbac/v1",
"kubernetes/typed/rbac/v1alpha1",
"kubernetes/typed/rbac/v1beta1",
"kubernetes/typed/scheduling/v1",
"kubernetes/typed/scheduling/v1alpha1",
"kubernetes/typed/scheduling/v1beta1",
"kubernetes/typed/settings/v1alpha1",
"kubernetes/typed/storage/v1",
"kubernetes/typed/storage/v1alpha1",
"kubernetes/typed/storage/v1beta1",
"pkg/apis/clientauthentication",
"pkg/apis/clientauthentication/v1alpha1",
"pkg/apis/clientauthentication/v1beta1",
"pkg/version",
"plugin/pkg/client/auth",
"plugin/pkg/client/auth/azure",
"plugin/pkg/client/auth/exec",
"plugin/pkg/client/auth/gcp",
"plugin/pkg/client/auth/oidc",
"plugin/pkg/client/auth/openstack",
"rest",
"rest/watch",
"third_party/forked/golang/template",
"tools/auth",
"tools/clientcmd",
"tools/clientcmd/api",
"tools/clientcmd/api/latest",
"tools/clientcmd/api/v1",
"tools/metrics",
"tools/reference",
"transport",
"util/cert",
"util/connrotation",
"util/flowcontrol",
"util/homedir",
"util/jsonpath",
"util/keyutil",
]
pruneopts = "UT"
revision = "1a26190bd76a9017e289958b9fba936430aa3704"
version = "kubernetes-1.14.1"
[[projects]]
digest = "1:c696379ad201c1e86591785579e16bf6cf886c362e9a7534e8eb0d1028b20582"
name = "k8s.io/klog"
packages = ["."]
pruneopts = "UT"
revision = "e531227889390a39d9533dde61f590fe9f4b0035"
version = "v0.3.0"
[[projects]]
branch = "master"
digest = "1:8b40227d4bf8b431fdab4f9026e6e346f00ac3be5662af367a183f78c57660b3"
name = "k8s.io/utils"
packages = ["integer"]
pruneopts = "UT"
revision = "8fab8cb257d50c8cf94ec9771e74826edbb68fb5"
[[projects]]
digest = "1:7719608fe0b52a4ece56c2dde37bedd95b938677d1ab0f84b8a7852e4c59f849"
name = "sigs.k8s.io/yaml"
packages = ["."]
pruneopts = "UT"
revision = "fd68e9863619f6ec2fdd8625fe1f02e7c877e480"
version = "v1.1.0"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/Pallinder/sillyname-go",
"github.com/briandowns/spinner",
"github.com/dapr/dapr/pkg/components",
"github.com/dapr/dapr/pkg/config/modes",
"github.com/docker/docker/client",
"github.com/fatih/color",
"github.com/gocarina/gocsv",
"github.com/google/uuid",
"github.com/mitchellh/go-ps",
"github.com/nightlyone/lockfile",
"github.com/olekukonko/tablewriter",
"github.com/phayes/freeport",
"github.com/spf13/cobra",
"github.com/spf13/viper",
"gopkg.in/yaml.v2",
"k8s.io/api/core/v1",
"k8s.io/apimachinery/pkg/apis/meta/v1",
"k8s.io/client-go/kubernetes",
"k8s.io/client-go/plugin/pkg/client/auth",
"k8s.io/client-go/plugin/pkg/client/auth/gcp",
"k8s.io/client-go/tools/clientcmd",
]
solver-name = "gps-cdcl"
solver-version = 1


@ -1,54 +0,0 @@
# Gopkg.toml example
#
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
#
# [prune]
# non-go = false
# go-tests = true
# unused-packages = true
[[constraint]]
name = "github.com/fatih/color"
version = "1.7.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "0.0.3"
[[constraint]]
name = "github.com/spf13/viper"
version = "1.5.0"
[[override]]
version = "kubernetes-1.14.1"
name = "k8s.io/apimachinery"
[[override]]
version = "kubernetes-1.14.1"
name = "k8s.io/api"
[[override]]
version = "kubernetes-1.14.1"
name = "k8s.io/client-go"
[prune]
go-tests = true
unused-packages = true


@ -59,6 +59,7 @@ ifeq ($(LOCAL_OS),Linux)
else ifeq ($(LOCAL_OS),Darwin)
TARGET_OS_LOCAL = darwin
GOLANGCI_LINT:=golangci-lint
PATH := $(PATH):$(HOME)/go/bin/darwin_$(GOARCH)
export ARCHIVE_EXT = .tar.gz
else
TARGET_OS_LOCAL ?= windows
@ -153,7 +154,14 @@ test: test-deps
################################################################################
.PHONY: test-e2e-k8s
test-e2e-k8s: test-deps
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 20m -count=1 -tags=e2e ./tests/e2e/kubernetes/...
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 25m -count=1 -tags=e2e ./tests/e2e/kubernetes/...
################################################################################
# E2E Tests for K8s Template exec #
################################################################################
.PHONY: test-e2e-k8s-template
test-e2e-k8s-template: test-deps
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 25m -count=1 -tags=templatek8s ./tests/e2e/kubernetes/...
################################################################################
# Build, E2E Tests for Kubernetes #
@ -166,7 +174,7 @@ e2e-build-run-k8s: build test-e2e-k8s
################################################################################
.PHONY: test-e2e-upgrade
test-e2e-upgrade: test-deps
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 30m -count=1 -tags=e2e ./tests/e2e/upgrade/...
gotestsum --jsonfile $(TEST_OUTPUT_FILE) --format standard-verbose -- -timeout 60m -count=1 -tags=e2e ./tests/e2e/upgrade/...
################################################################################
# Build, E2E Tests for Kubernetes Upgrade #
@ -200,7 +208,7 @@ e2e-build-run-sh: build test-e2e-sh
################################################################################
.PHONY: modtidy
modtidy:
go mod tidy -compat=1.20
go mod tidy -compat=1.21
################################################################################
# Target: check-diff #


@ -68,7 +68,7 @@ Install windows Dapr CLI using MSI package.
### Install Dapr on your local machine (self-hosted)
In self-hosted mode, dapr can be initialized using the CLI with the placement, redis and zipkin containers enabled by default(recommended) or without them which also does not require docker to be available in the environment.
In self-hosted mode, dapr can be initialized using the CLI with the placement, scheduler, redis, and zipkin containers enabled by default (recommended), or without them, which also removes the need for docker to be available in the environment.
#### Initialize Dapr
@ -89,10 +89,11 @@ Output should look like so:
✅ Downloaded binaries and completed components set up.
daprd binary has been installed to $HOME/.dapr/bin.
dapr_placement container is running.
dapr_scheduler container is running.
dapr_redis container is running.
dapr_zipkin container is running.
Use `docker ps` to check running containers.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
✅ Success! Dapr is up and running. To get started, go here: https://docs.dapr.io/getting-started
```
> Note: To see that Dapr has been installed successfully, from a command prompt run the `docker ps` command and check that the `daprio/dapr:latest`, `dapr_redis` and `dapr_zipkin` container images are all running.
@ -118,10 +119,11 @@ Output should look like so:
✅ Downloaded binaries and completed components set up.
daprd binary has been installed to $HOME/.dapr/bin.
placement binary has been installed.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
scheduler binary has been installed.
✅ Success! Dapr is up and running. To get started, go here: https://docs.dapr.io/getting-started
```
>Note: When initializing Dapr with the `--slim` flag only the Dapr runtime binary and the placement service binary are installed. An empty default components folder is created with no default configuration files. During `dapr run` user should use `--resources-path` (`--components-path` is deprecated and will be removed in future releases) to point to a components directory with custom configurations files or alternatively place these files in the default directory. For Linux/MacOS, the default components directory path is `$HOME/.dapr/components` and for Windows it is `%USERPROFILE%\.dapr\components`.
>Note: When initializing Dapr with the `--slim` flag, only the Dapr runtime, placement, and scheduler service binaries are installed. An empty default components folder is created with no default configuration files. During `dapr run`, the user should use `--resources-path` (`--components-path` is deprecated and will be removed in future releases) to point to a components directory with custom configuration files, or alternatively place these files in the default directory. For Linux/MacOS, the default components directory path is `$HOME/.dapr/components` and for Windows it is `%USERPROFILE%\.dapr\components`.
#### Install a specific runtime version
@ -171,7 +173,7 @@ Move to the bundle directory and run the following command:
> If you are not running the above command from the bundle directory, provide the full path to bundle directory as input. For example, assuming the bundle directory path is $HOME/daprbundle, run `$HOME/daprbundle/dapr init --from-dir $HOME/daprbundle` to have the same behavior.
> Note: Dapr Installer bundle just contains the placement container apart from the binaries and so `zipkin` and `redis` are not enabled by default. You can pull the images locally either from network or private registry and run as follows:
> Note: The Dapr installer bundle contains only the placement and scheduler containers in addition to the binaries, so `zipkin` and `redis` are not enabled by default. You can pull those images locally, either from the network or a private registry, and run them as follows:
```bash
docker run --name "dapr_zipkin" --restart always -d -p 9411:9411 openzipkin/zipkin
@ -199,6 +201,9 @@ dapr init --network dapr-network
> Note: When installed to a specific Docker network, you will need to add the `--placement-host-address` arguments to `dapr run` commands run in any containers within that network.
> The format of `--placement-host-address` argument is either `<hostname>` or `<hostname>:<port>`. If the port is omitted, the default port `6050` for Windows and `50005` for Linux/MacOS applies.
> Note: When installed to a specific Docker network, you will need to add the `--scheduler-host-address` arguments to `dapr run` commands run in any containers within that network.
> The format of `--scheduler-host-address` argument is either `<hostname>` or `<hostname>:<port>`. If the port is omitted, the default port `6060` for Windows and `50006` for Linux/MacOS applies.
#### Install with a specific container runtime
You can install the Dapr runtime using a specific container runtime
@ -228,7 +233,7 @@ For more details, see the docs for dev containers with [Visual Studio Code](http
### Uninstall Dapr in a standalone mode
Uninstalling will remove daprd binary and the placement container (if installed with Docker or the placement binary if not).
Uninstalling will remove the daprd binary along with the placement and scheduler containers (if installed with Docker), or the placement and scheduler binaries (if not).
```bash
@ -237,7 +242,7 @@ dapr uninstall
> For Linux users, if you run your docker cmds with sudo, you need to use "**sudo dapr uninstall**" to remove the containers.
The command above won't remove the redis or zipkin containers by default in case you were using it for other purposes. It will also not remove the default dapr folder that was created on `dapr init`. To remove all the containers (placement, redis, zipkin) and also the default dapr folder created on init run:
The command above won't remove the redis or zipkin containers by default, in case you are using them for other purposes. It will also not remove the default dapr folder that was created on `dapr init`. To remove all the containers (placement, scheduler, redis, zipkin) and also the default dapr folder created on init, run:
```bash
dapr uninstall --all
@ -245,7 +250,7 @@ dapr uninstall --all
The above command can also be run when Dapr has been installed in a non-docker environment, it will only remove the installed binaries and the default dapr folder in that case.
> NB: The `dapr uninstall` command will always try to remove the placement binary/service and will throw an error is not able to.
> NB: The `dapr uninstall` command will always try to remove the placement and scheduler binaries/services and will throw an error if it is not able to.
**You should always run a `dapr uninstall` before running another `dapr init`.**
@ -284,7 +289,7 @@ Output should look like as follows:
Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr/#install-with-helm-advanced
✅ Deploying the Dapr control plane to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://docs.dapr.io/getting-started
```
#### Supplying Helm values
@ -351,6 +356,8 @@ dapr upgrade -k --runtime-version=1.0.0
The example above shows how to upgrade from your current version to version `1.0.0`.
*Note: `dapr upgrade` will retry up to 5 times upon failure*
#### Supplying Helm values
All available [Helm Chart values](https://github.com/dapr/dapr/tree/master/charts/dapr#configuration) can be set by using the `--set` flag:
@ -405,7 +412,7 @@ dapr init --network dapr-network
dapr run --app-id nodeapp --placement-host-address dapr_placement node app.js
```
> Note: When in a specific Docker network, the Redis, Zipkin and placement service containers are given specific network aliases, `dapr_redis`, `dapr_zipkin` and `dapr_placement`, respectively. The default configuration files reflect the network alias rather than `localhost` when a docker network is specified.
> Note: When in a specific Docker network, the Redis, Zipkin, placement, and scheduler service containers are given specific network aliases: `dapr_redis`, `dapr_zipkin`, `dapr_placement`, and `dapr_scheduler`, respectively. The default configuration files reflect the network alias rather than `localhost` when a docker network is specified.
### Use gRPC
@ -742,7 +749,9 @@ See the [Reference Guide](https://docs.dapr.io/reference/cli/) for more informat
## Contributing to Dapr CLI
See the [Development Guide](https://github.com/dapr/cli/blob/master/docs/development/development.md) to get started with building and developing.
## Release process for Dapr CLI
See the [Release Guide](https://github.com/dapr/cli/blob/master/docs/development/release.md) for complete release process for Dapr CLI.
## Code of Conduct
Please refer to our [Dapr Community Code of Conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md)


@ -21,6 +21,11 @@ import (
"net/url"
"os"
"path/filepath"
"strconv"
"github.com/dapr/dapr/pkg/runtime"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/spf13/cobra"
@ -60,8 +65,8 @@ var (
annotateReadinessProbeThreshold int
annotateDaprImage string
annotateAppSSL bool
annotateMaxRequestBodySize int
annotateReadBufferSize int
annotateMaxRequestBodySize string
annotateReadBufferSize string
annotateHTTPStreamRequestBody bool
annotateGracefulShutdownSeconds int
annotateEnableAPILogging bool
@ -221,7 +226,6 @@ func readInputsFromFS(path string) ([]io.Reader, error) {
inputs = append(inputs, file)
return nil
})
if err != nil {
return nil, err
}
@ -319,12 +323,23 @@ func getOptionsFromFlags() kubernetes.AnnotateOptions {
if annotateAppSSL {
o = append(o, kubernetes.WithAppSSL())
}
if annotateMaxRequestBodySize != -1 {
o = append(o, kubernetes.WithMaxRequestBodySize(annotateMaxRequestBodySize))
if annotateMaxRequestBodySize != "-1" {
if q, err := resource.ParseQuantity(annotateMaxRequestBodySize); err != nil {
print.FailureStatusEvent(os.Stderr, "error parsing value of max-body-size: %s, error: %s", annotateMaxRequestBodySize, err.Error())
os.Exit(1)
} else {
o = append(o, kubernetes.WithMaxRequestBodySize(int(q.Value())))
}
}
if annotateReadBufferSize != -1 {
o = append(o, kubernetes.WithReadBufferSize(annotateReadBufferSize))
if annotateReadBufferSize != "-1" {
if q, err := resource.ParseQuantity(annotateReadBufferSize); err != nil {
print.FailureStatusEvent(os.Stderr, "error parsing value of read-buffer-size: %s, error: %s", annotateMaxRequestBodySize, err.Error())
os.Exit(1)
} else {
o = append(o, kubernetes.WithReadBufferSize(int(q.Value())))
}
}
if annotateHTTPStreamRequestBody {
o = append(o, kubernetes.WithHTTPStreamRequestBody())
}
@ -386,8 +401,8 @@ func init() {
AnnotateCmd.Flags().StringVar(&annotateDaprImage, "dapr-image", "", "The image to use for the dapr sidecar container")
AnnotateCmd.Flags().BoolVar(&annotateAppSSL, "app-ssl", false, "Enable SSL for the app")
AnnotateCmd.Flags().MarkDeprecated("app-ssl", "This flag is deprecated and will be removed in a future release. Use \"app-protocol\" flag with https or grpcs instead")
AnnotateCmd.Flags().IntVar(&annotateMaxRequestBodySize, "max-request-body-size", -1, "The maximum request body size to use")
AnnotateCmd.Flags().IntVar(&annotateReadBufferSize, "http-read-buffer-size", -1, "The maximum size of HTTP header read buffer in kilobytes")
AnnotateCmd.Flags().StringVar(&annotateMaxRequestBodySize, "max-body-size", strconv.Itoa(runtime.DefaultMaxRequestBodySize>>20)+"Mi", "The maximum request body size to use")
AnnotateCmd.Flags().StringVar(&annotateReadBufferSize, "read-buffer-size", strconv.Itoa(runtime.DefaultReadBufferSize>>10)+"Ki", "The maximum size of HTTP header read buffer in kilobytes")
AnnotateCmd.Flags().BoolVar(&annotateHTTPStreamRequestBody, "http-stream-request-body", false, "Enable streaming request body for HTTP")
AnnotateCmd.Flags().IntVar(&annotateGracefulShutdownSeconds, "graceful-shutdown-seconds", -1, "The number of seconds to wait for the app to shutdown")
AnnotateCmd.Flags().BoolVar(&annotateEnableAPILogging, "enable-api-logging", false, "Enable API logging for the Dapr sidecar")
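The switch from integer flags to string flags above hinges on `resource.ParseQuantity`, which accepts Kubernetes-style size suffixes such as `4Mi` (4 << 20 bytes) and `4Ki` (4 << 10 bytes), the defaults registered for `max-body-size` and `read-buffer-size`. A minimal standalone sketch of that parsing; the sample values are illustrative, only the `resource` package call mirrors the diff:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Any quantity the resource package understands is accepted,
	// including binary (Ki, Mi) and decimal (k, M) suffixes.
	for _, s := range []string{"4Mi", "4Ki", "16Mi", "250k"} {
		q, err := resource.ParseQuantity(s)
		if err != nil {
			fmt.Printf("invalid size %q: %v\n", s, err)
			continue
		}
		fmt.Printf("%s = %d bytes\n", s, q.Value())
	}
}
```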


@ -18,6 +18,7 @@ import (
"net"
"os"
"os/signal"
"strconv"
"github.com/pkg/browser"
"github.com/spf13/cobra"
@ -180,9 +181,9 @@ dapr dashboard -k -p 0
}()
// url for dashboard after port forwarding.
webURL := fmt.Sprintf("http://%s", net.JoinHostPort(dashboardHost, fmt.Sprint(portForward.LocalPort)))
webURL := "http://" + net.JoinHostPort(dashboardHost, strconv.Itoa(portForward.LocalPort))
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Dapr dashboard found in namespace:\t%s", foundNamespace))
print.InfoStatusEvent(os.Stdout, "Dapr dashboard found in namespace:\t"+foundNamespace)
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Dapr dashboard available at:\t%s\n", webURL))
err = browser.OpenURL(webURL)
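The dashboard change is representative of the perfsprint fixes across the CLI: single-verb `fmt.Sprintf`/`fmt.Sprint` calls are replaced with plain concatenation, `strconv.Itoa`, or `net.JoinHostPort`. A small sketch of the before/after pattern, with illustrative host and port values:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

func main() {
	host, port := "localhost", 8080

	// Before: fmt-based formatting for a simple host:port URL.
	before := fmt.Sprintf("http://%s", net.JoinHostPort(host, fmt.Sprint(port)))

	// After: concatenation plus strconv.Itoa, which is what the
	// perfsprint linter prefers for single-value formatting.
	after := "http://" + net.JoinHostPort(host, strconv.Itoa(port))

	fmt.Println(before == after, after) // true http://localhost:8080
}
```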


@ -33,6 +33,7 @@ var (
wait bool
timeout uint
slimMode bool
devMode bool
runtimeVersion string
dashboardVersion string
allNamespaces bool
@ -44,6 +45,7 @@ var (
fromDir string
containerRuntime string
imageVariant string
schedulerVolume string
)
var InitCmd = &cobra.Command{
@ -68,6 +70,9 @@ dapr init --image-registry <registry-url>
# Initialize Dapr in Kubernetes
dapr init -k
# Initialize Dapr in Kubernetes in dev mode
dapr init -k --dev
# Initialize Dapr in Kubernetes and wait for the installation to complete (default timeout is 300s/5m)
dapr init -k --wait --timeout 600
@ -127,6 +132,7 @@ dapr init --runtime-path <path-to-install-directory>
DashboardVersion: dashboardVersion,
EnableMTLS: enableMTLS,
EnableHA: enableHA,
EnableDev: devMode,
Args: values,
Wait: wait,
Timeout: timeout,
@ -141,7 +147,7 @@ dapr init --runtime-path <path-to-install-directory>
print.FailureStatusEvent(os.Stderr, err.Error())
os.Exit(1)
}
print.SuccessStatusEvent(os.Stdout, fmt.Sprintf("Success! Dapr has been installed to namespace %s. To verify, run `dapr status -k' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started", config.Namespace))
print.SuccessStatusEvent(os.Stdout, fmt.Sprintf("Success! Dapr has been installed to namespace %s. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started", config.Namespace))
} else {
dockerNetwork := ""
imageRegistryURI := ""
@ -165,12 +171,12 @@ dapr init --runtime-path <path-to-install-directory>
print.FailureStatusEvent(os.Stdout, "Invalid container runtime. Supported values are docker and podman.")
os.Exit(1)
}
err := standalone.Init(runtimeVersion, dashboardVersion, dockerNetwork, slimMode, imageRegistryURI, fromDir, containerRuntime, imageVariant, daprRuntimePath)
err := standalone.Init(runtimeVersion, dashboardVersion, dockerNetwork, slimMode, imageRegistryURI, fromDir, containerRuntime, imageVariant, daprRuntimePath, &schedulerVolume)
if err != nil {
print.FailureStatusEvent(os.Stderr, err.Error())
os.Exit(1)
}
print.SuccessStatusEvent(os.Stdout, "Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started")
print.SuccessStatusEvent(os.Stdout, "Success! Dapr is up and running. To get started, go here: https://docs.dapr.io/getting-started")
}
},
}
@ -202,9 +208,10 @@ func init() {
defaultContainerRuntime := string(utils.DOCKER)
InitCmd.Flags().BoolVarP(&kubernetesMode, "kubernetes", "k", false, "Deploy Dapr to a Kubernetes cluster")
InitCmd.Flags().BoolVarP(&devMode, "dev", "", false, "Use Dev mode. Deploy Redis, Zipkin also in the Kubernetes cluster")
InitCmd.Flags().BoolVarP(&wait, "wait", "", false, "Wait for Kubernetes initialization to complete")
InitCmd.Flags().UintVarP(&timeout, "timeout", "", 300, "The wait timeout for the Kubernetes installation")
InitCmd.Flags().BoolVarP(&slimMode, "slim", "s", false, "Exclude placement service, Redis and Zipkin containers from self-hosted installation")
InitCmd.Flags().BoolVarP(&slimMode, "slim", "s", false, "Exclude placement service, scheduler service, Redis and Zipkin containers from self-hosted installation")
InitCmd.Flags().StringVarP(&runtimeVersion, "runtime-version", "", defaultRuntimeVersion, "The version of the Dapr runtime to install, for example: 1.0.0")
InitCmd.Flags().StringVarP(&dashboardVersion, "dashboard-version", "", defaultDashboardVersion, "The version of the Dapr dashboard to install, for example: 0.13.0")
InitCmd.Flags().StringVarP(&initNamespace, "namespace", "n", "dapr-system", "The Kubernetes namespace to install Dapr in")
@ -213,6 +220,7 @@ func init() {
InitCmd.Flags().String("network", "", "The Docker network on which to deploy the Dapr runtime")
InitCmd.Flags().StringVarP(&fromDir, "from-dir", "", "", "Use Dapr artifacts from local directory for self-hosted installation")
InitCmd.Flags().StringVarP(&imageVariant, "image-variant", "", "", "The image variant to use for the Dapr runtime, for example: mariner")
InitCmd.Flags().StringVarP(&schedulerVolume, "scheduler-volume", "", "dapr_scheduler", "Self-hosted only. Specify a volume for the scheduler service data directory.")
InitCmd.Flags().BoolP("help", "h", false, "Print this help message")
InitCmd.Flags().StringArrayVar(&values, "set", []string{}, "set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)")
InitCmd.Flags().String("image-registry", "", "Custom/private docker image repository URL")


@ -67,7 +67,7 @@ dapr mtls export -o ./certs
}
dir, _ := filepath.Abs(exportPath)
print.SuccessStatusEvent(os.Stdout, fmt.Sprintf("Trust certs successfully exported to %s", dir))
print.SuccessStatusEvent(os.Stdout, "Trust certs successfully exported to "+dir)
},
PostRun: func(cmd *cobra.Command, args []string) {
kubernetes.CheckForCertExpiry()


@ -14,9 +14,11 @@ limitations under the License.
package cmd
import (
"errors"
"fmt"
"os"
"strings"
"sync"
"time"
"github.com/spf13/cobra"
@ -101,7 +103,7 @@ dapr mtls renew-cert -k --valid-until <no of days> --restart
print.InfoStatusEvent(os.Stdout, "Using password file to generate root certificate")
err = kubernetes.RenewCertificate(kubernetes.RenewCertificateParams{
RootPrivateKeyFilePath: privateKey,
ValidUntil: time.Hour * time.Duration(validUntil*24),
ValidUntil: time.Hour * time.Duration(validUntil*24), //nolint:gosec
Timeout: timeout,
ImageVariant: imageVariant,
})
@ -111,7 +113,7 @@ dapr mtls renew-cert -k --valid-until <no of days> --restart
} else {
print.InfoStatusEvent(os.Stdout, "generating fresh certificates")
err = kubernetes.RenewCertificate(kubernetes.RenewCertificateParams{
ValidUntil: time.Hour * time.Duration(validUntil*24),
ValidUntil: time.Hour * time.Duration(validUntil*24), //nolint:gosec
Timeout: timeout,
ImageVariant: imageVariant,
})
@ -127,7 +129,7 @@ dapr mtls renew-cert -k --valid-until <no of days> --restart
logErrorAndExit(err)
}
print.SuccessStatusEvent(os.Stdout,
fmt.Sprintf("Certificate rotation is successful! Your new certicate is valid through %s", expiry.Format(time.RFC1123)))
"Certificate rotation is successful! Your new certicate is valid through "+expiry.Format(time.RFC1123))
if restartDaprServices {
restartControlPlaneService()
@ -168,22 +170,43 @@ func logErrorAndExit(err error) {
}
func restartControlPlaneService() error {
controlPlaneServices := []string{"deploy/dapr-sentry", "deploy/dapr-operator", "statefulsets/dapr-placement-server"}
controlPlaneServices := []string{
"deploy/dapr-sentry",
"deploy/dapr-sidecar-injector",
"deploy/dapr-operator",
"statefulsets/dapr-placement-server",
}
namespace, err := kubernetes.GetDaprNamespace()
if err != nil {
print.FailureStatusEvent(os.Stdout, "Failed to fetch Dapr namespace")
}
for _, name := range controlPlaneServices {
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Restarting %s..", name))
_, err := utils.RunCmdAndWait("kubectl", "rollout", "restart", name, "-n", namespace)
if err != nil {
return fmt.Errorf("error in restarting deployment %s. Error is %w", name, err)
}
_, err = utils.RunCmdAndWait("kubectl", "rollout", "status", name, "-n", namespace)
if err != nil {
return fmt.Errorf("error in checking status for deployment %s. Error is %w", name, err)
}
errs := make([]error, len(controlPlaneServices))
var wg sync.WaitGroup
wg.Add(len(controlPlaneServices))
for i, name := range controlPlaneServices {
go func(i int, name string) {
defer wg.Done()
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Restarting %s..", name))
_, err := utils.RunCmdAndWait("kubectl", "rollout", "restart", "-n", namespace, name)
if err != nil {
errs[i] = fmt.Errorf("error in restarting deployment %s. Error is %w", name, err)
return
}
_, err = utils.RunCmdAndWait("kubectl", "rollout", "status", "-n", namespace, name)
if err != nil {
errs[i] = fmt.Errorf("error in checking status for deployment %s. Error is %w", name, err)
return
}
}(i, name)
}
wg.Wait()
if err := errors.Join(errs...); err != nil {
return err
}
print.SuccessStatusEvent(os.Stdout, "All control plane services have restarted successfully!")
return nil
}
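
The rewritten `restartControlPlaneService` above fans the `kubectl rollout` calls out across goroutines, gives each service its own error slot, and folds the results together with `errors.Join`. Below is a minimal, self-contained sketch of that fan-out/collect pattern; `restartService` is a hypothetical stand-in for the real kubectl invocations, not the CLI's actual helper.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// restartService stands in for the real "kubectl rollout restart/status"
// calls; it exists only to illustrate the concurrency pattern.
func restartService(name string) error {
	if name == "statefulsets/dapr-placement-server" {
		return fmt.Errorf("simulated failure for %s", name)
	}
	return nil
}

func main() {
	services := []string{
		"deploy/dapr-sentry",
		"deploy/dapr-sidecar-injector",
		"deploy/dapr-operator",
		"statefulsets/dapr-placement-server",
	}

	// One error slot per service, so goroutines never write to the same index.
	errs := make([]error, len(services))
	var wg sync.WaitGroup
	wg.Add(len(services))
	for i, name := range services {
		go func(i int, name string) {
			defer wg.Done()
			errs[i] = restartService(name)
		}(i, name)
	}
	wg.Wait()

	// errors.Join drops nil entries and combines the rest into a single error.
	if err := errors.Join(errs...); err != nil {
		fmt.Println("restart finished with errors:", err)
		return
	}
	fmt.Println("all control plane services restarted")
}
```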


@ -20,50 +20,59 @@ import (
"os"
"path/filepath"
"runtime"
"slices"
"strconv"
"strings"
"time"
"golang.org/x/mod/semver"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/spf13/viper"
daprRuntime "github.com/dapr/dapr/pkg/runtime"
"github.com/dapr/cli/pkg/kubernetes"
"github.com/dapr/cli/pkg/metadata"
"github.com/dapr/cli/pkg/print"
runExec "github.com/dapr/cli/pkg/runexec"
"github.com/dapr/cli/pkg/runfileconfig"
"github.com/dapr/cli/pkg/standalone"
"github.com/dapr/cli/pkg/standalone/runfileconfig"
daprsyscall "github.com/dapr/cli/pkg/syscall"
"github.com/dapr/cli/utils"
)
var (
appPort int
profilePort int
appID string
configFile string
port int
grpcPort int
internalGRPCPort int
maxConcurrency int
enableProfiling bool
logLevel string
protocol string
componentsPath string
resourcesPaths []string
appSSL bool
metricsPort int
maxRequestBodySize int
readBufferSize int
unixDomainSocket string
enableAppHealth bool
appHealthPath string
appHealthInterval int
appHealthTimeout int
appHealthThreshold int
enableAPILogging bool
apiListenAddresses string
runFilePath string
appChannelAddress string
appPort int
profilePort int
appID string
configFile string
port int
grpcPort int
internalGRPCPort int
maxConcurrency int
enableProfiling bool
logLevel string
protocol string
componentsPath string
resourcesPaths []string
appSSL bool
metricsPort int
maxRequestBodySize string
readBufferSize string
unixDomainSocket string
enableAppHealth bool
appHealthPath string
appHealthInterval int
appHealthTimeout int
appHealthThreshold int
enableAPILogging bool
apiListenAddresses string
schedulerHostAddress string
runFilePath string
appChannelAddress string
enableRunK8s bool
)
const (
@ -71,6 +80,15 @@ const (
runtimeWaitTimeoutInSeconds = 60
)
// Flags that are compatible with --run-file
var runFileCompatibleFlags = []string{
"kubernetes",
"help",
"version",
"runtime-path",
"log-as-json",
}
var RunCmd = &cobra.Command{
Use: "run",
Short: "Run Dapr and (optionally) your application side by side. Supported platforms: Self-hosted",
@ -105,6 +123,15 @@ dapr run --run-file dapr.yaml
# Run multiple apps by providing a directory path containing the run config file(dapr.yaml)
dapr run --run-file /path/to/directory
# Run multiple apps by providing config via stdin
cat dapr.template.yaml | envsubst | dapr run --run-file -
# Run multiple apps in Kubernetes by providing the path of a run config file
dapr run --run-file dapr.yaml -k
# Run multiple apps in Kubernetes by providing a directory path containing the run config file(dapr.yaml)
dapr run --run-file /path/to/directory -k
`,
Args: cobra.MinimumNArgs(0),
PreRun: func(cmd *cobra.Command, args []string) {
@ -112,16 +139,20 @@ dapr run --run-file /path/to/directory
},
Run: func(cmd *cobra.Command, args []string) {
if len(runFilePath) > 0 {
if runtime.GOOS == string(windowsOsType) {
print.FailureStatusEvent(os.Stderr, "The run command with run file is not supported on Windows")
os.Exit(1)
// Check for incompatible flags
incompatibleFlags := detectIncompatibleFlags(cmd)
if len(incompatibleFlags) > 0 {
// Print warning message about incompatible flags
warningMsg := "The following flags are ignored when using --run-file and should be configured in the run file instead: " + strings.Join(incompatibleFlags, ", ")
print.WarningStatusEvent(os.Stdout, warningMsg)
}
runConfigFilePath, err := getRunFilePath(runFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Failed to get run file path: %v", err)
os.Exit(1)
}
executeRunWithAppsConfigFile(runConfigFilePath)
executeRunWithAppsConfigFile(runConfigFilePath, enableRunK8s)
return
}
if len(args) == 0 {
@ -158,25 +189,26 @@ dapr run --run-file /path/to/directory
}
sharedRunConfig := &standalone.SharedRunConfig{
ConfigFile: configFile,
EnableProfiling: enableProfiling,
LogLevel: logLevel,
MaxConcurrency: maxConcurrency,
AppProtocol: protocol,
PlacementHostAddr: viper.GetString("placement-host-address"),
ComponentsPath: componentsPath,
ResourcesPaths: resourcesPaths,
AppSSL: appSSL,
MaxRequestBodySize: maxRequestBodySize,
HTTPReadBufferSize: readBufferSize,
EnableAppHealth: enableAppHealth,
AppHealthPath: appHealthPath,
AppHealthInterval: appHealthInterval,
AppHealthTimeout: appHealthTimeout,
AppHealthThreshold: appHealthThreshold,
EnableAPILogging: enableAPILogging,
APIListenAddresses: apiListenAddresses,
DaprdInstallPath: daprRuntimePath,
ConfigFile: configFile,
EnableProfiling: enableProfiling,
LogLevel: logLevel,
MaxConcurrency: maxConcurrency,
AppProtocol: protocol,
PlacementHostAddr: viper.GetString("placement-host-address"),
ComponentsPath: componentsPath,
ResourcesPaths: resourcesPaths,
AppSSL: appSSL,
MaxRequestBodySize: maxRequestBodySize,
HTTPReadBufferSize: readBufferSize,
EnableAppHealth: enableAppHealth,
AppHealthPath: appHealthPath,
AppHealthInterval: appHealthInterval,
AppHealthTimeout: appHealthTimeout,
AppHealthThreshold: appHealthThreshold,
EnableAPILogging: enableAPILogging,
APIListenAddresses: apiListenAddresses,
SchedulerHostAddress: schedulerHostAddress,
DaprdInstallPath: daprRuntimePath,
}
output, err := runExec.NewOutput(&standalone.RunConfig{
AppID: appID,
@ -218,6 +250,15 @@ dapr run --run-file /path/to/directory
output.DaprHTTPPort,
output.DaprGRPCPort)
}
if (daprVer.RuntimeVersion != "edge") && (semver.Compare(fmt.Sprintf("v%v", daprVer.RuntimeVersion), "v1.14.0-rc.1") == -1) {
print.InfoStatusEvent(os.Stdout, "The scheduler is only compatible with dapr runtime 1.14 onwards.")
for i, arg := range output.DaprCMD.Args {
if strings.HasPrefix(arg, "--scheduler-host-address") {
output.DaprCMD.Args[i] = ""
}
}
}
print.InfoStatusEvent(os.Stdout, startInfo)
output.DaprCMD.Stdout = os.Stdout
@ -299,14 +340,14 @@ dapr run --run-file /path/to/directory
stdErrPipe, pipeErr := output.AppCMD.StderrPipe()
if pipeErr != nil {
print.FailureStatusEvent(os.Stderr, fmt.Sprintf("Error creating stderr for App: %s", err.Error()))
print.FailureStatusEvent(os.Stderr, "Error creating stderr for App: "+err.Error())
appRunning <- false
return
}
stdOutPipe, pipeErr := output.AppCMD.StdoutPipe()
if pipeErr != nil {
print.FailureStatusEvent(os.Stderr, fmt.Sprintf("Error creating stdout for App: %s", err.Error()))
print.FailureStatusEvent(os.Stderr, "Error creating stdout for App: "+err.Error())
appRunning <- false
return
}
@ -315,13 +356,13 @@ dapr run --run-file /path/to/directory
outScanner := bufio.NewScanner(stdOutPipe)
go func() {
for errScanner.Scan() {
fmt.Println(print.Blue(fmt.Sprintf("== APP == %s", errScanner.Text())))
fmt.Println(print.Blue("== APP == " + errScanner.Text()))
}
}()
go func() {
for outScanner.Scan() {
fmt.Println(print.Blue(fmt.Sprintf("== APP == %s", outScanner.Text())))
fmt.Println(print.Blue("== APP == " + outScanner.Text()))
}
}()
@ -375,7 +416,7 @@ dapr run --run-file /path/to/directory
}
appCommand := strings.Join(args, " ")
print.InfoStatusEvent(os.Stdout, fmt.Sprintf("Updating metadata for app command: %s", appCommand))
print.InfoStatusEvent(os.Stdout, "Updating metadata for app command: "+appCommand)
err = metadata.Put(output.DaprHTTPPort, "appCommand", appCommand, output.AppID, unixDomainSocket)
if err != nil {
print.WarningStatusEvent(os.Stdout, "Could not update sidecar metadata for appCommand: %s", err.Error())
@ -447,13 +488,14 @@ func init() {
// By marking this as deprecated, the flag will be hidden from the help menu, but will continue to work. It will show a warning message when used.
RunCmd.Flags().MarkDeprecated("components-path", "This flag is deprecated and will be removed in the future releases. Use \"resources-path\" flag instead")
RunCmd.Flags().String("placement-host-address", "localhost", "The address of the placement service. Format is either <hostname> for default port or <hostname>:<port> for custom port")
RunCmd.Flags().StringVarP(&schedulerHostAddress, "scheduler-host-address", "", "localhost", "The address of the scheduler service. Format is either <hostname> for default port or <hostname>:<port> for custom port")
// TODO: Remove below flag once the flag is removed in runtime in future release.
RunCmd.Flags().BoolVar(&appSSL, "app-ssl", false, "Enable https when Dapr invokes the application")
RunCmd.Flags().MarkDeprecated("app-ssl", "This flag is deprecated and will be removed in the future releases. Use \"app-protocol\" flag with https or grpcs values instead")
RunCmd.Flags().IntVarP(&metricsPort, "metrics-port", "M", -1, "The port of metrics on dapr")
RunCmd.Flags().BoolP("help", "h", false, "Print this help message")
RunCmd.Flags().IntVarP(&maxRequestBodySize, "dapr-http-max-request-size", "", -1, "Max size of request body in MB")
RunCmd.Flags().IntVarP(&readBufferSize, "dapr-http-read-buffer-size", "", -1, "HTTP header read buffer in KB")
RunCmd.Flags().StringVarP(&maxRequestBodySize, "max-body-size", "", strconv.Itoa(daprRuntime.DefaultMaxRequestBodySize>>20)+"Mi", "Max size of request body in MB")
RunCmd.Flags().StringVarP(&readBufferSize, "read-buffer-size", "", strconv.Itoa(daprRuntime.DefaultReadBufferSize>>10)+"Ki", "HTTP header read buffer in KB")
RunCmd.Flags().StringVarP(&unixDomainSocket, "unix-domain-socket", "u", "", "Path to a unix domain socket dir. If specified, Dapr API servers will use Unix Domain Sockets")
RunCmd.Flags().BoolVar(&enableAppHealth, "enable-app-health-check", false, "Enable health checks for the application using the protocol defined with app-protocol")
RunCmd.Flags().StringVar(&appHealthPath, "app-health-check-path", "", "Path used for health checks; HTTP only")
@ -461,6 +503,7 @@ func init() {
RunCmd.Flags().IntVar(&appHealthTimeout, "app-health-probe-timeout", 0, "Timeout for app health probes in milliseconds")
RunCmd.Flags().IntVar(&appHealthThreshold, "app-health-threshold", 0, "Number of consecutive failures for the app to be considered unhealthy")
RunCmd.Flags().BoolVar(&enableAPILogging, "enable-api-logging", false, "Log API calls at INFO verbosity. Valid values are: true or false")
RunCmd.Flags().BoolVarP(&enableRunK8s, "kubernetes", "k", false, "Run the multi-app run template against Kubernetes environment.")
RunCmd.Flags().StringVar(&apiListenAddresses, "dapr-listen-addresses", "", "Comma separated list of IP addresses that sidecar will listen to")
RunCmd.Flags().StringVarP(&runFilePath, "run-file", "f", "", "Path to the run template file for the list of apps to run")
RunCmd.Flags().StringVarP(&appChannelAddress, "app-channel-address", "", utils.DefaultAppChannelAddress, "The network address the application listens on")
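
The new flag defaults above are derived from the runtime's byte constants: shifting right by 20 (or 10) converts bytes to MiB (or KiB), and the `Mi`/`Ki` suffix is appended to form values such as `4Mi`. The sketch below illustrates that conversion and how a suffixed value maps back to bytes. `resource.ParseQuantity` from `k8s.io/apimachinery` (already a dependency in `go.mod`) is used here purely as an example parser; it is not necessarily the parser the CLI or runtime use.

```go
package main

import (
	"fmt"
	"strconv"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A default of 4 MiB expressed in bytes, as a runtime constant might be.
	const defaultMaxBodySizeBytes = 4 << 20

	// Shifting by 20 converts bytes to MiB; appending "Mi" yields the flag default, e.g. "4Mi".
	flagDefault := strconv.Itoa(defaultMaxBodySizeBytes>>20) + "Mi"
	fmt.Println("flag default:", flagDefault)

	// Parsing a user-supplied value such as "8Mi" or "512Ki" back to bytes,
	// using Kubernetes-style quantities only as an illustration of the format.
	q, err := resource.ParseQuantity("8Mi")
	if err != nil {
		panic(err)
	}
	fmt.Println("8Mi =", q.Value(), "bytes") // 8388608
}
```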
@ -481,13 +524,13 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
// This is done to provide a better grouping, which can be used to control all the processes started by "dapr run -f".
daprsyscall.CreateProcessGroupID()
print.WarningStatusEvent(os.Stdout, "This is a preview feature and subject to change in future releases.")
for _, app := range apps {
print.StatusEvent(os.Stdout, print.LogInfo, "Validating config and starting app %q", app.RunConfig.AppID)
// Set defaults if zero value provided in config yaml.
app.RunConfig.SetDefaultFromSchema()
app.RunConfig.SchedulerHostAddress = validateSchedulerHostAddress(daprVer.RuntimeVersion, app.RunConfig.SchedulerHostAddress)
// Validate validates the configs and modifies the ports to free ports, appId etc.
err := app.RunConfig.Validate()
if err != nil {
@ -504,6 +547,7 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
exitWithError = true
break
}
// Combined multiwriter for logs.
var appDaprdWriter io.Writer
// appLogWriter is used when app command is present.
@ -511,11 +555,11 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
// A custom writer used for trimming ASCII color codes from logs when writing to files.
var customAppLogWriter io.Writer
daprdLogWriterCloser := getLogWriter(app.DaprdLogWriteCloser, app.DaprdLogDestination)
daprdLogWriterCloser := runfileconfig.GetLogWriter(app.DaprdLogWriteCloser, app.DaprdLogDestination)
if len(runConfig.Command) == 0 {
print.StatusEvent(os.Stdout, print.LogWarning, "No application command found for app %q present in %s", runConfig.AppID, runFilePath)
appDaprdWriter = getAppDaprdWriter(app, true)
appDaprdWriter = runExec.GetAppDaprdWriter(app, true)
appLogWriter = app.DaprdLogWriteCloser
} else {
err = app.CreateAppLogFile()
@ -524,8 +568,8 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
exitWithError = true
break
}
appDaprdWriter = getAppDaprdWriter(app, false)
appLogWriter = getLogWriter(app.AppLogWriteCloser, app.AppLogDestination)
appDaprdWriter = runExec.GetAppDaprdWriter(app, false)
appLogWriter = runfileconfig.GetLogWriter(app.AppLogWriteCloser, app.AppLogDestination)
}
customAppLogWriter = print.CustomLogWriter{W: appLogWriter}
runState, err := startDaprdAndAppProcesses(&runConfig, app.AppDirPath, sigCh,
@ -547,11 +591,23 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
// Update extended metadata with run file path.
putRunTemplateNameInMeta(runState, runTemplateName)
// Update extended metadata with app log file path.
if app.AppLogDestination != standalone.Console {
putAppLogFilePathInMeta(runState, app.AppLogFileName)
}
// Update extended metadata with daprd log file path.
if app.DaprdLogDestination != standalone.Console {
putDaprLogFilePathInMeta(runState, app.DaprdLogFileName)
}
if runState.AppCMD.Command != nil {
putAppCommandInMeta(runConfig, runState)
if runState.AppCMD.Command.Process != nil {
putAppProcessIDInMeta(runState)
// Attach a windows job object to the app process.
utils.AttachJobObjectToProcess(strconv.Itoa(os.Getpid()), runState.AppCMD.Command.Process)
}
}
@ -582,43 +638,6 @@ func executeRun(runTemplateName, runFilePath string, apps []runfileconfig.App) (
return exitWithError, closeError
}
// getAppDaprdWriter returns the writer for writing logs common to both daprd, app and stdout.
func getAppDaprdWriter(app runfileconfig.App, isAppCommandEmpty bool) io.Writer {
var appDaprdWriter io.Writer
if isAppCommandEmpty {
if app.DaprdLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(os.Stdout, app.DaprdLogWriteCloser)
} else {
appDaprdWriter = os.Stdout
}
} else {
if app.AppLogDestination != standalone.Console && app.DaprdLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(app.AppLogWriteCloser, app.DaprdLogWriteCloser, os.Stdout)
} else if app.AppLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(app.AppLogWriteCloser, os.Stdout)
} else if app.DaprdLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(app.DaprdLogWriteCloser, os.Stdout)
} else {
appDaprdWriter = os.Stdout
}
}
return appDaprdWriter
}
// getLogWriter returns the log writer based on the log destination.
func getLogWriter(fileLogWriterCloser io.WriteCloser, logDestination standalone.LogDestType) io.Writer {
var logWriter io.Writer
switch logDestination {
case standalone.Console:
logWriter = os.Stdout
case standalone.File:
logWriter = fileLogWriterCloser
case standalone.FileAndConsole:
logWriter = io.MultiWriter(os.Stdout, fileLogWriterCloser)
}
return logWriter
}
func logInformationalStatusToStdout(app runfileconfig.App) {
print.InfoStatusEvent(os.Stdout, "Started Dapr with app id %q. HTTP Port: %d. gRPC Port: %d",
app.AppID, app.RunConfig.HTTPPort, app.RunConfig.GRPCPort)
@ -644,9 +663,8 @@ func gracefullyShutdownAppsAndCloseResources(runState []*runExec.RunExec, apps [
return err
}
func executeRunWithAppsConfigFile(runFilePath string) {
config := runfileconfig.RunFileConfig{}
apps, err := config.GetApps(runFilePath)
func executeRunWithAppsConfigFile(runFilePath string, k8sEnabled bool) {
config, apps, err := getRunConfigFromRunFile(runFilePath)
if err != nil {
print.StatusEvent(os.Stdout, print.LogFailure, "Error getting apps from config file: %s", err)
os.Exit(1)
@ -655,7 +673,13 @@ func executeRunWithAppsConfigFile(runFilePath string) {
print.StatusEvent(os.Stdout, print.LogFailure, "No apps to run")
os.Exit(1)
}
exitWithError, closeErr := executeRun(config.Name, runFilePath, apps)
var exitWithError bool
var closeErr error
if !k8sEnabled {
exitWithError, closeErr = executeRun(config.Name, runFilePath, apps)
} else {
exitWithError, closeErr = kubernetes.Run(runFilePath, config)
}
if exitWithError {
if closeErr != nil {
print.StatusEvent(os.Stdout, print.LogFailure, "Error closing resources: %s", closeErr)
@ -664,6 +688,23 @@ func executeRunWithAppsConfigFile(runFilePath string) {
}
}
// populate the scheduler host address based on the dapr version.
func validateSchedulerHostAddress(version, address string) string {
// If no SchedulerHostAddress is supplied, set it to default value.
if semver.Compare(fmt.Sprintf("v%v", version), "v1.15.0-rc.0") == 1 {
if address == "" {
return "localhost"
}
}
return address
}
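
`validateSchedulerHostAddress` gates the default on `golang.org/x/mod/semver`, which requires a leading `v` (hence the `fmt.Sprintf("v%v", version)` wrapping) and ranks pre-release tags below the release they precede. A small illustrative sketch of the comparison semantics the gate relies on:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	// semver.Compare needs the "v" prefix on both operands.
	fmt.Println(semver.Compare("v1.15.1", "v1.15.0-rc.0"))      // 1: newer than the gate, default address applies
	fmt.Println(semver.Compare("v1.15.0-rc.0", "v1.15.0-rc.0")) // 0: equal, gate not passed
	fmt.Println(semver.Compare("v1.14.0", "v1.15.0-rc.0"))      // -1: older, address left as-is

	// Pre-releases sort below the release they precede.
	fmt.Println(semver.Compare("v1.15.0-rc.0", "v1.15.0")) // -1
}
```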
func getRunConfigFromRunFile(runFilePath string) (runfileconfig.RunFileConfig, []runfileconfig.App, error) {
config := runfileconfig.RunFileConfig{}
apps, err := config.GetApps(runFilePath)
return config, apps, err
}
// startDaprdAndAppProcesses is a function to start the App process and the associated Daprd process.
// This should be called as a blocking function call.
func startDaprdAndAppProcesses(runConfig *standalone.RunConfig, commandDir string, sigCh chan os.Signal,
@ -999,10 +1040,29 @@ func putRunTemplateNameInMeta(runE *runExec.RunExec, runTemplateName string) {
}
}
// putAppLogFilePathInMeta puts the absolute path of app log file in metadata so that it can be used by the CLI to stop the app.
func putAppLogFilePathInMeta(runE *runExec.RunExec, appLogFilePath string) {
err := metadata.Put(runE.DaprHTTPPort, "appLogPath", appLogFilePath, runE.AppID, unixDomainSocket)
if err != nil {
print.StatusEvent(runE.DaprCMD.OutputWriter, print.LogWarning, "Could not update sidecar metadata for app log file path: %s", err.Error())
}
}
// putDaprLogFilePathInMeta puts the absolute path of Dapr log file in metadata so that it can be used by the CLI to stop the app.
func putDaprLogFilePathInMeta(runE *runExec.RunExec, daprLogFilePath string) {
err := metadata.Put(runE.DaprHTTPPort, "daprdLogPath", daprLogFilePath, runE.AppID, unixDomainSocket)
if err != nil {
print.StatusEvent(runE.DaprCMD.OutputWriter, print.LogWarning, "Could not update sidecar metadata for dapr log file path: %s", err.Error())
}
}
// getRunFilePath returns the path to the run file.
// If the provided path is a path to a YAML file then return the same.
// Else it returns the path of "dapr.yaml" in the provided directory.
func getRunFilePath(path string) (string, error) {
if path == "-" {
return path, nil // will be read from stdin later.
}
fileInfo, err := os.Stat(path)
if err != nil {
return "", fmt.Errorf("error getting file info for %s: %w", path, err)
@ -1020,3 +1080,26 @@ func getRunFilePath(path string) (string, error) {
}
return path, nil
}
// getConflictingFlags checks if any flags are set other than the ones passed in the excludedFlags slice.
// Used for logic or notifications when any of the flags are conflicting and should not be used together.
func getConflictingFlags(cmd *cobra.Command, excludedFlags ...string) []string {
var conflictingFlags []string
cmd.Flags().Visit(func(f *pflag.Flag) {
if !slices.Contains(excludedFlags, f.Name) {
conflictingFlags = append(conflictingFlags, f.Name)
}
})
return conflictingFlags
}
// detectIncompatibleFlags checks if any incompatible flags are used with --run-file
// and returns a slice of the flag names that were used
func detectIncompatibleFlags(cmd *cobra.Command) []string {
if runFilePath == "" {
return nil // No run file specified, so no incompatibilities
}
// Get all flags that are not in the compatible list
return getConflictingFlags(cmd, append(runFileCompatibleFlags, "run-file")...)
}
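
`getConflictingFlags` builds on pflag's `Visit`, which walks only the flags that were explicitly set on the command line, so untouched defaults never count as conflicts. Below is a minimal sketch of that behavior with a bare `FlagSet`; the flag names and the compatible list mirror the ones above and are illustrative only.

```go
package main

import (
	"fmt"
	"slices"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("demo", pflag.ContinueOnError)
	fs.String("app-id", "", "")
	fs.String("dapr-http-port", "", "")
	fs.Bool("kubernetes", false, "")

	// Simulate: dapr run --run-file dapr.yaml --app-id myapp -k
	_ = fs.Set("app-id", "myapp")
	_ = fs.Set("kubernetes", "true")

	compatible := []string{"kubernetes", "help", "version", "runtime-path", "log-as-json", "run-file"}

	// Visit only walks flags whose values were explicitly set, so the
	// untouched --dapr-http-port never shows up here.
	var conflicting []string
	fs.Visit(func(f *pflag.Flag) {
		if !slices.Contains(compatible, f.Name) {
			conflicting = append(conflicting, f.Name)
		}
	})
	fmt.Println(conflicting) // [app-id]
}
```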

cmd/run_test.go (new file, 78 lines)

@ -0,0 +1,78 @@
package cmd
import (
"testing"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
)
func TestValidateSchedulerHostAddress(t *testing.T) {
t.Run("test scheduler host address - v1.14.0-rc.0", func(t *testing.T) {
address := validateSchedulerHostAddress("1.14.0-rc.0", "")
assert.Equal(t, "", address)
})
t.Run("test scheduler host address - v1.15.0-rc.0", func(t *testing.T) {
address := validateSchedulerHostAddress("1.15.0", "")
assert.Equal(t, "localhost:50006", address)
})
}
func TestDetectIncompatibleFlags(t *testing.T) {
// Setup a temporary run file path to trigger the incompatible flag check
originalRunFilePath := runFilePath
runFilePath = "some/path"
defer func() {
// Restore the original runFilePath
runFilePath = originalRunFilePath
}()
t.Run("detect incompatible flags", func(t *testing.T) {
// Create a test command with flags
cmd := &cobra.Command{Use: "test"}
cmd.Flags().String("app-id", "", "")
cmd.Flags().String("dapr-http-port", "", "")
cmd.Flags().String("kubernetes", "", "") // Compatible flag
cmd.Flags().String("runtime-path", "", "") // Compatible flag
cmd.Flags().String("log-as-json", "", "") // Compatible flag
// Mark flags as changed
cmd.Flags().Set("app-id", "myapp")
cmd.Flags().Set("dapr-http-port", "3500")
cmd.Flags().Set("kubernetes", "true")
cmd.Flags().Set("runtime-path", "/path/to/runtime")
cmd.Flags().Set("log-as-json", "true")
// Test detection
incompatibleFlags := detectIncompatibleFlags(cmd)
assert.Len(t, incompatibleFlags, 2)
assert.Contains(t, incompatibleFlags, "app-id")
assert.Contains(t, incompatibleFlags, "dapr-http-port")
assert.NotContains(t, incompatibleFlags, "kubernetes")
assert.NotContains(t, incompatibleFlags, "runtime-path")
assert.NotContains(t, incompatibleFlags, "log-as-json")
})
t.Run("no incompatible flags when run file not specified", func(t *testing.T) {
// Create a test command with flags
cmd := &cobra.Command{Use: "test"}
cmd.Flags().String("app-id", "", "")
cmd.Flags().String("dapr-http-port", "", "")
// Mark flags as changed
cmd.Flags().Set("app-id", "myapp")
cmd.Flags().Set("dapr-http-port", "3500")
// Temporarily clear runFilePath
originalRunFilePath := runFilePath
runFilePath = ""
defer func() {
runFilePath = originalRunFilePath
}()
// Test detection
incompatibleFlags := detectIncompatibleFlags(cmd)
assert.Nil(t, incompatibleFlags)
})
}


@ -17,15 +17,18 @@ import (
"fmt"
"os"
"path/filepath"
"runtime"
"github.com/spf13/cobra"
"github.com/dapr/cli/pkg/kubernetes"
"github.com/dapr/cli/pkg/print"
"github.com/dapr/cli/pkg/standalone"
)
var stopAppID string
var (
stopAppID string
stopK8s bool
)
var StopCmd = &cobra.Command{
Use: "stop",
@ -39,26 +42,38 @@ dapr stop --run-file dapr.yaml
# Stop multiple apps by providing a directory path containing the run config file(dapr.yaml)
dapr stop --run-file /path/to/directory
# Stop and delete Kubernetes deployment of multiple apps by providing a run config file
dapr stop --run-file dapr.yaml -k
# Stop and delete Kubernetes deployment of multiple apps by providing a directory path containing the run config file(dapr.yaml)
dapr stop --run-file /path/to/directory -k
`,
Run: func(cmd *cobra.Command, args []string) {
var err error
if len(runFilePath) > 0 {
if runtime.GOOS == string(windowsOsType) {
print.FailureStatusEvent(os.Stderr, "Stop command with run file is not supported on Windows")
os.Exit(1)
}
runFilePath, err = getRunFilePath(runFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Failed to get run file path: %v", err)
os.Exit(1)
}
err = executeStopWithRunFile(runFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Failed to stop Dapr and app processes: %s", err)
} else {
print.SuccessStatusEvent(os.Stdout, "Dapr and app processes stopped successfully")
if !stopK8s {
err = executeStopWithRunFile(runFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Failed to stop Dapr and app processes: %s", err)
} else {
print.SuccessStatusEvent(os.Stdout, "Dapr and app processes stopped successfully")
}
return
}
config, _, cErr := getRunConfigFromRunFile(runFilePath)
if cErr != nil {
print.FailureStatusEvent(os.Stderr, "Failed to parse run template file %q: %s", runFilePath, cErr.Error())
}
err = kubernetes.Stop(runFilePath, config)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Error stopping deployments from multi-app run template: %v", err)
}
return
}
if stopAppID != "" {
args = append(args, stopAppID)
@ -83,6 +98,7 @@ dapr stop --run-file /path/to/directory
func init() {
StopCmd.Flags().StringVarP(&stopAppID, "app-id", "a", "", "The application id to be stopped")
StopCmd.Flags().StringVarP(&runFilePath, "run-file", "f", "", "Path to the run template file for the list of apps to stop")
StopCmd.Flags().BoolVarP(&stopK8s, "kubernetes", "k", false, "Stop deployments in Kubernetes based on multi-app run file")
StopCmd.Flags().BoolP("help", "h", false, "Print this help message")
RootCmd.AddCommand(StopCmd)
}


@ -30,6 +30,7 @@ import (
var (
uninstallNamespace string
uninstallKubernetes bool
uninstallDev bool
uninstallAll bool
uninstallContainerRuntime string
)
@ -42,12 +43,21 @@ var UninstallCmd = &cobra.Command{
# Uninstall from self-hosted mode
dapr uninstall
# Uninstall from self-hosted mode and remove .dapr directory, Redis, Placement and Zipkin containers
# Uninstall from self-hosted mode and remove .dapr directory, Redis, Placement, Scheduler, and Zipkin containers
dapr uninstall --all
# Uninstall from Kubernetes
dapr uninstall -k
# Uninstall from Kubernetes and remove CRDs
dapr uninstall -k --all
# Uninstall from Kubernetes and remove dev deployments of Redis and Zipkin
dapr uninstall -k --dev
# Uninstall from Kubernetes and remove dev deployments of Redis and Zipkin, along with CRDs
dapr uninstall -k --dev --all
# Uninstall Dapr from non-default install directory
# This will remove the .dapr directory present in the path <path-to-install-directory>
dapr uninstall --runtime-path <path-to-install-directory>
@ -66,7 +76,7 @@ dapr uninstall --runtime-path <path-to-install-directory>
}
print.InfoStatusEvent(os.Stdout, "Removing Dapr from your cluster...")
err = kubernetes.Uninstall(uninstallNamespace, uninstallAll, timeout)
err = kubernetes.Uninstall(uninstallNamespace, uninstallAll, uninstallDev, timeout)
} else {
if !utils.IsValidContainerRuntime(uninstallContainerRuntime) {
print.FailureStatusEvent(os.Stdout, "Invalid container runtime. Supported values are docker and podman.")
@ -87,8 +97,9 @@ dapr uninstall --runtime-path <path-to-install-directory>
func init() {
UninstallCmd.Flags().BoolVarP(&uninstallKubernetes, "kubernetes", "k", false, "Uninstall Dapr from a Kubernetes cluster")
UninstallCmd.Flags().BoolVarP(&uninstallDev, "dev", "", false, "Uninstall Dapr Redis and Zipkin installations from Kubernetes cluster")
UninstallCmd.Flags().UintVarP(&timeout, "timeout", "", 300, "The timeout for the Kubernetes uninstall")
UninstallCmd.Flags().BoolVar(&uninstallAll, "all", false, "Remove .dapr directory, Redis, Placement and Zipkin containers on local machine, and CRDs on a Kubernetes cluster")
UninstallCmd.Flags().BoolVar(&uninstallAll, "all", false, "Remove .dapr directory, Redis, Placement, Scheduler (and default volume in self-hosted mode), and Zipkin containers on local machine, and CRDs on a Kubernetes cluster")
UninstallCmd.Flags().String("network", "", "The Docker network from which to remove the Dapr runtime")
UninstallCmd.Flags().StringVarP(&uninstallNamespace, "namespace", "n", "dapr-system", "The Kubernetes namespace to uninstall Dapr from")
UninstallCmd.Flags().BoolP("help", "h", false, "Print this help message")


@ -7,7 +7,7 @@ This document helps you get started developing Dapr CLI. If you find any problem
### Linux and MacOS
1. The Go language environment `1.20` [(instructions)](https://golang.org/doc/install#tarball).
1. The Go language environment `1.21` [(instructions)](https://golang.org/doc/install#tarball).
* Make sure that your GOPATH and PATH are configured correctly
```bash
export GOPATH=~/go


@ -0,0 +1,45 @@
# Release Guide
This document describes how to release Dapr CLI along with associated artifacts.
## Prerequisites
Only the repository maintainers and release team are allowed to execute the below steps.
## Pre-release build
The pre-release build is built from the `release-<major>.<minor>` branch and versioned with a git tag that carries a suffix such as `-rc.0`, `-rc.1`, etc. These builds are not released to users who track the latest stable version.
**Pre-release process**
1. Create a PR to update the `Dapr runtime` and `Dapr dashboard` pre-release versions in the workflow and test files wherever applicable, and merge the PR to the master branch. For an example, see https://github.com/dapr/cli/pull/1019. Note that this PR is only a reference and reflects the minimum set of file changes; updating these versions may require other changes as well.
2. Create branch `release-<major>.<minor>` from master and push the branch. e.g. `release-1.11`. You can use the github web UI to create the branch or use the following command.
```sh
$ git checkout master && git reset --hard upstream/master && git pull upstream master
$ git checkout -b release-1.11
$ git push upstream release-1.11
```
3. Add pre-release version tag (with suffix -rc.0 e.g. v1.11.0-rc.0) and push the tag.
```sh
$ git tag "v1.11.0-rc.0" -m "v1.11.0-rc.0"
$ git push upstream v1.11.0-rc.0
```
4. CI creates the new build artifacts.
5. Test and validate the functionalities with the specific version.
6. If there are regressions and bugs, fix them in release-* branch. e.g `release-1.11` branch.
7. Create new pre-release version tag (with suffix -rc.1, -rc.2, etc).
8. Repeat steps 5 to 7 until all bugs are fixed.
## Release the stable version to users
> **Note**: Make sure stable versions of `dapr runtime` and `dapr dashboard` are released before releasing the CLI and update their references in workflow and tests files wherever applicable.
Once all bugs are fixed, the stable version can be released. Create a new git version tag (without the `-rc.x` suffix, e.g. `v1.11.0`) and push the tag. CI will create the new build artifacts and release them.
## Release Patch version
Work on the existing `release-<major>.<minor>` branch to release a patch version. Once all bugs are fixed, create a new patch version tag, such as `v1.11.1-rc.0`. After verifying the fixes on this pre-release, create a new git version tag such as `v1.11.1` and push the tag. CI will create the new build artifacts and release them.

go.mod (326 changed lines)

@ -1,249 +1,245 @@
module github.com/dapr/cli
go 1.20
go 1.23.5
require (
github.com/Azure/go-autorest/autorest v0.11.28 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.22 // indirect
github.com/Masterminds/semver v1.5.0
github.com/Masterminds/semver/v3 v3.3.0
github.com/Pallinder/sillyname-go v0.0.0-20130730142914-97aeae9e6ba1
github.com/briandowns/spinner v1.19.0
github.com/dapr/dapr v1.11.0
github.com/dapr/go-sdk v1.6.0
github.com/docker/docker v20.10.21+incompatible
github.com/fatih/color v1.15.0
github.com/dapr/dapr v1.15.0-rc.3.0.20250107220753-e073759df4c1
github.com/dapr/go-sdk v1.11.0
github.com/dapr/kit v0.13.1-0.20241127165251-30e2c24840b4
github.com/docker/docker v25.0.6+incompatible
github.com/evanphx/json-patch/v5 v5.9.0
github.com/fatih/color v1.17.0
github.com/gocarina/gocsv v0.0.0-20220927221512-ad3251f9fa25
github.com/hashicorp/go-retryablehttp v0.7.1
github.com/hashicorp/go-retryablehttp v0.7.7
github.com/hashicorp/go-version v1.6.0
github.com/kolesnikovae/go-winjob v1.0.0
github.com/mitchellh/go-ps v1.0.0
github.com/nightlyone/lockfile v1.0.0
github.com/olekukonko/tablewriter v0.0.5
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/spf13/cobra v1.6.1
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.13.0
github.com/stretchr/testify v1.8.3
golang.org/x/sys v0.8.0
github.com/stretchr/testify v1.10.0
golang.org/x/mod v0.22.0
golang.org/x/sys v0.28.0
gopkg.in/yaml.v2 v2.4.0
helm.sh/helm/v3 v3.11.1
k8s.io/api v0.26.3
k8s.io/apiextensions-apiserver v0.26.3
k8s.io/apimachinery v0.26.3
k8s.io/cli-runtime v0.26.3
k8s.io/client-go v0.26.3
helm.sh/helm/v3 v3.17.1
k8s.io/api v0.32.1
k8s.io/apiextensions-apiserver v0.32.1
k8s.io/apimachinery v0.32.1
k8s.io/cli-runtime v0.32.1
k8s.io/client-go v0.32.1
k8s.io/helm v2.16.10+incompatible
sigs.k8s.io/yaml v1.3.0
sigs.k8s.io/yaml v1.4.0
)
require (
github.com/Masterminds/semver/v3 v3.2.0
github.com/evanphx/json-patch v5.6.0+incompatible
)
require (
cloud.google.com/go/compute v1.19.0 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
cel.dev/expr v0.18.0 // indirect
contrib.go.opencensus.io/exporter/prometheus v0.4.2 // indirect
github.com/AdhityaRamadhanus/fasthttpcors v0.0.0-20170121111917-d4c07198763a // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/BurntSushi/toml v1.2.1 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/BurntSushi/toml v1.4.0 // indirect
github.com/Code-Hex/go-generics-cache v1.3.1 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/Masterminds/squirrel v1.5.3 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/PuerkitoBio/purell v1.2.0 // indirect
github.com/andybalholm/brotli v1.0.5 // indirect
github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 // indirect
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/PuerkitoBio/purell v1.2.1 // indirect
github.com/aavaz-ai/pii-scrubber v0.0.0-20220812094047-3fa450ab6973 // indirect
github.com/alphadose/haxmap v1.4.0 // indirect
github.com/anshal21/go-worker v1.1.0 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bufbuild/protocompile v0.4.0 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bufbuild/protocompile v0.6.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/chebyrash/promise v0.0.0-20220530143319-1123826567d6 // indirect
github.com/cloudevents/sdk-go/binding/format/protobuf/v2 v2.13.0 // indirect
github.com/cloudevents/sdk-go/v2 v2.13.0 // indirect
github.com/containerd/containerd v1.6.18 // indirect
github.com/containerd/continuity v0.3.0 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/dapr/components-contrib v1.11.0-rc.11 // indirect
github.com/dapr/kit v0.11.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/chebyrash/promise v0.0.0-20230709133807-42ec49ba1459 // indirect
github.com/cloudevents/sdk-go/binding/format/protobuf/v2 v2.14.0 // indirect
github.com/cloudevents/sdk-go/v2 v2.15.2 // indirect
github.com/containerd/containerd v1.7.24 // indirect
github.com/containerd/errdefs v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cyphar/filepath-securejoin v0.3.6 // indirect
github.com/dapr/components-contrib v1.15.0-rc.1.0.20241216170750-aca5116d95c9 // indirect
github.com/dapr/durabletask-go v0.5.1-0.20241216172832-16da3e7c3530 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
github.com/docker/cli v20.10.21+incompatible // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/dlclark/regexp2 v1.10.0 // indirect
github.com/docker/cli v25.0.1+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/emicklei/go-restful/v3 v3.10.2 // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/fasthttp/router v1.4.18 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32 // indirect
github.com/go-chi/chi/v5 v5.1.0 // indirect
github.com/go-chi/cors v1.2.1 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.0.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/cel-go v0.13.0 // indirect
github.com/google/gnostic v0.6.9 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.22.0 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.4.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.15.2 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.2 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/huandu/xstrings v1.3.3 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/jhump/protoreflect v1.15.1 // indirect
github.com/jmoiron/sqlx v1.3.5 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jhump/protoreflect v1.15.3 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.16.3 // indirect
github.com/klauspost/compress v1.17.10 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lestrrat-go/blackmagic v1.0.1 // indirect
github.com/lestrrat-go/blackmagic v1.0.2 // indirect
github.com/lestrrat-go/httpcc v1.0.1 // indirect
github.com/lestrrat-go/httprc v1.0.4 // indirect
github.com/lestrrat-go/httprc v1.0.5 // indirect
github.com/lestrrat-go/iter v1.0.2 // indirect
github.com/lestrrat-go/jwx/v2 v2.0.9 // indirect
github.com/lestrrat-go/jwx/v2 v2.0.21 // indirect
github.com/lestrrat-go/option v1.0.1 // indirect
github.com/lib/pq v1.10.7 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.6 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/marusama/semaphore/v2 v2.5.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.18 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/microsoft/durabletask-go v0.2.4 // indirect
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20221205130635-1aeaba878587 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0-rc2 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/openzipkin/zipkin-go v0.4.1 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/openzipkin/zipkin-go v0.4.2 // indirect
github.com/panjf2000/ants/v2 v2.8.1 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pelletier/go-toml/v2 v2.0.6 // indirect
github.com/pelletier/go-toml/v2 v2.0.9 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.14.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/pkoukk/tiktoken-go v0.1.6 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.20.4 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.59.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/prometheus/statsd_exporter v0.22.7 // indirect
github.com/rubenv/sql-migrate v1.2.0 // indirect
github.com/rubenv/sql-migrate v1.7.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/savsgio/gotils v0.0.0-20230208104028-c358bd845dee // indirect
github.com/shopspring/decimal v1.2.0 // indirect
github.com/sirupsen/logrus v1.9.2 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/sony/gobreaker v0.5.0 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.8.2 // indirect
github.com/spf13/cast v1.5.1 // indirect
github.com/spf13/cast v1.7.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stoewer/go-strcase v1.2.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.1.7 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/subosito/gotenv v1.4.1 // indirect
github.com/tidwall/transform v0.0.0-20201103190739-32f242e2dbde // indirect
github.com/tklauser/go-sysconf v0.3.10 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/valyala/fasthttp v1.47.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/tmc/langchaingo v0.1.12 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xlab/treeprint v1.1.0 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
github.com/zeebo/errs v1.3.0 // indirect
go.mongodb.org/mongo-driver v1.14.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/zipkin v1.14.0 // indirect
go.opentelemetry.io/otel/sdk v1.14.0 // indirect
go.opentelemetry.io/otel/trace v1.14.0 // indirect
go.opentelemetry.io/proto/otlp v0.19.0 // indirect
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
golang.org/x/crypto v0.9.0 // indirect
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/oauth2 v0.8.0 // indirect
golang.org/x/sync v0.2.0 // indirect
golang.org/x/term v0.8.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/time v0.3.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
google.golang.org/grpc v1.54.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect
go.opentelemetry.io/otel v1.32.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.26.0 // indirect
go.opentelemetry.io/otel/exporters/zipkin v1.26.0 // indirect
go.opentelemetry.io/otel/metric v1.32.0 // indirect
go.opentelemetry.io/otel/sdk v1.30.0 // indirect
go.opentelemetry.io/otel/trace v1.32.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20241204233417-43b7b7cde48d // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.7.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240924160255-9d4c2d233b61 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241202173237-19429a94021a // indirect
google.golang.org/grpc v1.68.1 // indirect
google.golang.org/protobuf v1.35.2 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiserver v0.26.3 // indirect
k8s.io/component-base v0.26.3 // indirect
k8s.io/klog/v2 v2.80.1 // indirect
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect
k8s.io/kubectl v0.26.0 // indirect
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect
oras.land/oras-go v1.2.2 // indirect
sigs.k8s.io/controller-runtime v0.14.6 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/kustomize/api v0.12.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.9 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
)
replace (
github.com/Azure/go-autorest => github.com/Azure/go-autorest v14.2.0+incompatible
github.com/docker/docker => github.com/moby/moby v17.12.0-ce-rc1.0.20200618181300-9dc6525e6118+incompatible
github.com/russross/blackfriday => github.com/russross/blackfriday v1.5.2
k8s.io/cli-runtime => k8s.io/cli-runtime v0.25.2
k8s.io/client => github.com/kubernetes-client/go v0.0.0-20190928040339-c757968c4c36
k8s.io/client-go => k8s.io/client-go v0.25.2
k8s.io/apiserver v0.32.1 // indirect
k8s.io/component-base v0.32.1 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
k8s.io/kubectl v0.32.1 // indirect
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
oras.land/oras-go v1.2.5 // indirect
sigs.k8s.io/controller-runtime v0.19.0 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/kustomize/api v0.18.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.18.1 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect
)

go.sum (1542 changed lines; file diff suppressed because it is too large)


@ -180,6 +180,7 @@ installFile() {
runAsRoot rm "$DAPR_CLI_FILE"
fi
chmod o+x $tmp_root_dapr_cli
mkdir -p $DAPR_INSTALL_DIR
runAsRoot cp "$tmp_root_dapr_cli" "$DAPR_INSTALL_DIR"
if [ -f "$DAPR_CLI_FILE" ]; then


@ -9,7 +9,7 @@ import (
"strconv"
"strings"
jsonpatch "github.com/evanphx/json-patch"
jsonpatch "github.com/evanphx/json-patch/v5"
appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
batchv1beta1 "k8s.io/api/batch/v1beta1"
@ -55,8 +55,8 @@ const (
daprReadinessProbeThresholdKey = "dapr.io/sidecar-readiness-probe-threshold"
daprImageKey = "dapr.io/sidecar-image"
daprAppSSLKey = "dapr.io/app-ssl"
daprMaxRequestBodySizeKey = "dapr.io/http-max-request-size"
daprReadBufferSizeKey = "dapr.io/http-read-buffer-size"
daprMaxRequestBodySizeKey = "dapr.io/max-body-size"
daprReadBufferSizeKey = "dapr.io/read-buffer-size"
daprHTTPStreamRequestBodyKey = "dapr.io/http-stream-request-body"
daprGracefulShutdownSecondsKey = "dapr.io/graceful-shutdown-seconds"
daprEnableAPILoggingKey = "dapr.io/enable-api-logging"
@ -82,7 +82,7 @@ const (
)
type Annotator interface {
Annotate(io.Reader, io.Writer) error
Annotate(io.Reader, io.Writer) error //nolint: inamedparam
}
type K8sAnnotator struct {
@ -341,12 +341,8 @@ func (p *K8sAnnotator) annotateYAML(input []byte, config AnnotateOptions) ([]byt
}
// Create a patch operation for the annotations.
patchOps := []patcher.PatchOperation{}
patchOps = append(patchOps, patcher.PatchOperation{
Op: "add",
Path: path,
Value: annotations,
})
var patchOps []jsonpatch.Operation
patchOps = append(patchOps, patcher.NewPatchOperation("add", path, annotations))
patchBytes, err := json.Marshal(patchOps)
if err != nil {
return nil, false, err
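
The annotator now builds its JSON Patch operations with the `evanphx/json-patch/v5` types, via dapr's `patcher.NewPatchOperation` helper shown above. The sketch below shows, purely for illustration, how such a marshaled `add` operation is applied to a document with that library; the document, path, and annotation values are hypothetical and this is not the CLI's actual code path.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch/v5"
)

func main() {
	// A pod-like document before annotation (illustrative only).
	doc := []byte(`{"metadata":{"name":"myapp","annotations":{}}}`)

	// The kind of "add" operation the annotator marshals: set Dapr
	// annotations under metadata.annotations.
	patchJSON := []byte(`[
		{"op":"add","path":"/metadata/annotations","value":{"dapr.io/enabled":"true","dapr.io/app-id":"myapp"}}
	]`)

	patch, err := jsonpatch.DecodePatch(patchJSON)
	if err != nil {
		panic(err)
	}
	modified, err := patch.Apply(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(modified))
}
```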


@ -2,11 +2,11 @@ package kubernetes
import (
"bytes"
"fmt"
"io"
"os"
"path"
"sort"
"strconv"
"strings"
"testing"
@ -333,7 +333,7 @@ func TestAnnotate(t *testing.T) {
for i := range expectedDocs {
if tt.printOutput {
t.Logf(outDocs[i])
t.Logf(outDocs[i]) //nolint:govet
}
assert.YAMLEq(t, expectedDocs[i], outDocs[i])
}
@ -423,7 +423,7 @@ func TestGetDaprAnnotations(t *testing.T) {
assert.Equal(t, "true", annotations[daprEnabledKey])
assert.Equal(t, appID, annotations[daprAppIDKey])
assert.Equal(t, fmt.Sprintf("%d", appPort), annotations[daprAppPortKey])
assert.Equal(t, strconv.Itoa(appPort), annotations[daprAppPortKey])
assert.Equal(t, config, annotations[daprConfigKey])
assert.Equal(t, appProtocol, annotations[daprAppProtocolKey])
assert.Equal(t, "true", annotations[daprEnableProfilingKey])
@ -431,31 +431,31 @@ func TestGetDaprAnnotations(t *testing.T) {
assert.Equal(t, apiTokenSecret, annotations[daprAPITokenSecretKey])
assert.Equal(t, appTokenSecret, annotations[daprAppTokenSecretKey])
assert.Equal(t, "true", annotations[daprLogAsJSONKey])
assert.Equal(t, fmt.Sprintf("%d", appMaxConcurrency), annotations[daprAppMaxConcurrencyKey])
assert.Equal(t, strconv.Itoa(appMaxConcurrency), annotations[daprAppMaxConcurrencyKey])
assert.Equal(t, "true", annotations[daprEnableMetricsKey])
assert.Equal(t, fmt.Sprintf("%d", metricsPort), annotations[daprMetricsPortKey])
assert.Equal(t, strconv.Itoa(metricsPort), annotations[daprMetricsPortKey])
assert.Equal(t, "true", annotations[daprEnableDebugKey])
assert.Equal(t, fmt.Sprintf("%d", debugPort), annotations[daprDebugPortKey])
assert.Equal(t, strconv.Itoa(debugPort), annotations[daprDebugPortKey])
assert.Equal(t, env, annotations[daprEnvKey])
assert.Equal(t, cpuLimit, annotations[daprCPULimitKey])
assert.Equal(t, memoryLimit, annotations[daprMemoryLimitKey])
assert.Equal(t, cpuRequest, annotations[daprCPURequestKey])
assert.Equal(t, memoryRequest, annotations[daprMemoryRequestKey])
assert.Equal(t, listenAddresses, annotations[daprListenAddressesKey])
assert.Equal(t, fmt.Sprintf("%d", livenessProbeDelay), annotations[daprLivenessProbeDelayKey])
assert.Equal(t, fmt.Sprintf("%d", livenessProbeTimeout), annotations[daprLivenessProbeTimeoutKey])
assert.Equal(t, fmt.Sprintf("%d", livenessProbePeriod), annotations[daprLivenessProbePeriodKey])
assert.Equal(t, fmt.Sprintf("%d", livenessProbeThreshold), annotations[daprLivenessProbeThresholdKey])
assert.Equal(t, fmt.Sprintf("%d", readinessProbeDelay), annotations[daprReadinessProbeDelayKey])
assert.Equal(t, fmt.Sprintf("%d", readinessProbeTimeout), annotations[daprReadinessProbeTimeoutKey])
assert.Equal(t, fmt.Sprintf("%d", readinessProbePeriod), annotations[daprReadinessProbePeriodKey])
assert.Equal(t, fmt.Sprintf("%d", readinessProbeThreshold), annotations[daprReadinessProbeThresholdKey])
assert.Equal(t, strconv.Itoa(livenessProbeDelay), annotations[daprLivenessProbeDelayKey])
assert.Equal(t, strconv.Itoa(livenessProbeTimeout), annotations[daprLivenessProbeTimeoutKey])
assert.Equal(t, strconv.Itoa(livenessProbePeriod), annotations[daprLivenessProbePeriodKey])
assert.Equal(t, strconv.Itoa(livenessProbeThreshold), annotations[daprLivenessProbeThresholdKey])
assert.Equal(t, strconv.Itoa(readinessProbeDelay), annotations[daprReadinessProbeDelayKey])
assert.Equal(t, strconv.Itoa(readinessProbeTimeout), annotations[daprReadinessProbeTimeoutKey])
assert.Equal(t, strconv.Itoa(readinessProbePeriod), annotations[daprReadinessProbePeriodKey])
assert.Equal(t, strconv.Itoa(readinessProbeThreshold), annotations[daprReadinessProbeThresholdKey])
assert.Equal(t, daprImage, annotations[daprImageKey])
assert.Equal(t, "true", annotations[daprAppSSLKey])
assert.Equal(t, fmt.Sprintf("%d", maxRequestBodySize), annotations[daprMaxRequestBodySizeKey])
assert.Equal(t, fmt.Sprintf("%d", readBufferSize), annotations[daprReadBufferSizeKey])
assert.Equal(t, strconv.Itoa(maxRequestBodySize), annotations[daprMaxRequestBodySizeKey])
assert.Equal(t, strconv.Itoa(readBufferSize), annotations[daprReadBufferSizeKey])
assert.Equal(t, "true", annotations[daprHTTPStreamRequestBodyKey])
assert.Equal(t, fmt.Sprintf("%d", gracefulShutdownSeconds), annotations[daprGracefulShutdownSecondsKey])
assert.Equal(t, strconv.Itoa(gracefulShutdownSeconds), annotations[daprGracefulShutdownSecondsKey])
assert.Equal(t, "true", annotations[daprEnableAPILoggingKey])
assert.Equal(t, unixDomainSocketPath, annotations[daprUnixDomainSocketPathKey])
assert.Equal(t, volumeMountsReadOnly, annotations[daprVolumeMountsReadOnlyKey])
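The fmt.Sprintf("%d", ...) to strconv.Itoa swaps in this test are the pattern the perfsprint linter enforces for plain integer formatting. A minimal standalone sketch (not part of this change) showing the two forms produce identical strings, with strconv avoiding the format-string machinery:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	port := 3500
	// strconv.Itoa yields the same result as fmt.Sprintf("%d", port)
	// without parsing a format string or boxing the argument.
	fmt.Println(strconv.Itoa(port))                             // "3500"
	fmt.Println(strconv.Itoa(port) == fmt.Sprintf("%d", port))  // true
}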

View File

@ -18,6 +18,7 @@ import (
"sync"
k8s "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
scheme "github.com/dapr/dapr/pkg/client/clientset/versioned"
@ -30,10 +31,6 @@ import (
// oidc auth
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
// openstack auth
_ "k8s.io/client-go/plugin/pkg/client/auth/openstack"
"k8s.io/client-go/rest"
)
var (

View File

@ -25,6 +25,7 @@ import (
"github.com/dapr/cli/pkg/age"
"github.com/dapr/cli/utils"
v1alpha1 "github.com/dapr/dapr/pkg/apis/components/v1alpha1"
"github.com/dapr/dapr/pkg/client/clientset/versioned"
)
// ComponentsOutput represent a Dapr component.
@ -46,21 +47,37 @@ func PrintComponents(name, namespace, outputFormat string) error {
return nil, err
}
list, err := client.ComponentsV1alpha1().Components(namespace).List(meta_v1.ListOptions{})
// This means that the Dapr Components CRD is not installed and
// therefore no component items exist.
if apierrors.IsNotFound(err) {
list = &v1alpha1.ComponentList{
Items: []v1alpha1.Component{},
}
} else if err != nil {
return nil, err
}
return list, nil
return listComponents(client, namespace)
}, name, outputFormat)
}
func listComponents(client versioned.Interface, namespace string) (*v1alpha1.ComponentList, error) {
list, err := client.ComponentsV1alpha1().Components(namespace).List(meta_v1.ListOptions{})
// This means that the Dapr Components CRD is not installed and
// therefore no component items exist.
if apierrors.IsNotFound(err) {
list = &v1alpha1.ComponentList{
Items: []v1alpha1.Component{},
}
} else if err != nil {
return nil, err
}
return list, nil
}
func getComponent(client versioned.Interface, namespace string, componentName string) (*v1alpha1.Component, error) {
c, err := client.ComponentsV1alpha1().Components(namespace).Get(componentName, meta_v1.GetOptions{})
// This means that the Dapr Components CRD is not installed and
// therefore no component items exist.
if apierrors.IsNotFound(err) {
return &v1alpha1.Component{}, nil
} else if err != nil {
return nil, err
}
return c, err
}
func writeComponents(writer io.Writer, getConfigFunc func() (*v1alpha1.ComponentList, error), name, outputFormat string) error {
confs, err := getConfigFunc()
if err != nil {

View File

@ -15,7 +15,7 @@ package kubernetes
import (
"bytes"
"fmt"
"errors"
"testing"
"github.com/stretchr/testify/assert"
@ -242,7 +242,7 @@ func TestComponents(t *testing.T) {
err := writeComponents(&buff,
func() (*v1alpha1.ComponentList, error) {
if len(tc.errString) > 0 {
return nil, fmt.Errorf(tc.errString)
return nil, errors.New(tc.errString)
}
return &v1alpha1.ComponentList{Items: tc.k8sConfig}, nil
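The fmt.Errorf(tc.errString) to errors.New(tc.errString) change above addresses a non-constant format string: fmt.Errorf interprets any "%" verbs inside the message, while errors.New carries it verbatim. A small illustrative sketch using only the standard library:

package main

import (
	"errors"
	"fmt"
)

func main() {
	msg := "expected 100% of components to load"
	// Passing msg as a format string lets the "%" be parsed as a verb and
	// mangles the output (go vet also flags this call site).
	fmt.Println(fmt.Errorf(msg)) //nolint:govet
	// errors.New keeps the message byte-for-byte.
	fmt.Println(errors.New(msg))
}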

View File

@ -20,6 +20,7 @@ import (
"github.com/dapr/cli/utils"
v1alpha1 "github.com/dapr/dapr/pkg/apis/configuration/v1alpha1"
"github.com/dapr/kit/ptr"
)
func GetDefaultConfiguration() v1alpha1.Configuration {
@ -28,10 +29,10 @@ func GetDefaultConfiguration() v1alpha1.Configuration {
Name: "daprsystem",
},
Spec: v1alpha1.ConfigurationSpec{
MTLSSpec: v1alpha1.MTLSSpec{
Enabled: true,
WorkloadCertTTL: "24h",
AllowedClockSkew: "15m",
MTLSSpec: &v1alpha1.MTLSSpec{
Enabled: ptr.Of(true),
WorkloadCertTTL: ptr.Of("24h"),
AllowedClockSkew: ptr.Of("15m"),
},
},
}
@ -42,7 +43,7 @@ func GetDaprControlPlaneCurrentConfig() (*v1alpha1.Configuration, error) {
if err != nil {
return nil, err
}
output, err := utils.RunCmdAndWait("kubectl", "get", "configurations/daprsystem", "-n", namespace, "-o", "json")
output, err := utils.RunCmdAndWait("kubectl", "get", "configurations.dapr.io/daprsystem", "-n", namespace, "-o", "json")
if err != nil {
return nil, err
}

View File

@ -26,6 +26,7 @@ import (
"github.com/dapr/cli/pkg/age"
"github.com/dapr/cli/utils"
v1alpha1 "github.com/dapr/dapr/pkg/apis/configuration/v1alpha1"
"github.com/dapr/dapr/pkg/client/clientset/versioned"
)
type configurationsOutput struct {
@ -66,6 +67,18 @@ func PrintConfigurations(name, namespace, outputFormat string) error {
}, name, outputFormat)
}
func getDaprConfiguration(client versioned.Interface, namespace string, configurationName string) (*v1alpha1.Configuration, error) {
c, err := client.ConfigurationV1alpha1().Configurations(namespace).Get(configurationName, meta_v1.GetOptions{})
// This means that the Dapr Configurations CRD is not installed and
// therefore no configuration items exist.
if apierrors.IsNotFound(err) {
return &v1alpha1.Configuration{}, nil
} else if err != nil {
return nil, err
}
return c, err
}
func writeConfigurations(writer io.Writer, getConfigFunc func() (*v1alpha1.ConfigurationList, error), name, outputFormat string) error {
confs, err := getConfigFunc()
if err != nil {
@ -104,11 +117,15 @@ func writeConfigurations(writer io.Writer, getConfigFunc func() (*v1alpha1.Confi
func printConfigurationList(writer io.Writer, list []v1alpha1.Configuration) error {
co := []configurationsOutput{}
for _, c := range list {
var metricsEnabled bool
if c.Spec.MetricSpec != nil {
metricsEnabled = *c.Spec.MetricSpec.Enabled
}
co = append(co, configurationsOutput{
TracingEnabled: tracingEnabled(c.Spec.TracingSpec),
Name: c.GetName(),
Namespace: c.GetNamespace(),
MetricsEnabled: c.Spec.MetricSpec.Enabled,
MetricsEnabled: metricsEnabled,
Created: c.CreationTimestamp.Format("2006-01-02 15:04.05"),
Age: age.GetAge(c.CreationTimestamp.Time),
})
@ -121,7 +138,10 @@ func printConfigurationList(writer io.Writer, list []v1alpha1.Configuration) err
return utils.MarshalAndWriteTable(writer, co)
}
func tracingEnabled(spec v1alpha1.TracingSpec) bool {
func tracingEnabled(spec *v1alpha1.TracingSpec) bool {
if spec == nil {
return false
}
sr, err := strconv.ParseFloat(spec.SamplingRate, 32)
if err != nil {
return false
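The nil checks above guard the newly pointer-typed spec sections (MetricSpec, TracingSpec, and the ptr.Of-wrapped MTLS values). A self-contained sketch of the same guard pattern; MetricSpec here is a local stand-in type, not the dapr API struct:

package main

import "fmt"

// MetricSpec is a stand-in for an optional, pointer-typed spec section.
type MetricSpec struct {
	Enabled *bool
}

// metricsEnabled treats a missing section or field as "disabled" rather
// than dereferencing a nil pointer.
func metricsEnabled(m *MetricSpec) bool {
	if m == nil || m.Enabled == nil {
		return false
	}
	return *m.Enabled
}

func main() {
	on := true
	fmt.Println(metricsEnabled(nil))                       // false
	fmt.Println(metricsEnabled(&MetricSpec{Enabled: &on})) // true
}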

View File

@ -15,7 +15,7 @@ package kubernetes
import (
"bytes"
"fmt"
"errors"
"testing"
"github.com/stretchr/testify/assert"
@ -130,7 +130,7 @@ func TestConfigurations(t *testing.T) {
name: "Yaml one config",
configName: "",
outputFormat: "yaml",
expectedOutput: "- name: appConfig\n namespace: default\n spec:\n apphttppipelinespec:\n handlers: []\n httppipelinespec:\n handlers: []\n tracingspec:\n samplingrate: \"\"\n stdout: false\n zipkin:\n endpointaddresss: \"\"\n otel:\n protocol: \"\"\n endpointAddress: \"\"\n isSecure: false\n metricspec:\n enabled: false\n rules: []\n metricsspec:\n enabled: false\n rules: []\n mtlsspec:\n enabled: false\n workloadcertttl: \"\"\n allowedclockskew: \"\"\n secrets:\n scopes: []\n accesscontrolspec:\n defaultAction: \"\"\n trustDomain: \"\"\n policies: []\n nameresolutionspec:\n component: \"\"\n version: \"\"\n configuration:\n json:\n raw: []\n features: []\n apispec:\n allowed: []\n denied: []\n componentsspec: {}\n loggingspec:\n apiLogging:\n enabled: false\n obfuscateURLs: false\n omitHealthChecks: false\n",
expectedOutput: "- name: appConfig\n namespace: default\n spec:\n apphttppipelinespec: null\n httppipelinespec: null\n tracingspec: null\n metricspec: null\n metricsspec: null\n mtlsspec: null\n secrets: null\n accesscontrolspec: null\n nameresolutionspec: null\n features: []\n apispec: null\n componentsspec: null\n loggingspec: null\n wasmspec: null\n workflowspec: null\n",
errString: "",
errorExpected: false,
k8sConfig: []v1alpha1.Configuration{
@ -148,7 +148,7 @@ func TestConfigurations(t *testing.T) {
name: "Yaml two configs",
configName: "",
outputFormat: "yaml",
expectedOutput: "- name: appConfig1\n namespace: default\n spec:\n apphttppipelinespec:\n handlers: []\n httppipelinespec:\n handlers: []\n tracingspec:\n samplingrate: \"\"\n stdout: false\n zipkin:\n endpointaddresss: \"\"\n otel:\n protocol: \"\"\n endpointAddress: \"\"\n isSecure: false\n metricspec:\n enabled: false\n rules: []\n metricsspec:\n enabled: false\n rules: []\n mtlsspec:\n enabled: false\n workloadcertttl: \"\"\n allowedclockskew: \"\"\n secrets:\n scopes: []\n accesscontrolspec:\n defaultAction: \"\"\n trustDomain: \"\"\n policies: []\n nameresolutionspec:\n component: \"\"\n version: \"\"\n configuration:\n json:\n raw: []\n features: []\n apispec:\n allowed: []\n denied: []\n componentsspec: {}\n loggingspec:\n apiLogging:\n enabled: false\n obfuscateURLs: false\n omitHealthChecks: false\n- name: appConfig2\n namespace: default\n spec:\n apphttppipelinespec:\n handlers: []\n httppipelinespec:\n handlers: []\n tracingspec:\n samplingrate: \"\"\n stdout: false\n zipkin:\n endpointaddresss: \"\"\n otel:\n protocol: \"\"\n endpointAddress: \"\"\n isSecure: false\n metricspec:\n enabled: false\n rules: []\n metricsspec:\n enabled: false\n rules: []\n mtlsspec:\n enabled: false\n workloadcertttl: \"\"\n allowedclockskew: \"\"\n secrets:\n scopes: []\n accesscontrolspec:\n defaultAction: \"\"\n trustDomain: \"\"\n policies: []\n nameresolutionspec:\n component: \"\"\n version: \"\"\n configuration:\n json:\n raw: []\n features: []\n apispec:\n allowed: []\n denied: []\n componentsspec: {}\n loggingspec:\n apiLogging:\n enabled: false\n obfuscateURLs: false\n omitHealthChecks: false\n",
expectedOutput: "- name: appConfig1\n namespace: default\n spec:\n apphttppipelinespec: null\n httppipelinespec: null\n tracingspec: null\n metricspec: null\n metricsspec: null\n mtlsspec: null\n secrets: null\n accesscontrolspec: null\n nameresolutionspec: null\n features: []\n apispec: null\n componentsspec: null\n loggingspec: null\n wasmspec: null\n workflowspec: null\n- name: appConfig2\n namespace: default\n spec:\n apphttppipelinespec: null\n httppipelinespec: null\n tracingspec: null\n metricspec: null\n metricsspec: null\n mtlsspec: null\n secrets: null\n accesscontrolspec: null\n nameresolutionspec: null\n features: []\n apispec: null\n componentsspec: null\n loggingspec: null\n wasmspec: null\n workflowspec: null\n",
errString: "",
errorExpected: false,
k8sConfig: []v1alpha1.Configuration{
@ -174,7 +174,7 @@ func TestConfigurations(t *testing.T) {
name: "Json one config",
configName: "",
outputFormat: "json",
expectedOutput: "[\n {\n \"name\": \"appConfig\",\n \"namespace\": \"default\",\n \"spec\": {\n \"appHttpPipeline\": {\n \"handlers\": null\n },\n \"httpPipeline\": {\n \"handlers\": null\n },\n \"tracing\": {\n \"samplingRate\": \"\",\n \"stdout\": false,\n \"zipkin\": {\n \"endpointAddress\": \"\"\n },\n \"otel\": {\n \"protocol\": \"\",\n \"endpointAddress\": \"\",\n \"isSecure\": false\n }\n },\n \"metric\": {\n \"enabled\": false,\n \"rules\": null\n },\n \"metrics\": {\n \"enabled\": false,\n \"rules\": null\n },\n \"mtls\": {\n \"enabled\": false,\n \"workloadCertTTL\": \"\",\n \"allowedClockSkew\": \"\"\n },\n \"secrets\": {\n \"scopes\": null\n },\n \"accessControl\": {\n \"defaultAction\": \"\",\n \"trustDomain\": \"\",\n \"policies\": null\n },\n \"nameResolution\": {\n \"component\": \"\",\n \"version\": \"\",\n \"configuration\": null\n },\n \"api\": {},\n \"components\": {},\n \"logging\": {\n \"apiLogging\": {\n \"enabled\": false,\n \"obfuscateURLs\": false,\n \"omitHealthChecks\": false\n }\n }\n }\n }\n]",
expectedOutput: "[\n {\n \"name\": \"appConfig\",\n \"namespace\": \"default\",\n \"spec\": {}\n }\n]",
errString: "",
errorExpected: false,
k8sConfig: []v1alpha1.Configuration{
@ -192,7 +192,7 @@ func TestConfigurations(t *testing.T) {
name: "Json two configs",
configName: "",
outputFormat: "json",
expectedOutput: "[\n {\n \"name\": \"appConfig1\",\n \"namespace\": \"default\",\n \"spec\": {\n \"appHttpPipeline\": {\n \"handlers\": null\n },\n \"httpPipeline\": {\n \"handlers\": null\n },\n \"tracing\": {\n \"samplingRate\": \"\",\n \"stdout\": false,\n \"zipkin\": {\n \"endpointAddress\": \"\"\n },\n \"otel\": {\n \"protocol\": \"\",\n \"endpointAddress\": \"\",\n \"isSecure\": false\n }\n },\n \"metric\": {\n \"enabled\": false,\n \"rules\": null\n },\n \"metrics\": {\n \"enabled\": false,\n \"rules\": null\n },\n \"mtls\": {\n \"enabled\": false,\n \"workloadCertTTL\": \"\",\n \"allowedClockSkew\": \"\"\n },\n \"secrets\": {\n \"scopes\": null\n },\n \"accessControl\": {\n \"defaultAction\": \"\",\n \"trustDomain\": \"\",\n \"policies\": null\n },\n \"nameResolution\": {\n \"component\": \"\",\n \"version\": \"\",\n \"configuration\": null\n },\n \"api\": {},\n \"components\": {},\n \"logging\": {\n \"apiLogging\": {\n \"enabled\": false,\n \"obfuscateURLs\": false,\n \"omitHealthChecks\": false\n }\n }\n }\n },\n {\n \"name\": \"appConfig2\",\n \"namespace\": \"default\",\n \"spec\": {\n \"appHttpPipeline\": {\n \"handlers\": null\n },\n \"httpPipeline\": {\n \"handlers\": null\n },\n \"tracing\": {\n \"samplingRate\": \"\",\n \"stdout\": false,\n \"zipkin\": {\n \"endpointAddress\": \"\"\n },\n \"otel\": {\n \"protocol\": \"\",\n \"endpointAddress\": \"\",\n \"isSecure\": false\n }\n },\n \"metric\": {\n \"enabled\": false,\n \"rules\": null\n },\n \"metrics\": {\n \"enabled\": false,\n \"rules\": null\n },\n \"mtls\": {\n \"enabled\": false,\n \"workloadCertTTL\": \"\",\n \"allowedClockSkew\": \"\"\n },\n \"secrets\": {\n \"scopes\": null\n },\n \"accessControl\": {\n \"defaultAction\": \"\",\n \"trustDomain\": \"\",\n \"policies\": null\n },\n \"nameResolution\": {\n \"component\": \"\",\n \"version\": \"\",\n \"configuration\": null\n },\n \"api\": {},\n \"components\": {},\n \"logging\": {\n \"apiLogging\": {\n \"enabled\": false,\n \"obfuscateURLs\": false,\n \"omitHealthChecks\": false\n }\n }\n }\n }\n]",
expectedOutput: "[\n {\n \"name\": \"appConfig1\",\n \"namespace\": \"default\",\n \"spec\": {}\n },\n {\n \"name\": \"appConfig2\",\n \"namespace\": \"default\",\n \"spec\": {}\n }\n]",
errString: "",
errorExpected: false,
k8sConfig: []v1alpha1.Configuration{
@ -221,7 +221,7 @@ func TestConfigurations(t *testing.T) {
err := writeConfigurations(&buff,
func() (*v1alpha1.ConfigurationList, error) {
if len(tc.errString) > 0 {
return nil, fmt.Errorf(tc.errString)
return nil, errors.New(tc.errString)
}
return &v1alpha1.ConfigurationList{Items: tc.k8sConfig}, nil

View File

@ -55,6 +55,11 @@ func TestDashboardChart(t *testing.T) {
expectDashboard: false,
expectError: false,
},
{
runtimeVersion: "1.13.0",
expectDashboard: false,
expectError: false,
},
{
runtimeVersion: "Bad Version",
expectDashboard: false,

View File

@ -33,13 +33,28 @@ import (
"github.com/dapr/cli/pkg/print"
cli_ver "github.com/dapr/cli/pkg/version"
"github.com/dapr/cli/utils"
"github.com/dapr/dapr/pkg/client/clientset/versioned"
)
const (
daprReleaseName = "dapr"
dashboardReleaseName = "dapr-dashboard"
daprHelmRepo = "https://dapr.github.io/helm-charts"
latestVersion = "latest"
bitnamiStableVersion = "17.14.5"
// dev mode constants.
thirdPartyDevNamespace = "default"
zipkinChartName = "zipkin"
redisChartName = "redis"
zipkinReleaseName = "dapr-dev-zipkin"
redisReleaseName = "dapr-dev-redis"
redisVersion = "6.2.11"
bitnamiHelmRepo = "https://charts.bitnami.com/bitnami"
daprHelmRepo = "https://dapr.github.io/helm-charts"
zipkinHelmRepo = "https://openzipkin.github.io/zipkin"
stateStoreComponentName = "statestore"
pubsubComponentName = "pubsub"
zipkingConfigurationName = "appconfig"
)
type InitConfiguration struct {
@ -48,6 +63,7 @@ type InitConfiguration struct {
Namespace string
EnableMTLS bool
EnableHA bool
EnableDev bool
Args []string
Wait bool
Timeout uint
@ -60,7 +76,8 @@ type InitConfiguration struct {
// Init deploys the Dapr operator using the supplied runtime version.
func Init(config InitConfiguration) error {
err := installWithConsole(daprReleaseName, config.Version, "Dapr control plane", config)
helmRepoDapr := utils.GetEnv("DAPR_HELM_REPO_URL", daprHelmRepo)
err := installWithConsole(daprReleaseName, config.Version, helmRepoDapr, "Dapr control plane", config)
if err != nil {
return err
}
@ -75,19 +92,54 @@ func Init(config InitConfiguration) error {
}
}
err = installWithConsole(dashboardReleaseName, config.DashboardVersion, "Dapr dashboard", config)
err = installWithConsole(dashboardReleaseName, config.DashboardVersion, helmRepoDapr, "Dapr dashboard", config)
if err != nil {
return err
}
if config.EnableDev {
redisChartVals := []string{
"image.tag=" + redisVersion,
"replica.replicaCount=0",
}
err = installThirdPartyWithConsole(redisReleaseName, redisChartName, bitnamiStableVersion, bitnamiHelmRepo, "Dapr Redis", redisChartVals, config)
if err != nil {
return err
}
err = installThirdPartyWithConsole(zipkinReleaseName, zipkinChartName, latestVersion, zipkinHelmRepo, "Dapr Zipkin", []string{}, config)
if err != nil {
return err
}
err = initDevConfigs()
if err != nil {
return err
}
}
return nil
}
func installWithConsole(releaseName string, releaseVersion string, prettyName string, config InitConfiguration) error {
func installThirdPartyWithConsole(releaseName, chartName, releaseVersion, helmRepo string, prettyName string, chartValues []string, config InitConfiguration) error {
installSpinning := print.Spinner(os.Stdout, "Deploying the "+prettyName+" with "+releaseVersion+" version to your cluster...")
defer installSpinning(print.Failure)
err := install(releaseName, releaseVersion, config)
// releaseVersion of chart will always be latest version.
err := installThirdParty(releaseName, chartName, releaseVersion, helmRepo, chartValues, config)
if err != nil {
return err
}
installSpinning(print.Success)
return nil
}
func installWithConsole(releaseName, releaseVersion, helmRepo string, prettyName string, config InitConfiguration) error {
installSpinning := print.Spinner(os.Stdout, "Deploying the "+prettyName+" with "+releaseVersion+" version to your cluster...")
defer installSpinning(print.Failure)
err := install(releaseName, releaseVersion, helmRepo, config)
if err != nil {
return err
}
@ -156,15 +208,15 @@ func locateChartFile(dirPath string) (string, error) {
return filepath.Join(dirPath, files[0].Name()), nil
}
func daprChart(version string, releaseName string, config *helm.Configuration) (*chart.Chart, error) {
func getHelmChart(version, releaseName, helmRepo string, config *helm.Configuration) (*chart.Chart, error) {
pull := helm.NewPullWithOpts(helm.WithConfig(config))
pull.RepoURL = utils.GetEnv("DAPR_HELM_REPO_URL", daprHelmRepo)
pull.RepoURL = helmRepo
pull.Username = utils.GetEnv("DAPR_HELM_REPO_USERNAME", "")
pull.Password = utils.GetEnv("DAPR_HELM_REPO_PASSWORD", "")
pull.Settings = &cli.EnvSettings{}
if version != latestVersion && (releaseName == daprReleaseName || releaseName == dashboardReleaseName) {
if version != latestVersion && (releaseName == daprReleaseName || releaseName == dashboardReleaseName || releaseName == redisChartName) {
pull.Version = chartVersion(version)
}
@ -188,7 +240,7 @@ func daprChart(version string, releaseName string, config *helm.Configuration) (
return loader.Load(chartPath)
}
func chartValues(config InitConfiguration, version string) (map[string]interface{}, error) {
func daprChartValues(config InitConfiguration, version string) (map[string]interface{}, error) {
chartVals := map[string]interface{}{}
err := utils.ValidateImageVariant(config.ImageVariant)
if err != nil {
@ -197,10 +249,10 @@ func chartValues(config InitConfiguration, version string) (map[string]interface
helmVals := []string{
fmt.Sprintf("global.ha.enabled=%t", config.EnableHA),
fmt.Sprintf("global.mtls.enabled=%t", config.EnableMTLS),
fmt.Sprintf("global.tag=%s", utils.GetVariantVersion(version, config.ImageVariant)),
"global.tag=" + utils.GetVariantVersion(version, config.ImageVariant),
}
if len(config.ImageRegistryURI) != 0 {
helmVals = append(helmVals, fmt.Sprintf("global.registry=%s", config.ImageRegistryURI))
helmVals = append(helmVals, "global.registry="+config.ImageRegistryURI)
}
helmVals = append(helmVals, config.Args...)
@ -213,9 +265,9 @@ func chartValues(config InitConfiguration, version string) (map[string]interface
if err != nil {
return nil, err
}
helmVals = append(helmVals, fmt.Sprintf("dapr_sentry.tls.root.certPEM=%s", string(rootCertBytes)),
fmt.Sprintf("dapr_sentry.tls.issuer.certPEM=%s", string(issuerCertBytes)),
fmt.Sprintf("dapr_sentry.tls.issuer.keyPEM=%s", string(issuerKeyBytes)),
helmVals = append(helmVals, "dapr_sentry.tls.root.certPEM="+string(rootCertBytes),
"dapr_sentry.tls.issuer.certPEM="+string(issuerCertBytes),
"dapr_sentry.tls.issuer.keyPEM="+string(issuerKeyBytes),
)
}
@ -227,7 +279,7 @@ func chartValues(config InitConfiguration, version string) (map[string]interface
return chartVals, nil
}
func install(releaseName string, releaseVersion string, config InitConfiguration) error {
func install(releaseName, releaseVersion, helmRepo string, config InitConfiguration) error {
err := createNamespace(config.Namespace)
if err != nil {
return err
@ -238,7 +290,7 @@ func install(releaseName string, releaseVersion string, config InitConfiguration
return err
}
daprChart, err := daprChart(releaseVersion, releaseName, helmConf)
daprChart, err := getHelmChart(releaseVersion, releaseName, helmRepo, helmConf)
if err != nil {
return err
}
@ -249,7 +301,7 @@ func install(releaseName string, releaseVersion string, config InitConfiguration
}
if releaseName == daprReleaseName {
err = applyCRDs(fmt.Sprintf("v%s", version))
err = applyCRDs("v" + version)
if err != nil {
return err
}
@ -259,9 +311,9 @@ func install(releaseName string, releaseVersion string, config InitConfiguration
installClient.ReleaseName = releaseName
installClient.Namespace = config.Namespace
installClient.Wait = config.Wait
installClient.Timeout = time.Duration(config.Timeout) * time.Second
installClient.Timeout = time.Duration(config.Timeout) * time.Second //nolint:gosec
values, err := chartValues(config, version)
values, err := daprChartValues(config, version)
if err != nil {
return err
}
@ -273,6 +325,38 @@ func install(releaseName string, releaseVersion string, config InitConfiguration
return nil
}
func installThirdParty(releaseName, chartName, releaseVersion, helmRepo string, chartVals []string, config InitConfiguration) error {
helmConf, err := helmConfig(thirdPartyDevNamespace)
if err != nil {
return err
}
helmChart, err := getHelmChart(releaseVersion, chartName, helmRepo, helmConf)
if err != nil {
return err
}
installClient := helm.NewInstall(helmConf)
installClient.ReleaseName = releaseName
installClient.Namespace = thirdPartyDevNamespace
installClient.Wait = config.Wait
installClient.Timeout = time.Duration(config.Timeout) * time.Second //nolint:gosec
values := map[string]interface{}{}
for _, val := range chartVals {
if err = strvals.ParseInto(val, values); err != nil {
return err
}
}
if _, err = installClient.Run(helmChart, values); err != nil {
return err
}
return nil
}
func debugLogf(format string, v ...interface{}) {
}
@ -290,3 +374,164 @@ func confirmExist(cfg *helm.Configuration, releaseName string) (bool, error) {
return true, nil
}
func checkAndOverWriteFile(filePath string, b []byte) error {
_, err := os.Stat(filePath)
if os.IsNotExist(err) {
// #nosec G306
if err = os.WriteFile(filePath, b, 0o644); err != nil {
return err
}
}
return nil
}
func isComponentPresent(client versioned.Interface, namespace string, componentName string) (bool, error) {
c, err := getComponent(client, namespace, componentName)
if err != nil {
return false, err
}
return c.Name == componentName, err
}
func isConfigurationPresent(client versioned.Interface, namespace string, configurationName string) (bool, error) {
c, err := getDaprConfiguration(client, namespace, configurationName)
if err != nil {
return false, err
}
return c.Name == configurationName, nil
}
func initDevConfigs() error {
redisStatestore := `
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
metadata:
# These settings will work out of the box if you use helm install
# bitnami/redis. If you have your own setup, replace
# redis-master:6379 with your own Redis master address, and the
# Redis password with your own Secret's name. For more information,
# see https://docs.dapr.io/operations/components/component-secrets .
- name: redisHost
value: dapr-dev-redis-master:6379
- name: redisPassword
secretKeyRef:
name: dapr-dev-redis
key: redis-password
auth:
secretStore: kubernetes
`
redisPubsub := `
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
spec:
type: pubsub.redis
version: v1
metadata:
# These settings will work out of the box if you use helm install
# bitnami/redis. If you have your own setup, replace
# redis-master:6379 with your own Redis master address, and the
# Redis password with your own Secret's name. For more information,
# see https://docs.dapr.io/operations/components/component-secrets .
- name: redisHost
value: dapr-dev-redis-master:6379
- name: redisPassword
secretKeyRef:
name: dapr-dev-redis
key: redis-password
auth:
secretStore: kubernetes
`
// This config needs to be named as appconfig, as it is used in later steps in `dapr run -f . -k`. See `pkg/kubernetes/run.go`.
zipkinConfig := `
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: "http://dapr-dev-zipkin.default.svc.cluster.local:9411/api/v2/spans"
`
tempDirPath, err := createTempDir()
defer os.RemoveAll(tempDirPath)
if err != nil {
return err
}
client, err := DaprClient()
if err != nil {
return err
}
present, err := isComponentPresent(client, thirdPartyDevNamespace, stateStoreComponentName)
if present || err != nil {
if err != nil {
print.WarningStatusEvent(os.Stderr, "Error listing components, skipping default dev component creation.")
} else {
print.WarningStatusEvent(os.Stderr, "Component with name %q already present in namespace %q. Skipping component creation.", stateStoreComponentName, thirdPartyDevNamespace)
}
return err
}
redisPath := filepath.Join(tempDirPath, "redis-statestore.yaml")
err = checkAndOverWriteFile(redisPath, []byte(redisStatestore))
if err != nil {
return err
}
print.InfoStatusEvent(os.Stdout, "Applying %q component to Kubernetes %q namespace.", stateStoreComponentName, thirdPartyDevNamespace)
_, err = utils.RunCmdAndWait("kubectl", "apply", "-f", redisPath)
if err != nil {
return err
}
present, err = isComponentPresent(client, thirdPartyDevNamespace, pubsubComponentName)
if present || err != nil {
if err != nil {
print.WarningStatusEvent(os.Stderr, "Error listing components, skipping default dev component creation.")
} else {
print.WarningStatusEvent(os.Stderr, "Component with name %q already present in namespace %q. Skipping component creation.", pubsubComponentName, thirdPartyDevNamespace)
}
return err
}
redisPath = filepath.Join(tempDirPath, "redis-pubsub.yaml")
err = checkAndOverWriteFile(redisPath, []byte(redisPubsub))
if err != nil {
return err
}
print.InfoStatusEvent(os.Stdout, "Applying %q component to Kubernetes %q namespace.", pubsubComponentName, thirdPartyDevNamespace)
_, err = utils.RunCmdAndWait("kubectl", "apply", "-f", redisPath)
if err != nil {
return err
}
present, err = isConfigurationPresent(client, thirdPartyDevNamespace, zipkingConfigurationName)
if present || err != nil {
if err != nil {
print.WarningStatusEvent(os.Stderr, "Error listing configurations, skipping default dev configuration creation.")
} else {
print.WarningStatusEvent(os.Stderr, "Configuration with name %q already present in namespace %q. Skipping configuration creation.", zipkingConfigurationName, thirdPartyDevNamespace)
}
return err
}
zipkinPath := filepath.Join(tempDirPath, "zipkin-config.yaml")
err = checkAndOverWriteFile(zipkinPath, []byte(zipkinConfig))
if err != nil {
return err
}
print.InfoStatusEvent(os.Stdout, "Applying %q zipkin configuration to Kubernetes %q namespace.", zipkingConfigurationName, thirdPartyDevNamespace)
_, err = utils.RunCmdAndWait("kubectl", "apply", "-f", zipkinPath)
if err != nil {
return err
}
return nil
}
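The init path above resolves the chart repository through a DAPR_HELM_REPO_URL environment override before falling back to the default Dapr repo. A sketch of that fallback pattern using only the standard library; getEnv is a local stand-in, not the CLI's utils.GetEnv helper:

package main

import (
	"fmt"
	"os"
)

// getEnv returns the environment value when set, otherwise the fallback.
func getEnv(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok && v != "" {
		return v
	}
	return fallback
}

func main() {
	repo := getEnv("DAPR_HELM_REPO_URL", "https://dapr.github.io/helm-charts")
	fmt.Println("pulling charts from", repo)
}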

View File

@ -14,17 +14,32 @@ limitations under the License.
package kubernetes
import (
"bufio"
"context"
"errors"
"fmt"
"io"
"os"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/client-go/kubernetes/typed/core/v1"
"github.com/dapr/cli/pkg/print"
)
const (
daprdContainerName = "daprd"
appIDContainerArgName = "--app-id"
// number of retries when trying to list pods for getting logs.
maxListingRetry = 10
// delay between retries of pod listing.
listingDelay = 200 * time.Microsecond
// delay before retrying for getting logs.
streamingDelay = 100 * time.Millisecond
)
// Logs fetches Dapr sidecar logs from Kubernetes.
@ -84,3 +99,98 @@ func Logs(appID, podName, namespace string) error {
return nil
}
// streamContainerLogsToDisk streams all containers logs for the given selector to a given disk directory.
func streamContainerLogsToDisk(ctx context.Context, appID string, appLogWriter, daprdLogWriter io.Writer, podClient v1.PodInterface) error {
var err error
var podList *corev1.PodList
counter := 0
for {
podList, err = getPods(ctx, appID, podClient)
if err != nil {
return fmt.Errorf("error listing the pod with label %s=%s: %w", daprAppIDKey, appID, err)
}
if len(podList.Items) != 0 {
break
}
counter++
if counter == maxListingRetry {
return fmt.Errorf("error getting logs: error listing the pod with label %s=%s after %d retires", daprAppIDKey, appID, maxListingRetry)
}
// Retry after a delay.
time.Sleep(listingDelay)
}
for _, pod := range podList.Items {
print.InfoStatusEvent(os.Stdout, "Streaming logs for containers in pod %q", pod.GetName())
for _, container := range pod.Spec.Containers {
fileWriter := daprdLogWriter
if container.Name != daprdContainerName {
fileWriter = appLogWriter
}
// create a go routine for each container to stream logs into file/console.
go func(pod, containerName, appID string, fileWriter io.Writer) {
loop:
for {
req := podClient.GetLogs(pod, &corev1.PodLogOptions{
Container: containerName,
Follow: true,
})
stream, err := req.Stream(ctx)
if err != nil {
switch {
case strings.Contains(err.Error(), "Pending"):
// Retry after a delay.
time.Sleep(streamingDelay)
continue loop
case strings.Contains(err.Error(), "ContainerCreating"):
// Retry after a delay.
time.Sleep(streamingDelay)
continue loop
case errors.Is(err, context.Canceled):
return
default:
return
}
}
defer stream.Close()
if containerName != daprdContainerName {
streamScanner := bufio.NewScanner(stream)
for streamScanner.Scan() {
fmt.Fprintln(fileWriter, print.Blue(fmt.Sprintf("== APP - %s == %s", appID, streamScanner.Text())))
}
} else {
_, err = io.Copy(fileWriter, stream)
if err != nil {
switch {
case errors.Is(err, context.Canceled):
return
default:
return
}
}
}
return
}
}(pod.GetName(), container.Name, appID, fileWriter)
}
}
return nil
}
func getPods(ctx context.Context, appID string, podClient v1.PodInterface) (*corev1.PodList, error) {
listCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
labelSelector := fmt.Sprintf("%s=%s", daprAppIDKey, appID)
podList, err := podClient.List(listCtx, metav1.ListOptions{
LabelSelector: labelSelector,
})
cancel()
if err != nil {
return nil, err
}
return podList, nil
}

View File

@ -42,7 +42,7 @@ func IsMTLSEnabled() (bool, error) {
if err != nil {
return false, err
}
return c.Spec.MTLSSpec.Enabled, nil
return *c.Spec.MTLSSpec.Enabled, nil
}
func getSystemConfig() (*v1alpha1.Configuration, error) {

View File

@ -15,24 +15,31 @@ package kubernetes
import (
"context"
"errors"
"fmt"
"strings"
core_v1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/watch"
k8s "k8s.io/client-go/kubernetes"
)
func ListPodsInterface(client k8s.Interface, labelSelector map[string]string) (*core_v1.PodList, error) {
opts := v1.ListOptions{}
const podWatchErrTemplate = "error creating pod watcher"
var errPodUnknown error = errors.New("pod in unknown/failed state")
func ListPodsInterface(client k8s.Interface, labelSelector map[string]string) (*corev1.PodList, error) {
opts := metav1.ListOptions{}
if labelSelector != nil {
opts.LabelSelector = labels.FormatLabels(labelSelector)
}
return client.CoreV1().Pods(v1.NamespaceAll).List(context.TODO(), opts)
return client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), opts)
}
func ListPods(client *k8s.Clientset, namespace string, labelSelector map[string]string) (*core_v1.PodList, error) {
opts := v1.ListOptions{}
func ListPods(client *k8s.Clientset, namespace string, labelSelector map[string]string) (*corev1.PodList, error) {
opts := metav1.ListOptions{}
if labelSelector != nil {
opts.LabelSelector = labels.FormatLabels(labelSelector)
}
@ -41,8 +48,8 @@ func ListPods(client *k8s.Clientset, namespace string, labelSelector map[string]
// CheckPodExists returns a boolean representing the pod's existence and the namespace that the given pod resides in,
// or empty if not present in the given namespace.
func CheckPodExists(client *k8s.Clientset, namespace string, labelSelector map[string]string, deployName string) (bool, string) {
opts := v1.ListOptions{}
func CheckPodExists(client k8s.Interface, namespace string, labelSelector map[string]string, deployName string) (bool, string) {
opts := metav1.ListOptions{}
if labelSelector != nil {
opts.LabelSelector = labels.FormatLabels(labelSelector)
}
@ -53,7 +60,7 @@ func CheckPodExists(client *k8s.Clientset, namespace string, labelSelector map[s
}
for _, pod := range podList.Items {
if pod.Status.Phase == core_v1.PodRunning {
if pod.Status.Phase == corev1.PodRunning {
if strings.HasPrefix(pod.Name, deployName) {
return true, pod.Namespace
}
@ -61,3 +68,61 @@ func CheckPodExists(client *k8s.Clientset, namespace string, labelSelector map[s
}
return false, ""
}
func createPodWatcher(ctx context.Context, client k8s.Interface, namespace, appID string) (watch.Interface, error) {
labelSelector := fmt.Sprintf("%s=%s", daprAppIDKey, appID)
opts := metav1.ListOptions{
TypeMeta: metav1.TypeMeta{},
LabelSelector: labelSelector,
}
return client.CoreV1().Pods(namespace).Watch(ctx, opts)
}
func waitPodDeleted(ctx context.Context, client k8s.Interface, namespace, appID string) error {
watcher, err := createPodWatcher(ctx, client, namespace, appID)
if err != nil {
return fmt.Errorf("%s : %w", podWatchErrTemplate, err)
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
return nil
}
case <-ctx.Done():
return fmt.Errorf("error context cancelled while waiting for pod deletion: %w", context.Canceled)
}
}
}
func waitPodRunning(ctx context.Context, client k8s.Interface, namespace, appID string) error {
watcher, err := createPodWatcher(ctx, client, namespace, appID)
if err != nil {
return fmt.Errorf("%s : %w", podWatchErrTemplate, err)
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
pod := event.Object.(*corev1.Pod)
if pod.Status.Phase == corev1.PodRunning {
return nil
} else if pod.Status.Phase == corev1.PodFailed || pod.Status.Phase == corev1.PodUnknown {
return fmt.Errorf("error waiting for pod run: %w", errPodUnknown)
}
case <-ctx.Done():
return fmt.Errorf("error context cancelled while waiting for pod run: %w", context.Canceled)
}
}
}
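A sketch of how the watcher helpers above are meant to be driven: a bounded context plus a clientset, which is essentially what the Kubernetes run flow later in this change does. It assumes the snippet sits in this same kubernetes package and substitutes client-go's fake clientset for a real cluster:

package kubernetes

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes/fake"
)

func exampleWaitPodRunning() {
	client := fake.NewSimpleClientset() // stand-in for a real cluster client
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Blocks until the app's pod reaches Running, fails, or the context expires.
	if err := waitPodRunning(ctx, client, "default", "my-app"); err != nil {
		log.Println("pod did not reach Running:", err)
	}
}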

View File

@ -28,9 +28,9 @@ func TestListPodsInterface(t *testing.T) {
output, err := ListPodsInterface(k8s, map[string]string{
"test": "test",
})
assert.Nil(t, err, "unexpected error")
assert.NoError(t, err, "unexpected error")
assert.NotNil(t, output, "Expected empty list")
assert.Equal(t, 0, len(output.Items), "Expected length 0")
assert.Empty(t, output.Items, "Expected length 0")
})
t.Run("one matching pod", func(t *testing.T) {
k8s := fake.NewSimpleClientset((&v1.Pod{
@ -46,9 +46,9 @@ func TestListPodsInterface(t *testing.T) {
output, err := ListPodsInterface(k8s, map[string]string{
"test": "test",
})
assert.Nil(t, err, "unexpected error")
assert.NoError(t, err, "unexpected error")
assert.NotNil(t, output, "Expected non empty list")
assert.Equal(t, 1, len(output.Items), "Expected length 0")
assert.Len(t, output.Items, 1, "Expected length 0")
assert.Equal(t, "test", output.Items[0].Name, "expected name to match")
assert.Equal(t, "test", output.Items[0].Namespace, "expected namespace to match")
})

View File

@ -14,6 +14,7 @@ limitations under the License.
package kubernetes
import (
"errors"
"fmt"
"io"
"net/http"
@ -133,7 +134,7 @@ func (pf *PortForward) Init() error {
return fmt.Errorf("can not get the local and remote ports: %w", err)
}
if len(ports) == 0 {
return fmt.Errorf("can not get the local and remote ports: error getting ports length")
return errors.New("can not get the local and remote ports: error getting ports length")
}
pf.LocalPort = int(ports[0].Local)

View File

@ -15,6 +15,8 @@ package kubernetes
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/x509"
"encoding/pem"
"errors"
@ -27,8 +29,7 @@ import (
"github.com/dapr/cli/pkg/print"
"github.com/dapr/cli/utils"
"github.com/dapr/dapr/pkg/sentry/ca"
"github.com/dapr/dapr/pkg/sentry/certs"
"github.com/dapr/dapr/pkg/sentry/server/ca"
)
type RenewCertificateParams struct {
@ -51,7 +52,6 @@ func RenewCertificate(conf RenewCertificateParams) error {
conf.RootCertificateFilePath,
conf.IssuerCertificateFilePath,
conf.IssuerPrivateKeyFilePath)
if err != nil {
return err
}
@ -59,7 +59,6 @@ func RenewCertificate(conf RenewCertificateParams) error {
rootCertBytes, issuerCertBytes, issuerKeyBytes, err = GenerateNewCertificates(
conf.ValidUntil,
conf.RootPrivateKeyFilePath)
if err != nil {
return err
}
@ -112,7 +111,8 @@ func renewCertificate(rootCert, issuerCert, issuerKey []byte, timeout uint, imag
return err
}
daprChart, err := daprChart(daprVersion, "dapr", helmConf)
helmRepo := utils.GetEnv("DAPR_HELM_REPO_URL", daprHelmRepo)
daprChart, err := getHelmChart(daprVersion, "dapr", helmRepo, helmConf)
if err != nil {
return err
}
@ -121,7 +121,7 @@ func renewCertificate(rootCert, issuerCert, issuerKey []byte, timeout uint, imag
// Reuse the existing helm configuration values i.e. tags, registry, etc.
upgradeClient.ReuseValues = true
upgradeClient.Wait = true
upgradeClient.Timeout = time.Duration(timeout) * time.Second
upgradeClient.Timeout = time.Duration(timeout) * time.Second //nolint:gosec
upgradeClient.Namespace = status[0].Namespace
// Override the helm configuration values with the new certificates.
@ -146,12 +146,12 @@ func createHelmParamsForNewCertificates(ca, issuerCert, issuerKey string) (map[s
args := []string{}
if ca != "" && issuerCert != "" && issuerKey != "" {
args = append(args, fmt.Sprintf("dapr_sentry.tls.root.certPEM=%s", ca),
fmt.Sprintf("dapr_sentry.tls.issuer.certPEM=%s", issuerCert),
fmt.Sprintf("dapr_sentry.tls.issuer.keyPEM=%s", issuerKey),
args = append(args, "dapr_sentry.tls.root.certPEM="+ca,
"dapr_sentry.tls.issuer.certPEM="+issuerCert,
"dapr_sentry.tls.issuer.keyPEM="+issuerKey,
)
} else {
return nil, fmt.Errorf("parameters not found")
return nil, errors.New("parameters not found")
}
for _, v := range args {
@ -179,7 +179,7 @@ func GenerateNewCertificates(validUntil time.Duration, privateKeyFile string) ([
}
} else {
var err error
rootKey, err = certs.GenerateECPrivateKey()
rootKey, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return nil, nil, nil, err
}
@ -188,13 +188,18 @@ func GenerateNewCertificates(validUntil time.Duration, privateKeyFile string) ([
if err != nil {
return nil, nil, nil, err
}
allowedClockSkew, err := time.ParseDuration(systemConfig.Spec.MTLSSpec.AllowedClockSkew)
var allowedClockSkew time.Duration
if systemConfig.Spec.MTLSSpec.AllowedClockSkew != nil {
allowedClockSkew, err = time.ParseDuration(*systemConfig.Spec.MTLSSpec.AllowedClockSkew)
if err != nil {
return nil, nil, nil, err
}
}
bundle, err := ca.GenerateBundle(rootKey, "cluster.local", allowedClockSkew, &validUntil)
if err != nil {
return nil, nil, nil, err
}
_, rootCertPem, issuerCertPem, issuerKeyPem, err := ca.GetNewSelfSignedCertificates(rootKey, validUntil, allowedClockSkew)
if err != nil {
return nil, nil, nil, err
}
return rootCertPem, issuerCertPem, issuerKeyPem, nil
return bundle.TrustAnchors, bundle.IssChainPEM, bundle.IssKeyPEM, nil
}
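The certificate path above now calls the standard library directly instead of the removed certs.GenerateECPrivateKey helper. A standard-library-only sketch of generating the same kind of P-256 key; the PEM encoding step is illustrative and not taken from this change:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

func main() {
	// P-256 root key, the input the new ca.GenerateBundle call expects above.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: der})
	fmt.Printf("generated %d bytes of PEM\n", len(pemBytes))
}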

View File

@ -13,24 +13,447 @@ limitations under the License.
package kubernetes
// RunConfig represents the application configuration parameters.
type RunConfig struct {
AppID string
AppPort int
HTTPPort int
GRPCPort int
CodeDirectory string
Arguments []string
Image string
import (
"context"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"sync"
"syscall"
"time"
appV1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
k8s "k8s.io/client-go/kubernetes"
podsv1 "k8s.io/client-go/kubernetes/typed/core/v1"
// Specifically use k8s sig yaml to marshal into json, then convert to yaml.
k8sYaml "sigs.k8s.io/yaml"
"github.com/dapr/cli/pkg/print"
"github.com/dapr/cli/pkg/runfileconfig"
daprsyscall "github.com/dapr/cli/pkg/syscall"
"github.com/dapr/cli/utils"
"github.com/dapr/dapr/pkg/client/clientset/versioned"
)
const (
serviceKind = "Service"
deploymentKind = "Deployment"
serviceAPIVersion = "v1"
deploymentAPIVersion = "apps/v1"
loadBalanceType = "LoadBalancer"
daprEnableAnnotationKey = "dapr.io/enabled"
daprConfigAnnotationKey = "dapr.io/config"
daprConfigAnnotationValue = "appconfig"
serviceFileName = "service.yaml"
deploymentFileName = "deployment.yaml"
appLabelKey = "app"
nameKey = "name"
namespaceKey = "namespace"
labelsKey = "labels"
tcpProtocol = "TCP"
podCreationDeletionTimeout = 1 * time.Minute
)
type deploymentConfig struct {
Kind string `json:"kind"`
APIVersion string `json:"apiVersion"`
Metadata map[string]any `json:"metadata"`
Spec appV1.DeploymentSpec `json:"spec"`
}
// RunOutput represents the run output.
type RunOutput struct {
Message string
type serviceConfig struct {
Kind string `json:"kind"`
APIVersion string `json:"apiVersion"`
Metadata map[string]any `json:"metadata"`
Spec corev1.ServiceSpec `json:"spec"`
}
// Run executes the application based on the run configuration.
func Run(config *RunConfig) (*RunOutput, error) {
//nolint
return nil, nil
type runState struct {
serviceFilePath string
deploymentFilePath string
app runfileconfig.App
logCancel context.CancelFunc
}
// Run executes the application based on the run file configuration.
// Run creates a temporary `deploy` folder within the app/.dapr directory and then applies those manifests to the
// Kubernetes context that the kubectl client points to.
func Run(runFilePath string, config runfileconfig.RunFileConfig) (bool, error) {
// At this point, we expect the runfile to be parsed and the values within config to be populated.
// Validations and default setting will only be done after this point.
var exitWithError bool
// get k8s client for PodsInterface.
client, cErr := Client()
if cErr != nil {
// exit with error.
return true, fmt.Errorf("error getting k8s client: %w", cErr)
}
// get dapr k8s client.
daprClient, cErr := DaprClient()
if cErr != nil {
// exit with error.
return true, fmt.Errorf("error getting dapr k8s client: %w", cErr)
}
namespace := corev1.NamespaceDefault
podsInterface := client.CoreV1().Pods(namespace)
// setup a monitoring context for shutdown call from another cli process.
monitoringContext, monitoringCancel := context.WithCancel(context.Background())
defer monitoringCancel()
// setup shutdown notify channel.
sigCh := make(chan os.Signal, 1)
daprsyscall.SetupShutdownNotify(sigCh)
runStates := []runState{}
print.InfoStatusEvent(os.Stdout, "This is a preview feature and subject to change in future releases.")
for _, app := range config.Apps {
print.StatusEvent(os.Stdout, print.LogInfo, "Validating config and starting app %q", app.RunConfig.AppID)
// Set defaults if zero value provided in config yaml.
app.RunConfig.SetDefaultFromSchema()
// Validate validates the configs for k8s and modifies appId etc.
err := app.RunConfig.ValidateK8s()
if err != nil {
print.FailureStatusEvent(os.Stderr, "Error validating run config for app %q present in %s: %s", app.RunConfig.AppID, runFilePath, err.Error())
exitWithError = true
break
}
var svc serviceConfig
// create default service config.
if app.ContainerConfiguration.CreateService {
svc = createServiceConfig(app)
}
// create default deployment config.
dep := createDeploymentConfig(daprClient, app)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Error creating deployment file for app %q present in %s: %s", app.RunConfig.AppID, runFilePath, err.Error())
exitWithError = true
break
}
// overwrite <app-id>/.dapr/deploy/service.yaml.
// overwrite <app-id>/.dapr/deploy/deployment.yaml.
err = writeYamlFile(app, svc, dep)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Error creating deployment/service yaml files: %s", err.Error())
exitWithError = true
break
}
deployDir := app.GetDeployDir()
print.InfoStatusEvent(os.Stdout, "Deploying app %q to Kubernetes", app.AppID)
serviceFilePath := filepath.Join(deployDir, serviceFileName)
deploymentFilePath := filepath.Join(deployDir, deploymentFileName)
rState := runState{}
if app.CreateService {
print.InfoStatusEvent(os.Stdout, "Deploying service YAML %q to Kubernetes", serviceFilePath)
err = deployYamlToK8s(serviceFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Error deploying service yaml file %q : %s", serviceFilePath, err.Error())
exitWithError = true
break
}
rState.serviceFilePath = serviceFilePath
}
print.InfoStatusEvent(os.Stdout, "Deploying deployment YAML %q to Kubernetes", deploymentFilePath)
err = deployYamlToK8s(deploymentFilePath)
if err != nil {
print.FailureStatusEvent(os.Stderr, "Error deploying deployment yaml file %q : %s", deploymentFilePath, err.Error())
exitWithError = true
break
}
// create log files and save state.
err = app.CreateDaprdLogFile()
if err != nil {
print.StatusEvent(os.Stderr, print.LogFailure, "Error getting daprd log file for app %q present in %s: %s", app.AppID, runFilePath, err.Error())
exitWithError = true
break
}
err = app.CreateAppLogFile()
if err != nil {
print.StatusEvent(os.Stderr, print.LogFailure, "Error getting app log file for app %q present in %s: %s", app.AppID, runFilePath, err.Error())
exitWithError = true
break
}
daprdLogWriter := runfileconfig.GetLogWriter(app.DaprdLogWriteCloser, app.DaprdLogDestination)
// appDaprdWriter := runExec.GetAppDaprdWriter(app, false).
appLogWriter := runfileconfig.GetLogWriter(app.AppLogWriteCloser, app.AppLogDestination)
customAppLogWriter := print.CustomLogWriter{W: appLogWriter}
ctx, cancel := context.WithTimeout(context.Background(), podCreationDeletionTimeout)
err = waitPodRunning(ctx, client, namespace, app.AppID)
cancel()
if err != nil {
print.WarningStatusEvent(os.Stderr, "Error deploying pod to Kubernetes. See logs directly from Kubernetes command line.")
// Close the log files since there is deployment error, and the container might be in crash loop back off state.
app.CloseAppLogFile()
app.CloseDaprdLogFile()
} else {
logContext, cancel := context.WithCancel(context.Background())
rState.logCancel = cancel
err = setupLogs(logContext, app.AppID, daprdLogWriter, customAppLogWriter, podsInterface)
if err != nil {
print.StatusEvent(os.Stderr, print.LogWarning, "Error setting up logs for app %q present in %q . See logs directly from Kubernetes command line.: %s ", app.AppID, runFilePath, err.Error())
}
}
rState.deploymentFilePath = deploymentFilePath
rState.app = app
// append runState only on successful k8s deploy.
runStates = append(runStates, rState)
print.InfoStatusEvent(os.Stdout, "Writing log files to directory : %s", app.GetLogsDir())
}
// If all apps have been started and there are no errors in starting the apps wait for signal from sigCh.
if !exitWithError {
print.InfoStatusEvent(os.Stdout, "Starting to monitor Kubernetes pods for deletion.")
go monitorK8sPods(monitoringContext, client, namespace, runStates, sigCh)
// After all apps started wait for sigCh.
<-sigCh
monitoringCancel()
print.InfoStatusEvent(os.Stdout, "Stopping Kubernetes pods monitoring.")
// To add a new line in Stdout.
fmt.Println()
print.InfoStatusEvent(os.Stdout, "Received signal to stop. Deleting K8s Dapr app deployments.")
}
closeErr := gracefullyShutdownK8sDeployment(runStates, client, namespace)
return exitWithError, closeErr
}
func createServiceConfig(app runfileconfig.App) serviceConfig {
return serviceConfig{
Kind: serviceKind,
APIVersion: serviceAPIVersion,
Metadata: map[string]any{
nameKey: app.RunConfig.AppID,
labelsKey: map[string]string{
appLabelKey: app.AppID,
},
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{
Protocol: tcpProtocol,
Port: 80,
TargetPort: intstr.FromInt(app.AppPort),
},
},
Selector: map[string]string{
appLabelKey: app.AppID,
},
Type: loadBalanceType,
},
}
}
func createDeploymentConfig(client versioned.Interface, app runfileconfig.App) deploymentConfig {
replicas := int32(1)
dep := deploymentConfig{
Kind: deploymentKind,
APIVersion: deploymentAPIVersion,
Metadata: map[string]any{
nameKey: app.AppID,
namespaceKey: corev1.NamespaceDefault,
},
}
dep.Spec = appV1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
appLabelKey: app.AppID,
},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
appLabelKey: app.AppID,
},
Annotations: app.RunConfig.GetAnnotations(),
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: app.AppID,
Image: app.ContainerImage,
Env: getEnv(app),
ImagePullPolicy: corev1.PullPolicy(app.ContainerImagePullPolicy),
},
},
},
},
}
// Set dapr.io/enable annotation.
dep.Spec.Template.ObjectMeta.Annotations[daprEnableAnnotationKey] = "true"
if ok, _ := isConfigurationPresent(client, corev1.NamespaceDefault, daprConfigAnnotationValue); ok {
// Set dapr.io/config annotation only if present.
dep.Spec.Template.ObjectMeta.Annotations[daprConfigAnnotationKey] = daprConfigAnnotationValue
} else {
print.WarningStatusEvent(os.Stderr, "Dapr configuration %q not found in namespace %q. Skipping annotation %q", daprConfigAnnotationValue, corev1.NamespaceDefault, daprConfigAnnotationKey)
}
// set containerPort only if app port is present.
if app.AppPort != 0 {
dep.Spec.Template.Spec.Containers[0].Ports = []corev1.ContainerPort{
{
ContainerPort: int32(app.AppPort), //nolint:gosec
},
}
}
return dep
}
func getEnv(app runfileconfig.App) []corev1.EnvVar {
envs := app.GetEnv()
envVars := make([]corev1.EnvVar, len(envs))
i := 0
for k, v := range app.GetEnv() {
envVars[i] = corev1.EnvVar{
Name: k,
Value: v,
}
i++
}
return envVars
}
func writeYamlFile(app runfileconfig.App, svc serviceConfig, dep deploymentConfig) error {
var yamlBytes []byte
var err error
var writeFile io.WriteCloser
deployDir := app.GetDeployDir()
if app.CreateService {
yamlBytes, err = k8sYaml.Marshal(svc)
if err != nil {
return fmt.Errorf("error marshalling service yaml: %w", err)
}
serviceFilePath := filepath.Join(deployDir, serviceFileName)
writeFile, err = os.Create(serviceFilePath)
if err != nil {
return fmt.Errorf("error creating file %s : %w", serviceFilePath, err)
}
_, err = writeFile.Write(yamlBytes)
if err != nil {
writeFile.Close()
return fmt.Errorf("error writing to file %s : %w", serviceFilePath, err)
}
writeFile.Close()
}
yamlBytes, err = k8sYaml.Marshal(dep)
if err != nil {
return fmt.Errorf("error marshalling deployment yaml: %w", err)
}
deploymentFilePath := filepath.Join(deployDir, deploymentFileName)
writeFile, err = os.Create(deploymentFilePath)
if err != nil {
return fmt.Errorf("error creating file %s : %w", deploymentFilePath, err)
}
_, err = writeFile.Write(yamlBytes)
if err != nil {
writeFile.Close()
return fmt.Errorf("error writing to file %s : %w", deploymentFilePath, err)
}
writeFile.Close()
return nil
}
func deployYamlToK8s(yamlToDeployPath string) error {
_, err := os.Stat(yamlToDeployPath)
if os.IsNotExist(err) {
return fmt.Errorf("error given file %q does not exist", yamlToDeployPath)
}
_, err = utils.RunCmdAndWait("kubectl", "apply", "-f", yamlToDeployPath)
if err != nil {
return fmt.Errorf("error deploying the yaml %s to Kubernetes: %w", yamlToDeployPath, err)
}
return nil
}
func deleteYamlK8s(yamlToDeletePath string) error {
print.InfoStatusEvent(os.Stdout, "Deleting %q from Kubernetes", yamlToDeletePath)
_, err := os.Stat(yamlToDeletePath)
if os.IsNotExist(err) {
return fmt.Errorf("error given file %q does not exist", yamlToDeletePath)
}
_, err = utils.RunCmdAndWait("kubectl", "delete", "-f", yamlToDeletePath)
if err != nil {
return fmt.Errorf("error deleting the yaml %s from Kubernetes: %w", yamlToDeletePath, err)
}
return nil
}
func setupLogs(ctx context.Context, appID string, daprdLogWriter, appLogWriter io.Writer, podInterface podsv1.PodInterface) error {
return streamContainerLogsToDisk(ctx, appID, appLogWriter, daprdLogWriter, podInterface)
}
func gracefullyShutdownK8sDeployment(runStates []runState, client k8s.Interface, namespace string) error {
errs := make([]error, 0, len(runStates)*4)
for _, r := range runStates {
if len(r.serviceFilePath) != 0 {
errs = append(errs, deleteYamlK8s(r.serviceFilePath))
}
errs = append(errs, deleteYamlK8s(r.deploymentFilePath))
labelSelector := map[string]string{
daprAppIDKey: r.app.AppID,
}
if ok, _ := CheckPodExists(client, namespace, labelSelector, r.app.AppID); ok {
ctx, cancel := context.WithTimeout(context.Background(), podCreationDeletionTimeout)
err := waitPodDeleted(ctx, client, namespace, r.app.AppID)
cancel()
if err != nil {
// swallowing err here intentionally.
print.WarningStatusEvent(os.Stderr, "Error waiting for pods to be deleted. Final logs might only be partially available.")
}
}
// shutdown logs.
if r.logCancel != nil { // checking nil, in scenarios where deployments are not run correctly.
r.logCancel()
}
errs = append(errs, r.app.CloseAppLogFile(), r.app.CloseDaprdLogFile())
}
return errors.Join(errs...)
}
func monitorK8sPods(ctx context.Context, client k8s.Interface, namespace string, runStates []runState, sigCh chan os.Signal) {
// For each app, wait for its pod to be deleted; once all pods are gone, send a shutdown signal to the CLI process.
wg := sync.WaitGroup{}
for _, r := range runStates {
wg.Add(1)
go func(appID string, wg *sync.WaitGroup) {
err := waitPodDeleted(ctx, client, namespace, appID)
if err != nil && strings.Contains(err.Error(), podWatchErrTemplate) {
print.WarningStatusEvent(os.Stderr, "Error monitoring Kubernetes pod(s) for app %q.", appID)
}
wg.Done()
}(r.app.AppID, &wg)
}
wg.Wait()
// Send signal to gracefully close log writers and shut down process.
sigCh <- syscall.SIGINT
}
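
Aside: the concurrency pattern in monitorK8sPods, reduced to a hedged, standard-library-only sketch: one goroutine per app waits for its pod to disappear, and when the WaitGroup drains the process is asked to shut down via the signal channel. waitGone and the app IDs are placeholders for waitPodDeleted and the real run states.

package main

import (
    "fmt"
    "os"
    "sync"
    "syscall"
    "time"
)

// waitGone stands in for waitPodDeleted: it blocks until the given app's pod is gone.
func waitGone(appID string) {
    time.Sleep(time.Second) // placeholder for a watch on the pod
    fmt.Println("pod gone for", appID)
}

func main() {
    apps := []string{"webapp", "backend"}
    sigCh := make(chan os.Signal, 1)

    var wg sync.WaitGroup
    for _, id := range apps {
        wg.Add(1)
        go func(appID string) {
            defer wg.Done()
            waitGone(appID)
        }(id)
    }

    // Once every watcher returns, ask the process to shut down gracefully.
    go func() {
        wg.Wait()
        sigCh <- syscall.SIGINT
    }()

    <-sigCh
    fmt.Println("all pods deleted, shutting down")
}
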

View File

@ -33,6 +33,7 @@ var controlPlaneLabels = []string{
"dapr-placement-server",
"dapr-sidecar-injector",
"dapr-dashboard",
"dapr-scheduler-server",
}
type StatusClient struct {
@ -64,7 +65,6 @@ func NewStatusClient() (*StatusClient, error) {
// List status for Dapr resources.
func (s *StatusClient) Status() ([]StatusOutput, error) {
//nolint
client := s.client
if client == nil {
return nil, errors.New("kubernetes client not initialized")

View File

@ -83,7 +83,7 @@ func TestStatus(t *testing.T) {
if err != nil {
t.Fatalf("%s status should not raise an error", err.Error())
}
assert.Equal(t, 0, len(status), "Expected status to be empty list")
assert.Empty(t, status, "Expected status to be empty list")
})
t.Run("one status waiting", func(t *testing.T) {
@ -102,8 +102,8 @@ func TestStatus(t *testing.T) {
}
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -131,8 +131,8 @@ func TestStatus(t *testing.T) {
}
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -140,7 +140,7 @@ func TestStatus(t *testing.T) {
assert.Equal(t, "0.0.1", stat.Version, "expected version to match")
assert.Equal(t, 1, stat.Replicas, "expected replicas to match")
assert.Equal(t, "True", stat.Healthy, "expected health to match")
assert.Equal(t, stat.Status, "Running", "expected running status")
assert.Equal(t, "Running", stat.Status, "expected running status")
})
t.Run("one status terminated", func(t *testing.T) {
@ -160,8 +160,8 @@ func TestStatus(t *testing.T) {
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -169,7 +169,7 @@ func TestStatus(t *testing.T) {
assert.Equal(t, "0.0.1", stat.Version, "expected version to match")
assert.Equal(t, 1, stat.Replicas, "expected replicas to match")
assert.Equal(t, "False", stat.Healthy, "expected health to match")
assert.Equal(t, stat.Status, "Terminated", "expected terminated status")
assert.Equal(t, "Terminated", stat.Status, "expected terminated status")
})
t.Run("one status pending", func(t *testing.T) {
@ -193,8 +193,8 @@ func TestStatus(t *testing.T) {
k8s := newTestSimpleK8s(pod)
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, "dapr-dashboard", stat.Name, "expected name to match")
assert.Equal(t, "dapr-system", stat.Namespace, "expected namespace to match")
@ -202,13 +202,13 @@ func TestStatus(t *testing.T) {
assert.Equal(t, "0.0.1", stat.Version, "expected version to match")
assert.Equal(t, 1, stat.Replicas, "expected replicas to match")
assert.Equal(t, "False", stat.Healthy, "expected health to match")
assert.Equal(t, stat.Status, "Pending", "expected pending status")
assert.Equal(t, "Pending", stat.Status, "expected pending status")
})
t.Run("one status empty client", func(t *testing.T) {
k8s := &StatusClient{}
status, err := k8s.Status()
assert.NotNil(t, err, "status should raise an error")
assert.Error(t, err, "status should raise an error")
assert.Equal(t, "kubernetes client not initialized", err.Error(), "expected errors to match")
assert.Nil(t, status, "expected nil for status")
})
@ -233,6 +233,9 @@ func TestControlPlaneServices(t *testing.T) {
{"dapr-sidecar-injector-74648c9dcb-5bsmn", "dapr-sidecar-injector", daprImageTag},
{"dapr-sidecar-injector-74648c9dcb-6bsmn", "dapr-sidecar-injector", daprImageTag},
{"dapr-sidecar-injector-74648c9dcb-7bsmn", "dapr-sidecar-injector", daprImageTag},
{"dapr-scheduler-server-0", "dapr-scheduler-server", daprImageTag},
{"dapr-scheduler-server-1", "dapr-scheduler-server", daprImageTag},
{"dapr-scheduler-server-2", "dapr-scheduler-server", daprImageTag},
}
expectedReplicas := map[string]int{}
@ -260,7 +263,7 @@ func TestControlPlaneServices(t *testing.T) {
k8s := newTestSimpleK8s(runtimeObj...)
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.NoError(t, err, "status should not raise an error")
assert.Equal(t, len(expectedReplicas), len(status), "Expected status to be empty list")
@ -302,8 +305,8 @@ func TestControlPlaneVersion(t *testing.T) {
pd.imageURI = tc.imageURI
k8s := newTestSimpleK8s(newDaprControlPlanePod(pd))
status, err := k8s.Status()
assert.Nil(t, err, "status should not raise an error")
assert.Equal(t, 1, len(status), "Expected status to be non-empty list")
assert.NoError(t, err, "status should not raise an error")
assert.Len(t, status, 1, "Expected status to be non-empty list")
stat := status[0]
assert.Equal(t, tc.expectedVersion, stat.Version, "expected version to match")
}

pkg/kubernetes/stop.go (new file, 67 lines)
View File

@ -0,0 +1,67 @@
/*
Copyright 2023 The Dapr Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package kubernetes
import (
"context"
"errors"
"fmt"
"os"
"path/filepath"
corev1 "k8s.io/api/core/v1"
"github.com/dapr/cli/pkg/print"
"github.com/dapr/cli/pkg/runfileconfig"
)
func Stop(runFilePath string, config runfileconfig.RunFileConfig) error {
errs := []error{}
// get k8s client.
client, cErr := Client()
if cErr != nil {
return fmt.Errorf("error getting k8s client for monitoring pod deletion: %w", cErr)
}
var err error
namespace := corev1.NamespaceDefault
for _, app := range config.Apps {
appError := false
deployDir := app.GetDeployDir()
serviceFilePath := filepath.Join(deployDir, serviceFileName)
deploymentFilePath := filepath.Join(deployDir, deploymentFileName)
if app.CreateService {
err = deleteYamlK8s(serviceFilePath)
if err != nil {
appError = true
}
errs = append(errs, err)
}
err = deleteYamlK8s(deploymentFilePath)
if err != nil {
appError = true
}
errs = append(errs, err)
if !appError {
ctx, cancel := context.WithTimeout(context.Background(), podCreationDeletionTimeout)
// Ignoring errors here; they are printed by the other dapr CLI process anyway.
waitPodDeleted(ctx, client, namespace, app.AppID)
cancel()
} else {
print.WarningStatusEvent(os.Stderr, "Error stopping deployment for app %q in file %q", app.AppID, runFilePath)
}
}
return errors.Join(errs...)
}
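
Aside: both Stop and gracefullyShutdownK8sDeployment append an error per step even when it is nil and rely on errors.Join (Go 1.20+) to collapse the slice. A quick sketch of that behavior, with made-up error text:

package main

import (
    "errors"
    "fmt"
)

func main() {
    // errors.Join ignores nil entries, so appending a nil err per app/step is harmless.
    var errs []error
    errs = append(errs, nil)                                // service YAML deleted fine
    errs = append(errs, errors.New("deployment not found")) // one step failed (illustrative)
    errs = append(errs, nil)

    joined := errors.Join(errs...)
    fmt.Println(joined)               // deployment not found
    fmt.Println(errors.Join() == nil) // true: an all-nil (or empty) input yields a nil error
}
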

View File

@ -24,7 +24,7 @@ import (
)
// Uninstall removes Dapr from a Kubernetes cluster.
func Uninstall(namespace string, uninstallAll bool, timeout uint) error {
func Uninstall(namespace string, uninstallAll bool, uninstallDev bool, timeout uint) error {
config, err := helmConfig(namespace)
if err != nil {
return err
@ -41,15 +41,19 @@ func Uninstall(namespace string, uninstallAll bool, timeout uint) error {
}
uninstallClient := helm.NewUninstall(config)
uninstallClient.Timeout = time.Duration(timeout) * time.Second
uninstallClient.Timeout = time.Duration(timeout) * time.Second //nolint:gosec
// Uninstall Dashboard as a best effort.
// Chart versions < 1.11 for Dapr will delete dashboard as part of the main chart.
// Deleting Dashboard here is for versions >= 1.11.
uninstallClient.Run(dashboardReleaseName)
_, err = uninstallClient.Run(daprReleaseName)
if uninstallDev {
// uninstall dapr-dev-zipkin and dapr-dev-redis as best effort.
uninstallThirdParty()
}
_, err = uninstallClient.Run(daprReleaseName)
if err != nil {
return err
}
@ -65,3 +69,14 @@ func Uninstall(namespace string, uninstallAll bool, timeout uint) error {
return nil
}
func uninstallThirdParty() {
print.InfoStatusEvent(os.Stdout, "Removing dapr-dev-redis and dapr-dev-zipkin from the cluster...")
// Uninstall dapr-dev-redis and dapr-dev-zipkin from k8s as best effort.
config, _ := helmConfig(thirdPartyDevNamespace)
uninstallClient := helm.NewUninstall(config)
uninstallClient.Run(redisReleaseName)
uninstallClient.Run(zipkinReleaseName)
}
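
Aside: a hedged sketch of the best-effort uninstall pattern above using the standard Helm v3 SDK; helmConfig in this package presumably builds an action.Configuration in a similar way, and the namespace and release names here are illustrative, not the package's constants.

package main

import (
    "log"
    "time"

    "helm.sh/helm/v3/pkg/action"
    "helm.sh/helm/v3/pkg/cli"
)

func main() {
    // Build a Helm action configuration from the local kubeconfig (roughly what helmConfig does).
    settings := cli.New()
    cfg := new(action.Configuration)
    if err := cfg.Init(settings.RESTClientGetter(), "dapr-dev", "secret", log.Printf); err != nil {
        log.Fatal(err)
    }

    uninstall := action.NewUninstall(cfg)
    uninstall.Timeout = 60 * time.Second

    // Best effort: log and continue so a missing release does not abort the rest of the cleanup.
    for _, name := range []string{"dapr-dev-redis", "dapr-dev-zipkin"} {
        if _, err := uninstall.Run(name); err != nil {
            log.Printf("skipping %s: %v", name, err)
        }
    }
}
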

View File

@ -14,15 +14,22 @@ limitations under the License.
package kubernetes
import (
"context"
"errors"
"fmt"
"net/http"
"os"
"strings"
"time"
helm "helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/release"
core_v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/helm/pkg/strvals"
"github.com/Masterminds/semver/v3"
"github.com/hashicorp/go-version"
"github.com/dapr/cli/pkg/print"
@ -47,6 +54,8 @@ var crdsFullResources = []string{
"httpendpoints.dapr.io",
}
var versionWithHAScheduler = semver.MustParse("1.15.0-rc.1")
type UpgradeConfig struct {
RuntimeVersion string
DashboardVersion string
@ -56,7 +65,18 @@ type UpgradeConfig struct {
ImageVariant string
}
// UpgradeOptions represents options for the upgrade function.
type UpgradeOptions struct {
WithRetry bool
MaxRetries int
RetryInterval time.Duration
}
// UpgradeOption is a functional option type for configuring upgrade.
type UpgradeOption func(*UpgradeOptions)
func Upgrade(conf UpgradeConfig) error {
helmRepo := utils.GetEnv("DAPR_HELM_REPO_URL", daprHelmRepo)
status, err := GetDaprResourcesStatus()
if err != nil {
return err
@ -70,14 +90,14 @@ func Upgrade(conf UpgradeConfig) error {
return err
}
helmConf, err := helmConfig(status[0].Namespace)
upgradeClient, helmConf, err := newUpgradeClient(status[0].Namespace, conf)
if err != nil {
return err
return fmt.Errorf("unable to create helm client: %w", err)
}
controlPlaneChart, err := daprChart(conf.RuntimeVersion, "dapr", helmConf)
controlPlaneChart, err := getHelmChart(conf.RuntimeVersion, "dapr", helmRepo, helmConf)
if err != nil {
return err
return fmt.Errorf("unable to get helm chart: %w", err)
}
willHaveDashboardInDaprChart, err := IsDashboardIncluded(conf.RuntimeVersion)
@ -99,7 +119,7 @@ func Upgrade(conf UpgradeConfig) error {
if !hasDashboardInDaprChart && willHaveDashboardInDaprChart && dashboardExists {
print.InfoStatusEvent(os.Stdout, "Dashboard being uninstalled prior to Dapr control plane upgrade...")
uninstallClient := helm.NewUninstall(helmConf)
uninstallClient.Timeout = time.Duration(conf.Timeout) * time.Second
uninstallClient.Timeout = time.Duration(conf.Timeout) * time.Second //nolint:gosec
_, err = uninstallClient.Run(dashboardReleaseName)
if err != nil {
@ -109,19 +129,12 @@ func Upgrade(conf UpgradeConfig) error {
var dashboardChart *chart.Chart
if conf.DashboardVersion != "" {
dashboardChart, err = daprChart(conf.DashboardVersion, dashboardReleaseName, helmConf)
dashboardChart, err = getHelmChart(conf.DashboardVersion, dashboardReleaseName, helmRepo, helmConf)
if err != nil {
return err
}
}
upgradeClient := helm.NewUpgrade(helmConf)
upgradeClient.ResetValues = true
upgradeClient.Namespace = status[0].Namespace
upgradeClient.CleanupOnFail = true
upgradeClient.Wait = true
upgradeClient.Timeout = time.Duration(conf.Timeout) * time.Second
print.InfoStatusEvent(os.Stdout, "Starting upgrade...")
mtls, err := IsMTLSEnabled()
@ -151,13 +164,42 @@ func Upgrade(conf UpgradeConfig) error {
return err
}
// Used to signal deletion of the scheduler pods. The channel is only set when downgrading from 1.15 to an earlier
// version, to handle incompatible changes; in all other cases it stays nil.
var downgradeDeletionChan chan error
if !isDowngrade(conf.RuntimeVersion, daprVersion) {
err = applyCRDs(fmt.Sprintf("v%s", conf.RuntimeVersion))
err = applyCRDs("v" + conf.RuntimeVersion)
if err != nil {
return err
return fmt.Errorf("unable to apply CRDs: %w", err)
}
} else {
print.InfoStatusEvent(os.Stdout, "Downgrade detected, skipping CRDs.")
targetVersion, errVersion := semver.NewVersion(conf.RuntimeVersion)
if errVersion != nil {
return fmt.Errorf("unable to parse dapr target version: %w", errVersion)
}
currentVersion, errVersion := semver.NewVersion(daprVersion)
if errVersion != nil {
return fmt.Errorf("unable to parse dapr current version: %w", errVersion)
}
if currentVersion.GreaterThanEqual(versionWithHAScheduler) && targetVersion.LessThan(versionWithHAScheduler) {
downgradeDeletionChan = make(chan error)
// Must delete all scheduler pods from cluster due to incompatible changes in version 1.15 with older versions.
go func() {
// Add an artificial delay to allow helm upgrade to progress and delete the pods only when necessary.
time.Sleep(15 * time.Second)
errDeletion := deleteSchedulerPods(status[0].Namespace, currentVersion, targetVersion)
if errDeletion != nil {
downgradeDeletionChan <- fmt.Errorf("failed to delete scheduler pods: %w", errDeletion)
print.FailureStatusEvent(os.Stderr, "Failed to delete scheduler pods: "+errDeletion.Error())
}
close(downgradeDeletionChan)
}()
}
}
chart, err := GetDaprHelmChartName(helmConf)
@ -165,8 +207,22 @@ func Upgrade(conf UpgradeConfig) error {
return err
}
if _, err = upgradeClient.Run(chart, controlPlaneChart, vals); err != nil {
return err
// Deal with a known race condition when applying both CRD and CR close together. The Helm upgrade fails
// when a CR is applied before its CRD is fully registered. On each retry we need a
// fresh client, since the kube client locally caches the last OpenAPI schema it received from the server.
// See https://github.com/kubernetes/kubectl/issues/1179
_, err = helmUpgrade(upgradeClient, chart, controlPlaneChart, vals, WithRetry(5, 100*time.Millisecond))
if err != nil {
return fmt.Errorf("failure while running upgrade: %w", err)
}
// wait for the deletion of the scheduler pods to finish
if downgradeDeletionChan != nil {
select {
case <-downgradeDeletionChan:
case <-time.After(3 * time.Minute):
return errors.New("timed out waiting for downgrade deletion")
}
}
if dashboardChart != nil {
@ -176,7 +232,7 @@ func Upgrade(conf UpgradeConfig) error {
}
} else {
// We need to install Dashboard since it does not exist yet.
err = install(dashboardReleaseName, conf.DashboardVersion, InitConfiguration{
err = install(dashboardReleaseName, conf.DashboardVersion, helmRepo, InitConfiguration{
DashboardVersion: conf.DashboardVersion,
Namespace: upgradeClient.Namespace,
Wait: upgradeClient.Wait,
@ -191,11 +247,132 @@ func Upgrade(conf UpgradeConfig) error {
return nil
}
func deleteSchedulerPods(namespace string, currentVersion *semver.Version, targetVersion *semver.Version) error {
ctxWithTimeout, cancel := context.WithTimeout(context.Background(), time.Second*30)
defer cancel()
var pods *core_v1.PodList
// wait for at least one pod of the target version to be in the list before deleting the rest
// check the label app.kubernetes.io/version to determine the version of the pod
foundTargetVersion := false
for {
if foundTargetVersion {
break
}
k8sClient, err := Client()
if err != nil {
return err
}
pods, err = k8sClient.CoreV1().Pods(namespace).List(ctxWithTimeout, meta_v1.ListOptions{
LabelSelector: "app=dapr-scheduler-server",
})
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return err
}
if len(pods.Items) == 0 {
return nil
}
for _, pod := range pods.Items {
pv, ok := pod.Labels["app.kubernetes.io/version"]
if ok {
podVersion, err := semver.NewVersion(pv)
if err == nil && podVersion.Equal(targetVersion) {
foundTargetVersion = true
break
}
}
}
time.Sleep(5 * time.Second)
}
if pods == nil {
return errors.New("no scheduler pods found")
}
// get a fresh client to ensure we have the latest state of the cluster
k8sClient, err := Client()
if err != nil {
return err
}
// delete scheduler pods of the current version, i.e. >1.15.0
for _, pod := range pods.Items {
if pv, ok := pod.Labels["app.kubernetes.io/version"]; ok {
podVersion, err := semver.NewVersion(pv)
if err == nil && podVersion.Equal(currentVersion) {
err = k8sClient.CoreV1().Pods(namespace).Delete(ctxWithTimeout, pod.Name, meta_v1.DeleteOptions{})
if err != nil {
return fmt.Errorf("failed to delete pod %s during downgrade: %w", pod.Name, err)
}
}
}
}
return nil
}
// WithRetry enables retry with the specified max retries and retry interval.
func WithRetry(maxRetries int, retryInterval time.Duration) UpgradeOption {
return func(o *UpgradeOptions) {
o.WithRetry = true
o.MaxRetries = maxRetries
o.RetryInterval = retryInterval
}
}
func helmUpgrade(client *helm.Upgrade, name string, chart *chart.Chart, vals map[string]interface{}, options ...UpgradeOption) (*release.Release, error) {
upgradeOptions := &UpgradeOptions{
WithRetry: false,
MaxRetries: 0,
RetryInterval: 0,
}
// Apply functional options.
for _, option := range options {
option(upgradeOptions)
}
var release *release.Release
for attempt := 1; ; attempt++ {
_, err := client.Run(name, chart, vals)
if err == nil {
// operation succeeded, no need to retry.
break
}
if !upgradeOptions.WithRetry || attempt >= upgradeOptions.MaxRetries {
// If not retrying or reached max retries, return the error.
return nil, fmt.Errorf("max retries reached, unable to run command: %w", err)
}
print.PendingStatusEvent(os.Stdout, "Retrying after %s...", upgradeOptions.RetryInterval)
time.Sleep(upgradeOptions.RetryInterval)
// create a totally new helm client, this ensures that we fetch a fresh openapi schema from the server on each attempt.
client, _, err = newUpgradeClient(client.Namespace, UpgradeConfig{
Timeout: uint(client.Timeout), //nolint:gosec
})
if err != nil {
return nil, fmt.Errorf("unable to create helm client: %w", err)
}
}
return release, nil
}
func highAvailabilityEnabled(status []StatusOutput) bool {
for _, s := range status {
if s.Name == "dapr-dashboard" {
continue
}
// Skip the scheduler server because it's in HA mode by default since version 1.15.0
// This will fall back to other dapr services to determine if HA mode is enabled.
if strings.HasPrefix(s.Name, "dapr-scheduler-server") {
continue
}
if s.Replicas > 1 {
return true
}
@ -208,7 +385,7 @@ func applyCRDs(version string) error {
url := fmt.Sprintf("https://raw.githubusercontent.com/dapr/dapr/%s/charts/dapr/crds/%s.yaml", version, crd)
resp, _ := http.Get(url) //nolint:gosec
if resp != nil && resp.StatusCode == 200 {
if resp != nil && resp.StatusCode == http.StatusOK {
defer resp.Body.Close()
_, err := utils.RunCmdAndWait("kubectl", "apply", "-f", url)
@ -227,18 +404,18 @@ func upgradeChartValues(ca, issuerCert, issuerKey string, haMode, mtls bool, con
if err != nil {
return nil, err
}
globalVals = append(globalVals, fmt.Sprintf("global.tag=%s", utils.GetVariantVersion(conf.RuntimeVersion, conf.ImageVariant)))
globalVals = append(globalVals, "global.tag="+utils.GetVariantVersion(conf.RuntimeVersion, conf.ImageVariant))
if mtls && ca != "" && issuerCert != "" && issuerKey != "" {
globalVals = append(globalVals, fmt.Sprintf("dapr_sentry.tls.root.certPEM=%s", ca),
fmt.Sprintf("dapr_sentry.tls.issuer.certPEM=%s", issuerCert),
fmt.Sprintf("dapr_sentry.tls.issuer.keyPEM=%s", issuerKey),
globalVals = append(globalVals, "dapr_sentry.tls.root.certPEM="+ca,
"dapr_sentry.tls.issuer.certPEM="+issuerCert,
"dapr_sentry.tls.issuer.keyPEM="+issuerKey,
)
} else {
globalVals = append(globalVals, "global.mtls.enabled=false")
}
if len(conf.ImageRegistryURI) != 0 {
globalVals = append(globalVals, fmt.Sprintf("global.registry=%s", conf.ImageRegistryURI))
globalVals = append(globalVals, "global.registry="+conf.ImageRegistryURI)
}
if haMode {
globalVals = append(globalVals, "global.ha.enabled=true")
@ -263,3 +440,19 @@ func isDowngrade(targetVersion, existingVersion string) bool {
}
return target.LessThan(existing)
}
func newUpgradeClient(namespace string, cfg UpgradeConfig) (*helm.Upgrade, *helm.Configuration, error) {
helmCfg, err := helmConfig(namespace)
if err != nil {
return nil, nil, err
}
client := helm.NewUpgrade(helmCfg)
client.ResetValues = true
client.Namespace = namespace
client.CleanupOnFail = true
client.Wait = true
client.Timeout = time.Duration(cfg.Timeout) * time.Second //nolint:gosec
return client, helmCfg, nil
}
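
Aside: the 1.15 scheduler gate above reduces to two semver comparisons. A small sketch with github.com/Masterminds/semver/v3 (the same library and methods the diff uses); the version strings are illustrative.

package main

import (
    "fmt"

    "github.com/Masterminds/semver/v3"
)

var versionWithHAScheduler = semver.MustParse("1.15.0-rc.1")

func main() {
    current := semver.MustParse("1.15.2") // installed control plane version (illustrative)
    target := semver.MustParse("1.14.4")  // version being rolled back to (illustrative)

    // Downgrade check: the target is strictly older than what is installed.
    isDowngrade := target.LessThan(current)

    // Scheduler pods only need special handling when crossing the 1.15 HA-scheduler boundary.
    crossesSchedulerBoundary := current.GreaterThanEqual(versionWithHAScheduler) &&
        target.LessThan(versionWithHAScheduler)

    fmt.Println(isDowngrade, crossesSchedulerBoundary) // true true
}
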

View File

@ -31,6 +31,38 @@ func TestHAMode(t *testing.T) {
assert.True(t, r)
})
t.Run("ha mode with scheduler and other services", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server",
Replicas: 3,
},
{
Name: "dapr-placement-server",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.True(t, r)
})
t.Run("non-ha mode with only scheduler image variant", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server-mariner",
Replicas: 3,
},
{
Name: "dapr-placement-server-mariner",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.True(t, r)
})
t.Run("non-ha mode", func(t *testing.T) {
s := []StatusOutput{
{
@ -41,6 +73,46 @@ func TestHAMode(t *testing.T) {
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
t.Run("non-ha mode with scheduler and other services", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server",
Replicas: 3,
},
{
Name: "dapr-placement-server",
Replicas: 1,
},
}
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
t.Run("non-ha mode with only scheduler", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
t.Run("non-ha mode with only scheduler image variant", func(t *testing.T) {
s := []StatusOutput{
{
Name: "dapr-scheduler-server-mariner",
Replicas: 3,
},
}
r := highAvailabilityEnabled(s)
assert.False(t, r)
})
}
func TestMTLSChartValues(t *testing.T) {

View File

@ -69,7 +69,7 @@ func tryGetRunDataLock() (*lockfile.Lockfile, error) {
return nil, err
}
for i := 0; i < 10; i++ {
for range 10 {
err = lockFile.TryLock()
// Error handling is essential, as we only try to get the lock.

View File

@ -14,10 +14,12 @@ limitations under the License.
package runexec
import (
"fmt"
"errors"
"io"
"os"
"os/exec"
"github.com/dapr/cli/pkg/runfileconfig"
"github.com/dapr/cli/pkg/standalone"
)
@ -84,7 +86,7 @@ func (c *CmdProcess) WithOutputWriter(w io.Writer) {
// SetStdout should be called after WithOutputWriter.
func (c *CmdProcess) SetStdout() error {
if c.Command == nil {
return fmt.Errorf("command is nil")
return errors.New("command is nil")
}
c.Command.Stdout = c.OutputWriter
return nil
@ -97,7 +99,7 @@ func (c *CmdProcess) WithErrorWriter(w io.Writer) {
// SetStdErr should be called after WithErrorWriter.
func (c *CmdProcess) SetStderr() error {
if c.Command == nil {
return fmt.Errorf("command is nil")
return errors.New("command is nil")
}
c.Command.Stderr = c.ErrorWriter
return nil
@ -106,7 +108,7 @@ func (c *CmdProcess) SetStderr() error {
func NewOutput(config *standalone.RunConfig) (*RunOutput, error) {
// set default values from RunConfig struct's tag.
config.SetDefaultFromSchema()
//nolint
err := config.Validate()
if err != nil {
return nil, err
@ -129,3 +131,26 @@ func NewOutput(config *standalone.RunConfig) (*RunOutput, error) {
DaprGRPCPort: config.GRPCPort,
}, nil
}
// GetAppDaprdWriter returns the writer used for log lines shared by daprd and the app: output always goes to stdout and, when file logging is enabled, to the corresponding log files as well.
func GetAppDaprdWriter(app runfileconfig.App, isAppCommandEmpty bool) io.Writer {
var appDaprdWriter io.Writer
if isAppCommandEmpty {
if app.DaprdLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(os.Stdout, app.DaprdLogWriteCloser)
} else {
appDaprdWriter = os.Stdout
}
} else {
if app.AppLogDestination != standalone.Console && app.DaprdLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(app.AppLogWriteCloser, app.DaprdLogWriteCloser, os.Stdout)
} else if app.AppLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(app.AppLogWriteCloser, os.Stdout)
} else if app.DaprdLogDestination != standalone.Console {
appDaprdWriter = io.MultiWriter(app.DaprdLogWriteCloser, os.Stdout)
} else {
appDaprdWriter = os.Stdout
}
}
return appDaprdWriter
}
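
Aside: the fan-out above is plain io.MultiWriter — every write is duplicated to each destination, which is how the same line ends up on the console and in the log file. A tiny sketch; the buffer stands in for the *os.File write-closer the CLI actually uses.

package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
)

func main() {
    var logFile bytes.Buffer // stands in for app.DaprdLogWriteCloser (a real *os.File in the CLI)

    // Every write goes to both stdout and the "log file".
    w := io.MultiWriter(os.Stdout, &logFile)
    fmt.Fprintln(w, "== APP == listening on port 8080")

    fmt.Printf("captured %d bytes in the file writer\n", logFile.Len())
}
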

View File

@ -20,6 +20,8 @@ import (
"strings"
"testing"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/assert"
"github.com/dapr/cli/pkg/standalone"
@ -80,10 +82,10 @@ func setupRun(t *testing.T) {
componentsDir := standalone.GetDaprComponentsPath(myDaprPath)
configFile := standalone.GetDaprConfigPath(myDaprPath)
err = os.MkdirAll(componentsDir, 0o700)
assert.Equal(t, nil, err, "Unable to setup components dir before running test")
assert.NoError(t, err, "Unable to setup components dir before running test")
file, err := os.Create(configFile)
file.Close()
assert.Equal(t, nil, err, "Unable to create config file before running test")
assert.NoError(t, err, "Unable to create config file before running test")
}
func tearDownRun(t *testing.T) {
@ -94,9 +96,9 @@ func tearDownRun(t *testing.T) {
configFile := standalone.GetDaprConfigPath(myDaprPath)
err = os.RemoveAll(componentsDir)
assert.Equal(t, nil, err, "Unable to delete default components dir after running test")
assert.NoError(t, err, "Unable to delete default components dir after running test")
err = os.Remove(configFile)
assert.Equal(t, nil, err, "Unable to delete default config file after running test")
assert.NoError(t, err, "Unable to delete default config file after running test")
}
func assertCommonArgs(t *testing.T, basicConfig *standalone.RunConfig, output *RunOutput) {
@ -120,9 +122,9 @@ func assertCommonArgs(t *testing.T, basicConfig *standalone.RunConfig, output *R
assertArgumentEqual(t, "components-path", standalone.GetDaprComponentsPath(daprPath), output.DaprCMD.Args)
assertArgumentEqual(t, "app-ssl", "", output.DaprCMD.Args)
assertArgumentEqual(t, "metrics-port", "9001", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-http-max-request-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "max-body-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-internal-grpc-port", "5050", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-http-read-buffer-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "read-buffer-size", "-1", output.DaprCMD.Args)
assertArgumentEqual(t, "dapr-listen-addresses", "127.0.0.1", output.DaprCMD.Args)
}
@ -180,8 +182,8 @@ func TestRun(t *testing.T) {
AppProtocol: "http",
ComponentsPath: componentsDir,
AppSSL: true,
MaxRequestBodySize: -1,
HTTPReadBufferSize: -1,
MaxRequestBodySize: "-1",
HTTPReadBufferSize: "-1",
EnableAPILogging: true,
APIListenAddresses: "127.0.0.1",
}
@ -294,21 +296,21 @@ func TestRun(t *testing.T) {
basicConfig.ProfilePort = 0
basicConfig.EnableProfiling = true
basicConfig.MaxConcurrency = 0
basicConfig.MaxRequestBodySize = 0
basicConfig.HTTPReadBufferSize = 0
basicConfig.MaxRequestBodySize = ""
basicConfig.HTTPReadBufferSize = ""
basicConfig.AppProtocol = ""
basicConfig.SetDefaultFromSchema()
assert.Equal(t, -1, basicConfig.AppPort)
assert.True(t, basicConfig.HTTPPort == -1)
assert.True(t, basicConfig.GRPCPort == -1)
assert.True(t, basicConfig.MetricsPort == -1)
assert.True(t, basicConfig.ProfilePort == -1)
assert.Equal(t, -1, basicConfig.HTTPPort)
assert.Equal(t, -1, basicConfig.GRPCPort)
assert.Equal(t, -1, basicConfig.MetricsPort)
assert.Equal(t, -1, basicConfig.ProfilePort)
assert.True(t, basicConfig.EnableProfiling)
assert.Equal(t, -1, basicConfig.MaxConcurrency)
assert.Equal(t, -1, basicConfig.MaxRequestBodySize)
assert.Equal(t, -1, basicConfig.HTTPReadBufferSize)
assert.Equal(t, "4Mi", basicConfig.MaxRequestBodySize)
assert.Equal(t, "4Ki", basicConfig.HTTPReadBufferSize)
assert.Equal(t, "http", basicConfig.AppProtocol)
// Test after Validate gets called.
@ -316,14 +318,52 @@ func TestRun(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, 0, basicConfig.AppPort)
assert.True(t, basicConfig.HTTPPort > 0)
assert.True(t, basicConfig.GRPCPort > 0)
assert.True(t, basicConfig.MetricsPort > 0)
assert.True(t, basicConfig.ProfilePort > 0)
assert.Positive(t, basicConfig.HTTPPort)
assert.Positive(t, basicConfig.GRPCPort)
assert.Positive(t, basicConfig.MetricsPort)
assert.Positive(t, basicConfig.ProfilePort)
assert.True(t, basicConfig.EnableProfiling)
assert.Equal(t, -1, basicConfig.MaxConcurrency)
assert.Equal(t, -1, basicConfig.MaxRequestBodySize)
assert.Equal(t, -1, basicConfig.HTTPReadBufferSize)
assert.Equal(t, "4Mi", basicConfig.MaxRequestBodySize)
assert.Equal(t, "4Ki", basicConfig.HTTPReadBufferSize)
assert.Equal(t, "http", basicConfig.AppProtocol)
})
t.Run("run with max body size without units", func(t *testing.T) {
basicConfig.MaxRequestBodySize = "4000000"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "max-body-size", "4M", output.DaprCMD.Args)
})
t.Run("run with max body size with units", func(t *testing.T) {
basicConfig.MaxRequestBodySize = "4Mi"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "max-body-size", "4Mi", output.DaprCMD.Args)
basicConfig.MaxRequestBodySize = "5M"
output, err = NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "max-body-size", "5M", output.DaprCMD.Args)
})
t.Run("run with read buffer size set without units", func(t *testing.T) {
basicConfig.HTTPReadBufferSize = "16001"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "read-buffer-size", "16001", output.DaprCMD.Args)
})
t.Run("run with read buffer size set with units", func(t *testing.T) {
basicConfig.HTTPReadBufferSize = "4Ki"
output, err := NewOutput(basicConfig)
require.NoError(t, err)
assertArgumentEqual(t, "read-buffer-size", "4Ki", output.DaprCMD.Args)
})
}

View File

@ -27,6 +27,7 @@ const (
daprdLogFileNamePrefix = "daprd"
logFileExtension = ".log"
logsDir = "logs"
deployDir = "deploy"
)
// RunFileConfig represents the complete configuration options for the run file.
@ -38,14 +39,22 @@ type RunFileConfig struct {
Name string `yaml:"name,omitempty"`
}
// ContainerConfiguration represents the application container configuration parameters.
type ContainerConfiguration struct {
ContainerImage string `yaml:"containerImage"`
ContainerImagePullPolicy string `yaml:"containerImagePullPolicy"`
CreateService bool `yaml:"createService"`
}
// App represents the configuration options for the apps in the run file.
type App struct {
standalone.RunConfig `yaml:",inline"`
AppDirPath string `yaml:"appDirPath"`
AppLogFileName string
DaprdLogFileName string
AppLogWriteCloser io.WriteCloser
DaprdLogWriteCloser io.WriteCloser
standalone.RunConfig `yaml:",inline"`
ContainerConfiguration `yaml:",inline"`
AppDirPath string `yaml:"appDirPath"`
AppLogFileName string
DaprdLogFileName string
AppLogWriteCloser io.WriteCloser
DaprdLogWriteCloser io.WriteCloser
}
// Common represents the configuration options for the common section in the run file.
@ -59,6 +68,12 @@ func (a *App) GetLogsDir() string {
return logsPath
}
func (a *App) GetDeployDir() string {
deployDirPath := filepath.Join(a.AppDirPath, standalone.DefaultDaprDirName, deployDir)
os.MkdirAll(deployDirPath, 0o755)
return deployDirPath
}
// CreateAppLogFile creates the log file, sets internal file handle
// and returns error if any.
func (a *App) CreateAppLogFile() error {
@ -104,14 +119,32 @@ func (a *App) createLogFile(logType string) (*os.File, error) {
func (a *App) CloseAppLogFile() error {
if a.AppLogWriteCloser != nil {
return a.AppLogWriteCloser.Close()
err := a.AppLogWriteCloser.Close()
a.AppLogWriteCloser = nil
return err
}
return nil
}
func (a *App) CloseDaprdLogFile() error {
if a.DaprdLogWriteCloser != nil {
return a.DaprdLogWriteCloser.Close()
err := a.DaprdLogWriteCloser.Close()
a.DaprdLogWriteCloser = nil
return err
}
return nil
}
// GetLogWriter returns the log writer based on the log destination.
func GetLogWriter(fileLogWriterCloser io.WriteCloser, logDestination standalone.LogDestType) io.Writer {
var logWriter io.Writer
switch logDestination {
case standalone.Console:
logWriter = os.Stdout
case standalone.File:
logWriter = fileLogWriterCloser
case standalone.FileAndConsole:
logWriter = io.MultiWriter(os.Stdout, fileLogWriterCloser)
}
return logWriter
}

View File

@ -26,6 +26,8 @@ import (
"gopkg.in/yaml.v2"
)
var imagePullPolicyValuesAllowed = []string{"Always", "Never", "IfNotPresent"}
// Parse the provided run file into a RunFileConfig struct.
func (a *RunFileConfig) parseAppsConfig(runFilePath string) error {
var err error
@ -70,7 +72,7 @@ func (a *RunFileConfig) validateRunConfig(runFilePath string) error {
a.Common.ResourcesPaths = append(a.Common.ResourcesPaths, a.Common.ResourcesPath)
}
for i := 0; i < len(a.Apps); i++ {
for i := range len(a.Apps) {
if a.Apps[i].AppDirPath == "" {
return errors.New("required field 'appDirPath' not found in the provided app config file")
}
@ -97,6 +99,15 @@ func (a *RunFileConfig) validateRunConfig(runFilePath string) error {
if len(strings.TrimSpace(a.Apps[i].ResourcesPath)) > 0 {
a.Apps[i].ResourcesPaths = append(a.Apps[i].ResourcesPaths, a.Apps[i].ResourcesPath)
}
// Check containerImagePullPolicy is valid.
if a.Apps[i].ContainerImagePullPolicy != "" {
if !utils.Contains(imagePullPolicyValuesAllowed, a.Apps[i].ContainerImagePullPolicy) {
return fmt.Errorf("invalid containerImagePullPolicy: %s, allowed values: %s", a.Apps[i].ContainerImagePullPolicy, strings.Join(imagePullPolicyValuesAllowed, ", "))
}
} else {
a.Apps[i].ContainerImagePullPolicy = "Always"
}
}
return nil
}
@ -212,9 +223,6 @@ func (a *RunFileConfig) resolvePathToAbsAndValidate(baseDir string, paths ...*st
return err
}
absPath := utils.GetAbsPath(baseDir, *path)
if err != nil {
return err
}
*path = absPath
if err = utils.ValidateFilePath(*path); err != nil {
return err

View File

@ -14,6 +14,7 @@ limitations under the License.
package runfileconfig
import (
"fmt"
"os"
"path/filepath"
"strings"
@ -25,13 +26,16 @@ import (
)
var (
validRunFilePath = filepath.Join("..", "testdata", "runfileconfig", "test_run_config.yaml")
invalidRunFilePath1 = filepath.Join("..", "testdata", "runfileconfig", "test_run_config_invalid_path.yaml")
invalidRunFilePath2 = filepath.Join("..", "testdata", "runfileconfig", "test_run_config_empty_app_dir.yaml")
runFileForPrecedenceRule = filepath.Join("..", "testdata", "runfileconfig", "test_run_config_precedence_rule.yaml")
runFileForPrecedenceRuleDaprDir = filepath.Join("..", "testdata", "runfileconfig", "test_run_config_precedence_rule_dapr_dir.yaml")
runFileForLogDestination = filepath.Join("..", "testdata", "runfileconfig", "test_run_config_log_destination.yaml")
runFileForMultiResourcePaths = filepath.Join("..", "testdata", "runfileconfig", "test_run_config_multiple_resources_paths.yaml")
validRunFilePath = filepath.Join(".", "testdata", "test_run_config.yaml")
invalidRunFilePath1 = filepath.Join(".", "testdata", "test_run_config_invalid_path.yaml")
invalidRunFilePath2 = filepath.Join(".", "testdata", "test_run_config_empty_app_dir.yaml")
runFileForPrecedenceRule = filepath.Join(".", "testdata", "test_run_config_precedence_rule.yaml")
runFileForPrecedenceRuleDaprDir = filepath.Join(".", "testdata", "test_run_config_precedence_rule_dapr_dir.yaml")
runFileForLogDestination = filepath.Join(".", "testdata", "test_run_config_log_destination.yaml")
runFileForMultiResourcePaths = filepath.Join(".", "testdata", "test_run_config_multiple_resources_paths.yaml")
runFileForContainerImagePullPolicy = filepath.Join(".", "testdata", "test_run_config_container_image_pull_policy.yaml")
runFileForContainerImagePullPolicyInvalid = filepath.Join(".", "testdata", "test_run_config_container_image_pull_policy_invalid.yaml")
)
func TestRunConfigFile(t *testing.T) {
@ -41,7 +45,7 @@ func TestRunConfigFile(t *testing.T) {
err := appsRunConfig.parseAppsConfig(validRunFilePath)
assert.NoError(t, err)
assert.Equal(t, 2, len(appsRunConfig.Apps))
assert.Len(t, appsRunConfig.Apps, 2)
assert.Equal(t, 1, appsRunConfig.Version)
assert.NotEmpty(t, appsRunConfig.Common.ResourcesPath)
@ -60,7 +64,7 @@ func TestRunConfigFile(t *testing.T) {
apps, err := config.GetApps(validRunFilePath)
assert.NoError(t, err)
assert.Equal(t, 2, len(apps))
assert.Len(t, apps, 2)
assert.Equal(t, "webapp", apps[0].AppID)
assert.Equal(t, "backend", apps[1].AppID)
assert.Equal(t, "HTTP", apps[0].AppProtocol)
@ -86,8 +90,8 @@ func TestRunConfigFile(t *testing.T) {
assert.Equal(t, filepath.Join(apps[1].AppDirPath, ".dapr", "resources"), apps[1].ResourcesPaths[0])
// test merged envs from common and app sections.
assert.Equal(t, 2, len(apps[0].Env))
assert.Equal(t, 2, len(apps[1].Env))
assert.Len(t, apps[0].Env, 2)
assert.Len(t, apps[1].Env, 2)
assert.Contains(t, apps[0].Env, "DEBUG")
assert.Contains(t, apps[0].Env, "tty")
assert.Contains(t, apps[1].Env, "DEBUG")
@ -229,7 +233,7 @@ func TestRunConfigFile(t *testing.T) {
config := RunFileConfig{}
apps, err := config.GetApps(runFileForLogDestination)
assert.NoError(t, err)
assert.Equal(t, 6, len(apps))
assert.Len(t, apps, 6)
assert.Equal(t, "file", apps[0].DaprdLogDestination.String())
assert.Equal(t, "fileAndConsole", apps[0].AppLogDestination.String())
@ -251,6 +255,51 @@ func TestRunConfigFile(t *testing.T) {
})
}
func TestContainerImagePullPolicy(t *testing.T) {
testcases := []struct {
name string
runFile string
expectedPullPolicies []string
expectedBadPolicyValue string
expectedErr bool
}{
{
name: "default value is Always",
runFile: validRunFilePath,
expectedPullPolicies: []string{"Always", "Always"},
expectedErr: false,
},
{
name: "custom value is respected",
runFile: runFileForContainerImagePullPolicy,
expectedPullPolicies: []string{"IfNotPresent", "Always"},
expectedErr: false,
},
{
name: "invalid value is rejected",
runFile: runFileForContainerImagePullPolicyInvalid,
expectedPullPolicies: []string{"Always", "Always"},
expectedBadPolicyValue: "Invalid",
expectedErr: true,
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
config := RunFileConfig{}
config.parseAppsConfig(tc.runFile)
err := config.validateRunConfig(tc.runFile)
if tc.expectedErr {
assert.Error(t, err)
assert.Contains(t, err.Error(), fmt.Sprintf("invalid containerImagePullPolicy: %s, allowed values: Always, Never, IfNotPresent", tc.expectedBadPolicyValue))
return
}
assert.Equal(t, tc.expectedPullPolicies[0], config.Apps[0].ContainerImagePullPolicy)
assert.Equal(t, tc.expectedPullPolicies[1], config.Apps[1].ContainerImagePullPolicy)
})
}
}
func TestMultiResourcePathsResolution(t *testing.T) {
config := RunFileConfig{}
@ -297,7 +346,7 @@ func TestMultiResourcePathsResolution(t *testing.T) {
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
assert.Equal(t, tc.expectedNoOfResources, len(config.Apps[tc.appIndex].ResourcesPaths))
assert.Len(t, config.Apps[tc.appIndex].ResourcesPaths, tc.expectedNoOfResources)
var rsrcFound bool
for _, resourcePath := range config.Apps[tc.appIndex].ResourcesPaths {
if rsrcFound = strings.Contains(resourcePath, tc.expectedResourcesPathsContains); rsrcFound {

View File

@ -0,0 +1,24 @@
version: 1
common:
resourcesPath: ./app/resources
appProtocol: HTTP
appHealthProbeTimeout: 10
env:
DEBUG: false
tty: sts
apps:
- appDirPath: ./webapp/
resourcesPath: ./resources
configFilePath: ./config.yaml
appPort: 8080
appHealthProbeTimeout: 1
containerImagePullPolicy: IfNotPresent
containerImage: ghcr.io/dapr/dapr-workflows-python-sdk:latest
- appID: backend
appDirPath: ./backend/
appProtocol: GRPC
appPort: 3000
unixDomainSocket: /tmp/test-socket
env:
DEBUG: true
containerImage: ghcr.io/dapr/dapr-workflows-csharp-sdk:latest

View File

@ -0,0 +1,24 @@
version: 1
common:
resourcesPath: ./app/resources
appProtocol: HTTP
appHealthProbeTimeout: 10
env:
DEBUG: false
tty: sts
apps:
- appDirPath: ./webapp/
resourcesPath: ./resources
configFilePath: ./config.yaml
appPort: 8080
appHealthProbeTimeout: 1
containerImagePullPolicy: Invalid
containerImage: ghcr.io/dapr/dapr-workflows-python-sdk:latest
- appID: backend
appDirPath: ./backend/
appProtocol: GRPC
appPort: 3000
unixDomainSocket: /tmp/test-socket
env:
DEBUG: true
containerImage: ghcr.io/dapr/dapr-workflows-csharp-sdk:latest

View File

@ -55,10 +55,10 @@ func isStringNilOrEmpty(val *string) bool {
return val == nil || strings.TrimSpace(*val) == ""
}
func (b *bundleDetails) getPlacementImageName() string {
func (b *bundleDetails) getDaprImageName() string {
return *b.DaprImageName
}
func (b *bundleDetails) getPlacementImageFileName() string {
func (b *bundleDetails) getDaprImageFileName() string {
return *b.DaprImageFileName
}

View File

@ -44,8 +44,8 @@ func TestParseDetails(t *testing.T) {
assert.Equal(t, "0.10.0", *bd.DashboardVersion, "expected versions to match")
assert.Equal(t, "dist", *bd.BinarySubDir, "expected value to match")
assert.Equal(t, "docker", *bd.ImageSubDir, "expected value to match")
assert.Equal(t, "daprio/dapr:1.7.2", bd.getPlacementImageName(), "expected value to match")
assert.Equal(t, "daprio-dapr-1.7.2.tar.gz", bd.getPlacementImageFileName(), "expected value to match")
assert.Equal(t, "daprio/dapr:1.7.2", bd.getDaprImageName(), "expected value to match")
assert.Equal(t, "daprio-dapr-1.7.2.tar.gz", bd.getDaprImageFileName(), "expected value to match")
}
func TestParseDetailsMissingDetails(t *testing.T) {

View File

@ -25,8 +25,10 @@ const (
DefaultConfigFileName = "config.yaml"
DefaultResourcesDirName = "resources"
defaultDaprBinDirName = "bin"
defaultComponentsDirName = "components"
defaultDaprBinDirName = "bin"
defaultComponentsDirName = "components"
defaultSchedulerDirName = "scheduler"
defaultSchedulerDataDirName = "data"
)
// GetDaprRuntimePath returns the dapr runtime installation path.

View File

@ -85,7 +85,6 @@ func confirmContainerIsRunningOrExists(containerName string, isRunning bool, run
// If 'docker ps' failed due to some reason.
if err != nil {
//nolint
return false, fmt.Errorf("unable to confirm whether %s is running or exists. error\n%v", containerName, err.Error())
}
// 'docker ps' worked fine, but the response did not have the container name.
@ -100,7 +99,6 @@ func confirmContainerIsRunningOrExists(containerName string, isRunning bool, run
}
func isContainerRunError(err error) bool {
//nolint
if exitError, ok := err.(*exec.ExitError); ok {
exitCode := exitError.ExitCode()
return exitCode == 125
@ -109,7 +107,6 @@ func isContainerRunError(err error) bool {
}
func parseContainerRuntimeError(component string, err error) error {
//nolint
if exitError, ok := err.(*exec.ExitError); ok {
exitCode := exitError.ExitCode()
if exitCode == 125 { // see https://github.com/moby/moby/pull/14012

View File

@ -26,8 +26,8 @@ func TestDashboardRun(t *testing.T) {
assert.NoError(t, err)
assert.Contains(t, cmd.Args[0], "dashboard")
assert.Equal(t, cmd.Args[1], "--port")
assert.Equal(t, cmd.Args[2], "9090")
assert.Equal(t, "--port", cmd.Args[1])
assert.Equal(t, "9090", cmd.Args[2])
})
t.Run("start dashboard on random free port", func(t *testing.T) {
@ -35,7 +35,7 @@ func TestDashboardRun(t *testing.T) {
assert.NoError(t, err)
assert.Contains(t, cmd.Args[0], "dashboard")
assert.Equal(t, cmd.Args[1], "--port")
assert.NotEqual(t, cmd.Args[2], "0")
assert.Equal(t, "--port", cmd.Args[1])
assert.NotEqual(t, "0", cmd.Args[2])
})
}

View File

@ -64,7 +64,7 @@ func (s *Standalone) Invoke(appID, method string, data []byte, verb string, path
}
func makeEndpoint(lo ListOutput, method string) string {
return fmt.Sprintf("http://127.0.0.1:%s/v%s/invoke/%s/method/%s", fmt.Sprintf("%v", lo.HTTPPort), api.RuntimeAPIVersion, lo.AppID, method)
return fmt.Sprintf("http://127.0.0.1:%d/v%s/invoke/%s/method/%s", lo.HTTPPort, api.RuntimeAPIVersion, lo.AppID, method)
}
func handleResponse(response *http.Response) (string, error) {

View File

@ -44,6 +44,8 @@ type ListOutput struct {
MaxRequestBodySize int `csv:"-" json:"maxRequestBodySize" yaml:"maxRequestBodySize"` // Additional field, not displayed in table.
HTTPReadBufferSize int `csv:"-" json:"httpReadBufferSize" yaml:"httpReadBufferSize"` // Additional field, not displayed in table.
RunTemplatePath string `csv:"RUN_TEMPLATE_PATH" json:"runTemplatePath" yaml:"runTemplatePath"`
AppLogPath string `csv:"APP_LOG_PATH" json:"appLogPath" yaml:"appLogPath"`
DaprDLogPath string `csv:"DAPRD_LOG_PATH" json:"daprdLogPath" yaml:"daprdLogPath"`
RunTemplateName string `json:"runTemplateName" yaml:"runTemplateName"` // specifically omitted in csv output.
}
@ -64,7 +66,7 @@ func List() ([]ListOutput, error) {
for _, proc := range processes {
executable := strings.ToLower(proc.Executable())
if (executable == "daprd") || (executable == "daprd.exe") {
procDetails, err := process.NewProcess(int32(proc.Pid()))
procDetails, err := process.NewProcess(int32(proc.Pid())) //nolint:gosec
if err != nil {
continue
}
@ -103,15 +105,17 @@ func List() ([]ListOutput, error) {
enableMetrics = true
}
maxRequestBodySize := getIntArg(argumentsMap, "--dapr-http-max-request-size", runtime.DefaultMaxRequestBodySize)
maxRequestBodySize := getIntArg(argumentsMap, "max-body-size", runtime.DefaultMaxRequestBodySize)
httpReadBufferSize := getIntArg(argumentsMap, "--dapr-http-read-buffer-size", runtime.DefaultReadBufferSize)
httpReadBufferSize := getIntArg(argumentsMap, "read-buffer-size", runtime.DefaultReadBufferSize)
appID := argumentsMap["--app-id"]
appCmd := ""
appPIDString := ""
cliPIDString := ""
runTemplatePath := ""
appLogPath := ""
daprdLogPath := ""
runTemplateName := ""
socket := argumentsMap["--unix-domain-socket"]
appMetadata, err := metadata.Get(httpPort, appID, socket)
@ -121,6 +125,8 @@ func List() ([]ListOutput, error) {
cliPIDString = appMetadata.Extended["cliPID"]
runTemplatePath = appMetadata.Extended["runTemplatePath"]
runTemplateName = appMetadata.Extended["runTemplateName"]
appLogPath = appMetadata.Extended["appLogPath"]
daprdLogPath = appMetadata.Extended["daprdLogPath"]
}
appPID, err := strconv.Atoi(appPIDString)
@ -159,6 +165,8 @@ func List() ([]ListOutput, error) {
HTTPReadBufferSize: httpReadBufferSize,
RunTemplatePath: runTemplatePath,
RunTemplateName: runTemplateName,
AppLogPath: appLogPath,
DaprDLogPath: daprdLogPath,
}
// filter only dashboard instance.

View File

@ -62,7 +62,7 @@ func (s *Standalone) Publish(publishAppID, pubsubName, topic string, payload []b
},
}
} else {
url = fmt.Sprintf("http://localhost:%s/v%s/publish/%s/%s%s", fmt.Sprintf("%v", instance.HTTPPort), api.RuntimeAPIVersion, pubsubName, topic, queryParams)
url = fmt.Sprintf("http://localhost:%d/v%s/publish/%s/%s%s", instance.HTTPPort, api.RuntimeAPIVersion, pubsubName, topic, queryParams)
}
contentType := "application/json"
@ -94,7 +94,7 @@ func (s *Standalone) Publish(publishAppID, pubsubName, topic string, payload []b
}
func getDaprInstance(list []ListOutput, publishAppID string) (ListOutput, error) {
for i := 0; i < len(list); i++ {
for i := range list {
if list[i].AppID == publishAppID {
return list[i], nil
}
@ -112,7 +112,7 @@ func getQueryParams(metadata map[string]interface{}) string {
}
// Prefix with "?" and remove the last "&".
if queryParams != "" {
queryParams = fmt.Sprintf("?%s", queryParams[:len(queryParams)-1])
queryParams = "?" + queryParams[:len(queryParams)-1]
}
return queryParams
}
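
Aside: as a design note, the same query string could be built with net/url, which handles escaping and the leading "?" for free. A hedged sketch with illustrative metadata keys (not necessarily the exact keys the publish API expects):

package main

import (
    "fmt"
    "net/url"
)

func main() {
    md := map[string]interface{}{"ttlInSeconds": 10, "rawPayload": "true"} // illustrative metadata
    q := url.Values{}
    for k, v := range md {
        q.Set(k, fmt.Sprintf("%v", v))
    }
    // Encode sorts keys and escapes values; prepend "?" only when something was set.
    if s := q.Encode(); s != "" {
        fmt.Println("?" + s) // ?rawPayload=true&ttlInSeconds=10
    }
}
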

View File

@ -212,7 +212,7 @@ func TestGetQueryParams(t *testing.T) {
queryParams := getQueryParams(tc.metadata)
if queryParams != "" {
assert.True(t, queryParams[0] == '?', "expected query params to start with '?'")
assert.True(t, strings.HasPrefix(queryParams, "?"), "expected query params to start with '?'")
queryParams = queryParams[1:]
}

View File

@ -14,6 +14,7 @@ limitations under the License.
package standalone
import (
"context"
"fmt"
"net"
"os"
@ -23,12 +24,14 @@ import (
"strconv"
"strings"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/Pallinder/sillyname-go"
"github.com/phayes/freeport"
"gopkg.in/yaml.v2"
"github.com/dapr/cli/pkg/print"
"github.com/dapr/dapr/pkg/components"
localloader "github.com/dapr/dapr/pkg/components/loader"
)
type LogDestType string
@ -47,43 +50,51 @@ const (
// RunConfig represents the application configuration parameters.
type RunConfig struct {
SharedRunConfig `yaml:",inline"`
AppID string `env:"APP_ID" arg:"app-id" yaml:"appID"`
AppID string `env:"APP_ID" arg:"app-id" annotation:"dapr.io/app-id" yaml:"appID"`
AppChannelAddress string `env:"APP_CHANNEL_ADDRESS" arg:"app-channel-address" ifneq:"127.0.0.1" yaml:"appChannelAddress"`
AppPort int `env:"APP_PORT" arg:"app-port" yaml:"appPort" default:"-1"`
AppPort int `env:"APP_PORT" arg:"app-port" annotation:"dapr.io/app-port" yaml:"appPort" default:"-1"`
HTTPPort int `env:"DAPR_HTTP_PORT" arg:"dapr-http-port" yaml:"daprHTTPPort" default:"-1"`
GRPCPort int `env:"DAPR_GRPC_PORT" arg:"dapr-grpc-port" yaml:"daprGRPCPort" default:"-1"`
ProfilePort int `arg:"profile-port" yaml:"profilePort" default:"-1"`
Command []string `yaml:"command"`
MetricsPort int `env:"DAPR_METRICS_PORT" arg:"metrics-port" yaml:"metricsPort" default:"-1"`
UnixDomainSocket string `arg:"unix-domain-socket" yaml:"unixDomainSocket"`
MetricsPort int `env:"DAPR_METRICS_PORT" arg:"metrics-port" annotation:"dapr.io/metrics-port" yaml:"metricsPort" default:"-1"`
UnixDomainSocket string `arg:"unix-domain-socket" annotation:"dapr.io/unix-domain-socket-path" yaml:"unixDomainSocket"`
InternalGRPCPort int `arg:"dapr-internal-grpc-port" yaml:"daprInternalGRPCPort" default:"-1"`
}
// SharedRunConfig represents the application configuration parameters, which can be shared across many apps.
type SharedRunConfig struct {
ConfigFile string `arg:"config" yaml:"configFilePath"`
AppProtocol string `arg:"app-protocol" yaml:"appProtocol" default:"http"`
APIListenAddresses string `arg:"dapr-listen-addresses" yaml:"apiListenAddresses"`
EnableProfiling bool `arg:"enable-profiling" yaml:"enableProfiling"`
LogLevel string `arg:"log-level" yaml:"logLevel"`
MaxConcurrency int `arg:"app-max-concurrency" yaml:"appMaxConcurrency" default:"-1"`
PlacementHostAddr string `arg:"placement-host-address" yaml:"placementHostAddress"`
ComponentsPath string `arg:"components-path"` // Deprecated in run template file: use ResourcesPaths instead.
ResourcesPath string `yaml:"resourcesPath"` // Deprecated in run template file: use ResourcesPaths instead.
ResourcesPaths []string `arg:"resources-path" yaml:"resourcesPaths"`
AppSSL bool `arg:"app-ssl" yaml:"appSSL"`
MaxRequestBodySize int `arg:"dapr-http-max-request-size" yaml:"daprHTTPMaxRequestSize" default:"-1"`
HTTPReadBufferSize int `arg:"dapr-http-read-buffer-size" yaml:"daprHTTPReadBufferSize" default:"-1"`
EnableAppHealth bool `arg:"enable-app-health-check" yaml:"enableAppHealthCheck"`
AppHealthPath string `arg:"app-health-check-path" yaml:"appHealthCheckPath"`
AppHealthInterval int `arg:"app-health-probe-interval" ifneq:"0" yaml:"appHealthProbeInterval"`
AppHealthTimeout int `arg:"app-health-probe-timeout" ifneq:"0" yaml:"appHealthProbeTimeout"`
AppHealthThreshold int `arg:"app-health-threshold" ifneq:"0" yaml:"appHealthThreshold"`
EnableAPILogging bool `arg:"enable-api-logging" yaml:"enableApiLogging"`
DaprdInstallPath string `yaml:"runtimePath"`
Env map[string]string `yaml:"env"`
DaprdLogDestination LogDestType `yaml:"daprdLogDestination"`
AppLogDestination LogDestType `yaml:"appLogDestination"`
// Specifically omitted from annotations; see https://github.com/dapr/cli/issues/1324.
ConfigFile string `arg:"config" yaml:"configFilePath"`
AppProtocol string `arg:"app-protocol" annotation:"dapr.io/app-protocol" yaml:"appProtocol" default:"http"`
APIListenAddresses string `arg:"dapr-listen-addresses" annotation:"dapr.io/sidecar-listen-address" yaml:"apiListenAddresses"`
EnableProfiling bool `arg:"enable-profiling" annotation:"dapr.io/enable-profiling" yaml:"enableProfiling"`
LogLevel string `arg:"log-level" annotation:"dapr.io/log-level" yaml:"logLevel"`
MaxConcurrency int `arg:"app-max-concurrency" annotation:"dapr.io/app-max-concurrency" yaml:"appMaxConcurrency" default:"-1"`
// Specifically omitted from annotations, similar to the config file path above.
PlacementHostAddr string `arg:"placement-host-address" yaml:"placementHostAddress"`
// Specifically omitted from annotations, similar to the config file path above.
ComponentsPath string `arg:"components-path"` // Deprecated in run template file: use ResourcesPaths instead.
// Specifically omitted from annotations, similar to the config file path above.
ResourcesPath string `yaml:"resourcesPath"` // Deprecated in run template file: use ResourcesPaths instead.
// Specifically omitted from annotations, similar to the config file path above.
ResourcesPaths []string `arg:"resources-path" yaml:"resourcesPaths"`
// Specifically omitted from annotations as appSSL is deprecated.
AppSSL bool `arg:"app-ssl" yaml:"appSSL"`
MaxRequestBodySize string `arg:"max-body-size" annotation:"dapr.io/max-body-size" yaml:"maxBodySize" default:"4Mi"`
HTTPReadBufferSize string `arg:"read-buffer-size" annotation:"dapr.io/read-buffer-size" yaml:"readBufferSize" default:"4Ki"`
EnableAppHealth bool `arg:"enable-app-health-check" annotation:"dapr.io/enable-app-health-check" yaml:"enableAppHealthCheck"`
AppHealthPath string `arg:"app-health-check-path" annotation:"dapr.io/app-health-check-path" yaml:"appHealthCheckPath"`
AppHealthInterval int `arg:"app-health-probe-interval" annotation:"dapr.io/app-health-probe-interval" ifneq:"0" yaml:"appHealthProbeInterval"`
AppHealthTimeout int `arg:"app-health-probe-timeout" annotation:"dapr.io/app-health-probe-timeout" ifneq:"0" yaml:"appHealthProbeTimeout"`
AppHealthThreshold int `arg:"app-health-threshold" annotation:"dapr.io/app-health-threshold" ifneq:"0" yaml:"appHealthThreshold"`
EnableAPILogging bool `arg:"enable-api-logging" annotation:"dapr.io/enable-api-logging" yaml:"enableApiLogging"`
// Specifically omitted from annotations; see https://github.com/dapr/cli/issues/1324.
DaprdInstallPath string `yaml:"runtimePath"`
Env map[string]string `yaml:"env"`
DaprdLogDestination LogDestType `yaml:"daprdLogDestination"`
AppLogDestination LogDestType `yaml:"appLogDestination"`
SchedulerHostAddress string `arg:"scheduler-host-address" yaml:"schedulerHostAddress"`
}
func (meta *DaprMeta) newAppID() string {
@ -105,8 +116,8 @@ func (config *RunConfig) validateResourcesPaths() error {
return fmt.Errorf("error validating resources path %q : %w", dirPath, err)
}
}
componentsLoader := components.NewLocalComponents(dirPath...)
_, err := componentsLoader.LoadComponents()
localLoader := localloader.NewLocalLoader(config.AppID, dirPath)
err := localLoader.Validate(context.Background())
if err != nil {
return fmt.Errorf("error validating components in resources path %q : %w", dirPath, err)
}
@ -120,15 +131,34 @@ func (config *RunConfig) validatePlacementHostAddr() error {
}
if indx := strings.Index(placementHostAddr, ":"); indx == -1 {
if runtime.GOOS == daprWindowsOS {
placementHostAddr = fmt.Sprintf("%s:6050", placementHostAddr)
placementHostAddr += ":6050"
} else {
placementHostAddr = fmt.Sprintf("%s:50005", placementHostAddr)
placementHostAddr += ":50005"
}
}
config.PlacementHostAddr = placementHostAddr
return nil
}
func (config *RunConfig) validateSchedulerHostAddr() error {
schedulerHostAddr := config.SchedulerHostAddress
if len(schedulerHostAddr) == 0 {
return nil
}
if indx := strings.Index(schedulerHostAddr, ":"); indx == -1 {
if runtime.GOOS == daprWindowsOS {
schedulerHostAddr += ":6060"
} else {
schedulerHostAddr += ":50006"
}
}
config.SchedulerHostAddress = schedulerHostAddr
return nil
}
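A minimal sketch (not part of this change; the helper name and sample values are illustrative) of the host:port defaulting that validatePlacementHostAddr and validateSchedulerHostAddr apply above:

package main

import (
	"fmt"
	"runtime"
	"strings"
)

// ensureHostPort appends a default port when the address has none, mirroring
// the Windows/non-Windows split above (6050/50005 for placement, 6060/50006
// for scheduler). Illustrative helper, not part of the CLI.
func ensureHostPort(addr string, windowsPort, defaultPort int) string {
	if addr == "" || strings.Contains(addr, ":") {
		return addr
	}
	if runtime.GOOS == "windows" {
		return fmt.Sprintf("%s:%d", addr, windowsPort)
	}
	return fmt.Sprintf("%s:%d", addr, defaultPort)
}

func main() {
	fmt.Println(ensureHostPort("localhost", 6050, 50005))      // localhost:50005 (localhost:6050 on Windows)
	fmt.Println(ensureHostPort("localhost:7000", 6060, 50006)) // unchanged: already has a port
}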
func (config *RunConfig) validatePort(portName string, portPtr *int, meta *DaprMeta) error {
if *portPtr <= 0 {
port, err := freeport.GetFreePort()
@ -198,18 +228,83 @@ func (config *RunConfig) Validate() error {
if config.MaxConcurrency < 1 {
config.MaxConcurrency = -1
}
if config.MaxRequestBodySize < 0 {
config.MaxRequestBodySize = -1
qBody, err := resource.ParseQuantity(config.MaxRequestBodySize)
if err != nil {
return fmt.Errorf("invalid max request body size: %w", err)
}
if config.HTTPReadBufferSize < 0 {
config.HTTPReadBufferSize = -1
if qBody.Value() < 0 {
config.MaxRequestBodySize = "-1"
} else {
config.MaxRequestBodySize = qBody.String()
}
qBuffer, err := resource.ParseQuantity(config.HTTPReadBufferSize)
if err != nil {
return fmt.Errorf("invalid http read buffer size: %w", err)
}
if qBuffer.Value() < 0 {
config.HTTPReadBufferSize = "-1"
} else {
config.HTTPReadBufferSize = qBuffer.String()
}
err = config.validatePlacementHostAddr()
if err != nil {
return err
}
err = config.validateSchedulerHostAddr()
if err != nil {
return err
}
return nil
}
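A small illustration of the unit handling above, assuming the resource package is k8s.io/apimachinery/pkg/api/resource (which is what the ParseQuantity calls appear to use); the sample values are illustrative.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	for _, s := range []string{"4Mi", "4Ki", "1M", "-1", "bogus"} {
		q, err := resource.ParseQuantity(s)
		if err != nil {
			fmt.Printf("%s -> invalid: %v\n", s, err)
			continue
		}
		// Value() is the size in bytes; Validate above keeps negative
		// quantities as "-1" (unlimited) and canonicalizes the rest.
		fmt.Printf("%s -> %d bytes (canonical %s)\n", s, q.Value(), q.String())
	}
}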
func (config *RunConfig) ValidateK8s() error {
meta, err := newDaprMeta()
if err != nil {
return err
}
if config.AppID == "" {
config.AppID = meta.newAppID()
}
if config.AppPort < 0 {
config.AppPort = 0
}
err = config.validatePort("MetricsPort", &config.MetricsPort, meta)
if err != nil {
return err
}
if config.MaxConcurrency < 1 {
config.MaxConcurrency = -1
}
qBody, err := resource.ParseQuantity(config.MaxRequestBodySize)
if err != nil {
return fmt.Errorf("invalid max request body size: %w", err)
}
if qBody.Value() < 0 {
config.MaxRequestBodySize = "-1"
} else {
config.MaxRequestBodySize = qBody.String()
}
qBuffer, err := resource.ParseQuantity(config.HTTPReadBufferSize)
if err != nil {
return fmt.Errorf("invalid http read buffer size: %w", err)
}
if qBuffer.Value() < 0 {
config.HTTPReadBufferSize = "-1"
} else {
config.HTTPReadBufferSize = qBuffer.String()
}
return nil
}
@ -227,7 +322,7 @@ func (meta *DaprMeta) portExists(port int) bool {
if port <= 0 {
return false
}
//nolint
_, ok := meta.ExistingPorts[port]
if ok {
return true
@ -284,7 +379,7 @@ func (config *RunConfig) getArgs() []string {
// Recursive function to get all the args from the config struct.
// This is needed because the config struct has an embedded struct.
func getArgsFromSchema(schema reflect.Value, args []string) []string {
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("arg")
@ -326,7 +421,7 @@ func (config *RunConfig) SetDefaultFromSchema() {
}
func (config *RunConfig) setDefaultFromSchemaRecursive(schema reflect.Value) {
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i)
typeField := schema.Type().Field(i)
if typeField.Type.Kind() == reflect.Struct {
@ -349,8 +444,10 @@ func (config *RunConfig) setDefaultFromSchemaRecursive(schema reflect.Value) {
func (config *RunConfig) getEnv() []string {
env := []string{}
// Handle values from config that have an "env" tag.
schema := reflect.ValueOf(*config)
for i := 0; i < schema.NumField(); i++ {
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("env")
@ -365,12 +462,92 @@ func (config *RunConfig) getEnv() []string {
value := fmt.Sprintf("%v", reflect.ValueOf(valueField))
env = append(env, fmt.Sprintf("%s=%v", key, value))
}
// Handle APP_PROTOCOL separately since that requires some additional processing.
appProtocol := config.getAppProtocol()
if appProtocol != "" {
env = append(env, "APP_PROTOCOL="+appProtocol)
}
// Add user-defined env vars.
for k, v := range config.Env {
env = append(env, fmt.Sprintf("%s=%v", k, v))
}
return env
}
func (config *RunConfig) getAppProtocol() string {
appProtocol := strings.ToLower(config.AppProtocol)
switch appProtocol {
case "grpcs", "https", "h2c":
return appProtocol
case "http":
// For backwards compatibility, when protocol is HTTP and --app-ssl is set, use "https".
if config.AppSSL {
return "https"
}
return "http"
case "grpc":
// For backwards compatibility, when protocol is GRPC and --app-ssl is set, use "grpcs".
if config.AppSSL {
return "grpcs"
}
return "grpc"
case "":
return "http"
default:
return ""
}
}
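A table-driven sketch of the mapping above, written as a test that could sit next to run_test.go in this package; the test name and case values are illustrative.

package standalone

import "testing"

func TestGetAppProtocolSketch(t *testing.T) {
	cases := []struct {
		protocol string
		appSSL   bool
		want     string
	}{
		{"", false, "http"},
		{"HTTP", true, "https"}, // lower-cased first, then upgraded because of --app-ssl
		{"grpc", true, "grpcs"},
		{"h2c", false, "h2c"},
		{"tcp", false, ""}, // unknown protocols yield an empty string
	}
	for _, c := range cases {
		cfg := RunConfig{}
		cfg.AppProtocol = c.protocol
		cfg.AppSSL = c.appSSL
		if got := cfg.getAppProtocol(); got != c.want {
			t.Errorf("protocol=%q ssl=%v: got %q, want %q", c.protocol, c.appSSL, got, c.want)
		}
	}
}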
func (config *RunConfig) GetEnv() map[string]string {
env := map[string]string{}
schema := reflect.ValueOf(*config)
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("env")
if len(key) == 0 {
continue
}
if value, ok := valueField.(int); ok && value <= 0 {
// ignore unset numeric variables.
continue
}
value := fmt.Sprintf("%v", reflect.ValueOf(valueField))
env[key] = value
}
for k, v := range config.Env {
env[k] = v
}
return env
}
func (config *RunConfig) GetAnnotations() map[string]string {
annotations := map[string]string{}
schema := reflect.ValueOf(*config)
for i := range schema.NumField() {
valueField := schema.Field(i).Interface()
typeField := schema.Type().Field(i)
key := typeField.Tag.Get("annotation")
if len(key) == 0 {
continue
}
if value, ok := valueField.(int); ok && value <= 0 {
// ignore unset numeric variables.
continue
}
value := fmt.Sprintf("%v", reflect.ValueOf(valueField))
annotations[key] = value
}
return annotations
}
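The tag-driven reflection shared by getArgsFromSchema, GetEnv and GetAnnotations boils down to the pattern below; demoConfig is a made-up stand-in, and the range-over-int loop assumes Go 1.22+ like the loops above.

package main

import (
	"fmt"
	"reflect"
)

// demoConfig is an illustrative stand-in for RunConfig: tagged fields are
// exported, untagged fields are skipped, and non-positive ints count as unset.
type demoConfig struct {
	AppMaxConcurrency int    `annotation:"dapr.io/app-max-concurrency"`
	MaxBodySize       string `annotation:"dapr.io/max-body-size"`
	Internal          string // no tag: never emitted
}

func main() {
	cfg := demoConfig{AppMaxConcurrency: -1, MaxBodySize: "4Mi", Internal: "x"}
	annotations := map[string]string{}
	v := reflect.ValueOf(cfg)
	for i := range v.NumField() {
		key := v.Type().Field(i).Tag.Get("annotation")
		if key == "" {
			continue
		}
		field := v.Field(i).Interface()
		if n, ok := field.(int); ok && n <= 0 {
			continue // unset numeric value
		}
		annotations[key] = fmt.Sprintf("%v", field)
	}
	fmt.Println(annotations) // map[dapr.io/max-body-size:4Mi]
}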
func GetDaprCommand(config *RunConfig) (*exec.Cmd, error) {
daprCMD, err := lookupBinaryFilePath(config.DaprdInstallPath, "daprd")
if err != nil {

pkg/standalone/run_test.go (new file, 141 lines)
View File

@ -0,0 +1,141 @@
/*
Copyright 2021 The Dapr Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package standalone
import (
"sort"
"testing"
"github.com/stretchr/testify/assert"
)
func TestGetEnv(t *testing.T) {
config := &RunConfig{
SharedRunConfig: SharedRunConfig{},
AppID: "testapp",
AppChannelAddress: "localhost",
AppPort: 1234,
HTTPPort: 2345,
GRPCPort: 3456,
ProfilePort: 4567, // This is not included in env.
MetricsPort: 5678,
}
t.Run("no explicit app-protocol", func(t *testing.T) {
expect := []string{
"APP_ID=testapp",
"APP_CHANNEL_ADDRESS=localhost",
"APP_PORT=1234",
"APP_PROTOCOL=http",
"DAPR_HTTP_PORT=2345",
"DAPR_GRPC_PORT=3456",
"DAPR_METRICS_PORT=5678",
}
got := config.getEnv()
sort.Strings(expect)
sort.Strings(got)
assert.Equal(t, expect, got)
})
t.Run("app-protocol grpcs", func(t *testing.T) {
config.AppProtocol = "grpcs"
config.AppSSL = false
expect := []string{
"APP_ID=testapp",
"APP_CHANNEL_ADDRESS=localhost",
"APP_PORT=1234",
"APP_PROTOCOL=grpcs",
"DAPR_HTTP_PORT=2345",
"DAPR_GRPC_PORT=3456",
"DAPR_METRICS_PORT=5678",
}
got := config.getEnv()
sort.Strings(expect)
sort.Strings(got)
assert.Equal(t, expect, got)
})
t.Run("app-protocol http", func(t *testing.T) {
config.AppProtocol = "http"
config.AppSSL = false
expect := []string{
"APP_ID=testapp",
"APP_CHANNEL_ADDRESS=localhost",
"APP_PORT=1234",
"APP_PROTOCOL=http",
"DAPR_HTTP_PORT=2345",
"DAPR_GRPC_PORT=3456",
"DAPR_METRICS_PORT=5678",
}
got := config.getEnv()
sort.Strings(expect)
sort.Strings(got)
assert.Equal(t, expect, got)
})
t.Run("app-protocol http with app-ssl", func(t *testing.T) {
config.AppProtocol = "http"
config.AppSSL = true
expect := []string{
"APP_ID=testapp",
"APP_CHANNEL_ADDRESS=localhost",
"APP_PORT=1234",
"APP_PROTOCOL=https",
"DAPR_HTTP_PORT=2345",
"DAPR_GRPC_PORT=3456",
"DAPR_METRICS_PORT=5678",
}
got := config.getEnv()
sort.Strings(expect)
sort.Strings(got)
assert.Equal(t, expect, got)
})
t.Run("app-protocol grpc with app-ssl", func(t *testing.T) {
config.AppProtocol = "grpc"
config.AppSSL = true
expect := []string{
"APP_ID=testapp",
"APP_CHANNEL_ADDRESS=localhost",
"APP_PORT=1234",
"APP_PROTOCOL=grpcs",
"DAPR_HTTP_PORT=2345",
"DAPR_GRPC_PORT=3456",
"DAPR_METRICS_PORT=5678",
}
got := config.getEnv()
sort.Strings(expect)
sort.Strings(got)
assert.Equal(t, expect, got)
})
}

View File

@ -31,6 +31,7 @@ import (
"sync"
"time"
"github.com/Masterminds/semver"
"github.com/fatih/color"
"gopkg.in/yaml.v2"
@ -43,6 +44,7 @@ const (
daprRuntimeFilePrefix = "daprd"
dashboardFilePrefix = "dashboard"
placementServiceFilePrefix = "placement"
schedulerServiceFilePrefix = "scheduler"
daprWindowsOS = "windows"
@ -72,12 +74,23 @@ const (
// DaprPlacementContainerName is the container name of placement service.
DaprPlacementContainerName = "dapr_placement"
// DaprSchedulerContainerName is the container name of scheduler service.
DaprSchedulerContainerName = "dapr_scheduler"
// DaprRedisContainerName is the container name of redis.
DaprRedisContainerName = "dapr_redis"
// DaprZipkinContainerName is the container name of zipkin.
DaprZipkinContainerName = "dapr_zipkin"
errInstallTemplate = "please run `dapr uninstall` first before running `dapr init`"
healthPort = 58080
metricPort = 59090
schedulerHealthPort = 58081
schedulerMetricPort = 59091
schedulerEtcdPort = 52379
daprVersionsWithScheduler = ">= 1.14.x"
)
var (
@ -130,6 +143,7 @@ type initInfo struct {
imageRegistryURL string
containerRuntime string
imageVariant string
schedulerVolume *string
}
type daprImageInfo struct {
@ -151,8 +165,27 @@ func isBinaryInstallationRequired(binaryFilePrefix, binInstallDir string) (bool,
return true, nil
}
// isSchedulerIncluded returns true if the scheduler is included in a given version of Dapr.
func isSchedulerIncluded(runtimeVersion string) (bool, error) {
c, err := semver.NewConstraint(daprVersionsWithScheduler)
if err != nil {
return false, err
}
v, err := semver.NewVersion(runtimeVersion)
if err != nil {
return false, err
}
vNoPrerelease, err := v.SetPrerelease("")
if err != nil {
return false, err
}
return c.Check(&vNoPrerelease), nil
}
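A runnable sketch of the version gate above, using the same github.com/Masterminds/semver package imported by this file; the tag list is illustrative.

package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

func main() {
	c, _ := semver.NewConstraint(">= 1.14.x")
	for _, tag := range []string{"1.13.1", "1.14.0-rc.1", "1.14.0-mycompany.1", "1.15.2"} {
		v, err := semver.NewVersion(tag)
		if err != nil {
			fmt.Printf("%s -> invalid: %v\n", tag, err)
			continue
		}
		// Strip the prerelease first, as isSchedulerIncluded does, so that
		// 1.14.0-rc.1 is treated like 1.14.0 instead of being rejected by
		// the constraint's prerelease rules.
		stripped, _ := v.SetPrerelease("")
		fmt.Printf("%-20s -> scheduler included: %v\n", tag, c.Check(&stripped))
	}
}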
// Init installs Dapr on a local machine using the supplied runtimeVersion.
func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMode bool, imageRegistryURL string, fromDir string, containerRuntime string, imageVariant string, daprInstallPath string) error {
func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMode bool, imageRegistryURL string, fromDir string, containerRuntime string, imageVariant string, daprInstallPath string, schedulerVolume *string) error {
var err error
var bundleDet bundleDetails
containerRuntime = strings.TrimSpace(containerRuntime)
@ -162,8 +195,8 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
setAirGapInit(fromDir)
if !slimMode {
// If --slim installation is not requested, check if docker is installed.
conatinerRuntimeAvailable := utils.IsDockerInstalled() || utils.IsPodmanInstalled()
if !conatinerRuntimeAvailable {
containerRuntimeAvailable := utils.IsContainerRuntimeInstalled(containerRuntime)
if !containerRuntimeAvailable {
return fmt.Errorf("could not connect to %s. %s may not be installed or running", containerRuntime, containerRuntime)
}
@ -237,8 +270,10 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
createComponentsAndConfiguration,
installDaprRuntime,
installPlacement,
installScheduler,
installDashboard,
runPlacementService,
runSchedulerService,
runRedis,
runZipkin,
}
@ -271,6 +306,7 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
imageRegistryURL: imageRegistryURL,
containerRuntime: containerRuntime,
imageVariant: imageVariant,
schedulerVolume: schedulerVolume,
}
for _, step := range initSteps {
// Run init on the configurations and containers.
@ -299,6 +335,7 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
if slimMode {
// Print info on placement and scheduler binaries only on slim install.
print.InfoStatusEvent(os.Stdout, "%s binary has been installed to %s.", placementServiceFilePrefix, daprBinDir)
print.InfoStatusEvent(os.Stdout, "%s binary has been installed to %s.", schedulerServiceFilePrefix, daprBinDir)
} else {
runtimeCmd := utils.GetContainerRuntimeCmd(info.containerRuntime)
dockerContainerNames := []string{DaprPlacementContainerName, DaprRedisContainerName, DaprZipkinContainerName}
@ -306,6 +343,10 @@ func Init(runtimeVersion, dashboardVersion string, dockerNetwork string, slimMod
if isAirGapInit {
dockerContainerNames = []string{DaprPlacementContainerName}
}
hasScheduler, err := isSchedulerIncluded(info.runtimeVersion)
if err == nil && hasScheduler {
dockerContainerNames = append(dockerContainerNames, DaprSchedulerContainerName)
}
for _, container := range dockerContainerNames {
containerName := utils.CreateContainerName(container, dockerNetwork)
ok, err := confirmContainerIsRunningOrExists(containerName, true, runtimeCmd)
@ -376,7 +417,6 @@ func runZipkin(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
args = append(args, imageName)
}
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
@ -442,7 +482,6 @@ func runRedis(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
args = append(args, imageName)
}
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
@ -486,15 +525,15 @@ func runPlacementService(wg *sync.WaitGroup, errorChan chan<- error, info initIn
if isAirGapInit {
// if --from-dir flag is given load the image details from the installer-bundle.
dir := path_filepath.Join(info.fromDir, *info.bundleDet.ImageSubDir)
image = info.bundleDet.getPlacementImageName()
err = loadContainer(dir, info.bundleDet.getPlacementImageFileName(), info.containerRuntime)
image = info.bundleDet.getDaprImageName()
err = loadContainer(dir, info.bundleDet.getDaprImageFileName(), info.containerRuntime)
if err != nil {
errorChan <- err
return
}
} else {
// otherwise load the image from the specified repository.
image, err = getPlacementImageName(imgInfo, info)
image, err = getDaprImageName(imgInfo, info)
if err != nil {
errorChan <- err
return
@ -520,13 +559,15 @@ func runPlacementService(wg *sync.WaitGroup, errorChan chan<- error, info initIn
}
args = append(args,
"-p", fmt.Sprintf("%v:50005", osPort))
"-p", fmt.Sprintf("%v:50005", osPort),
"-p", fmt.Sprintf("%v:8080", healthPort),
"-p", fmt.Sprintf("%v:9090", metricPort),
)
}
args = append(args, image)
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
@ -539,6 +580,141 @@ func runPlacementService(wg *sync.WaitGroup, errorChan chan<- error, info initIn
errorChan <- nil
}
func runSchedulerService(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
defer wg.Done()
if info.slimMode {
return
}
hasScheduler, err := isSchedulerIncluded(info.runtimeVersion)
if err != nil {
errorChan <- err
return
}
if !hasScheduler {
return
}
runtimeCmd := utils.GetContainerRuntimeCmd(info.containerRuntime)
schedulerContainerName := utils.CreateContainerName(DaprSchedulerContainerName, info.dockerNetwork)
exists, err := confirmContainerIsRunningOrExists(schedulerContainerName, false, runtimeCmd)
if err != nil {
errorChan <- err
return
} else if exists {
errorChan <- fmt.Errorf("%s container exists or is running. %s", schedulerContainerName, errInstallTemplate)
return
}
var image string
imgInfo := daprImageInfo{
ghcrImageName: daprGhcrImageName,
dockerHubImageName: daprDockerImageName,
imageRegistryURL: info.imageRegistryURL,
imageRegistryName: defaultImageRegistryName,
}
if isAirGapInit {
// if --from-dir flag is given load the image details from the installer-bundle.
dir := path_filepath.Join(info.fromDir, *info.bundleDet.ImageSubDir)
image = info.bundleDet.getDaprImageName()
err = loadContainer(dir, info.bundleDet.getDaprImageFileName(), info.containerRuntime)
if err != nil {
errorChan <- err
return
}
} else {
// otherwise load the image from the specified repository.
image, err = getDaprImageName(imgInfo, info)
if err != nil {
errorChan <- err
return
}
}
args := []string{
"run",
"--name", schedulerContainerName,
"--restart", "always",
"-d",
"--entrypoint", "./scheduler",
}
if info.schedulerVolume != nil {
// Don't touch this file location unless things start breaking.
// When Docker creates and mounts a volume, it adopts the file permissions of the
// target directory if that directory already exists in the container.
// If the directory didn't previously exist in the container, Docker creates it
// owned by root and not writable by other users.
// We are lucky in that the Dapr containers have a world-writable directory at
// /var/lock and can therefore mount the Docker volume there.
// TODO: update the Dapr scheduler Dockerfile to create a directory at
// /var/lib/dapr/scheduler that is writable by the scheduler user ID, then update the path here.
if strings.EqualFold(info.imageVariant, "mariner") {
args = append(args, "--volume", *info.schedulerVolume+":/var/tmp")
} else {
args = append(args, "--volume", *info.schedulerVolume+":/var/lock")
}
}
osPort := 50006
if info.dockerNetwork != "" {
args = append(args,
"--network", info.dockerNetwork,
"--network-alias", DaprSchedulerContainerName)
} else {
if runtime.GOOS == daprWindowsOS {
osPort = 6060
}
args = append(args,
"-p", fmt.Sprintf("%v:50006", osPort),
"-p", fmt.Sprintf("%v:2379", schedulerEtcdPort),
"-p", fmt.Sprintf("%v:8080", schedulerHealthPort),
"-p", fmt.Sprintf("%v:9090", schedulerMetricPort),
)
}
if strings.EqualFold(info.imageVariant, "mariner") {
args = append(args, image, "--etcd-data-dir=/var/tmp/dapr/scheduler")
} else {
args = append(args, image, "--etcd-data-dir=/var/lock/dapr/scheduler")
}
if schedulerOverrideHostPort(info) {
args = append(args, fmt.Sprintf("--override-broadcast-host-port=localhost:%v", osPort))
}
_, err = utils.RunCmdAndWait(runtimeCmd, args...)
if err != nil {
runError := isContainerRunError(err)
if !runError {
errorChan <- parseContainerRuntimeError("scheduler service", err)
} else {
errorChan <- fmt.Errorf("%s %s failed with: %w", runtimeCmd, args, err)
}
return
}
errorChan <- nil
}
func schedulerOverrideHostPort(info initInfo) bool {
if info.runtimeVersion == "edge" || info.runtimeVersion == "dev" {
return true
}
runV, err := semver.NewVersion(info.runtimeVersion)
if err != nil {
return true
}
v115rc5, _ := semver.NewVersion("1.15.0-rc.5")
return runV.GreaterThan(v115rc5)
}
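A quick demo of how that gate behaves for a few runtime versions (the values are illustrative); it reuses the same Masterminds semver calls as the function above.

package main

import (
	"fmt"

	"github.com/Masterminds/semver"
)

// overrideHostPort mirrors schedulerOverrideHostPort: override for edge/dev,
// for unparsable versions, and for anything newer than 1.15.0-rc.5.
func overrideHostPort(runtimeVersion string) bool {
	if runtimeVersion == "edge" || runtimeVersion == "dev" {
		return true
	}
	runV, err := semver.NewVersion(runtimeVersion)
	if err != nil {
		return true
	}
	return runV.GreaterThan(semver.MustParse("1.15.0-rc.5"))
}

func main() {
	for _, v := range []string{"edge", "1.14.4", "1.15.0-rc.3", "1.15.0", "1.15.1"} {
		fmt.Printf("%-12s -> override broadcast host/port: %v\n", v, overrideHostPort(v))
	}
}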
func moveDashboardFiles(extractedFilePath string, dir string) (string, error) {
// Move /release/os/web directory to /web.
oldPath := path_filepath.Join(path_filepath.Dir(extractedFilePath), "web")
@ -603,7 +779,29 @@ func installPlacement(wg *sync.WaitGroup, errorChan chan<- error, info initInfo)
}
}
// installBinary installs the daprd, placement or dashboard binaries and associated files inside the default dapr bin directory.
func installScheduler(wg *sync.WaitGroup, errorChan chan<- error, info initInfo) {
defer wg.Done()
if !info.slimMode {
return
}
hasScheduler, err := isSchedulerIncluded(info.runtimeVersion)
if err != nil {
errorChan <- err
return
}
if !hasScheduler {
return
}
err = installBinary(info.runtimeVersion, schedulerServiceFilePrefix, cli_ver.DaprGitHubRepo, info)
if err != nil {
errorChan <- err
}
}
// installBinary installs the daprd, placement, scheduler, or dashboard binaries and associated files inside the default dapr bin directory.
func installBinary(version, binaryFilePrefix, githubRepo string, info initInfo) error {
var (
err error
@ -709,7 +907,7 @@ func createSlimConfiguration(wg *sync.WaitGroup, errorChan chan<- error, info in
func makeDefaultComponentsDir(installDir string) error {
// Make default components directory.
componentsDir := GetDaprComponentsPath(installDir)
//nolint
_, err := os.Stat(componentsDir)
if os.IsNotExist(err) {
errDir := os.MkdirAll(componentsDir, 0o755)
@ -718,7 +916,7 @@ func makeDefaultComponentsDir(installDir string) error {
}
}
os.Chmod(componentsDir, 0o777)
os.Chmod(componentsDir, 0o755)
return nil
}
@ -776,7 +974,7 @@ func unzip(r *zip.Reader, targetDir string, binaryFilePrefix string) (string, er
return "", err
}
if strings.HasSuffix(fpath, fmt.Sprintf("%s.exe", binaryFilePrefix)) {
if strings.HasSuffix(fpath, binaryFilePrefix+".exe") {
foundBinary = fpath
}
@ -834,7 +1032,6 @@ func untar(reader io.Reader, targetDir string, binaryFilePrefix string) (string,
foundBinary := ""
for {
header, err := tr.Next()
//nolint
if err == io.EOF {
break
} else if err != nil {
@ -857,7 +1054,7 @@ func untar(reader io.Reader, targetDir string, binaryFilePrefix string) (string,
continue
}
f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode)) //nolint:gosec
if err != nil {
return "", err
}
@ -905,14 +1102,15 @@ func moveFileToPath(filepath string, installLocation string) (string, error) {
p := os.Getenv("PATH")
if !strings.Contains(strings.ToLower(p), strings.ToLower(destDir)) {
pathCmd := "[System.Environment]::SetEnvironmentVariable('Path',[System.Environment]::GetEnvironmentVariable('Path','user') + '" + fmt.Sprintf(";%s", destDir) + "', 'user')"
destDir = utils.SanitizeDir(destDir)
pathCmd := "[System.Environment]::SetEnvironmentVariable('Path',[System.Environment]::GetEnvironmentVariable('Path','user') + '" + ";" + destDir + "', 'user')"
_, err := utils.RunCmdAndWait("powershell", pathCmd)
if err != nil {
return "", err
}
}
return fmt.Sprintf("%s\\daprd.exe", destDir), nil
return destDir + "\\daprd.exe", nil
}
if strings.HasPrefix(fileName, daprRuntimeFilePrefix) && installLocation != "" {
@ -937,7 +1135,7 @@ func createRedisStateStore(redisHost string, componentsPath string) error {
redisStore.Spec.Metadata = []componentMetadataItem{
{
Name: "redisHost",
Value: fmt.Sprintf("%s:6379", redisHost),
Value: redisHost + ":6379",
},
{
Name: "redisPassword",
@ -972,7 +1170,7 @@ func createRedisPubSub(redisHost string, componentsPath string) error {
redisPubSub.Spec.Metadata = []componentMetadataItem{
{
Name: "redisHost",
Value: fmt.Sprintf("%s:6379", redisHost),
Value: redisHost + ":6379",
},
{
Name: "redisPassword",
@ -1023,12 +1221,12 @@ func checkAndOverWriteFile(filePath string, b []byte) error {
}
func prepareDaprInstallDir(daprBinDir string) error {
err := os.MkdirAll(daprBinDir, 0o777)
err := os.MkdirAll(daprBinDir, 0o755)
if err != nil {
return err
}
err = os.Chmod(daprBinDir, 0o777)
err = os.Chmod(daprBinDir, 0o755)
if err != nil {
return err
}
@ -1182,16 +1380,16 @@ func copyWithTimeout(ctx context.Context, dst io.Writer, src io.Reader) (int64,
}
}
// getPlacementImageName returns the resolved placement image name for online `dapr init`.
// getDaprImageName returns the resolved Dapr image name for online `dapr init`.
// It resolves to the image registry if one is given, otherwise to the GitHub container registry if
// selected, falling back to Docker Hub.
func getPlacementImageName(imageInfo daprImageInfo, info initInfo) (string, error) {
func getDaprImageName(imageInfo daprImageInfo, info initInfo) (string, error) {
image, err := resolveImageURI(imageInfo)
if err != nil {
return "", err
}
image, err = getPlacementImageWithTag(image, info.runtimeVersion, info.imageVariant)
image, err = getDaprImageWithTag(image, info.runtimeVersion, info.imageVariant)
if err != nil {
return "", err
}
@ -1199,8 +1397,8 @@ func getPlacementImageName(imageInfo daprImageInfo, info initInfo) (string, erro
// if default registry is GHCR and the image is not available in or cannot be pulled from GHCR
// fallback to using dockerhub.
if useGHCR(imageInfo, info.fromDir) && !tryPullImage(image, info.containerRuntime) {
print.InfoStatusEvent(os.Stdout, "Placement image not found in Github container registry, pulling it from Docker Hub")
image, err = getPlacementImageWithTag(daprDockerImageName, info.runtimeVersion, info.imageVariant)
print.InfoStatusEvent(os.Stdout, "Image not found in Github container registry, pulling it from Docker Hub")
image, err = getDaprImageWithTag(daprDockerImageName, info.runtimeVersion, info.imageVariant)
if err != nil {
return "", err
}
@ -1208,7 +1406,7 @@ func getPlacementImageName(imageInfo daprImageInfo, info initInfo) (string, erro
return image, nil
}
func getPlacementImageWithTag(name, version, imageVariant string) (string, error) {
func getDaprImageWithTag(name, version, imageVariant string) (string, error) {
err := utils.ValidateImageVariant(imageVariant)
if err != nil {
return "", err

View File

@ -98,7 +98,6 @@ func TestResolveImageWithGHCR(t *testing.T) {
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
got, err := resolveImageURI(test.args)
@ -144,7 +143,6 @@ func TestResolveImageWithDockerHub(t *testing.T) {
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
got, err := resolveImageURI(test.args)
@ -190,7 +188,6 @@ func TestResolveImageWithPrivateRegistry(t *testing.T) {
}
for _, test := range tests {
test := test
t.Run(test.name, func(t *testing.T) {
t.Parallel()
got, err := resolveImageURI(test.args)
@ -321,15 +318,38 @@ func TestInitLogActualContainerRuntimeName(t *testing.T) {
{"podman", "Init should log podman as container runtime"},
{"docker", "Init should log docker as container runtime"},
}
conatinerRuntimeAvailable := utils.IsDockerInstalled() || utils.IsPodmanInstalled()
if conatinerRuntimeAvailable {
t.Skip("Skipping test as container runtime is available")
}
for _, test := range tests {
t.Run(test.testName, func(t *testing.T) {
err := Init(latestVersion, latestVersion, "", false, "", "", test.containerRuntime, "", "")
assert.NotNil(t, err)
containerRuntimeAvailable := utils.IsContainerRuntimeInstalled(test.containerRuntime)
if containerRuntimeAvailable {
t.Skip("Skipping test as container runtime is available")
}
err := Init(latestVersion, latestVersion, "", false, "", "", test.containerRuntime, "", "", nil)
assert.Error(t, err)
assert.Contains(t, err.Error(), test.containerRuntime)
})
}
}
func TestIsSchedulerIncluded(t *testing.T) {
scenarios := []struct {
version string
isIncluded bool
}{
{"1.13.0-rc.1", false},
{"1.13.0", false},
{"1.13.1", false},
{"1.14.0", true},
{"1.14.0-rc.1", true},
{"1.14.0-mycompany.1", true},
{"1.14.1", true},
}
for _, scenario := range scenarios {
t.Run("isSchedulerIncludedIn"+scenario.version, func(t *testing.T) {
included, err := isSchedulerIncluded(scenario.version)
assert.NoError(t, err)
assert.Equal(t, scenario.isIncluded, included)
})
}
}

View File

@ -18,6 +18,7 @@ package standalone
import (
"fmt"
"strconv"
"syscall"
"github.com/dapr/cli/utils"
@ -31,10 +32,10 @@ func Stop(appID string, cliPIDToNoOfApps map[int]int, apps []ListOutput) error {
// Kill the Daprd process if Daprd was started without CLI, otherwise
// kill the CLI process which also kills the associated Daprd process.
if a.CliPID == 0 || cliPIDToNoOfApps[a.CliPID] > 1 {
pid = fmt.Sprintf("%v", a.DaprdPID)
pid = strconv.Itoa(a.DaprdPID)
cliPIDToNoOfApps[a.CliPID]--
} else {
pid = fmt.Sprintf("%v", a.CliPID)
pid = strconv.Itoa(a.CliPID)
}
_, err := utils.RunCmdAndWait("kill", pid)
@ -57,7 +58,7 @@ func StopAppsWithRunFile(runTemplatePath string) error {
pgid, err := syscall.Getpgid(a.CliPID)
if err != nil {
// Fall back to cliPID if pgid is not available.
_, err = utils.RunCmdAndWait("kill", fmt.Sprintf("%v", a.CliPID))
_, err = utils.RunCmdAndWait("kill", fmt.Sprintf("%v", a.CliPID)) //nolint:perfsprint
return err
}
// Kill the whole process group.

View File

@ -14,10 +14,14 @@ limitations under the License.
package standalone
import (
"errors"
"fmt"
"strconv"
"syscall"
"time"
"github.com/dapr/cli/utils"
"github.com/kolesnikovae/go-winjob"
"golang.org/x/sys/windows"
)
@ -25,20 +29,47 @@ import (
func Stop(appID string, cliPIDToNoOfApps map[int]int, apps []ListOutput) error {
for _, a := range apps {
if a.AppID == appID {
eventName, _ := syscall.UTF16FromString(fmt.Sprintf("dapr_cli_%v", a.CliPID))
eventHandle, err := windows.OpenEvent(windows.EVENT_MODIFY_STATE, false, &eventName[0])
if err != nil {
return err
}
err = windows.SetEvent(eventHandle)
return err
return setStopEvent(a.CliPID)
}
}
return fmt.Errorf("couldn't find app id %s", appID)
}
// StopAppsWithRunFile terminates the daprd and application processes with the given run file.
func StopAppsWithRunFile(runFilePath string) error {
return errors.New("stopping apps with run template file is not supported on windows")
func StopAppsWithRunFile(runTemplatePath string) error {
apps, err := List()
if err != nil {
return err
}
for _, a := range apps {
if a.RunTemplatePath == runTemplatePath {
return disposeJobHandle(a.CliPID)
}
}
return fmt.Errorf("couldn't find apps with run file %q", runTemplatePath)
}
func disposeJobHandle(cliPID int) error {
jobObjectName := utils.GetJobObjectNameFromPID(strconv.Itoa(cliPID))
jbobj, err := winjob.Open(jobObjectName)
if err != nil {
return fmt.Errorf("error opening job object: %w", err)
}
err = jbobj.TerminateWithExitCode(0)
if err != nil {
return fmt.Errorf("error terminating job object: %w", err)
}
time.Sleep(5 * time.Second)
return setStopEvent(cliPID)
}
func setStopEvent(cliPID int) error {
eventName, _ := syscall.UTF16FromString(fmt.Sprintf("dapr_cli_%v", cliPID))
eventHandle, err := windows.OpenEvent(windows.EVENT_MODIFY_STATE, false, &eventName[0])
if err != nil {
return err
}
err = windows.SetEvent(eventHandle)
return err
}

View File

@ -24,13 +24,17 @@ import (
"github.com/dapr/cli/utils"
)
func removeContainers(uninstallPlacementContainer, uninstallAll bool, dockerNetwork, runtimeCmd string) []error {
func removeContainers(uninstallPlacementContainer, uninstallSchedulerContainer, uninstallAll bool, dockerNetwork, runtimeCmd string) []error {
var containerErrs []error
if uninstallPlacementContainer {
containerErrs = removeDockerContainer(containerErrs, DaprPlacementContainerName, dockerNetwork, runtimeCmd)
}
if uninstallSchedulerContainer {
containerErrs = removeDockerContainer(containerErrs, DaprSchedulerContainerName, dockerNetwork, runtimeCmd)
}
if uninstallAll {
containerErrs = removeDockerContainer(containerErrs, DaprRedisContainerName, dockerNetwork, runtimeCmd)
containerErrs = removeDockerContainer(containerErrs, DaprZipkinContainerName, dockerNetwork, runtimeCmd)
@ -59,6 +63,20 @@ func removeDockerContainer(containerErrs []error, containerName, network, runtim
return containerErrs
}
func removeSchedulerVolume(containerErrs []error, runtimeCmd string) []error {
print.InfoStatusEvent(os.Stdout, "Removing volume if it exists: dapr_scheduler")
_, err := utils.RunCmdAndWait(
runtimeCmd, "volume", "rm",
"--force",
"dapr_scheduler")
if err != nil {
containerErrs = append(
containerErrs,
fmt.Errorf("could not remove dapr_scheduler volume: %w", err))
}
return containerErrs
}
func removeDir(dirPath string) error {
_, err := os.Stat(dirPath)
if os.IsNotExist(err) {
@ -84,18 +102,28 @@ func Uninstall(uninstallAll bool, dockerNetwork string, containerRuntime string,
placementFilePath := binaryFilePathWithDir(daprBinDir, placementServiceFilePrefix)
_, placementErr := os.Stat(placementFilePath) // check if the placement binary exists.
uninstallPlacementContainer := errors.Is(placementErr, fs.ErrNotExist)
schedulerFilePath := binaryFilePathWithDir(daprBinDir, schedulerServiceFilePrefix)
_, schedulerErr := os.Stat(schedulerFilePath) // check if the scheduler binary exists.
uninstallSchedulerContainer := errors.Is(schedulerErr, fs.ErrNotExist)
// Remove .dapr/bin.
err = removeDir(daprBinDir)
if err != nil {
print.WarningStatusEvent(os.Stdout, "WARNING: could not delete dapr bin dir: %s", daprBinDir)
}
// We don't delete .dapr/scheduler by choice since it holds state.
// To delete .dapr/scheduler, the user is expected to use the `--all` flag, as it deletes the whole .dapr folder.
// The same applies to the .dapr/components folder.
containerRuntime = strings.TrimSpace(containerRuntime)
runtimeCmd := utils.GetContainerRuntimeCmd(containerRuntime)
conatinerRuntimeAvailable := false
conatinerRuntimeAvailable = utils.IsDockerInstalled() || utils.IsPodmanInstalled()
if conatinerRuntimeAvailable {
containerErrs = removeContainers(uninstallPlacementContainer, uninstallAll, dockerNetwork, runtimeCmd)
containerRuntimeAvailable := false
containerRuntimeAvailable = utils.IsContainerRuntimeInstalled(containerRuntime)
if containerRuntimeAvailable {
containerErrs = removeContainers(uninstallPlacementContainer, uninstallSchedulerContainer, uninstallAll, dockerNetwork, runtimeCmd)
} else if uninstallSchedulerContainer || uninstallPlacementContainer || uninstallAll {
print.WarningStatusEvent(os.Stdout, "WARNING: could not delete supporting containers as container runtime is not installed or running")
}
if uninstallAll {
@ -103,6 +131,10 @@ func Uninstall(uninstallAll bool, dockerNetwork string, containerRuntime string,
if err != nil {
print.WarningStatusEvent(os.Stdout, "WARNING: could not delete dapr dir %s: %s", installDir, err)
}
if containerRuntimeAvailable {
containerErrs = removeSchedulerVolume(containerErrs, runtimeCmd)
}
}
err = errors.New("uninstall failed")

View File

@ -40,3 +40,9 @@ func CreateProcessGroupID() {
print.WarningStatusEvent(os.Stdout, "Failed to create process group id: %s", err.Error())
}
}
// AttachJobObjectToProcess attaches the process to a job object.
func AttachJobObjectToProcess(jobName string, proc *os.Process) {
// This is a no-op on Linux/Mac.
// Instead, we use the process group ID to kill all the processes.
}

View File

@ -1,3 +1,6 @@
//go:build windows
// +build windows
/*
Copyright 2021 The Dapr Authors
Licensed under the Apache License, Version 2.0 (the "License");
@ -18,12 +21,18 @@ import (
"os"
"os/signal"
"syscall"
"unsafe"
"golang.org/x/sys/windows"
"github.com/dapr/cli/pkg/print"
"github.com/kolesnikovae/go-winjob"
"github.com/kolesnikovae/go-winjob/jobapi"
)
var jbObj *winjob.JobObject
func SetupShutdownNotify(sigCh chan os.Signal) {
signal.Notify(sigCh, syscall.SIGTERM, syscall.SIGINT)
@ -43,6 +52,41 @@ func SetupShutdownNotify(sigCh chan os.Signal) {
// CreateProcessGroupID creates a process group ID for the current process.
func CreateProcessGroupID() {
// No-op on Windows
print.WarningStatusEvent(os.Stdout, "Creating process group id is not implemented on Windows")
// This is a no-op on Windows.
// The process group ID is not used to kill all the processes on Windows.
// Instead, we use a combination of a named event and a job object to kill all the processes.
}
// AttachJobObjectToProcess attaches the process to a job object.
// It creates the job object if it doesn't exist.
func AttachJobObjectToProcess(jobName string, proc *os.Process) {
if jbObj != nil {
err := jbObj.Assign(proc)
if err != nil {
print.WarningStatusEvent(os.Stdout, "failed to assign process to job object: %s", err.Error())
}
return
}
jbObj, err := winjob.Create(jobName)
if err != nil {
print.WarningStatusEvent(os.Stdout, "failed to create job object: %s", err.Error())
return
}
// The lines below control the relationship between the job object and the processes attached to it.
// Passing the JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE flag ensures that when the
// job object is closed, all attached processes are terminated as well.
info := windows.JOBOBJECT_EXTENDED_LIMIT_INFORMATION{
BasicLimitInformation: windows.JOBOBJECT_BASIC_LIMIT_INFORMATION{
LimitFlags: windows.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE,
},
}
err = jobapi.SetInformationJobObject(jbObj.Handle, jobapi.JobObjectExtendedLimitInformation, unsafe.Pointer(&info), uint32(unsafe.Sizeof(info)))
if err != nil {
print.WarningStatusEvent(os.Stdout, "failed to set job object info: %s", err.Error())
return
}
err = jbObj.Assign(proc)
if err != nil {
print.WarningStatusEvent(os.Stdout, "failed to assign process to job object: %s", err.Error())
}
}
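A condensed, Windows-only sketch of the job-object round trip the run and stop paths rely on (create and assign on the run side, open and terminate on the stop side). The job name and child command are made up, and the KILL_ON_JOB_CLOSE limit shown above is omitted for brevity.

//go:build windows

package main

import (
	"fmt"
	"os/exec"

	"github.com/kolesnikovae/go-winjob"
)

func main() {
	// Start a child process to attach (illustrative command).
	cmd := exec.Command("notepad.exe")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}

	// Run side: create a named job object and assign the child to it.
	jb, err := winjob.Create("dapr_cli_demo_job")
	if err != nil {
		fmt.Println("create job:", err)
		return
	}
	if err := jb.Assign(cmd.Process); err != nil {
		fmt.Println("assign:", err)
		return
	}

	// Stop side (normally a different process): open the job by name and
	// terminate every process attached to it.
	jb2, err := winjob.Open("dapr_cli_demo_job")
	if err != nil {
		fmt.Println("open job:", err)
		return
	}
	if err := jb2.TerminateWithExitCode(0); err != nil {
		fmt.Println("terminate:", err)
	}
}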

View File

@ -15,6 +15,7 @@ package version
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
@ -112,23 +113,30 @@ func GetLatestReleaseGithub(githubURL string) (string, error) {
}
if len(githubRepoReleases) == 0 {
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
}
defaultVersion, _ := version.NewVersion("0.0.0")
latestVersion := defaultVersion
for _, release := range githubRepoReleases {
if !strings.Contains(release.TagName, "-rc") {
cur, _ := version.NewVersion(strings.TrimPrefix(release.TagName, "v"))
if cur.GreaterThan(latestVersion) {
latestVersion = cur
}
cur, err := version.NewVersion(strings.TrimPrefix(release.TagName, "v"))
if err != nil || cur == nil {
print.WarningStatusEvent(os.Stdout, "Malformed version %s, skipping", release.TagName)
continue
}
// Prerelease versions and versions with metadata are skipped.
if cur.Prerelease() != "" || cur.Metadata() != "" {
continue
}
if cur.GreaterThan(latestVersion) {
latestVersion = cur
}
}
if latestVersion.Equal(defaultVersion) {
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
}
return latestVersion.String(), nil
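A compact sketch of the selection rule above, assuming the version package is github.com/hashicorp/go-version (which matches the Prerelease/Metadata/GreaterThan calls); the tag list is illustrative.

package main

import (
	"fmt"
	"strings"

	"github.com/hashicorp/go-version"
)

// latestStable mirrors the loop above: malformed tags, prereleases and
// versions carrying build metadata are skipped, and the highest remaining
// version wins. It returns "0.0.0" when nothing qualifies, where the real
// code returns a "no releases" error instead.
func latestStable(tags []string) string {
	latest, _ := version.NewVersion("0.0.0")
	for _, tag := range tags {
		cur, err := version.NewVersion(strings.TrimPrefix(tag, "v"))
		if err != nil || cur.Prerelease() != "" || cur.Metadata() != "" {
			continue
		}
		if cur.GreaterThan(latest) {
			latest = cur
		}
	}
	return latest.String()
}

func main() {
	fmt.Println(latestStable([]string{"vedge", "v1.5.1", "v1.6.0-rc.2", "v1.5.2+hotfix"}))
	// Prints: 1.5.1
}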
@ -144,7 +152,7 @@ func GetLatestReleaseHelmChart(helmChartURL string) (string, error) {
return "", err
}
if len(helmChartReleases.Entries.Dapr) == 0 {
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
}
for _, release := range helmChartReleases.Entries.Dapr {
@ -159,6 +167,6 @@ func GetLatestReleaseHelmChart(helmChartURL string) (string, error) {
return release.Version, nil
}
return "", fmt.Errorf("no releases")
return "", errors.New("no releases")
})
}

View File

@ -162,6 +162,52 @@ func TestGetVersionsGithub(t *testing.T) {
"no releases",
"",
},
{
"Malformed version no releases",
"/malformed_version_no_releases",
`[
{
"url": "https://api.github.com/repos/dapr/dapr/releases/186741665",
"html_url": "https://github.com/dapr/dapr/releases/tag/vedge",
"id": 186741665,
"tag_name": "vedge",
"target_commitish": "master",
"name": "Dapr Runtime vedge",
"draft": false,
"prerelease": false
}
] `,
"no releases",
"",
},
{
"Malformed version with latest",
"/malformed_version_with_latest",
`[
{
"url": "https://api.github.com/repos/dapr/dapr/releases/186741665",
"html_url": "https://github.com/dapr/dapr/releases/tag/vedge",
"id": 186741665,
"tag_name": "vedge",
"target_commitish": "master",
"name": "Dapr Runtime vedge",
"draft": false,
"prerelease": false
},
{
"url": "https://api.github.com/repos/dapr/dapr/releases/44766923",
"html_url": "https://github.com/dapr/dapr/releases/tag/v1.5.1",
"id": 44766923,
"tag_name": "v1.5.1",
"target_commitish": "master",
"name": "Dapr Runtime v1.5.1",
"draft": false,
"prerelease": false
}
] `,
"",
"1.5.1",
},
}
m := http.NewServeMux()
s := http.Server{Addr: ":12345", Handler: m, ReadHeaderTimeout: time.Duration(5) * time.Second}
@ -179,7 +225,7 @@ func TestGetVersionsGithub(t *testing.T) {
for _, tc := range tests {
t.Run(tc.Name, func(t *testing.T) {
version, err := GetLatestReleaseGithub(fmt.Sprintf("http://localhost:12345%s", tc.Path))
version, err := GetLatestReleaseGithub("http://localhost:12345" + tc.Path)
assert.Equal(t, tc.ExpectedVer, version)
if tc.ExpectedErr != "" {
assert.EqualError(t, err, tc.ExpectedErr)
@ -288,7 +334,7 @@ entries:
for _, tc := range tests {
t.Run(tc.Name, func(t *testing.T) {
version, err := GetLatestReleaseHelmChart(fmt.Sprintf("http://localhost:12346%s", tc.Path))
version, err := GetLatestReleaseHelmChart("http://localhost:12346" + tc.Path)
assert.Equal(t, tc.ExpectedVer, version)
if tc.ExpectedErr != "" {
assert.EqualError(t, err, tc.ExpectedErr)

View File

@ -1,3 +1,3 @@
module emit-metrics
go 1.20
go 1.21

Some files were not shown because too many files have changed in this diff.