Compare commits


155 Commits

Author SHA1 Message Date
Will Velida a9dafe6bac
Adding support for custom endpoint for OpenAI Conversation Component (#3834)
Signed-off-by: Will Velida <willvelida@hotmail.co.uk>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-06-05 20:31:52 -07:00
Nelson Parente 4a508409d7
feat: GCP copy, move, rename bucket (#3810)
Signed-off-by: nelson.parente <nelson_parente@live.com.pt>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-06-05 07:45:27 -07:00
Oisin Grehan 649483dda6
Update metadata.yaml for azure eventhubs binding (#3837)
Signed-off-by: Oisin Grehan <oising@gmail.com>
2025-06-04 11:32:54 -07:00
Nelson Parente 3525032f7a
Add metadata overrides for sensitive connection string values (URL and DSN support) (#3825)
Signed-off-by: nelson.parente <nelson_parente@live.com.pt>
2025-06-03 06:00:57 -07:00
Mike Nguyen 2f78a401b5
fix(tests): refactor env handling - sqlserver cert (#3823)
Signed-off-by: Mike Nguyen <hey@mike.ee>
2025-05-28 22:36:49 -07:00
Josh van Leeuwen 88fce6a140
tests/certification: update dapr/dapr to HEAD (#3830)
Signed-off-by: joshvanl <me@joshvanl.dev>
2025-05-22 09:58:48 -07:00
Nelson Parente 026f99710a
GCP Storage Bucket binding: Bulk file transfer (#3811)
Signed-off-by: nelson.parente <nelson_parente@live.com.pt>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-05-22 07:03:56 -07:00
Josh van Leeuwen 132f562e48
Updates dapr/kit to main (#3816)
Signed-off-by: joshvanl <me@joshvanl.dev>
2025-05-22 06:59:46 -07:00
Gallardot 294dd75354
feat: kafka subpub and bindings support compression (#3676)
Signed-off-by: Gallardot <gallardot@apache.org>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-05-16 10:45:37 -07:00
Nelson Parente 35b77e0c26
sec: bump opa (#3813)
Signed-off-by: nelson.parente <nelson_parente@live.com.pt>
2025-05-07 07:37:44 -07:00
Elena Kolevska 5f17025027
Adds Redis stream trimming by time(stream ID) (#3710)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Co-authored-by: Cassie Coyle <cassie@diagrid.io>
Co-authored-by: Nelson Parente <nelson_parente@live.com.pt>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Bernd Verst <github@bernd.dev>
2025-05-06 10:59:30 -07:00
Anton Troshin 14921af0e1
Fix MQTT3 pubsub certification error (#3805)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-05-06 08:03:08 -07:00
Adam shamis e53a258583
Feature/s3 add tagging to metadata (#3799)
Signed-off-by: adam6878 <adamshamis.dev@gmail.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-04-29 13:15:06 -07:00
Eric Shen 4dfc9e55d5
feat(pulsar): support subscribeInitialPosition on Pulsar consumer (#3700)
Signed-off-by: ericsyh <ericshenyuhao@outlook.com>
Signed-off-by: Eric Shen <ericshenyuhao@outlook.com>
Co-authored-by: Cassie Coyle <cassie@diagrid.io>
Co-authored-by: Cassie Coyle <cassie.i.coyle@gmail.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
2025-04-29 11:11:53 -07:00
Anton Troshin 01a3fe76d5
Add custom BulkGet method to Oracle Statestore (#3804)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-04-29 10:39:12 -07:00
Anton Troshin 397766a23e
Support Oracle Connect Descriptors (#3800)
Signed-off-by: Anton Troshin <anton@diagrid.io>
2025-04-24 13:19:04 -07:00
Anton Troshin a68ca2179e
Solace pubsub conformance test fix (#3802)
Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-04-23 05:34:59 -07:00
Cassie Coyle b2c31ceba2
Add baggage header support to http binding (#3723)
Signed-off-by: Cassandra Coyle <cassie@diagrid.io>
2025-04-23 05:33:24 -07:00
MikelRev 31a2088aac
Updated sqlserver auth to utilize default scope. (#3698)
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-04-17 08:54:52 -07:00
Emmanuel Auffray 70c99725fd
Adding GoogleAI models too (#3689)
Signed-off-by: Emmanuel Auffray <emmanuel.auffray@gmail.com>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Cassie Coyle <cassie@diagrid.io>
2025-04-09 15:06:37 -07:00
Josh van Leeuwen 1bf9852e86
CVE go mod dep update (#3716)
Signed-off-by: joshvanl <me@joshvanl.dev>
2025-03-27 00:55:55 +02:00
Josh van Leeuwen d3eb43b827
go.mod: CVE updates (#3713)
Signed-off-by: joshvanl <me@joshvanl.dev>
2025-03-24 22:50:50 +02:00
Emmanuel Auffray 47947d8770
Adding Ollama as a conversation component for local dev/running of LLMs (#3688)
Signed-off-by: Emmanuel Auffray <emmanuel.auffray@gmail.com>
Co-authored-by: Mike Nguyen <hey@mike.ee>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-03-23 16:38:17 +00:00
Emmanuel Auffray b348969d81
Fix reference links of conversation components (#3690)
Signed-off-by: Emmanuel Auffray <emmanuel.auffray@gmail.com>
2025-03-21 13:52:15 +02:00
Josh van Leeuwen d8ac01bc76
Update go -> 1.24.1 & golangci-lint -> 1.64.6 (#3699)
Signed-off-by: joshvanl <me@joshvanl.dev>
2025-03-12 13:50:32 -07:00
Yaron Schneider 637d18d0f9
Release/rebase 1.15 (#3691) 2025-03-10 18:56:46 -07:00
Mike Nguyen 716542ae32
Merge branch 'release/rebase-1.15' of github.com:mikeee/components-contrib into release/rebase-1.15 2025-03-09 15:19:26 +00:00
Guspan Tanadi f406580f8e
docs(middleware/README): recent links components (#3638)
Signed-off-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-03-09 15:17:19 +00:00
Elena Kolevska 0c2330bc19
reenables cert tests (#3642)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
2025-03-09 15:16:53 +00:00
Josh van Leeuwen 692560dd8f
Workflows: Make request types optional and use proto strings (#3624)
Signed-off-by: joshvanl <me@joshvanl.dev>
2025-03-09 15:16:32 +00:00
Sam e427822ad8
fix: make sure region field is required on other components (#3625)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-03-09 15:16:32 +00:00
Sam 1cae3ee094
feat(postgres): add iam roles anywhere auth profile (#3604)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
# Conflicts:
#	.build-tools/pkg/metadataschema/builtin-authentication-profiles.go
#	bindings/kafka/metadata.yaml
#	common/authentication/aws/static.go
#	common/authentication/aws/x509.go
#	common/authentication/postgresql/metadata.go
#	pubsub/kafka/metadata.yaml
2025-03-09 15:16:25 +00:00
Guspan Tanadi 2997e472d3
docs(middleware/README): recent links components (#3638)
Signed-off-by: Guspan Tanadi <36249910+guspan-tanadi@users.noreply.github.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-02-28 20:47:28 -08:00
Yaron Schneider 5ede374d0b
Update cert tests and build chain to latest (#3681)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-02-26 10:39:42 -08:00
Artur Souza cf42966101
[1.15] Fix gcp bucket binding create + improve error messages. (#3678)
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
2025-02-25 14:38:37 -08:00
Mike Nguyen 27f04354f8
ci: fix artifact handling (#3670)
Signed-off-by: Mike Nguyen <hey@mike.ee>
2025-02-14 06:48:32 -08:00
Yaron Schneider dd13e6b083
Update deepseek dependency (#3668)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-02-13 12:03:01 -08:00
Mike Nguyen 53d848c9d4
Chore/rebase from main (#3666)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Shivam Kumar <shivamkumar@microsoft.com>
Signed-off-by: joshvanl <me@joshvanl.dev>
Signed-off-by: MattCosturos <48531957+MattCosturos@users.noreply.github.com>
Signed-off-by: Matt Costuros <mcosturos@moog.com>
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Signed-off-by: Mike Nguyen <hey@mike.ee>
Co-authored-by: Sam <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Shivam Kumar <shivamkm07@gmail.com>
Co-authored-by: Shivam Kumar <shivamkumar@microsoft.com>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
Co-authored-by: MattCosturos <48531957+MattCosturos@users.noreply.github.com>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
Co-authored-by: Elena Kolevska <elena-kolevska@users.noreply.github.com>
2025-02-13 09:32:14 -08:00
Mike Nguyen 20f02776a6
ci: bump upload/download actions (#3667)
Signed-off-by: Mike Nguyen <hey@mike.ee>
2025-02-13 09:22:26 -08:00
Yaron Schneider a20547c324
Add deepseek support, update Go to 1.23.5 (#3659)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-01-29 13:41:28 -08:00
Elena Kolevska 1132db59d6
reenables cert tests (#3642)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
2025-01-08 20:14:26 -08:00
MattCosturos 26808c927b
Recreate AEH Processor in the event of an error before retrying the processing operation (#3614)
Signed-off-by: MattCosturos <48531957+MattCosturos@users.noreply.github.com>
Signed-off-by: Matt Costuros <mcosturos@moog.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
2024-12-17 09:22:11 -08:00
Josh van Leeuwen aca5116d95
Workflows: Make request types optional and use proto strings (#3624)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-12-16 09:07:50 -08:00
Shivam Kumar fc8636dbba
Fix Redis EntraId Token Refresh (#3632)
Signed-off-by: Shivam Kumar <shivamkumar@microsoft.com>
Co-authored-by: Shivam Kumar <shivamkumar@microsoft.com>
2024-12-16 08:50:51 -08:00
Sam 026ae762fa
fix(pg): add region field too (#3628)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-12-09 07:17:46 -08:00
Sam dcaa80eef8
style: pg cleaning up for things (#3627)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-12-06 22:56:29 +00:00
Sam 1e295a7056
fix: make sure region field is required on other components (#3625)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-12-06 14:00:34 -08:00
Sam 6200ea81de
style: improve clarity on aws changes (#3623)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-12-04 12:45:34 -08:00
Sam 72c92fb1fe
feat(postgres): add iam roles anywhere auth profile (#3604)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-12-03 13:17:13 -08:00
Elena Kolevska 2e4fc0bbd9
Fixes state cert tests (#3596)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
2024-12-02 10:11:39 -08:00
Fabian Martinez 1e095ed25a
fix get aws creds from environment (#3617)
Signed-off-by: Fabian Martinez <46371672+famarting@users.noreply.github.com>
2024-11-28 07:57:18 -08:00
Sam f48b4128d6
feat(kafka): iam roles anywhere + assume role auth profiles (#3606)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-27 14:48:18 -08:00
Patrick Assuied 8c02ff33b4
Kafka Pubsub fixes: Avro serialization caching and retries (#3610)
Signed-off-by: Patrick Assuied <patrick.assuied@elationhealth.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-26 22:02:19 +00:00
Yaron Schneider 85cbbf123a
Enable eventhubs binding to read all message properties (#3615)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-11-26 11:50:50 -08:00
Fabian Martinez 913ba4ce6f
update aws-msk-iam-sasl-signer-go dependency (#3612)
Signed-off-by: Fabian Martinez <46371672+famarting@users.noreply.github.com>
2024-11-26 08:43:44 -08:00
Gustavo Chaín 1137759a9b
snssqs: fix consumer starvation (#3478)
Signed-off-by: Gustavo Chain <me@qustavo.cc>
Signed-off-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
Co-authored-by: Bernd Verst <github@bernd.dev>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-25 19:33:22 -08:00
Elena Kolevska 2aea31969f
Fixes `subscribeType` metadata field not being respected for Pulsar pub sub (#3603)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
2024-11-22 09:39:41 -08:00
Sam f521a76f7b
fix: initialize the close chan (#3608)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-11-20 15:22:13 -08:00
Sam e2b27d3538
fix(aws): update close if aws auth provider is nil (#3607)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-20 09:24:49 -08:00
luigirende f3bd794b12
Mongo State: fix serialization value in the transaction method (#3576)
Signed-off-by: Luigi Rende <luigirende@gmail.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-20 08:27:46 -08:00
Fabian Martinez 0f09d25bcd
postgres binding, ping on init (#3595)
Signed-off-by: Fabian Martinez <46371672+famarting@users.noreply.github.com>
2024-11-20 06:47:40 -08:00
Yaron Schneider 1a6a75a1ce
Enable in order processing of eventhubs messages (#3605)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-11-18 13:18:20 -08:00
Sam a00a853556
feat(iam auth): allow iam roles anywhere auth profile (#3591)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Sam <sam@diagrid.io>
2024-11-14 12:04:56 -07:00
Sam 2b924c46c7
feat: add me to bot owners to run tests (#3600)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-11-13 13:47:41 -07:00
Elena Kolevska b05e19a431
Adds conformance tests for AWS Secrets store component (#3588)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-08 05:57:57 +00:00
Elena Kolevska 9833e56020
Add elena-kolevska to codeowners for dapr bot (#3592)
Signed-off-by: Elena Kolevska <elena-kolevska@users.noreply.github.com>
2024-11-05 17:27:24 -08:00
bhagya f0a99c114c
Fix metadata header value sanitization (#3581)
Signed-off-by: Bhagya Singh Purba <bhagyasingh05@gmail.com>
Co-authored-by: bhagyapurba <bhagya.singhpurba@fyndna.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-11-05 22:28:38 +00:00
Yaron Schneider b969bbfe88
Add receiverQueueSize to pulsar (#3589)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-11-03 21:25:09 -08:00
Yaron Schneider 8f5b880afd
Update sarama dependency (#3587)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-11-01 15:42:58 -07:00
Mustafa Arslan ab9422dff9
sftp binding component (#3505)
Signed-off-by: Mustafa Arslan <mustafa.arslan1992@gmail.com>
Signed-off-by: Bernd Verst <github@bernd.dev>
Co-authored-by: Bernd Verst <github@bernd.dev>
2024-10-25 16:56:14 -07:00
Fabian Martinez c6bac52cab
http binding fix nilpointer (#3536)
Signed-off-by: Fabian Martinez <46371672+famarting@users.noreply.github.com>
Co-authored-by: Bernd Verst <github@bernd.dev>
2024-10-25 16:43:22 -07:00
Seweryn Sirek 54d59d5fc4
[Bindings][BlobStorage] Update request.go - Log debug instead of warning (#3577)
Signed-off-by: Seweryn Sirek <128463368+ssi-spyro@users.noreply.github.com>
2024-10-25 16:33:41 -07:00
Elena Kolevska dae321130c
Merge 1.14 into master (#3579)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Signed-off-by: yaron2 <schneider.yaron@live.com>
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Signed-off-by: Anton Troshin <anton@diagrid.io>
Co-authored-by: Sam <sam@diagrid.io>
Co-authored-by: Artur Souza <artursouza.ms@outlook.com>
Co-authored-by: yaron2 <schneider.yaron@live.com>
Co-authored-by: Anton Troshin <troll.sic@gmail.com>
Co-authored-by: Bernd Verst <4535280+berndverst@users.noreply.github.com>
2024-10-24 16:52:48 -07:00
Yaron Schneider 4ca04dbb61
Conversation API: add cache support, add huggingface+mistral models (#3567)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-10-15 21:30:26 -07:00
Yaron Schneider 1cbedb3c0e
Add temperature support to conversational api (#3566)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-10-14 16:04:03 -07:00
Sam 28d46f6720
fix(redis): make auth profiles consistent for username/pwd (#3565)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-10-14 13:46:51 -07:00
Yaron Schneider c53499343a
Add component for Anthropic (#3564)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-10-11 15:01:43 -07:00
Artur Souza 69119d6f6c
Add AWS Bedrock support (#3563)
Signed-off-by: yaron2 <schneider.yaron@live.com>
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Co-authored-by: yaron2 <schneider.yaron@live.com>
2024-10-10 06:50:37 -07:00
Sam dc65da292f
fix(metadata): make access/secret keys optional (#3562)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-10-08 10:30:57 -07:00
Sam 9012bdce7f
fix(metadata): add missing field to http binding (#3560)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-10-03 09:27:19 -07:00
Josh van Leeuwen be7c19b742
Interfaces: Update all component interfaces to implement io.Closer (#3542)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-09-19 08:53:22 -07:00
Loong Dai 6d12e8408d
fix make target (#3548)
Signed-off-by: Loong <loong.dai@intel.com>
2024-09-19 06:52:21 -07:00
Loong Dai 414e997524
conversation: add echo implement (#3549)
Signed-off-by: Loong <loong.dai@intel.com>
2024-09-19 06:30:03 -07:00
Josh van Leeuwen ff8d562e8e
Update go to v1.23 (#3543)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-09-18 06:54:23 -07:00
Loong Dai 8225a11bf3
init conversation API with openai component (#3518)
Signed-off-by: Loong <loong.dai@intel.com>
2024-09-10 19:45:11 -07:00
Luis Rascão 3830b414d8
mongodb: fix goroutine leak on failed server connections (#3538)
Signed-off-by: Luis Rascao <luis.rascao@gmail.com>
2024-09-10 15:23:26 -07:00
Josh van Leeuwen 93f19c96d1
state: Adds `io.Closer` to interface (#3537)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-09-10 08:15:52 -07:00
Eileen Yu dab1faaabd
fix: drop duplicate aws auth field in postgresql (#3525)
Signed-off-by: Eileen Yu <eileenylj@gmail.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-09-09 17:08:45 -07:00
Loong Dai 181592079f
CI: correct upgrade lint path (#3527)
Signed-off-by: Loong <loong.dai@intel.com>
2024-09-09 08:45:10 -07:00
Ryan Despain 4c53816590
Binding AWS Kinesis - reuse client credentials (#3509)
Signed-off-by: arr <ryan.despain@dave.com>
Co-authored-by: arr <ryan.despain@dave.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-09-09 07:27:10 -07:00
Patrick Assuied 9ea3fee247
Resolving a weird edge case in case of a poison pill message being retried, followed by a pod restart (#3532)
Signed-off-by: Patrick Assuied <patrick.assuied@elationhealth.com>
2024-09-05 08:01:58 -07:00
Patrick Assuied e5322262f6
Addressed issue in Kafka-pubsub for avro null messages (#3531)
Signed-off-by: Patrick Assuied <patrick.assuied@elationhealth.com>
2024-09-04 21:16:20 -07:00
Elena Kolevska b6a5e80315
Removes the dummy check for AWS Parameter Store access validation (#3520)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-08-30 11:05:34 -07:00
Elena Kolevska dc8b4822d8
Removes check for dummy key in secret store (#3519)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
2024-08-30 10:20:31 -07:00
Yaron Schneider c48f9046c2
Merges 1.14 into main (#3515) 2024-08-29 08:58:14 -07:00
Elena Kolevska 9156779aa8
Merges 1.14 into master 2024-08-29 17:12:07 +02:00
Mike Nguyen e53cf3401f
Fix MQTT3 component (#3501)
Signed-off-by: mikeee <hey@mike.ee>
2024-08-02 10:37:20 -07:00
Artur Souza 12d7c2ba4f
Fix init in sqssns + Docker Compose update (#3498)
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
2024-08-01 15:43:27 -07:00
Artur Souza bffaeeb31f
Fix AWS sns/sqs panic (#3497)
Signed-off-by: yaron2 <schneider.yaron@live.com>
Signed-off-by: Artur Souza <asouza.pro@gmail.com>
Co-authored-by: yaron2 <schneider.yaron@live.com>
2024-08-01 12:00:53 -07:00
Eileen Yu a409bc1f96
fix: drop duplicate awsRegion/region field (#3490)
Signed-off-by: Eileen Yu <eileenylj@gmail.com>
2024-07-25 09:40:08 -07:00
Bernd Verst c4a4525aa5
Pin runtime 1.14.0rc1 in Cert tests, Update Go to 1.22.5 (#3477)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-04 11:57:36 -07:00
Bernd Verst beb3f8f456
Pin runtime 1.14.0rc1 in Cert tests, Update Go to 1.22.5 (#3477)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-04 11:56:32 -07:00
Bernd Verst 2e35e1f2a0
Update all Azure components (#3475)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-02 17:48:53 -07:00
Bernd Verst 8f53312898
Update all Azure components (#3475)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-02 17:40:17 -07:00
Bernd Verst 82a70735f0
Update linter command to support diff linting (#3474)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-02 12:23:22 -07:00
Bernd Verst 273bea12ef
[Release 1.14] Cherry Pick of 3470 - Azure Auth for all Redis Components (#3471)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-01 17:38:29 -07:00
Bernd Verst b656b0d5d5
Adds EntraID auth support to all Redis Components (#3470)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-07-01 17:20:29 -07:00
Josh van Leeuwen 128691d9b1
[1.14] Bindings http default transport (#3467)
Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-06-27 08:08:57 -07:00
Josh van Leeuwen f09c2c2941
bindings/http: Use Go Default Transport as base Transport (#3466)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-06-27 07:03:31 -07:00
mahparaashley 1375f081c6
Add configurable ackDeadline to GCP Pub/Sub component (#3422)
Signed-off-by: mashley@rechargeapps.com <mashley@rechargeapps.com>
Signed-off-by: Bernd Verst <github@bernd.dev>
Co-authored-by: mashley@rechargeapps.com <mashley@rechargeapps.com>
Co-authored-by: Bernd Verst <github@bernd.dev>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
2024-06-26 12:59:44 -07:00
Sam bf07ca5078
fix(query): duplicate key violating unique restraint (#3446)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
2024-06-26 11:08:00 -07:00
Mike Nguyen 7619e75236
ci: fix spelling in two workflows (#3462)
Signed-off-by: mikeee <hey@mike.ee>
2024-06-25 07:05:21 -07:00
Sam 5f3d7ea8eb
security: up dependencies to fix security vulnerabilities (#3390)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Bernd Verst <github@bernd.dev>
Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Bernd Verst <github@bernd.dev>
Co-authored-by: joshvanl <me@joshvanl.dev>
2024-06-24 14:24:10 -07:00
Nikola Nedeljkovic d09ffe13e0
change topicArns to map safe for concurrent access (#3459)
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
2024-06-24 10:51:18 -07:00
Mike Nguyen 43a89f3c2b
fix: conformance - state.mysql.mysql (#3458)
Signed-off-by: mikeee <hey@mike.ee>
2024-06-24 06:55:46 -07:00
Mike Nguyen 2bd28b4e34
request: add mikeee to daprbot (#3456)
Signed-off-by: mikeee <hey@mike.ee>
2024-06-21 16:38:57 -07:00
Nathan Lowry 864baaad3d
Support configurable MaxOutstanding* and NumGoroutines settings for GCP PubSub component (#3442)
Signed-off-by: Nathan Lowry <nathandl@gmail.com>
2024-06-21 13:02:29 -07:00
Mike Nguyen 0375e200b6
fix workflow stable component cert - state.mysql (#3450)
Signed-off-by: mikeee <hey@mike.ee>
2024-06-21 10:43:29 -07:00
Mike Nguyen 8414210c26
fix workflow stable component cert - bindings.rabbitmq (#3454)
Signed-off-by: mikeee <hey@mike.ee>
2024-06-21 10:42:51 -07:00
Jake Engelberg 499f66ff73
fix: pubsub.solace.amqp metadata.yaml name/title fix (#3452)
Signed-off-by: Jake Engelberg <jake@diagrid.io>
2024-06-20 14:00:31 -07:00
Elena Kolevska a385743e35
Kafka bulk publisher fix (#3445)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
2024-06-18 06:27:27 -07:00
Arthur Poiret 51e0c79dd4
Add RabbitMQ single active consumer argument (#3437)
Signed-off-by: Arthur Poiret <dropsnorz@gmail.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-06-17 16:12:27 -07:00
bhagya 105dabb47a
Add ability to generate signed url in gcp bucket (#3393)
Signed-off-by: bhagya05 <bhagyasingh05@gmail.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-06-11 23:17:22 -07:00
Sam 787b23d5d0
build: fix github wf to generate metadata bundle (#3435)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Bernd Verst <github@bernd.dev>
Co-authored-by: Bernd Verst <github@bernd.dev>
2024-06-06 15:32:58 -07:00
Bernd Verst f23794f69b
ASB: Add support for ApplicationProperties in subscriptions (#3436)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-06-05 10:26:28 -07:00
Elena Kolevska f0be1a2d28
Redis private certificates (#3429)
Signed-off-by: Elena Kolevska <elena@kolevska.com>
Co-authored-by: Bernd Verst <github@bernd.dev>
2024-06-04 09:55:12 -07:00
Eileen Yu b0bb3d785b
feat: add aws sns metadata schema (#3433)
Signed-off-by: Eileen Yu <eileenylj@gmail.com>
2024-06-04 09:48:53 -07:00
Alessandro (Ale) Segala 93cf5cb2c9
Fixups from #3324 (#3419)
Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>
2024-05-21 10:26:21 -07:00
Sam eb82293623
feat(aws iam): support aws iam auth for postgresql components (#3324)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Bernd Verst <github@bernd.dev>
Signed-off-by: joshvanl <me@joshvanl.dev>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
Co-authored-by: Bernd Verst <github@bernd.dev>
Co-authored-by: joshvanl <me@joshvanl.dev>
2024-05-16 15:45:57 -07:00
Sam 70fd16ab19
fix: metadata capitalization (#3413)
Signed-off-by: Bernd Verst <github@bernd.dev>
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Bernd Verst <github@bernd.dev>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-05-16 12:58:15 -07:00
denisbchrsk 4e4aa26c9e
Kafka PubSub: Propagate partition key to DLT (#3368)
Signed-off-by: denisbchrsk <155584191+denisbchrsk@users.noreply.github.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-05-15 07:23:33 -07:00
Yaron Schneider 58eef3bdb1
Fix linter for cosmosdb (#3416)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2024-05-14 08:38:54 -07:00
Bernd Verst 1f46231d87
Remove some unnecessary loglines (#3412)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-05-03 16:11:49 -07:00
Bernd Verst f599616ea2
Fix CosmosDB for latest API version (#3411)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-05-03 15:31:39 -07:00
Bernd Verst 1451363ab2
Support and use Go 1.22 (#3372)
Signed-off-by: Bernd Verst <github@bernd.dev>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2024-04-30 15:42:53 -07:00
Rutam Prita Mishra 16fef41401
Update holopin.yml to award components badge to contributors (#3396)
Signed-off-by: Rutam Prita Mishra <rutamprita@gmail.com>
2024-04-11 17:38:51 -07:00
Patrick Assuied cfee998aba
Fixing bug with new Avro conversion when message values are NULL (#3388)
Signed-off-by: Patrick Assuied <patrick.assuied@elationhealth.com>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
2024-03-30 07:04:16 +03:00
Fabian Martinez a9548a7d68
kafka: bugfix possible nil pointer error on close (#3383)
Signed-off-by: Fabian Martinez <46371672+famarting@users.noreply.github.com>
2024-03-22 17:26:47 +02:00
denisbchrsk b9c12df7d4
Kafka: Add support to configure heartbeat interval and session timeout to kafka's consumer (#3375)
Signed-off-by: denisbchrsk <155584191+denisbchrsk@users.noreply.github.com>
2024-03-21 20:31:09 +00:00
Sam 85252beeef
feat(kafka): add producer config capabilities for connections (#3371)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Signed-off-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
2024-03-21 09:11:47 -07:00
Edoardo Vacchi 2502256a07
deps: updates wazero to v1.7.0 (#3378)
Signed-off-by: Edoardo Vacchi <evacchi@users.noreply.github.com>
Co-authored-by: Alessandro (Ale) Segala <43508+ItalyPaleAle@users.noreply.github.com>
2024-03-18 04:13:19 +00:00
Edoardo Vacchi 6290422551
fix all wasm guests build and refresh with TinyGo 0.28.1 (#3377) 2024-03-09 16:54:17 +01:00
nadavv169 49afc557e4
S3 add storage class to metadata (#3369)
Signed-off-by: nadavv169 <nadavv169@gmail.com>
2024-02-28 10:28:59 -08:00
Josh van Leeuwen 6413239626
PubSub Kafka: Respect Subscribe context (#3363)
Signed-off-by: joshvanl <me@joshvanl.dev>
2024-02-28 10:13:06 -08:00
José A.P fff4d41fb7
Update AWS SDK versions (#3355)
Signed-off-by: José A.P <jose@clovrlabs.com>
Signed-off-by: José A.P <53834183+Jossec101@users.noreply.github.com>
2024-02-27 13:45:26 -08:00
Alessandro (Ale) Segala e45dcba5a5
Update klauspost/compress dependency (#3364)
Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>
2024-02-27 17:12:23 +00:00
José Carlos Chávez cea703d082
chore: upgrades http-wasm host to v0.6.0. (#3360)
Signed-off-by: José Carlos Chávez <jcchavezs@gmail.com>
2024-02-22 09:16:03 -08:00
Bernd Verst f99604f063
Fix Azure App Config not working with gRPC (#3358)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-02-21 10:41:06 -08:00
Sam e114060fbb
fix: create table for migration only if not exists (#3356)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2024-02-20 13:50:42 -08:00
Alessandro (Ale) Segala 0a002b96f6
[chore] Update component-folders.yaml (#3353)
Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>
2024-02-14 10:03:22 -08:00
Bernd Verst 5445cead81
Recover interrupted eventhubs subscriptions (#3344)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-02-08 13:11:56 -08:00
Bernd Verst ba6609e624
Upgrade several component, use Dapr runtime `1.13.0-rc.2` in cert tests (#3341)
Signed-off-by: Bernd Verst <github@bernd.dev>
2024-02-07 13:34:06 -08:00
Bernd Verst 903b97d891
Azure Service Bus: enforce `autodeleteonidlesec` minimum 300 (#3340) 2024-02-07 07:50:57 -08:00
Alessandro (Ale) Segala 9acfcc16b8
Azure auth: do not use CLI provider by default when running in a cloud service (#3338)
Signed-off-by: ItalyPaleAle <43508+ItalyPaleAle@users.noreply.github.com>
2024-02-06 16:01:50 -08:00
Guido Spadotto 43d905db6b
Redis State Store query: numeric operators do not work correctly on large numbers (#3334)
Signed-off-by: Guido Spadotto <guido.spadotto@profesia.it>
Signed-off-by: Guido Spadotto <guido.spad8@gmail.com>
Co-authored-by: Guido Spadotto <guido.spadotto@profesia.it>
2024-02-05 13:52:21 -08:00
483 changed files with 15851 additions and 6813 deletions


@@ -3,27 +3,84 @@ aws:
description: |
Authenticate using an Access Key ID and Secret Access Key included in the metadata
metadata:
- name: region
type: string
required: false
description: |
The AWS Region where the AWS resource is deployed to.
This will be marked required in Dapr 1.17.
example: '"us-east-1"'
- name: awsRegion
type: string
required: false
description: |
This maintains backwards compatibility with existing fields.
It will be deprecated as of Dapr 1.17. Use 'region' instead.
The AWS Region where the AWS resource is deployed to.
example: '"us-east-1"'
- name: accessKey
description: AWS access key associated with an IAM account
required: true
required: false
sensitive: true
example: '"AKIAIOSFODNN7EXAMPLE"'
- name: secretKey
description: The secret key associated with the access key
required: true
required: false
sensitive: true
example: '"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'
- name: sessionToken
type: string
required: false
sensitive: true
description: |
AWS session token to use. A session token is only required if you are using
temporary security credentials.
example: '"TOKEN"'
- title: "AWS: Assume IAM Role"
description: |
Assume a specific IAM role. Note: This is only supported for Kafka and PostgreSQL.
metadata:
- name: region
type: string
required: true
description: |
The AWS Region where the AWS resource is deployed to.
example: '"us-east-1"'
- name: assumeRoleArn
type: string
required: false
description: |
IAM role that has access to AWS resource.
This is another option to authenticate with MSK and RDS Aurora aside from the AWS Credentials.
This will be marked required in Dapr 1.17.
example: '"arn:aws:iam::123456789:role/mskRole"'
- name: sessionName
type: string
description: |
The session name for assuming a role.
example: '"MyAppSession"'
default: '"DaprDefaultSession"'
- title: "AWS: Credentials from Environment Variables"
description: Use AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment
- title: "AWS: IAM Roles Anywhere"
description: Use X.509 certificates to establish trust between your AWS account and the Dapr cluster using AWS IAM Roles Anywhere.
metadata:
- name: trustAnchorArn
description: |
ARN of the AWS Trust Anchor in the AWS account granting trust to the Dapr Certificate Authority.
example: arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901
required: true
- name: trustProfileArn
description: |
ARN of the AWS IAM Profile in the trusting AWS account.
example: arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901
required: true
- name: assumeRoleArn
description: |
ARN of the AWS IAM role to assume in the trusting AWS account.
example: arn:aws:iam::012345678910:role/exampleIAMRoleName
required: true
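The AWS metadata above keeps both the deprecated `awsRegion` field and the new `region` field accepted until Dapr 1.17. A minimal sketch of the fallback logic a component could use during the transition (the function and field names here are illustrative, not the actual Dapr implementation):

```go
package main

import "fmt"

// resolveRegion prefers the new 'region' metadata field and falls back to the
// deprecated 'awsRegion' field to preserve backwards compatibility.
func resolveRegion(meta map[string]string) (string, error) {
	if r := meta["region"]; r != "" {
		return r, nil
	}
	// Deprecated fallback: remove in Dapr 1.17.
	if r := meta["awsRegion"]; r != "" {
		return r, nil
	}
	return "", fmt.Errorf("metadata field 'region' is required")
}

func main() {
	r, _ := resolveRegion(map[string]string{"awsRegion": "us-east-1"})
	fmt.Println(r) // prints "us-east-1"
}
```

Components that adopt this pattern keep existing configurations working while nudging users toward the standardized field.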
azuread:
- title: "Azure AD: Managed identity"
description: Authenticate using Azure AD and a managed identity.

View File

@ -64,6 +64,7 @@ var bundleComponentMetadataCmd = &cobra.Command{
fmt.Fprintln(os.Stderr, "Info: metadata file not found in component "+component)
continue
}
fmt.Fprintln(os.Stderr, "Info: metadata file loaded for component "+component)
bundle.Components = append(bundle.Components, componentMetadata)
}

View File

@ -14,15 +14,21 @@ excludeFolders:
- bindings/alicloud
- bindings/aws
- bindings/azure
- bindings/cloudflare
- bindings/gcp
- bindings/http/testdata
- bindings/huawei
- bindings/rethinkdb
- bindings/twilio
- bindings/wasm/testdata
- bindings/zeebe
- configuration/azure
- configuration/redis/internal
- crypto/azure
- crypto/kubernetes
- middleware/http/oauth2clientcredentials/mocks
- middleware/http/wasm/example
- middleware/http/wasm/internal
- pubsub/aws
- pubsub/azure
- pubsub/azure/servicebus
@ -34,11 +40,15 @@ excludeFolders:
- secretstores/hashicorp
- secretstores/huaweicloud
- secretstores/local
- secretstores/tencentcloud
- state/alicloud
- state/aws
- state/azure
- state/azure/blobstorage/internal
- state/cloudflare
- state/gcp
- state/hashicorp
- state/oci
- state/postgresql
- state/query
- state/utils

View File

@ -1,8 +1,6 @@
module github.com/dapr/components-contrib/build-tools
go 1.21
toolchain go1.21.4
go 1.24.1
require (
github.com/dapr/components-contrib v0.0.0
@ -14,17 +12,17 @@ require (
)
require (
github.com/dapr/kit v0.12.2-0.20231031211530-0e1fd37fc4b3 // indirect
github.com/dapr/kit v0.15.3-0.20250516121556-bc7dc566c45d // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/iancoleman/orderedmap v0.0.0-20190318233801-ac98e3ecb4b0 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4 // indirect
github.com/spf13/cast v1.5.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/cast v1.8.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/apimachinery v0.26.10 // indirect
k8s.io/apimachinery v0.27.4 // indirect
)
replace github.com/dapr/components-contrib => ../

View File

@ -1,15 +1,16 @@
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/dapr/kit v0.12.2-0.20231031211530-0e1fd37fc4b3 h1:xsmVK3YOKRMOcaxqo50Ce0apQzq+LzAfWuFapQuu8Ro=
github.com/dapr/kit v0.12.2-0.20231031211530-0e1fd37fc4b3/go.mod h1:c3Z78F+h7UYtb0LmpzJNC/ChT240ycDJFViRUztdpoo=
github.com/dapr/kit v0.15.3-0.20250516121556-bc7dc566c45d h1:v+kZn9ami23xBsruyZmKErIOSlCdW9pR8wfHUg5+jys=
github.com/dapr/kit v0.15.3-0.20250516121556-bc7dc566c45d/go.mod h1:6w2Pr38zOAtBn+ld/jknwI4kgMfwanCIcFVnPykdPZQ=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY=
github.com/frankban/quicktest v1.14.4/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/iancoleman/orderedmap v0.0.0-20190318233801-ac98e3ecb4b0 h1:i462o439ZjprVSFSZLZxcsoAe592sZB1rci2Z8j4wdk=
@ -26,25 +27,24 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4 h1:BpfhmLKZf+SjVanKKhCgf3bg+511DmU9eDQTen7LLbY=
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cast v1.5.1 h1:R+kOtfhWQE6TVQzY+4D7wJLBgkdVasCEFxSUBYBYIlA=
github.com/spf13/cast v1.5.1/go.mod h1:b9PdjNptOpzXr7Rq1q9gJML/2cdGQAo69NKzQ10KN48=
github.com/spf13/cast v1.8.0 h1:gEN9K4b8Xws4EX0+a0reLmhq8moKn7ntRlQYgjPeCDk=
github.com/spf13/cast v1.8.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.3.1-0.20190311161405-34c6fa2dc709/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
@ -81,14 +81,13 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/apimachinery v0.26.10 h1:aE+J2KIbjctFqPp3Y0q4Wh2PD+l1p2g3Zp4UYjSvtGU=
k8s.io/apimachinery v0.26.10/go.mod h1:iT1ZP4JBP34wwM+ZQ8ByPEQ81u043iqAcsJYftX9amM=
k8s.io/apimachinery v0.27.4 h1:CdxflD4AF61yewuid0fLl6bM4a3q04jWel0IlP+aYjs=
k8s.io/apimachinery v0.27.4/go.mod h1:XNfZ6xklnMCOGGFNqXG7bUrQCoR04dh/E7FprV6pb+E=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

View File

@ -15,13 +15,14 @@ package metadataschema
import (
"fmt"
"strings"
)
// Built-in authentication profiles
var BuiltinAuthenticationProfiles map[string][]AuthenticationProfile
// ParseBuiltinAuthenticationProfile returns an AuthenticationProfile(s) from a given BuiltinAuthenticationProfile.
func ParseBuiltinAuthenticationProfile(bi BuiltinAuthenticationProfile) ([]AuthenticationProfile, error) {
func ParseBuiltinAuthenticationProfile(bi BuiltinAuthenticationProfile, componentTitle string) ([]AuthenticationProfile, error) {
profiles, ok := BuiltinAuthenticationProfiles[bi.Name]
if !ok {
return nil, fmt.Errorf("built-in authentication profile %s does not exist", bi.Name)
@ -30,8 +31,39 @@ func ParseBuiltinAuthenticationProfile(bi BuiltinAuthenticationProfile) ([]Authe
res := make([]AuthenticationProfile, len(profiles))
for i, profile := range profiles {
res[i] = profile
res[i].Metadata = mergedMetadata(bi.Metadata, res[i].Metadata...)
// Deep copy the metadata slice to avoid side effects when manually updating some required -> non-required fields while deprecating fields for Kafka/PostgreSQL.
// TODO: remove all of this manipulation in Dapr 1.17!!
originalMetadata := profile.Metadata
metadataCopy := make([]Metadata, len(originalMetadata))
copy(metadataCopy, originalMetadata)
if componentTitle == "Apache Kafka" || strings.ToLower(componentTitle) == "postgresql" {
removeRequiredOnSomeAWSFields(&metadataCopy)
}
merged := mergedMetadata(bi.Metadata, metadataCopy...)
// Note: We must apply the removal of deprecated fields after the merge!!
// Here, we remove some deprecated fields as we support the transition to a new auth profile
if profile.Title == "AWS: Assume IAM Role" && (componentTitle == "Apache Kafka" || strings.ToLower(componentTitle) == "postgresql") {
merged = removeSomeDeprecatedFieldsOnUnrelatedAuthProfiles(merged)
}
// Here, there are no metadata fields that need deprecating.
if profile.Title == "AWS: Credentials from Environment Variables" && (componentTitle == "Apache Kafka" || strings.ToLower(componentTitle) == "postgresql") {
merged = removeAllDeprecatedFieldsOnUnrelatedAuthProfiles(merged)
}
// Here, this is a new auth profile, so remove all deprecated fields as unrelated.
if profile.Title == "AWS: IAM Roles Anywhere" && (componentTitle == "Apache Kafka" || strings.ToLower(componentTitle) == "postgresql") {
merged = removeAllDeprecatedFieldsOnUnrelatedAuthProfiles(merged)
}
res[i].Metadata = merged
}
return res, nil
}
@ -45,3 +77,58 @@ func mergedMetadata(base []Metadata, add ...Metadata) []Metadata {
res = append(res, add...)
return res
}
// removeRequiredOnSomeAWSFields needs to be removed in Dapr 1.17, once the duplicated AWS IAM fields are removed
// and we standardize on the remaining fields.
// Currently, there are duplicated pairs: awsAccessKey/accessKey, awsSecretKey/secretKey, and awsRegion/region.
// The accessKey, secretKey, and region fields are normally marked required, as they are part of the built-in AWS auth profile fields.
// However, as we remove the aws-prefixed ones, we need to mark the normally required ones as not required, only for PostgreSQL and Kafka.
// This way we do not break existing users, and we transition them to the standardized fields.
func removeRequiredOnSomeAWSFields(metadata *[]Metadata) {
if metadata == nil {
return
}
for i := range *metadata {
field := &(*metadata)[i]
if field == nil {
continue
}
if field.Name == "accessKey" || field.Name == "secretKey" || field.Name == "region" {
field.Required = false
}
}
}
func removeAllDeprecatedFieldsOnUnrelatedAuthProfiles(metadata []Metadata) []Metadata {
filteredMetadata := []Metadata{}
for _, field := range metadata {
// Skip the deprecated aws-prefixed fields.
if strings.HasPrefix(field.Name, "aws") {
continue
}
filteredMetadata = append(filteredMetadata, field)
}
return filteredMetadata
}
func removeSomeDeprecatedFieldsOnUnrelatedAuthProfiles(metadata []Metadata) []Metadata {
filteredMetadata := []Metadata{}
for _, field := range metadata {
// region is required in the Assume Role auth profile, so this is needed for now.
if field.Name == "region" {
field.Required = true
}
// Skip the deprecated aws-prefixed fields.
if field.Name == "awsAccessKey" || field.Name == "awsSecretKey" || field.Name == "awsSessionToken" || field.Name == "awsRegion" {
continue
}
filteredMetadata = append(filteredMetadata, field)
}
return filteredMetadata
}
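The removal helpers above boil down to prefix- and name-based filtering over a metadata slice. A self-contained sketch of the same pattern (the `Metadata` struct here is a minimal stand-in, not the real metadataschema type):

```go
package main

import (
	"fmt"
	"strings"
)

// Metadata is a minimal stand-in for the metadata-schema type used above.
type Metadata struct {
	Name     string
	Required bool
}

// dropDeprecatedAWSFields mirrors the prefix-based filtering shown in the diff:
// any field whose name starts with "aws" is treated as deprecated and removed.
func dropDeprecatedAWSFields(metadata []Metadata) []Metadata {
	filtered := make([]Metadata, 0, len(metadata))
	for _, field := range metadata {
		if strings.HasPrefix(field.Name, "aws") {
			continue
		}
		filtered = append(filtered, field)
	}
	return filtered
}

func main() {
	fields := []Metadata{{Name: "awsRegion"}, {Name: "region", Required: true}, {Name: "accessKey"}}
	for _, f := range dropDeprecatedAWSFields(fields) {
		fmt.Println(f.Name) // prints "region" then "accessKey"
	}
}
```

Because the filter only inspects field names, it applies uniformly to any auth profile's metadata, which is why the diff can reuse it across the Kafka and PostgreSQL special cases.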

View File

@ -67,7 +67,7 @@ func (c *ComponentMetadata) IsValid() error {
// Append built-in authentication profiles
for _, profile := range c.BuiltInAuthenticationProfiles {
appendProfiles, err := ParseBuiltinAuthenticationProfile(profile)
appendProfiles, err := ParseBuiltinAuthenticationProfile(profile, c.Title)
if err != nil {
return err
}

6
.github/holopin.yml vendored
View File

@ -1,6 +1,6 @@
organization: dapr
defaultSticker: clmjkxscc122740fl0mkmb7egi
defaultSticker: clrqfypv0282430gjx4hys94pc
stickers:
-
id: clmjkxscc122740fl0mkmb7egi
alias: ghc2023
id: clrqfypv0282430gjx4hys94pc
alias: components-badge

View File

@ -2,7 +2,7 @@ version: '2'
services:
db:
image: mysql:8
command: --default-authentication-plugin=mysql_native_password
command: --mysql_native_password=ON
restart: always
environment:
MYSQL_ROOT_PASSWORD: root

View File

@ -0,0 +1,15 @@
version: "3.8"
services:
localstack:
container_name: "conformance-aws-secrets-manager"
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566"
environment:
- DEBUG=1
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${PWD}/.github/scripts/docker-compose-init/init-conformance-state-aws-secrets-manager.sh:/etc/localstack/init/ready.d/init-aws.sh" # ready hook
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"

View File

@ -9,9 +9,13 @@ services:
shm_size: 1g
ulimits:
core: -1
# Setting nofile to 4096 and hard to 1048576, as recommended by Solace documentation
# Otherwise, the container will have an error and crash with:
# ERROR POST Violation [022]:Required system resource missing, Hard resource limit nofile 1048576 is required, 6592 detected
# https://docs.solace.com/Software-Broker/System-Resource-Requirements.htm#concurrent-open-files-considerations
nofile:
soft: 2448
hard: 6592
soft: 4096
hard: 1048576
deploy:
restart_policy:
condition: on-failure

View File

@ -1,4 +1,4 @@
version: '2'
version: '3'
services:
sqlserver:
image: mcr.microsoft.com/mssql/server:2019-latest

View File

@ -0,0 +1,54 @@
terraform {
required_version = ">=0.13"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
}
}
variable "TIMESTAMP" {
type = string
description = "Timestamp of the GitHub workflow run."
}
variable "UNIQUE_ID" {
type = string
description = "Unique ID of the GitHub workflow run."
}
provider "aws" {
region = "us-east-1"
default_tags {
tags = {
Purpose = "AutomatedConformanceTesting"
Timestamp = "${var.TIMESTAMP}"
}
}
}
# Create the first secret in AWS Secrets Manager
resource "aws_secretsmanager_secret" "conftestsecret" {
name = "conftestsecret"
description = "Secret for conformance test"
recovery_window_in_days = 0
}
resource "aws_secretsmanager_secret_version" "conftestsecret_value" {
secret_id = aws_secretsmanager_secret.conftestsecret.id
secret_string = "abcd"
}
# Create the second secret in AWS Secrets Manager
resource "aws_secretsmanager_secret" "secondsecret" {
name = "secondsecret"
description = "Another secret for conformance test"
recovery_window_in_days = 0
}
resource "aws_secretsmanager_secret_version" "secondsecret_value" {
secret_id = aws_secretsmanager_secret.secondsecret.id
secret_string = "efgh"
}

View File

@ -4,4 +4,4 @@ set -e
export INFLUX_TOKEN=$(openssl rand -base64 32)
echo "INFLUX_TOKEN=$INFLUX_TOKEN" >> $GITHUB_ENV
docker-compose -f .github/infrastructure/docker-compose-influxdb.yml -p influxdb up -d
docker compose -f .github/infrastructure/docker-compose-influxdb.yml -p influxdb up -d

View File

@ -0,0 +1,9 @@
#!/bin/sh
set +e
# Navigate to the Terraform directory
cd ".github/infrastructure/terraform/conformance/secretstores/aws/secretsmanager"
# Run Terraform
terraform destroy -auto-approve -var="UNIQUE_ID=$UNIQUE_ID" -var="TIMESTAMP=$CURRENT_TIME"

View File

@ -0,0 +1,15 @@
#!/bin/sh
set -e
# Set variables for GitHub Actions
echo "AWS_REGION=us-east-1" >> $GITHUB_ENV
# Navigate to the Terraform directory
cd ".github/infrastructure/terraform/conformance/secretstores/aws/secretsmanager"
# Run Terraform
terraform init
terraform validate -no-color
terraform plan -no-color -var="UNIQUE_ID=$UNIQUE_ID" -var="TIMESTAMP=$CURRENT_TIME"
terraform apply -auto-approve -var="UNIQUE_ID=$UNIQUE_ID" -var="TIMESTAMP=$CURRENT_TIME"

View File

@ -0,0 +1,8 @@
#!/bin/bash
set -e
FILE="$1"
PROJECT="${2:-$FILE}"
docker compose -f .github/infrastructure/docker-compose-${FILE}.yml -p ${PROJECT} logs

View File

@ -5,4 +5,4 @@ set -e
FILE="$1"
PROJECT="${2:-$FILE}"
docker-compose -f .github/infrastructure/docker-compose-${FILE}.yml -p ${PROJECT} up -d
docker compose -f .github/infrastructure/docker-compose-${FILE}.yml -p ${PROJECT} up -d

View File

@ -7,10 +7,12 @@ const owners = [
'berndverst',
'daixiang0',
'DeepanshuA',
'elena-kolevska',
'halspang',
'ItalyPaleAle',
'jjcollinge',
'joshvanl',
'mikeee',
'msfussell',
'mukundansundar',
'pkedy',
@ -19,6 +21,7 @@ const owners = [
'RyanLettieri',
'shivamkm07',
'shubham1172',
'sicoyle',
'skyao',
'Taction',
'tmacam',
@ -252,4 +255,4 @@ async function rerunWorkflow(github, issue, workflowrunid) {
repo: issue.repo,
run_id: workflowrunid,
});
}
}

View File

@ -0,0 +1,9 @@
#!/bin/bash
awslocal secretsmanager create-secret \
--name conftestsecret \
--secret-string "abcd"
awslocal secretsmanager create-secret \
--name secondsecret \
--secret-string "efgh"

View File

@ -440,6 +440,7 @@ const components = {
'pubsub.solace': {
conformance: true,
conformanceSetup: 'docker-compose.sh solace',
conformanceLogs: 'docker-compose-logs.sh solace',
},
'secretstores.azure.keyvault': {
certification: true,
@ -492,6 +493,17 @@ const components = {
conformance: true,
certification: true,
},
'secretstores.aws.secretsmanager.terraform': {
conformance: true,
requireAWSCredentials: true,
requireTerraform: true,
conformanceSetup: 'conformance-secretstores.aws.secretsmanager.secretsmanager-setup.sh',
conformanceDestroy: 'conformance-secretstores.aws.secretsmanager.secretsmanager-destroy.sh',
},
'secretstores.aws.secretsmanager.docker': {
conformance: true,
conformanceSetup: 'docker-compose.sh secrets-manager',
},
'state.aws.dynamodb': {
certification: true,
requireAWSCredentials: true,
@ -813,6 +825,7 @@ const components = {
* @property {boolean?} requireTerraform If true, requires Terraform
* @property {boolean?} requireKind If true, requires KinD
* @property {string?} conformanceSetup Setup script for conformance tests
* @property {string?} conformanceLogs Logs script for conformance tests
* @property {string?} conformanceDestroy Destroy script for conformance tests
* @property {string?} certificationSetup Setup script for certification tests
* @property {string?} certificationDestroy Destroy script for certification tests
@ -834,6 +847,7 @@ const components = {
* @property {boolean?} require-kind Requires KinD
* @property {string?} setup-script Setup script
* @property {string?} destroy-script Destroy script
* @property {string?} logs-script Logs script in case of failure
* @property {string?} nodejs-version Install the specified Node.js version if set
* @property {string?} mongodb-version Install the specified MongoDB version if set
* @property {string?} source-pkg Source package
@ -904,6 +918,7 @@ function GenerateMatrix(testKind, enableCloudTests) {
'require-kind': comp.requireKind ? 'true' : undefined,
'setup-script': comp[testKind + 'Setup'] || undefined,
'destroy-script': comp[testKind + 'Destroy'] || undefined,
'logs-script': comp[testKind + 'Logs'] || undefined,
'nodejs-version': comp.nodeJsVersion || undefined,
'mongodb-version': comp.mongoDbVersion || undefined,
'source-pkg': comp.sourcePkg

View File

@ -97,12 +97,12 @@ jobs:
run:
shell: bash
needs:
needs:
- generate-matrix
strategy:
fail-fast: false # Keep running even if one component fails
matrix:
matrix:
include: ${{ fromJson(needs.generate-matrix.outputs.test-matrix) }}
steps:
@ -254,12 +254,12 @@ jobs:
AWS_REGION: "${{ env.AWS_REGION }}"
run: |
echo "Running certification tests for ${{ matrix.component }} ... "
echo "Source Pacakge: " ${{ matrix.source-pkg }}
echo "Source Package: " ${{ matrix.source-pkg }}
export GOLANG_PROTOBUF_REGISTRATION_CONFLICT=ignore
set +e
gotestsum --jsonfile ${{ env.TEST_OUTPUT_FILE_PREFIX }}_certification.json \
--junitfile ${{ env.TEST_OUTPUT_FILE_PREFIX }}_certification.xml --format standard-quiet -- \
-coverprofile=cover.out -covermode=set -tags=certtests -timeout=30m -coverpkg=${{ matrix.source-pkg }}
-coverprofile=cover.out -covermode=set -tags=certtests,unit -timeout=30m -coverpkg=${{ matrix.source-pkg }}
status=$?
echo "Completed certification tests for ${{ matrix.component }} ... "
if test $status -ne 0; then
@ -292,10 +292,10 @@ jobs:
fi
- name: Upload Cert Coverage Report File
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: github.event_name == 'schedule'
with:
name: cert_code_cov
name: ${{ matrix.component }}_cert_code_cov
path: ${{ env.TEST_PATH }}/tmp/cert_code_cov_files
retention-days: 7
@ -311,10 +311,10 @@ jobs:
fi
- name: Upload result files
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: always()
with:
name: result_files
name: ${{ matrix.component }}_result_files
path: tmp/result_files
retention-days: 1
@ -334,7 +334,7 @@ jobs:
name: Post-completion
runs-on: ubuntu-22.04
if: always()
needs:
needs:
- certification
- generate-matrix
steps:
@ -349,11 +349,11 @@ jobs:
- name: Download test result artifact
if: always() && env.PR_NUMBER != ''
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
continue-on-error: true
id: testresults
with:
name: result_files
# name: is not specified, which results in all artifacts being downloaded
path: tmp/result_files
- name: Build message

View File

@ -65,7 +65,7 @@ jobs:
GOOS: ${{ matrix.target_os }}
GOARCH: ${{ matrix.target_arch }}
GOPROXY: https://proxy.golang.org
GOLANGCI_LINT_VER: "v1.55.2"
GOLANGCI_LINT_VER: "v1.64.6"
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macOS-latest]
@ -143,10 +143,11 @@ jobs:
run: make check-component-metadata-schema-diff
- name: Run golangci-lint
if: matrix.target_arch == 'amd64' && matrix.target_os == 'linux' && steps.skip_check.outputs.should_skip != 'true'
uses: golangci/golangci-lint-action@v3.2.0
uses: golangci/golangci-lint-action@v6.0.1
with:
version: ${{ env.GOLANGCI_LINT_VER }}
skip-cache: true
only-new-issues: true
args: --timeout 15m
- name: Run go mod tidy check diff
if: matrix.target_arch == 'amd64' && matrix.target_os == 'linux' && steps.skip_check.outputs.should_skip != 'true'

View File

@ -33,7 +33,7 @@ jobs:
GOOS: linux
GOARCH: amd64
GOPROXY: https://proxy.golang.org
GOLANGCI_LINT_VER: "v1.55.2"
GOLANGCI_LINT_VER: "v1.64.6"
steps:
- name: Check out code into the Go module directory
if: ${{ steps.skip_check.outputs.should_skip != 'true' }}
@ -62,10 +62,11 @@ jobs:
run: make check-component-metadata
- name: Run golangci-lint
if: steps.skip_check.outputs.should_skip != 'true'
uses: golangci/golangci-lint-action@v3.4.0
uses: golangci/golangci-lint-action@v6.0.1
with:
version: ${{ env.GOLANGCI_LINT_VER }}
skip-cache: true
only-new-issues: true
args: --timeout 15m
- name: Run go mod tidy check diff
if: steps.skip_check.outputs.should_skip != 'true'

View File

@ -100,12 +100,12 @@ jobs:
run:
shell: bash
needs:
needs:
- generate-matrix
strategy:
fail-fast: false # Keep running even if one component fails
matrix:
matrix:
include: ${{ fromJson(needs.generate-matrix.outputs.test-matrix) }}
steps:
@ -267,7 +267,7 @@ jobs:
- name: Run tests
continue-on-error: true
run: |
set -e
set -e
KIND=$(echo ${{ matrix.component }} | cut -d. -f1)
NAME=$(echo ${{ matrix.component }} | cut -d. -f2-)
KIND_UPPER="$(tr '[:lower:]' '[:upper:]' <<< ${KIND:0:1})${KIND:1}"
@ -277,7 +277,7 @@ jobs:
fi
echo "Running tests for Test${KIND_UPPER}Conformance/${KIND}/${NAME} ... "
echo "Source Pacakge: " ${{ matrix.source-pkg }}
echo "Source Package: " ${{ matrix.source-pkg }}
set +e
gotestsum --jsonfile ${{ env.TEST_OUTPUT_FILE_PREFIX }}_conformance.json \
@ -317,6 +317,10 @@ jobs:
exit 1
fi
- name: Retrieve infrastructure failure logs
if: failure() && matrix.logs-script != ''
run: .github/scripts/components-scripts/${{ matrix.logs-script }}
- name: Prepare test result info
if: always()
run: |
@ -329,31 +333,31 @@ jobs:
fi
- name: Upload result files
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: always()
with:
name: result_files
name: ${{ matrix.component }}_result_files
path: tmp/result_files
retention-days: 1
- name: Prepare coverage report file to upload
if: github.event_name == 'schedule'
run: |
mkdir -p tmp/conf_code_cov
cp cover.out tmp/conf_code_cov/${{ env.SOURCE_PATH_LINEAR }}.out
- name: Upload coverage report file
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: github.event_name == 'schedule'
with:
name: conf_code_cov
name: ${{ matrix.component }}_conf_code_cov
path: tmp/conf_code_cov
retention-days: 7
# Upload logs for test analytics to consume
- name: Upload test results
if: always()
uses: actions/upload-artifact@main
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.component }}_conformance_test
path: ${{ env.TEST_OUTPUT_FILE_PREFIX }}_conformance.*
@ -381,11 +385,11 @@ jobs:
- name: Download test result artifact
if: always() && env.PR_NUMBER != ''
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
continue-on-error: true
id: testresults
with:
name: result_files
# name: is not specified, which results in all artifacts being downloaded
path: tmp/result_files
- name: Build message

View File

@ -9,17 +9,17 @@ jobs:
upload-bundle:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
cache: 'false'
- name: Checkout code
uses: actions/checkout@v4
- name: Build component-metadata-bundle.json
run: make bundle-component-metadata
- name: Upload component-metadata-bundle.json
uses: softprops/action-gh-release@v1
if: startsWith(github.ref, 'refs/tags/')
with:
files: component-metadata-bundle.json
files: component-metadata-bundle.json

View File

@ -23,7 +23,7 @@ run:
# default value is empty list, but next dirs are always skipped independently
# from this option's value:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs:
issues.exclude-dirs:
- ^vendor$
# which files to skip: they will be analyzed, but issues from them
@ -37,7 +37,7 @@ run:
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
format: tab
formats: tab
# print lines of code with issue, default is true
print-issued-lines: true
@ -60,7 +60,7 @@ linters-settings:
# [deprecated] comma-separated list of pairs of the form pkg:regex
# the regex is used to ignore names within pkg. (default "fmt:.*").
# see https://github.com/kisielk/errcheck#the-deprecated-method for details
ignore: fmt:.*,io/ioutil:^Read.*
exclude-functions: fmt:.*,io/ioutil:^Read.*
# path to a file containing a list of functions to exclude from checking
# see https://github.com/kisielk/errcheck#excluding-functions for details
@ -71,9 +71,6 @@ linters-settings:
statements: 40
govet:
# report about shadowed variables
check-shadowing: true
# settings per analyzer
settings:
printf: # analyzer name, run `go tool vet help` to see all analyzers
@ -86,6 +83,7 @@ linters-settings:
# enable or disable analyzers by name
enable:
- atomicalign
- shadow
enable-all: false
disable:
- shadow
@ -106,9 +104,6 @@ linters-settings:
gocognit:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
@ -121,6 +116,10 @@ linters-settings:
rules:
main:
deny:
- pkg: "github.com/golang-jwt/jwt/v5"
desc: "must use github.com/lestrrat-go/jwx/v2/jwt"
- pkg: "github.com/golang-jwt/jwt/v4"
desc: "must use github.com/lestrrat-go/jwx/v2/jwt"
- pkg: "github.com/Sirupsen/logrus"
desc: "must use github.com/dapr/kit/logger"
- pkg: "github.com/agrea/ptr"
@ -262,6 +261,11 @@ linters-settings:
allow-case-traling-whitespace: true
# Allow declarations (var) to be cuddled.
allow-cuddle-declarations: false
testifylint:
disable:
- float-compare
- negative-positive
- go-require
linters:
fast: false
@ -277,28 +281,23 @@ linters:
- gocyclo
- gocognit
- godox
- interfacer
- lll
- maligned
- scopelint
- unparam
- wsl
- gomnd
- mnd
- testpackage
- goerr113
- err113
- nestif
- nlreturn
- exhaustive
- exhaustruct
- noctx
- gci
- golint
- tparallel
- paralleltest
- wrapcheck
- tagliatelle
- ireturn
- exhaustivestruct
- errchkjson
- contextcheck
- gomoddirectives
@ -307,7 +306,6 @@ linters:
- varnamelen
- errorlint
- forcetypeassert
- ifshort
- maintidx
- nilnil
- predeclared
@ -320,10 +318,8 @@ linters:
- asasalint
- rowserrcheck
- sqlclosecheck
- structcheck
- deadcode
- nosnakecase
- varcheck
- goconst
- tagalign
- inamedparam
- canonicalheader
- fatcontext

View File

@ -2,9 +2,9 @@
Thank you for your interest in Dapr!
This project welcomes contributions and suggestions. Most contributions require you to signoff on your commits via
the Developer Certificate of Origin (DCO). When you submit a pull request, a DCO-bot will automatically determine
whether you need to provide signoff for your commit. Please follow the instructions provided by DCO-bot, as pull
requests cannot be merged until the author(s) have provided signoff to fulfill the DCO requirement.
You may find more information on the DCO requirements [below](#developer-certificate-of-origin-signing-your-work).
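The sign-off mentioned above is added with Git's `-s`/`--signoff` flag. A minimal sketch in a throwaway repository (the name, email, and commit message are placeholders):

```shell
# Demonstrate the DCO sign-off trailer in a scratch repository.
# In practice you simply run `git commit -s` in your own clone.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.name "Jane Dev"
git config user.email "jane@example.com"
echo demo > file.txt && git add file.txt
git commit -q -s -m "docs: example change"
# The full commit message now ends with a Signed-off-by trailer:
git log -1 --format=%B
```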
@ -64,7 +64,7 @@ All contributions come through pull requests. To submit a proposed change, we re
#### Use work-in-progress PRs for early feedback
A good way to communicate before investing too much time is to create a "Work-in-progress" PR and share it with your reviewers. The standard way of doing this is to add a "[WIP]" prefix in your PR's title and assign the **do-not-merge** label. This will let people looking at your PR know that it is not well baked yet.
A good way to communicate before investing too much time is to create a "Work-in-progress" PR and share it with your reviewers. The standard way of doing this is to open your PR as a draft, add a "[WIP]" prefix in your PR's title, and assign the **do-not-merge** label. This will let people looking at your PR know that it is not well baked yet.
### Developer Certificate of Origin: Signing your work

View File

@ -65,7 +65,7 @@ export GH_LINT_VERSION := $(shell grep 'GOLANGCI_LINT_VER:' .github/workflows/co
ifeq (,$(LINTER_BINARY))
INSTALLED_LINT_VERSION := "v0.0.0"
else
INSTALLED_LINT_VERSION=v$(shell $(LINTER_BINARY) version | grep -Eo '([0-9]+\.)+[0-9]+' - || "")
INSTALLED_LINT_VERSION=v$(shell $(LINTER_BINARY) version | grep -Eo '([0-9]+\.)+[0-9]+' - | head -1 || "")
endif
# Build tools
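To see why the `head -1` added to the version-detection line above matters: `golangci-lint version` can print more than one version-like string (the linter's own version plus the Go toolchain it was built with), and without `head -1` every match is captured. A small illustration; the sample output line below is invented, not taken from a real run:

```shell
# Hypothetical `golangci-lint version` output; real text varies by release.
out='golangci-lint has version 1.55.2 built with go1.21.3 from abc1234'
# grep -Eo emits BOTH 1.55.2 and 1.21.3 (one per line);
# head -1 keeps only the first match, the linter's own version.
ver="v$(printf '%s' "$out" | grep -Eo '([0-9]+\.)+[0-9]+' | head -1)"
echo "$ver"
```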
@ -100,6 +100,7 @@ verify-linter-version:
echo "[!] Yours: $(INSTALLED_LINT_VERSION)"; \
echo "[!] Theirs: $(GH_LINT_VERSION)"; \
echo "[!] Upgrade: curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin $(GH_LINT_VERSION)"; \
GOLANGCI_LINT=$(go env GOPATH)/bin/$(GOLANGCI_LINT) \
sleep 3; \
fi;
@ -116,7 +117,12 @@ test:
################################################################################
.PHONY: lint
lint: verify-linter-installed verify-linter-version
$(GOLANGCI_LINT) run --timeout=20m
ifdef LINT_BASE
@echo "LINT_BASE is set to "$(LINT_BASE)". Linter will only check diff."
$(GOLANGCI_LINT) run --timeout=20m --max-same-issues 0 --max-issues-per-linter 0 --new-from-rev $(shell git rev-parse $(LINT_BASE))
else
$(GOLANGCI_LINT) run --timeout=20m --max-same-issues 0 --max-issues-per-linter 0
endif
################################################################################
# Target: modtidy-all #
@ -249,4 +255,4 @@ prettier-format:
################################################################################
.PHONY: conf-tests
conf-tests:
CGO_ENABLED=$(CGO) go test -v -tags=conftests -count=1 ./tests/conformance
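With the `ifdef LINT_BASE` branch added to the `lint` target above, the linter can be restricted to issues introduced since a given revision via `--new-from-rev` (e.g. `make lint LINT_BASE=origin/master` in this repository). A sketch of how the two invocations differ; `run_lint` below is a stand-in for the real target, which calls golangci-lint:

```shell
# Simulate the Makefile's conditional: with LINT_BASE set, the linter only
# reports issues newer than that revision; otherwise it does a full run.
run_lint() {
  if [ -n "${LINT_BASE:-}" ]; then
    echo "golangci-lint run --timeout=20m --new-from-rev $LINT_BASE"
  else
    echo "golangci-lint run --timeout=20m"
  fi
}
run_lint                          # full run
LINT_BASE=origin/main run_lint    # diff-only run
```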

View File

@ -57,11 +57,11 @@ func TestPublishMsg(t *testing.T) { //nolint:paralleltest
}}}
d := NewDingTalkWebhook(logger.NewLogger("test"))
err := d.Init(context.Background(), m)
err := d.Init(t.Context(), m)
require.NoError(t, err)
req := &bindings.InvokeRequest{Data: []byte(msg), Operation: bindings.CreateOperation, Metadata: map[string]string{}}
_, err = d.Invoke(context.Background(), req)
_, err = d.Invoke(t.Context(), req)
require.NoError(t, err)
}
@ -78,7 +78,7 @@ func TestBindingReadAndInvoke(t *testing.T) { //nolint:paralleltest
}}
d := NewDingTalkWebhook(logger.NewLogger("test"))
err := d.Init(context.Background(), m)
err := d.Init(t.Context(), m)
require.NoError(t, err)
var count int32
@ -92,11 +92,11 @@ func TestBindingReadAndInvoke(t *testing.T) { //nolint:paralleltest
return nil, nil
}
err = d.Read(context.Background(), handler)
err = d.Read(t.Context(), handler)
require.NoError(t, err)
req := &bindings.InvokeRequest{Data: []byte(msg), Operation: bindings.GetOperation, Metadata: map[string]string{}}
_, err = d.Invoke(context.Background(), req)
_, err = d.Invoke(t.Context(), req)
require.NoError(t, err)
select {
@ -117,7 +117,7 @@ func TestBindingClose(t *testing.T) {
"id": "x",
},
}}
require.NoError(t, d.Init(context.Background(), m))
require.NoError(t, d.Init(t.Context(), m))
require.NoError(t, d.Close())
require.NoError(t, d.Close(), "second close should not error")
}

View File

@ -114,3 +114,7 @@ func (s *AliCloudOSS) GetComponentMetadata() (metadataInfo metadata.MetadataMap)
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (s *AliCloudOSS) Close() error {
return nil
}

View File

@ -3,7 +3,7 @@ package sls
import (
"context"
"encoding/json"
"fmt"
"errors"
"reflect"
"time"
@ -61,16 +61,16 @@ func NewAliCloudSlsLogstorage(logger logger.Logger) bindings.OutputBinding {
func (s *AliCloudSlsLogstorage) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
// verify the metadata property
if logProject := req.Metadata["project"]; logProject == "" {
return nil, fmt.Errorf("SLS binding error: project property not supplied")
return nil, errors.New("SLS binding error: project property not supplied")
}
if logstore := req.Metadata["logstore"]; logstore == "" {
return nil, fmt.Errorf("SLS binding error: logstore property not supplied")
return nil, errors.New("SLS binding error: logstore property not supplied")
}
if topic := req.Metadata["topic"]; topic == "" {
return nil, fmt.Errorf("SLS binding error: topic property not supplied")
return nil, errors.New("SLS binding error: topic property not supplied")
}
if source := req.Metadata["source"]; source == "" {
return nil, fmt.Errorf("SLS binding error: source property not supplied")
return nil, errors.New("SLS binding error: source property not supplied")
}
log, err := s.parseLog(req)
@ -96,6 +96,7 @@ func (s *AliCloudSlsLogstorage) parseLog(req *bindings.InvokeRequest) (*sls.Log,
if err != nil {
return nil, err
}
//nolint:gosec
return producer.GenerateLog(uint32(time.Now().Unix()), logInfo), nil
}
@ -134,3 +135,11 @@ func (s *AliCloudSlsLogstorage) GetComponentMetadata() (metadataInfo metadata.Me
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (s *AliCloudSlsLogstorage) Close() error {
if s.producer != nil {
return s.producer.Close(time.Second.Milliseconds() * 5)
}
return nil
}

View File

@ -271,7 +271,6 @@ func (s *AliCloudTableStore) create(req *bindings.InvokeRequest, resp *bindings.
}
_, err = s.client.PutRow(putRequest)
if err != nil {
return err
}
@ -302,7 +301,6 @@ func (s *AliCloudTableStore) delete(req *bindings.InvokeRequest, resp *bindings.
change.SetCondition(tablestore.RowExistenceExpectation_IGNORE) //nolint:nosnakecase
deleteReq := &tablestore.DeleteRowRequest{DeleteRowChange: change}
_, err = s.client.DeleteRow(deleteReq)
if err != nil {
return err
}
@ -353,3 +351,7 @@ func (s *AliCloudTableStore) GetComponentMetadata() (metadataInfo contribMetadat
contribMetadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, contribMetadata.BindingType)
return
}
func (s *AliCloudTableStore) Close() error {
return nil
}

View File

@ -14,7 +14,6 @@ limitations under the License.
package tablestore
import (
"context"
"encoding/json"
"os"
"testing"
@ -52,7 +51,7 @@ func TestDataEncodeAndDecode(t *testing.T) {
metadata := bindings.Metadata{Base: metadata.Base{
Properties: getTestProperties(),
}}
aliCloudTableStore.Init(context.Background(), metadata)
aliCloudTableStore.Init(t.Context(), metadata)
// test create
putData := map[string]interface{}{
@ -71,7 +70,7 @@ func TestDataEncodeAndDecode(t *testing.T) {
Data: data,
}
putInvokeResp, err := aliCloudTableStore.Invoke(context.Background(), putRowReq)
putInvokeResp, err := aliCloudTableStore.Invoke(t.Context(), putRowReq)
require.NoError(t, err)
assert.NotNil(t, putInvokeResp)
@ -82,7 +81,7 @@ func TestDataEncodeAndDecode(t *testing.T) {
"column2": int64(2),
})
putInvokeResp, err = aliCloudTableStore.Invoke(context.Background(), putRowReq)
putInvokeResp, err = aliCloudTableStore.Invoke(t.Context(), putRowReq)
require.NoError(t, err)
assert.NotNil(t, putInvokeResp)
@ -102,7 +101,7 @@ func TestDataEncodeAndDecode(t *testing.T) {
Data: getData,
}
getInvokeResp, err := aliCloudTableStore.Invoke(context.Background(), getInvokeReq)
getInvokeResp, err := aliCloudTableStore.Invoke(t.Context(), getInvokeReq)
require.NoError(t, err)
assert.NotNil(t, getInvokeResp)
@ -136,7 +135,7 @@ func TestDataEncodeAndDecode(t *testing.T) {
Data: listData,
}
listResp, err := aliCloudTableStore.Invoke(context.Background(), listReq)
listResp, err := aliCloudTableStore.Invoke(t.Context(), listReq)
require.NoError(t, err)
assert.NotNil(t, listResp)
@ -164,12 +163,12 @@ func TestDataEncodeAndDecode(t *testing.T) {
Data: deleteData,
}
deleteResp, err := aliCloudTableStore.Invoke(context.Background(), deleteReq)
deleteResp, err := aliCloudTableStore.Invoke(t.Context(), deleteReq)
require.NoError(t, err)
assert.NotNil(t, deleteResp)
getInvokeResp, err = aliCloudTableStore.Invoke(context.Background(), getInvokeReq)
getInvokeResp, err = aliCloudTableStore.Invoke(t.Context(), getInvokeReq)
require.NoError(t, err)
assert.Nil(t, getInvokeResp.Data)

View File

@ -267,3 +267,7 @@ func (a *APNS) GetComponentMetadata() (metadataInfo contribMetadata.MetadataMap)
contribMetadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, contribMetadata.BindingType)
return
}
func (a *APNS) Close() error {
return nil
}

View File

@ -15,7 +15,6 @@ package apns
import (
"bytes"
"context"
"io"
"net/http"
"strings"
@ -52,7 +51,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.NoError(t, err)
assert.Equal(t, developmentPrefix, binding.urlPrefix)
})
@ -67,7 +66,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.NoError(t, err)
assert.Equal(t, productionPrefix, binding.urlPrefix)
})
@ -81,7 +80,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.NoError(t, err)
assert.Equal(t, productionPrefix, binding.urlPrefix)
})
@ -94,7 +93,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.Error(t, err, "the key-id parameter is required")
})
@ -107,7 +106,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.NoError(t, err)
assert.Equal(t, testKeyID, binding.authorizationBuilder.keyID)
})
@ -120,7 +119,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.Error(t, err, "the team-id parameter is required")
})
@ -133,7 +132,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.NoError(t, err)
assert.Equal(t, testTeamID, binding.authorizationBuilder.teamID)
})
@ -146,7 +145,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.Error(t, err, "the private-key parameter is required")
})
@ -159,7 +158,7 @@ func TestInit(t *testing.T) {
},
}}
binding := NewAPNS(testLogger).(*APNS)
err := binding.Init(context.Background(), metadata)
err := binding.Init(t.Context(), metadata)
require.NoError(t, err)
assert.NotNil(t, binding.authorizationBuilder.privateKey)
})
@ -192,7 +191,7 @@ func TestInvoke(t *testing.T) {
t.Run("operation must be create", func(t *testing.T) {
testBinding := makeTestBinding(t, testLogger)
req := &bindings.InvokeRequest{Operation: bindings.DeleteOperation}
_, err := testBinding.Invoke(context.TODO(), req)
_, err := testBinding.Invoke(t.Context(), req)
require.Error(t, err, "operation not supported: delete")
})
@ -202,7 +201,7 @@ func TestInvoke(t *testing.T) {
Operation: bindings.CreateOperation,
Metadata: map[string]string{},
}
_, err := testBinding.Invoke(context.TODO(), req)
_, err := testBinding.Invoke(t.Context(), req)
require.Error(t, err, "the device-token parameter is required")
})
@ -213,7 +212,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the push type header is sent", func(t *testing.T) {
@ -224,7 +223,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the message ID is sent", func(t *testing.T) {
@ -235,7 +234,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the expiration is sent", func(t *testing.T) {
@ -246,7 +245,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the priority is sent", func(t *testing.T) {
@ -257,7 +256,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the topic is sent", func(t *testing.T) {
@ -268,7 +267,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the collapse ID is sent", func(t *testing.T) {
@ -279,7 +278,7 @@ func TestInvoke(t *testing.T) {
return successResponse()
})
_, _ = testBinding.Invoke(context.TODO(), successRequest)
_, _ = testBinding.Invoke(t.Context(), successRequest)
})
t.Run("the message ID is returned", func(t *testing.T) {
@ -287,7 +286,7 @@ func TestInvoke(t *testing.T) {
testBinding.client = newTestClient(func(req *http.Request) *http.Response {
return successResponse()
})
response, err := testBinding.Invoke(context.TODO(), successRequest)
response, err := testBinding.Invoke(t.Context(), successRequest)
require.NoError(t, err)
assert.NotNil(t, response.Data)
var body notificationResponse
@ -307,7 +306,7 @@ func TestInvoke(t *testing.T) {
Body: io.NopCloser(strings.NewReader(body)),
}
})
_, err := testBinding.Invoke(context.TODO(), successRequest)
_, err := testBinding.Invoke(t.Context(), successRequest)
require.Error(t, err, "BadDeviceToken")
})
}
@ -322,7 +321,7 @@ func makeTestBinding(t *testing.T, log logger.Logger) *APNS {
privateKeyKey: testPrivateKey,
},
}}
err := testBinding.Init(context.Background(), bindingMetadata)
err := testBinding.Init(t.Context(), bindingMetadata)
require.NoError(t, err)
return testBinding

View File

@ -31,9 +31,9 @@ import (
// DynamoDB allows performing stateful operations on AWS DynamoDB.
type DynamoDB struct {
client *dynamodb.DynamoDB
table string
logger logger.Logger
authProvider awsAuth.Provider
table string
logger logger.Logger
}
type dynamoDBMetadata struct {
@ -51,18 +51,27 @@ func NewDynamoDB(logger logger.Logger) bindings.OutputBinding {
}
// Init performs connection parsing for DynamoDB.
func (d *DynamoDB) Init(_ context.Context, metadata bindings.Metadata) error {
func (d *DynamoDB) Init(ctx context.Context, metadata bindings.Metadata) error {
meta, err := d.getDynamoDBMetadata(metadata)
if err != nil {
return err
}
client, err := d.getClient(meta)
opts := awsAuth.Options{
Logger: d.logger,
Properties: metadata.Properties,
Region: meta.Region,
Endpoint: meta.Endpoint,
AccessKey: meta.AccessKey,
SecretKey: meta.SecretKey,
SessionToken: meta.SessionToken,
}
provider, err := awsAuth.NewProvider(ctx, opts, awsAuth.GetConfig(opts))
if err != nil {
return err
}
d.client = client
d.authProvider = provider
d.table = meta.Table
return nil
@ -84,7 +93,7 @@ func (d *DynamoDB) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bi
return nil, err
}
_, err = d.client.PutItemWithContext(ctx, &dynamodb.PutItemInput{
_, err = d.authProvider.DynamoDB().DynamoDB.PutItemWithContext(ctx, &dynamodb.PutItemInput{
Item: item,
TableName: aws.String(d.table),
})
@ -105,19 +114,16 @@ func (d *DynamoDB) getDynamoDBMetadata(spec bindings.Metadata) (*dynamoDBMetadat
return &meta, nil
}
func (d *DynamoDB) getClient(metadata *dynamoDBMetadata) (*dynamodb.DynamoDB, error) {
sess, err := awsAuth.GetClient(metadata.AccessKey, metadata.SecretKey, metadata.SessionToken, metadata.Region, metadata.Endpoint)
if err != nil {
return nil, err
}
c := dynamodb.New(sess)
return c, nil
}
// GetComponentMetadata returns the metadata of the component.
func (d *DynamoDB) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadataStruct := dynamoDBMetadata{}
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (d *DynamoDB) Close() error {
if d.authProvider != nil {
return d.authProvider.Close()
}
return nil
}

View File

@ -23,12 +23,10 @@ import (
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/service/kinesis"
"github.com/cenkalti/backoff/v4"
"github.com/google/uuid"
"github.com/vmware/vmware-go-kcl/clientlibrary/config"
"github.com/vmware/vmware-go-kcl/clientlibrary/interfaces"
"github.com/vmware/vmware-go-kcl/clientlibrary/worker"
@ -41,15 +39,16 @@ import (
// AWSKinesis allows receiving and sending data to/from AWS Kinesis stream.
type AWSKinesis struct {
client *kinesis.Kinesis
metadata *kinesisMetadata
authProvider awsAuth.Provider
metadata *kinesisMetadata
worker *worker.Worker
workerConfig *config.KinesisClientLibConfiguration
worker *worker.Worker
streamARN *string
consumerARN *string
logger logger.Logger
streamName string
consumerName string
consumerARN *string
logger logger.Logger
consumerMode string
closed atomic.Bool
closeCh chan struct{}
@ -113,30 +112,25 @@ func (a *AWSKinesis) Init(ctx context.Context, metadata bindings.Metadata) error
return fmt.Errorf("%s invalid \"mode\" field %s", "aws.kinesis", m.KinesisConsumerMode)
}
client, err := a.getClient(m)
if err != nil {
return err
}
streamName := aws.String(m.StreamName)
stream, err := client.DescribeStreamWithContext(ctx, &kinesis.DescribeStreamInput{
StreamName: streamName,
})
if err != nil {
return err
}
if m.KinesisConsumerMode == SharedThroughput {
kclConfig := config.NewKinesisClientLibConfigWithCredential(m.ConsumerName,
m.StreamName, m.Region, m.ConsumerName,
credentials.NewStaticCredentials(m.AccessKey, m.SecretKey, ""))
a.workerConfig = kclConfig
}
a.streamARN = stream.StreamDescription.StreamARN
a.consumerMode = m.KinesisConsumerMode
a.streamName = m.StreamName
a.consumerName = m.ConsumerName
a.metadata = m
a.client = client
opts := awsAuth.Options{
Logger: a.logger,
Properties: metadata.Properties,
Region: m.Region,
AccessKey: m.AccessKey,
SecretKey: m.SecretKey,
SessionToken: "",
}
// extra configs needed per component type
provider, err := awsAuth.NewProvider(ctx, opts, awsAuth.GetConfig(opts))
if err != nil {
return err
}
a.authProvider = provider
return nil
}
@ -149,7 +143,7 @@ func (a *AWSKinesis) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*
if partitionKey == "" {
partitionKey = uuid.New().String()
}
_, err := a.client.PutRecordWithContext(ctx, &kinesis.PutRecordInput{
_, err := a.authProvider.Kinesis().Kinesis.PutRecordWithContext(ctx, &kinesis.PutRecordInput{
StreamName: &a.metadata.StreamName,
Data: req.Data,
PartitionKey: &partitionKey,
@ -162,16 +156,15 @@ func (a *AWSKinesis) Read(ctx context.Context, handler bindings.Handler) (err er
if a.closed.Load() {
return errors.New("binding is closed")
}
if a.metadata.KinesisConsumerMode == SharedThroughput {
a.worker = worker.NewWorker(a.recordProcessorFactory(ctx, handler), a.workerConfig)
a.worker = worker.NewWorker(a.recordProcessorFactory(ctx, handler), a.authProvider.Kinesis().WorkerCfg(ctx, a.streamName, a.consumerName, a.consumerMode))
err = a.worker.Start()
if err != nil {
return err
}
} else if a.metadata.KinesisConsumerMode == ExtendedFanout {
var stream *kinesis.DescribeStreamOutput
stream, err = a.client.DescribeStream(&kinesis.DescribeStreamInput{StreamName: &a.metadata.StreamName})
stream, err = a.authProvider.Kinesis().Kinesis.DescribeStream(&kinesis.DescribeStreamInput{StreamName: &a.metadata.StreamName})
if err != nil {
return err
}
@ -181,6 +174,10 @@ func (a *AWSKinesis) Read(ctx context.Context, handler bindings.Handler) (err er
}
}
stream, err := a.authProvider.Kinesis().Stream(ctx, a.streamName)
if err != nil {
return fmt.Errorf("failed to get kinesis stream arn: %v", err)
}
// Wait for context cancelation then stop
a.wg.Add(1)
go func() {
@ -192,7 +189,7 @@ func (a *AWSKinesis) Read(ctx context.Context, handler bindings.Handler) (err er
if a.metadata.KinesisConsumerMode == SharedThroughput {
a.worker.Shutdown()
} else if a.metadata.KinesisConsumerMode == ExtendedFanout {
a.deregisterConsumer(a.streamARN, a.consumerARN)
a.deregisterConsumer(ctx, stream, a.consumerARN)
}
}()
@ -227,8 +224,7 @@ func (a *AWSKinesis) Subscribe(ctx context.Context, streamDesc kinesis.StreamDes
return
default:
}
sub, err := a.client.SubscribeToShardWithContext(ctx, &kinesis.SubscribeToShardInput{
sub, err := a.authProvider.Kinesis().Kinesis.SubscribeToShardWithContext(ctx, &kinesis.SubscribeToShardInput{
ConsumerARN: consumerARN,
ShardId: s.ShardId,
StartingPosition: &kinesis.StartingPosition{Type: aws.String(kinesis.ShardIteratorTypeLatest)},
@ -270,6 +266,9 @@ func (a *AWSKinesis) Close() error {
close(a.closeCh)
}
a.wg.Wait()
if a.authProvider != nil {
return a.authProvider.Close()
}
return nil
}
@ -277,7 +276,7 @@ func (a *AWSKinesis) ensureConsumer(ctx context.Context, streamARN *string) (*st
// Only set timeout on consumer call.
conCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
consumer, err := a.client.DescribeStreamConsumerWithContext(conCtx, &kinesis.DescribeStreamConsumerInput{
consumer, err := a.authProvider.Kinesis().Kinesis.DescribeStreamConsumerWithContext(conCtx, &kinesis.DescribeStreamConsumerInput{
ConsumerName: &a.metadata.ConsumerName,
StreamARN: streamARN,
})
@ -289,7 +288,7 @@ func (a *AWSKinesis) ensureConsumer(ctx context.Context, streamARN *string) (*st
}
func (a *AWSKinesis) registerConsumer(ctx context.Context, streamARN *string) (*string, error) {
consumer, err := a.client.RegisterStreamConsumerWithContext(ctx, &kinesis.RegisterStreamConsumerInput{
consumer, err := a.authProvider.Kinesis().Kinesis.RegisterStreamConsumerWithContext(ctx, &kinesis.RegisterStreamConsumerInput{
ConsumerName: &a.metadata.ConsumerName,
StreamARN: streamARN,
})
@ -301,7 +300,6 @@ func (a *AWSKinesis) registerConsumer(ctx context.Context, streamARN *string) (*
ConsumerName: &a.metadata.ConsumerName,
StreamARN: streamARN,
})
if err != nil {
return nil, err
}
@ -309,11 +307,11 @@ func (a *AWSKinesis) registerConsumer(ctx context.Context, streamARN *string) (*
return consumer.Consumer.ConsumerARN, nil
}
func (a *AWSKinesis) deregisterConsumer(streamARN *string, consumerARN *string) error {
func (a *AWSKinesis) deregisterConsumer(ctx context.Context, streamARN *string, consumerARN *string) error {
if a.consumerARN != nil {
// Use a background context because the running context may have been canceled already
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
_, err := a.client.DeregisterStreamConsumerWithContext(ctx, &kinesis.DeregisterStreamConsumerInput{
_, err := a.authProvider.Kinesis().Kinesis.DeregisterStreamConsumerWithContext(ctx, &kinesis.DeregisterStreamConsumerInput{
ConsumerARN: consumerARN,
StreamARN: streamARN,
ConsumerName: &a.metadata.ConsumerName,
@ -344,7 +342,7 @@ func (a *AWSKinesis) waitUntilConsumerExists(ctx aws.Context, input *kinesis.Des
tmp := *input
inCpy = &tmp
}
req, _ := a.client.DescribeStreamConsumerRequest(inCpy)
req, _ := a.authProvider.Kinesis().Kinesis.DescribeStreamConsumerRequest(inCpy)
req.SetContext(ctx)
req.ApplyOptions(opts...)
@ -356,16 +354,6 @@ func (a *AWSKinesis) waitUntilConsumerExists(ctx aws.Context, input *kinesis.Des
return w.WaitWithContext(ctx)
}
func (a *AWSKinesis) getClient(metadata *kinesisMetadata) (*kinesis.Kinesis, error) {
sess, err := awsAuth.GetClient(metadata.AccessKey, metadata.SecretKey, metadata.SessionToken, metadata.Region, metadata.Endpoint)
if err != nil {
return nil, err
}
k := kinesis.New(sess)
return k, nil
}
func (a *AWSKinesis) parseMetadata(meta bindings.Metadata) (*kinesisMetadata, error) {
var m kinesisMetadata
err := kitmd.DecodeMetadata(meta.Properties, &m)

View File

@ -29,12 +29,6 @@ metadata:
The name of the S3 bucket to write to.
example: '"bucket"'
type: string
- name: region
required: true
description: |
The specific AWS region where the S3 bucket is located.
example: '"us-east-1"'
type: string
- name: endpoint
required: false
description: |
@ -75,4 +69,4 @@ metadata:
When connecting to `https://` endpoints, accepts self-signed or invalid certificates.
type: bool
default: 'false'
example: '"true", "false"'

View File

@ -29,9 +29,7 @@ import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/google/uuid"
@ -42,7 +40,7 @@ import (
"github.com/dapr/kit/logger"
kitmd "github.com/dapr/kit/metadata"
"github.com/dapr/kit/ptr"
"github.com/dapr/kit/utils"
kitstrings "github.com/dapr/kit/strings"
)
const (
@ -50,6 +48,8 @@ const (
metadataEncodeBase64 = "encodeBase64"
metadataFilePath = "filePath"
metadataPresignTTL = "presignTTL"
metadataStorageClass = "storageClass"
metadataTags = "tags"
metatadataContentType = "Content-Type"
metadataKey = "key"
@ -60,11 +60,9 @@ const (
// AWSS3 is a binding for an AWS S3 storage bucket.
type AWSS3 struct {
metadata *s3Metadata
s3Client *s3.S3
uploader *s3manager.Uploader
downloader *s3manager.Downloader
logger logger.Logger
metadata *s3Metadata
authProvider awsAuth.Provider
logger logger.Logger
}
type s3Metadata struct {
@ -73,7 +71,7 @@ type s3Metadata struct {
SecretKey string `json:"secretKey" mapstructure:"secretKey" mdignore:"true"`
SessionToken string `json:"sessionToken" mapstructure:"sessionToken" mdignore:"true"`
Region string `json:"region" mapstructure:"region"`
Region string `json:"region" mapstructure:"region" mapstructurealiases:"awsRegion" mdignore:"true"`
Endpoint string `json:"endpoint" mapstructure:"endpoint"`
Bucket string `json:"bucket" mapstructure:"bucket"`
DecodeBase64 bool `json:"decodeBase64,string" mapstructure:"decodeBase64"`
@ -83,6 +81,7 @@ type s3Metadata struct {
InsecureSSL bool `json:"insecureSSL,string" mapstructure:"insecureSSL"`
FilePath string `json:"filePath" mapstructure:"filePath" mdignore:"true"`
PresignTTL string `json:"presignTTL" mapstructure:"presignTTL" mdignore:"true"`
StorageClass string `json:"storageClass" mapstructure:"storageClass" mdignore:"true"`
}
type createResponse struct {
@ -107,23 +106,11 @@ func NewAWSS3(logger logger.Logger) bindings.OutputBinding {
return &AWSS3{logger: logger}
}
// Init does metadata parsing and connection creation.
func (s *AWSS3) Init(_ context.Context, metadata bindings.Metadata) error {
m, err := s.parseMetadata(metadata)
if err != nil {
return err
}
session, err := s.getSession(m)
if err != nil {
return err
}
cfg := aws.NewConfig().
WithS3ForcePathStyle(m.ForcePathStyle).
WithDisableSSL(m.DisableSSL)
func (s *AWSS3) getAWSConfig(opts awsAuth.Options) *aws.Config {
cfg := awsAuth.GetConfig(opts).WithS3ForcePathStyle(s.metadata.ForcePathStyle).WithDisableSSL(s.metadata.DisableSSL)
// Use a custom HTTP client to allow self-signed certs
if m.InsecureSSL {
if s.metadata.InsecureSSL {
customTransport := http.DefaultTransport.(*http.Transport).Clone()
customTransport.TLSClientConfig = &tls.Config{
//nolint:gosec
@ -136,16 +123,40 @@ func (s *AWSS3) Init(_ context.Context, metadata bindings.Metadata) error {
s.logger.Infof("aws s3: you are using 'insecureSSL' to skip server config verify which is unsafe!")
}
return cfg
}
// Init does metadata parsing and connection creation.
func (s *AWSS3) Init(ctx context.Context, metadata bindings.Metadata) error {
m, err := s.parseMetadata(metadata)
if err != nil {
return err
}
s.metadata = m
s.s3Client = s3.New(session, cfg)
s.downloader = s3manager.NewDownloaderWithClient(s.s3Client)
s.uploader = s3manager.NewUploaderWithClient(s.s3Client)
opts := awsAuth.Options{
Logger: s.logger,
Properties: metadata.Properties,
Region: m.Region,
Endpoint: m.Endpoint,
AccessKey: m.AccessKey,
SecretKey: m.SecretKey,
SessionToken: m.SessionToken,
}
// extra configs needed per component type
provider, err := awsAuth.NewProvider(ctx, opts, s.getAWSConfig(opts))
if err != nil {
return err
}
s.authProvider = provider
return nil
}
func (s *AWSS3) Close() error {
if s.authProvider != nil {
return s.authProvider.Close()
}
return nil
}
@ -181,6 +192,15 @@ func (s *AWSS3) create(ctx context.Context, req *bindings.InvokeRequest) (*bindi
if contentTypeStr != "" {
contentType = &contentTypeStr
}
var tagging *string
if rawTags, ok := req.Metadata[metadataTags]; ok {
tagging, err = s.parseS3Tags(rawTags)
if err != nil {
return nil, fmt.Errorf("s3 binding error: parsing tags failed: %w", err)
}
}
var r io.Reader
if metadata.FilePath != "" {
r, err = os.Open(metadata.FilePath)
@ -195,11 +215,18 @@ func (s *AWSS3) create(ctx context.Context, req *bindings.InvokeRequest) (*bindi
r = b64.NewDecoder(b64.StdEncoding, r)
}
resultUpload, err := s.uploader.UploadWithContext(ctx, &s3manager.UploadInput{
Bucket: ptr.Of(metadata.Bucket),
Key: ptr.Of(key),
Body: r,
ContentType: contentType,
var storageClass *string
if metadata.StorageClass != "" {
storageClass = aws.String(metadata.StorageClass)
}
resultUpload, err := s.authProvider.S3().Uploader.UploadWithContext(ctx, &s3manager.UploadInput{
Bucket: ptr.Of(metadata.Bucket),
Key: ptr.Of(key),
Body: r,
ContentType: contentType,
StorageClass: storageClass,
Tagging: tagging,
})
if err != nil {
return nil, fmt.Errorf("s3 binding error: uploading failed: %w", err)
@ -207,7 +234,7 @@ func (s *AWSS3) create(ctx context.Context, req *bindings.InvokeRequest) (*bindi
var presignURL string
if metadata.PresignTTL != "" {
url, presignErr := s.presignObject(metadata.Bucket, key, metadata.PresignTTL)
url, presignErr := s.presignObject(ctx, metadata.Bucket, key, metadata.PresignTTL)
if presignErr != nil {
return nil, fmt.Errorf("s3 binding error: %s", presignErr)
}
@ -247,7 +274,7 @@ func (s *AWSS3) presign(ctx context.Context, req *bindings.InvokeRequest) (*bind
return nil, fmt.Errorf("s3 binding error: required metadata '%s' missing", metadataPresignTTL)
}
url, err := s.presignObject(metadata.Bucket, key, metadata.PresignTTL)
url, err := s.presignObject(ctx, metadata.Bucket, key, metadata.PresignTTL)
if err != nil {
return nil, fmt.Errorf("s3 binding error: %w", err)
}
@ -264,13 +291,12 @@ func (s *AWSS3) presign(ctx context.Context, req *bindings.InvokeRequest) (*bind
}, nil
}
func (s *AWSS3) presignObject(bucket, key, ttl string) (string, error) {
func (s *AWSS3) presignObject(ctx context.Context, bucket, key, ttl string) (string, error) {
d, err := time.ParseDuration(ttl)
if err != nil {
return "", fmt.Errorf("s3 binding error: cannot parse duration %s: %w", ttl, err)
}
objReq, _ := s.s3Client.GetObjectRequest(&s3.GetObjectInput{
objReq, _ := s.authProvider.S3().S3.GetObjectRequest(&s3.GetObjectInput{
Bucket: ptr.Of(bucket),
Key: ptr.Of(key),
})
@ -294,8 +320,7 @@ func (s *AWSS3) get(ctx context.Context, req *bindings.InvokeRequest) (*bindings
}
buff := &aws.WriteAtBuffer{}
_, err = s.downloader.DownloadWithContext(ctx,
_, err = s.authProvider.S3().Downloader.DownloadWithContext(ctx,
buff,
&s3.GetObjectInput{
Bucket: ptr.Of(s.metadata.Bucket),
@ -305,7 +330,7 @@ func (s *AWSS3) get(ctx context.Context, req *bindings.InvokeRequest) (*bindings
if err != nil {
var awsErr awserr.Error
if errors.As(err, &awsErr) && awsErr.Code() == s3.ErrCodeNoSuchKey {
return nil, fmt.Errorf("object not found")
return nil, errors.New("object not found")
}
return nil, fmt.Errorf("s3 binding error: error downloading S3 object: %w", err)
}
@ -329,8 +354,7 @@ func (s *AWSS3) delete(ctx context.Context, req *bindings.InvokeRequest) (*bindi
if key == "" {
return nil, fmt.Errorf("s3 binding error: required metadata '%s' missing", metadataKey)
}
_, err := s.s3Client.DeleteObjectWithContext(
_, err := s.authProvider.S3().S3.DeleteObjectWithContext(
ctx,
&s3.DeleteObjectInput{
Bucket: ptr.Of(s.metadata.Bucket),
@ -340,7 +364,7 @@ func (s *AWSS3) delete(ctx context.Context, req *bindings.InvokeRequest) (*bindi
if err != nil {
var awsErr awserr.Error
if errors.As(err, &awsErr) && awsErr.Code() == s3.ErrCodeNoSuchKey {
return nil, fmt.Errorf("object not found")
return nil, errors.New("object not found")
}
return nil, fmt.Errorf("s3 binding error: delete operation failed: %w", err)
}
@ -359,8 +383,7 @@ func (s *AWSS3) list(ctx context.Context, req *bindings.InvokeRequest) (*binding
if payload.MaxResults < 1 {
payload.MaxResults = defaultMaxResults
}
result, err := s.s3Client.ListObjectsWithContext(ctx, &s3.ListObjectsInput{
result, err := s.authProvider.S3().S3.ListObjectsWithContext(ctx, &s3.ListObjectsInput{
Bucket: ptr.Of(s.metadata.Bucket),
MaxKeys: ptr.Of(int64(payload.MaxResults)),
Marker: ptr.Of(payload.Marker),
@ -407,13 +430,24 @@ func (s *AWSS3) parseMetadata(md bindings.Metadata) (*s3Metadata, error) {
return &m, nil
}
func (s *AWSS3) getSession(metadata *s3Metadata) (*session.Session, error) {
sess, err := awsAuth.GetClient(metadata.AccessKey, metadata.SecretKey, metadata.SessionToken, metadata.Region, metadata.Endpoint)
if err != nil {
return nil, err
// Helper for parsing s3 tags metadata
func (s *AWSS3) parseS3Tags(raw string) (*string, error) {
tagEntries := strings.Split(raw, ",")
pairs := make([]string, 0, len(tagEntries))
for _, tagEntry := range tagEntries {
kv := strings.SplitN(strings.TrimSpace(tagEntry), "=", 2)
isInvalidTag := len(kv) != 2 || strings.TrimSpace(kv[0]) == "" || strings.TrimSpace(kv[1]) == ""
if isInvalidTag {
return nil, fmt.Errorf("invalid tag format: '%s' (expected key=value)", tagEntry)
}
pairs = append(pairs, fmt.Sprintf("%s=%s", strings.TrimSpace(kv[0]), strings.TrimSpace(kv[1])))
}
return sess, nil
if len(pairs) == 0 {
return nil, nil
}
return aws.String(strings.Join(pairs, "&")), nil
}
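The tag-parsing helper above can be exercised in isolation. A minimal standalone sketch of the same logic (plain strings instead of the `aws.String` wrapper, and dropping the binding receiver):

```go
package main

import (
	"fmt"
	"strings"
)

// parseS3Tags mirrors the helper added in the S3 binding: it turns
// "key=value,key2=value2" request metadata into the "key=value&key2=value2"
// form the S3 Tagging field expects, rejecting malformed entries.
func parseS3Tags(raw string) (string, error) {
	entries := strings.Split(raw, ",")
	pairs := make([]string, 0, len(entries))
	for _, e := range entries {
		kv := strings.SplitN(strings.TrimSpace(e), "=", 2)
		if len(kv) != 2 || strings.TrimSpace(kv[0]) == "" || strings.TrimSpace(kv[1]) == "" {
			return "", fmt.Errorf("invalid tag format: '%s' (expected key=value)", e)
		}
		pairs = append(pairs, strings.TrimSpace(kv[0])+"="+strings.TrimSpace(kv[1]))
	}
	return strings.Join(pairs, "&"), nil
}

func main() {
	out, err := parseS3Tags("project=myproject, year=2024")
	fmt.Println(out, err)
}
```

Whitespace around entries and around each key/value is trimmed, so `"a=1, b=2"` and `"a=1,b=2"` produce the same tagging string.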
// Helper to merge config and request metadata.
@ -421,11 +455,11 @@ func (metadata s3Metadata) mergeWithRequestMetadata(req *bindings.InvokeRequest)
merged := metadata
if val, ok := req.Metadata[metadataDecodeBase64]; ok && val != "" {
merged.DecodeBase64 = utils.IsTruthy(val)
merged.DecodeBase64 = kitstrings.IsTruthy(val)
}
if val, ok := req.Metadata[metadataEncodeBase64]; ok && val != "" {
merged.EncodeBase64 = utils.IsTruthy(val)
merged.EncodeBase64 = kitstrings.IsTruthy(val)
}
if val, ok := req.Metadata[metadataFilePath]; ok && val != "" {
@ -436,6 +470,10 @@ func (metadata s3Metadata) mergeWithRequestMetadata(req *bindings.InvokeRequest)
merged.PresignTTL = val
}
if val, ok := req.Metadata[metadataStorageClass]; ok && val != "" {
merged.StorageClass = val
}
return merged, nil
}
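The merge helper follows a common binding pattern: per-request metadata overrides the component's configured defaults, but only when the request actually supplies a non-empty value. A simplified sketch of that precedence (the `isTruthy` helper here is a local stand-in for the dapr/kit version, and the struct is trimmed to two fields):

```go
package main

import (
	"fmt"
	"strings"
)

type s3Settings struct {
	DecodeBase64 bool
	StorageClass string
}

// isTruthy is a simplified stand-in for kitstrings.IsTruthy.
func isTruthy(v string) bool {
	switch strings.ToLower(strings.TrimSpace(v)) {
	case "y", "yes", "true", "t", "on", "1":
		return true
	default:
		return false
	}
}

// merge copies the configured settings and lets non-empty request
// metadata override individual fields, as mergeWithRequestMetadata does.
func merge(base s3Settings, reqMeta map[string]string) s3Settings {
	merged := base // struct copy: base stays untouched
	if v, ok := reqMeta["decodeBase64"]; ok && v != "" {
		merged.DecodeBase64 = isTruthy(v)
	}
	if v, ok := reqMeta["storageClass"]; ok && v != "" {
		merged.StorageClass = v
	}
	return merged
}

func main() {
	cfg := s3Settings{StorageClass: "STANDARD"}
	out := merge(cfg, map[string]string{"decodeBase64": "yes", "storageClass": "STANDARD_IA"})
	fmt.Println(out.DecodeBase64, out.StorageClass)
}
```

Because the merge works on a value copy, a request can never mutate the component-level configuration.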


@ -14,7 +14,6 @@ limitations under the License.
package s3
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
@ -54,6 +53,24 @@ func TestParseMetadata(t *testing.T) {
})
}
func TestParseS3Tags(t *testing.T) {
t.Run("Has parsed s3 tags", func(t *testing.T) {
request := bindings.InvokeRequest{}
request.Metadata = map[string]string{
"decodeBase64": "yes",
"encodeBase64": "false",
"filePath": "/usr/vader.darth",
"storageClass": "STANDARD_IA",
"tags": "project=myproject,year=2024",
}
s3 := AWSS3{}
parsedTags, err := s3.parseS3Tags(request.Metadata["tags"])
require.NoError(t, err)
assert.Equal(t, "project=myproject&year=2024", *parsedTags)
})
}
func TestMergeWithRequestMetadata(t *testing.T) {
t.Run("Has merged metadata", func(t *testing.T) {
m := bindings.Metadata{}
@ -83,6 +100,7 @@ func TestMergeWithRequestMetadata(t *testing.T) {
"encodeBase64": "false",
"filePath": "/usr/vader.darth",
"presignTTL": "15s",
"storageClass": "STANDARD_IA",
}
mergedMeta, err := meta.mergeWithRequestMetadata(&request)
@ -99,6 +117,7 @@ func TestMergeWithRequestMetadata(t *testing.T) {
assert.False(t, mergedMeta.EncodeBase64)
assert.Equal(t, "/usr/vader.darth", mergedMeta.FilePath)
assert.Equal(t, "15s", mergedMeta.PresignTTL)
assert.Equal(t, "STANDARD_IA", mergedMeta.StorageClass)
})
t.Run("Has invalid merged metadata decodeBase64", func(t *testing.T) {
@ -174,7 +193,7 @@ func TestGetOption(t *testing.T) {
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := s3.get(context.Background(), &r)
_, err := s3.get(t.Context(), &r)
require.Error(t, err)
})
}
@ -185,7 +204,7 @@ func TestDeleteOption(t *testing.T) {
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := s3.delete(context.Background(), &r)
_, err := s3.delete(t.Context(), &r)
require.Error(t, err)
})
}


@ -15,6 +15,7 @@ package ses
import (
"context"
"errors"
"fmt"
"reflect"
"strconv"
@ -37,9 +38,9 @@ const (
// AWSSES is an AWS SNS binding.
type AWSSES struct {
metadata *sesMetadata
logger logger.Logger
svc *ses.SES
authProvider awsAuth.Provider
metadata *sesMetadata
logger logger.Logger
}
type sesMetadata struct {
@ -60,19 +61,29 @@ func NewAWSSES(logger logger.Logger) bindings.OutputBinding {
}
// Init does metadata parsing.
func (a *AWSSES) Init(_ context.Context, metadata bindings.Metadata) error {
func (a *AWSSES) Init(ctx context.Context, metadata bindings.Metadata) error {
// Parse input metadata
meta, err := a.parseMetadata(metadata)
m, err := a.parseMetadata(metadata)
if err != nil {
return err
}
svc, err := a.getClient(meta)
a.metadata = m
opts := awsAuth.Options{
Logger: a.logger,
Properties: metadata.Properties,
Region: m.Region,
AccessKey: m.AccessKey,
SecretKey: m.SecretKey,
SessionToken: "",
}
// extra configs needed per component type
provider, err := awsAuth.NewProvider(ctx, opts, awsAuth.GetConfig(opts))
if err != nil {
return err
}
a.metadata = meta
a.svc = svc
a.authProvider = provider
return nil
}
@ -92,13 +103,13 @@ func (a *AWSSES) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bind
metadata := a.metadata.mergeWithRequestMetadata(req)
if metadata.EmailFrom == "" {
return nil, fmt.Errorf("SES binding error: emailFrom property not supplied in configuration- or request-metadata")
return nil, errors.New("SES binding error: emailFrom property not supplied in configuration- or request-metadata")
}
if metadata.EmailTo == "" {
return nil, fmt.Errorf("SES binding error: emailTo property not supplied in configuration- or request-metadata")
return nil, errors.New("SES binding error: emailTo property not supplied in configuration- or request-metadata")
}
if metadata.Subject == "" {
return nil, fmt.Errorf("SES binding error: subject property not supplied in configuration- or request-metadata")
return nil, errors.New("SES binding error: subject property not supplied in configuration- or request-metadata")
}
body, err := strconv.Unquote(string(req.Data))
@ -140,7 +151,7 @@ func (a *AWSSES) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bind
}
// Attempt to send the email.
result, err := a.svc.SendEmail(input)
result, err := a.authProvider.Ses().Ses.SendEmail(input)
if err != nil {
return nil, fmt.Errorf("SES binding error. Sending email failed: %w", err)
}
@ -157,21 +168,16 @@ func (metadata sesMetadata) mergeWithRequestMetadata(req *bindings.InvokeRequest
return merged
}
func (a *AWSSES) getClient(metadata *sesMetadata) (*ses.SES, error) {
sess, err := awsAuth.GetClient(metadata.AccessKey, metadata.SecretKey, metadata.SessionToken, metadata.Region, "")
if err != nil {
return nil, fmt.Errorf("SES binding error: error creating AWS session %w", err)
}
// Create an SES instance
svc := ses.New(sess)
return svc, nil
}
// GetComponentMetadata returns the metadata of the component.
func (a *AWSSES) GetComponentMetadata() (metadataInfo contribMetadata.MetadataMap) {
metadataStruct := sesMetadata{}
contribMetadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, contribMetadata.BindingType)
return
}
func (a *AWSSES) Close() error {
if a.authProvider != nil {
return a.authProvider.Close()
}
return nil
}


@ -0,0 +1,32 @@
# yaml-language-server: $schema=../../../component-metadata-schema.json
schemaVersion: v1
type: bindings
name: aws.sns
version: v1
status: alpha
title: "AWS SNS"
urls:
- title: Reference
url: https://docs.dapr.io/reference/components-reference/supported-bindings/sns/
binding:
output: true
operations:
- name: create
description: "Create a new subscription"
capabilities: []
builtinAuthenticationProfiles:
- name: "aws"
metadata:
- name: topicArn
required: true
description: |
The ARN of the SNS topic.
example: '"arn:::topicarn"'
type: string
- name: endpoint
required: false
description: |
AWS endpoint for the component to use, to connect to SNS-compatible services or emulators.
Do not use this when running against production AWS.
example: '"http://localhost:4566"'
type: string


@ -30,19 +30,22 @@ import (
// AWSSNS is an AWS SNS binding.
type AWSSNS struct {
client *sns.SNS
topicARN string
authProvider awsAuth.Provider
topicARN string
logger logger.Logger
}
type snsMetadata struct {
TopicArn string `json:"topicArn"`
Region string `json:"region"`
Endpoint string `json:"endpoint"`
AccessKey string `json:"accessKey"`
SecretKey string `json:"secretKey"`
SessionToken string `json:"sessionToken"`
// Ignored by metadata parser because included in built-in authentication profile
AccessKey string `json:"accessKey" mapstructure:"accessKey" mdignore:"true"`
SecretKey string `json:"secretKey" mapstructure:"secretKey" mdignore:"true"`
SessionToken string `json:"sessionToken" mapstructure:"sessionToken" mdignore:"true"`
TopicArn string `json:"topicArn"`
// TODO: in Dapr 1.17 rm the alias on region as we remove the aws prefix on these fields
Region string `json:"region" mapstructure:"region" mapstructurealiases:"awsRegion" mdignore:"true"`
Endpoint string `json:"endpoint"`
}
type dataPayload struct {
@ -56,16 +59,27 @@ func NewAWSSNS(logger logger.Logger) bindings.OutputBinding {
}
// Init does metadata parsing.
func (a *AWSSNS) Init(_ context.Context, metadata bindings.Metadata) error {
func (a *AWSSNS) Init(ctx context.Context, metadata bindings.Metadata) error {
m, err := a.parseMetadata(metadata)
if err != nil {
return err
}
client, err := a.getClient(m)
opts := awsAuth.Options{
Logger: a.logger,
Properties: metadata.Properties,
Region: m.Region,
Endpoint: m.Endpoint,
AccessKey: m.AccessKey,
SecretKey: m.SecretKey,
SessionToken: m.SessionToken,
}
// extra configs needed per component type
provider, err := awsAuth.NewProvider(ctx, opts, awsAuth.GetConfig(opts))
if err != nil {
return err
}
a.client = client
a.authProvider = provider
a.topicARN = m.TopicArn
return nil
@ -81,16 +95,6 @@ func (a *AWSSNS) parseMetadata(meta bindings.Metadata) (*snsMetadata, error) {
return &m, nil
}
func (a *AWSSNS) getClient(metadata *snsMetadata) (*sns.SNS, error) {
sess, err := awsAuth.GetClient(metadata.AccessKey, metadata.SecretKey, metadata.SessionToken, metadata.Region, metadata.Endpoint)
if err != nil {
return nil, err
}
c := sns.New(sess)
return c, nil
}
func (a *AWSSNS) Operations() []bindings.OperationKind {
return []bindings.OperationKind{bindings.CreateOperation}
}
@ -105,7 +109,7 @@ func (a *AWSSNS) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bind
msg := fmt.Sprintf("%v", payload.Message)
subject := fmt.Sprintf("%v", payload.Subject)
_, err = a.client.PublishWithContext(ctx, &sns.PublishInput{
_, err = a.authProvider.Sns().Sns.PublishWithContext(ctx, &sns.PublishInput{
Message: &msg,
Subject: &subject,
TopicArn: &a.topicARN,
@ -123,3 +127,10 @@ func (a *AWSSNS) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (a *AWSSNS) Close() error {
if a.authProvider != nil {
return a.authProvider.Close()
}
return nil
}


@ -33,13 +33,12 @@ import (
// AWSSQS allows receiving and sending data to/from AWS SQS.
type AWSSQS struct {
Client *sqs.SQS
QueueURL *string
logger logger.Logger
wg sync.WaitGroup
closeCh chan struct{}
closed atomic.Bool
authProvider awsAuth.Provider
queueName string
logger logger.Logger
wg sync.WaitGroup
closeCh chan struct{}
closed atomic.Bool
}
type sqsMetadata struct {
@ -66,21 +65,22 @@ func (a *AWSSQS) Init(ctx context.Context, metadata bindings.Metadata) error {
return err
}
client, err := a.getClient(m)
opts := awsAuth.Options{
Logger: a.logger,
Properties: metadata.Properties,
Region: m.Region,
Endpoint: m.Endpoint,
AccessKey: m.AccessKey,
SecretKey: m.SecretKey,
SessionToken: m.SessionToken,
}
// extra configs needed per component type
provider, err := awsAuth.NewProvider(ctx, opts, awsAuth.GetConfig(opts))
if err != nil {
return err
}
queueName := m.QueueName
resultURL, err := client.GetQueueUrlWithContext(ctx, &sqs.GetQueueUrlInput{
QueueName: aws.String(queueName),
})
if err != nil {
return err
}
a.QueueURL = resultURL.QueueUrl
a.Client = client
a.authProvider = provider
a.queueName = m.QueueName
return nil
}
@ -91,9 +91,14 @@ func (a *AWSSQS) Operations() []bindings.OperationKind {
func (a *AWSSQS) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
msgBody := string(req.Data)
_, err := a.Client.SendMessageWithContext(ctx, &sqs.SendMessageInput{
url, err := a.authProvider.Sqs().QueueURL(ctx, a.queueName)
if err != nil {
a.logger.Errorf("failed to get queue url: %v", err)
}
_, err = a.authProvider.Sqs().Sqs.SendMessageWithContext(ctx, &sqs.SendMessageInput{
MessageBody: &msgBody,
QueueUrl: a.QueueURL,
QueueUrl: url,
})
return nil, err
@ -113,9 +118,13 @@ func (a *AWSSQS) Read(ctx context.Context, handler bindings.Handler) error {
if ctx.Err() != nil || a.closed.Load() {
return
}
url, err := a.authProvider.Sqs().QueueURL(ctx, a.queueName)
if err != nil {
a.logger.Errorf("failed to get queue url: %v", err)
}
result, err := a.Client.ReceiveMessageWithContext(ctx, &sqs.ReceiveMessageInput{
QueueUrl: a.QueueURL,
result, err := a.authProvider.Sqs().Sqs.ReceiveMessageWithContext(ctx, &sqs.ReceiveMessageInput{
QueueUrl: url,
AttributeNames: aws.StringSlice([]string{
"SentTimestamp",
}),
@ -126,7 +135,7 @@ func (a *AWSSQS) Read(ctx context.Context, handler bindings.Handler) error {
WaitTimeSeconds: aws.Int64(20),
})
if err != nil {
a.logger.Errorf("Unable to receive message from queue %q, %v.", *a.QueueURL, err)
a.logger.Errorf("Unable to receive message from queue %q, %v.", url, err)
}
if len(result.Messages) > 0 {
@ -140,8 +149,8 @@ func (a *AWSSQS) Read(ctx context.Context, handler bindings.Handler) error {
msgHandle := m.ReceiptHandle
// Use a background context here because ctx may be canceled already
a.Client.DeleteMessageWithContext(context.Background(), &sqs.DeleteMessageInput{
QueueUrl: a.QueueURL,
a.authProvider.Sqs().Sqs.DeleteMessageWithContext(context.Background(), &sqs.DeleteMessageInput{
QueueUrl: url,
ReceiptHandle: msgHandle,
})
}
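With this change the SQS queue URL is resolved per call instead of once at Init (the real binding calls `a.authProvider.Sqs().QueueURL`), so each send or receive pays a lookup. One way to keep the hot path cheap is a small cache keyed by queue name; a sketch of that idea with a hypothetical resolver function standing in for the provider call, and with the lookup error checked rather than only logged:

```go
package main

import (
	"fmt"
	"sync"
)

// urlCache memoizes queue-name -> queue-URL lookups so the resolver
// (GetQueueUrl in the real binding) is only invoked once per queue.
type urlCache struct {
	mu      sync.Mutex
	resolve func(name string) (string, error) // hypothetical resolver
	urls    map[string]string
}

func newURLCache(resolve func(string) (string, error)) *urlCache {
	return &urlCache{resolve: resolve, urls: map[string]string{}}
}

// Get returns the cached URL for name, resolving and caching it on
// first use; resolution errors are returned, not swallowed.
func (c *urlCache) Get(name string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if u, ok := c.urls[name]; ok {
		return u, nil
	}
	u, err := c.resolve(name)
	if err != nil {
		return "", err
	}
	c.urls[name] = u
	return u, nil
}

func main() {
	calls := 0
	cache := newURLCache(func(name string) (string, error) {
		calls++
		return "https://sqs.example/" + name, nil
	})
	u1, _ := cache.Get("orders")
	u2, _ := cache.Get("orders")
	fmt.Println(u1, u2, calls)
}
```

The mutex makes the cache safe for the concurrent Read loop; a failed resolution is not cached, so a transient error does not poison later calls.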
@ -164,6 +173,9 @@ func (a *AWSSQS) Close() error {
close(a.closeCh)
}
a.wg.Wait()
if a.authProvider != nil {
return a.authProvider.Close()
}
return nil
}
@ -177,16 +189,6 @@ func (a *AWSSQS) parseSQSMetadata(meta bindings.Metadata) (*sqsMetadata, error)
return &m, nil
}
func (a *AWSSQS) getClient(metadata *sqsMetadata) (*sqs.SQS, error) {
sess, err := awsAuth.GetClient(metadata.AccessKey, metadata.SecretKey, metadata.SessionToken, metadata.Region, metadata.Endpoint)
if err != nil {
return nil, err
}
c := sqs.New(sess)
return c, nil
}
// GetComponentMetadata returns the metadata of the component.
func (a *AWSSQS) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadataStruct := sqsMetadata{}


@ -154,7 +154,6 @@ func (a *AzureBlobStorage) create(ctx context.Context, req *bindings.InvokeReque
blockBlobClient := a.containerClient.NewBlockBlobClient(blobName)
_, err = blockBlobClient.UploadBuffer(ctx, req.Data, &uploadOptions)
if err != nil {
return nil, fmt.Errorf("error uploading az blob: %w", err)
}
@ -192,7 +191,7 @@ func (a *AzureBlobStorage) get(ctx context.Context, req *bindings.InvokeRequest)
blobDownloadResponse, err := blockBlobClient.DownloadStream(ctx, &downloadOptions)
if err != nil {
if bloberror.HasCode(err, bloberror.BlobNotFound) {
return nil, fmt.Errorf("blob not found")
return nil, errors.New("blob not found")
}
return nil, fmt.Errorf("error downloading az blob: %w", err)
}
@ -261,7 +260,7 @@ func (a *AzureBlobStorage) delete(ctx context.Context, req *bindings.InvokeReque
_, err := blockBlobClient.Delete(ctx, &deleteOptions)
if bloberror.HasCode(err, bloberror.BlobNotFound) {
return nil, fmt.Errorf("blob not found")
return nil, errors.New("blob not found")
}
return nil, err
@ -377,3 +376,7 @@ func (a *AzureBlobStorage) GetComponentMetadata() (metadataInfo contribMetadata.
contribMetadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, contribMetadata.BindingType)
return
}
func (a *AzureBlobStorage) Close() error {
return nil
}


@ -14,7 +14,6 @@ limitations under the License.
package blobstorage
import (
"context"
"testing"
"github.com/stretchr/testify/require"
@ -28,7 +27,7 @@ func TestGetOption(t *testing.T) {
t.Run("return error if blobName is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := blobStorage.get(context.Background(), &r)
_, err := blobStorage.get(t.Context(), &r)
require.Error(t, err)
require.ErrorIs(t, err, ErrMissingBlobName)
})
@ -39,7 +38,7 @@ func TestDeleteOption(t *testing.T) {
t.Run("return error if blobName is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := blobStorage.delete(context.Background(), &r)
_, err := blobStorage.delete(t.Context(), &r)
require.Error(t, err)
require.ErrorIs(t, err, ErrMissingBlobName)
})
@ -50,7 +49,7 @@ func TestDeleteOption(t *testing.T) {
"blobName": "foo",
"deleteSnapshots": "invalid",
}
_, err := blobStorage.delete(context.Background(), &r)
_, err := blobStorage.delete(t.Context(), &r)
require.Error(t, err)
})
}


@ -16,6 +16,7 @@ package cosmosdb
import (
"context"
"encoding/json"
"errors"
"fmt"
"reflect"
"strings"
@ -158,7 +159,7 @@ func (c *CosmosDB) getPartitionKeyValue(key string, obj interface{}) (string, er
}
val, ok := valI.(string)
if !ok {
return "", fmt.Errorf("partition key is not a string")
return "", errors.New("partition key is not a string")
}
if val == "" {
@ -172,7 +173,7 @@ func (c *CosmosDB) lookup(m map[string]interface{}, ks []string) (val interface{
var ok bool
if len(ks) == 0 {
return nil, fmt.Errorf("needs at least one key")
return nil, errors.New("needs at least one key")
}
if val, ok = m[ks[0]]; !ok {
@ -198,3 +199,7 @@ func (c *CosmosDB) GetComponentMetadata() (metadataInfo contribMetadata.Metadata
contribMetadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, contribMetadata.BindingType)
return
}
func (c *CosmosDB) Close() error {
return nil
}


@ -46,7 +46,7 @@ const (
// CosmosDBGremlinAPI allows performing state operations on collections.
type CosmosDBGremlinAPI struct {
metadata *cosmosDBGremlinAPICredentials
client *gremcos.Cosmos
client gremcos.Cosmos
logger logger.Logger
}
@ -77,7 +77,7 @@ func (c *CosmosDBGremlinAPI) Init(_ context.Context, metadata bindings.Metadata)
return errors.New("CosmosDBGremlinAPI Error: failed to create the Cosmos Graph DB connector")
}
c.client = &client
c.client = client
return nil
}
@ -116,7 +116,7 @@ func (c *CosmosDBGremlinAPI) Invoke(_ context.Context, req *bindings.InvokeReque
respStartTimeKey: startTime.Format(time.RFC3339Nano),
},
}
d, err := (*c.client).Execute(gq)
d, err := c.client.Execute(gq)
if err != nil {
return nil, errors.New("CosmosDBGremlinAPI Error: error executing gremlin")
}
@ -136,3 +136,10 @@ func (c *CosmosDBGremlinAPI) GetComponentMetadata() (metadataInfo metadata.Metad
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (c *CosmosDBGremlinAPI) Close() error {
if c.client != nil {
return c.client.Stop()
}
return nil
}


@ -94,7 +94,7 @@ func createEventHubsBindingsAADMetadata() bindings.Metadata {
}
func testEventHubsBindingsAADAuthentication(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
log := logger.NewLogger("bindings.azure.eventhubs.integration.test")
@ -102,7 +102,7 @@ func testEventHubsBindingsAADAuthentication(t *testing.T) {
metadata := createEventHubsBindingsAADMetadata()
eventHubsBindings := NewAzureEventHubs(log)
err := eventHubsBindings.Init(context.Background(), metadata)
err := eventHubsBindings.Init(t.Context(), metadata)
require.NoError(t, err)
req := &bindings.InvokeRequest{
@ -142,11 +142,11 @@ func testEventHubsBindingsAADAuthentication(t *testing.T) {
}
func testReadIotHubEvents(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
logger := logger.NewLogger("bindings.azure.eventhubs.integration.test")
eh := NewAzureEventHubs(logger)
err := eh.Init(context.Background(), createIotHubBindingsMetadata())
err := eh.Init(t.Context(), createIotHubBindingsMetadata())
require.NoError(t, err)
// Invoke az CLI via bash script to send test IoT device events


@ -55,7 +55,15 @@ builtinAuthenticationProfiles:
default: "false"
example: "false"
description: |
Allow management of the Event Hub namespace and storage account.
- name: enableInOrderMessageDelivery
type: bool
required: false
default: "false"
example: "false"
description: |
Enable in-order processing of messages within a partition.
- name: resourceGroupName
type: string
required: false
@ -90,8 +98,8 @@ builtinAuthenticationProfiles:
entity management is enabled.
metadata:
# Input-only metadata
# consumerGroup is an alias for consumerId, if both are defined consumerId takes precedence.
- name: consumerId
# consumerGroup is an alias for consumerID, if both are defined consumerID takes precedence.
- name: consumerID
type: string
required: true # consumerGroup is an alias for this field, let's promote this to default
binding:
@ -108,7 +116,7 @@ metadata:
output: false
description: |
The name of the Event Hubs Consumer Group to listen on.
Alias to consumerId.
Alias to consumerID.
example: '"group1"'
deprecated: true
- name: storageAccountKey
@ -153,3 +161,13 @@ metadata:
description: |
Storage container name.
example: '"myeventhubstoragecontainer"'
- name: getAllMessageProperties
type: bool
required: false
default: false
example: "false"
binding:
input: true
output: false
description: |
When set to true, all message properties are retrieved and included in the returned event metadata.


@ -16,12 +16,13 @@ package openai
import (
"context"
"encoding/json"
"errors"
"fmt"
"reflect"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/dapr/components-contrib/bindings"
azauth "github.com/dapr/components-contrib/common/authentication/azure"
@ -120,10 +121,10 @@ func (p *AzOpenAI) Init(ctx context.Context, meta bindings.Metadata) error {
if m.APIKey != "" {
// use API key authentication
var keyCredential azopenai.KeyCredential
keyCredential, err = azopenai.NewKeyCredential(m.APIKey)
if err != nil {
return fmt.Errorf("error getting credentials object: %w", err)
var keyCredential *azcore.KeyCredential
keyCredential = azcore.NewKeyCredential(m.APIKey)
if keyCredential == nil {
return errors.New("error getting credentials object")
}
p.client, err = azopenai.NewClientWithKeyCredential(m.Endpoint, keyCredential, nil)
@ -163,7 +164,7 @@ func (p *AzOpenAI) Operations() []bindings.OperationKind {
// Invoke handles all invoke operations.
func (p *AzOpenAI) Invoke(ctx context.Context, req *bindings.InvokeRequest) (resp *bindings.InvokeResponse, err error) {
if req == nil || len(req.Metadata) == 0 {
return nil, fmt.Errorf("invalid request: metadata is required")
return nil, errors.New("invalid request: metadata is required")
}
startTime := time.Now().UTC()
@ -228,7 +229,7 @@ func (p *AzOpenAI) completion(ctx context.Context, message []byte, metadata map[
}
if prompt.Prompt == "" {
return nil, fmt.Errorf("prompt is required for completion operation")
return nil, errors.New("prompt is required for completion operation")
}
if prompt.DeploymentID == "" {
@ -240,13 +241,13 @@ func (p *AzOpenAI) completion(ctx context.Context, message []byte, metadata map[
}
resp, err := p.client.GetCompletions(ctx, azopenai.CompletionsOptions{
Deployment: prompt.DeploymentID,
Prompt: []string{prompt.Prompt},
MaxTokens: &prompt.MaxTokens,
Temperature: &prompt.Temperature,
TopP: &prompt.TopP,
N: &prompt.N,
Stop: prompt.Stop,
DeploymentName: &prompt.DeploymentID,
Prompt: []string{prompt.Prompt},
MaxTokens: &prompt.MaxTokens,
Temperature: &prompt.Temperature,
TopP: &prompt.TopP,
N: &prompt.N,
Stop: prompt.Stop,
}, nil)
if err != nil {
return nil, fmt.Errorf("error getting completion api: %w", err)
@ -280,7 +281,7 @@ func (p *AzOpenAI) chatCompletion(ctx context.Context, messageRequest []byte, me
}
if len(messages.Messages) == 0 {
return nil, fmt.Errorf("messages are required for chat-completion operation")
return nil, errors.New("messages are required for chat-completion operation")
}
if messages.DeploymentID == "" {
@ -291,11 +292,32 @@ func (p *AzOpenAI) chatCompletion(ctx context.Context, messageRequest []byte, me
messages.Stop = nil
}
messageReq := make([]azopenai.ChatMessage, len(messages.Messages))
messageReq := make([]azopenai.ChatRequestMessageClassification, len(messages.Messages))
for i, m := range messages.Messages {
messageReq[i] = azopenai.ChatMessage{
Role: to.Ptr(azopenai.ChatRole(m.Role)),
Content: to.Ptr(m.Message),
currentMsg := m.Message
switch azopenai.ChatRole(m.Role) {
case azopenai.ChatRoleUser:
messageReq[i] = &azopenai.ChatRequestUserMessage{
Content: azopenai.NewChatRequestUserMessageContent(currentMsg),
}
case azopenai.ChatRoleAssistant:
messageReq[i] = &azopenai.ChatRequestAssistantMessage{
Content: &currentMsg,
}
case azopenai.ChatRoleFunction:
messageReq[i] = &azopenai.ChatRequestFunctionMessage{
Content: &currentMsg,
}
case azopenai.ChatRoleSystem:
messageReq[i] = &azopenai.ChatRequestSystemMessage{
Content: &currentMsg,
}
case azopenai.ChatRoleTool:
messageReq[i] = &azopenai.ChatRequestToolMessage{
Content: &currentMsg,
}
default:
return nil, fmt.Errorf("invalid role: %s", m.Role)
}
}
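The switch above reflects a change in the azopenai SDK: each chat role is now modeled as a distinct request-message type behind the shared `ChatRequestMessageClassification` interface, instead of a single `ChatMessage` struct with a role field. The mapping pattern can be sketched standalone with local stand-in types (these are not the real SDK types):

```go
package main

import "fmt"

// chatRequestMessage stands in for azopenai.ChatRequestMessageClassification:
// each role gets its own concrete type behind a shared interface.
type chatRequestMessage interface{ role() string }

type userMessage struct{ content string }
type assistantMessage struct{ content string }
type systemMessage struct{ content string }

func (userMessage) role() string      { return "user" }
func (assistantMessage) role() string { return "assistant" }
func (systemMessage) role() string    { return "system" }

// toRequestMessage mirrors the switch in chatCompletion: map a
// (role, content) pair onto the matching concrete message type,
// rejecting unknown roles instead of silently dropping them.
func toRequestMessage(role, content string) (chatRequestMessage, error) {
	switch role {
	case "user":
		return userMessage{content}, nil
	case "assistant":
		return assistantMessage{content}, nil
	case "system":
		return systemMessage{content}, nil
	default:
		return nil, fmt.Errorf("invalid role: %s", role)
	}
}

func main() {
	m, err := toRequestMessage("user", "hello")
	fmt.Println(m.role(), err)
}
```

Returning an error on an unknown role matches the binding's behavior and surfaces bad input at request time rather than in the API call.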
@ -305,13 +327,13 @@ func (p *AzOpenAI) chatCompletion(ctx context.Context, messageRequest []byte, me
}
res, err := p.client.GetChatCompletions(ctx, azopenai.ChatCompletionsOptions{
Deployment: messages.DeploymentID,
MaxTokens: maxTokens,
Temperature: &messages.Temperature,
TopP: &messages.TopP,
N: &messages.N,
Messages: messageReq,
Stop: messages.Stop,
DeploymentName: &messages.DeploymentID,
MaxTokens: maxTokens,
Temperature: &messages.Temperature,
TopP: &messages.TopP,
N: &messages.N,
Messages: messageReq,
Stop: messages.Stop,
}, nil)
if err != nil {
return nil, fmt.Errorf("error getting chat completion api: %w", err)
@ -343,8 +365,8 @@ func (p *AzOpenAI) getEmbedding(ctx context.Context, messageRequest []byte, meta
}
res, err := p.client.GetEmbeddings(ctx, azopenai.EmbeddingsOptions{
Deployment: message.DeploymentID,
Input: []string{message.Message},
DeploymentName: &message.DeploymentID,
Input: []string{message.Message},
}, nil)
if err != nil {
return nil, fmt.Errorf("error getting embedding api: %w", err)


@ -156,7 +156,7 @@ func (s *SignalR) parseMetadata(md map[string]string) (err error) {
s.accessKey = connectionValue[i+1:]
case "AuthType":
if connectionValue[i+1:] != "aad" {
return fmt.Errorf("invalid value for AuthType in the connection string; only 'aad' is supported")
return errors.New("invalid value for AuthType in the connection string; only 'aad' is supported")
}
useAAD = true
case "ClientId", "ClientSecret", "TenantId":
@ -171,14 +171,14 @@ func (s *SignalR) parseMetadata(md map[string]string) (err error) {
}
}
} else if len(connectionValue) != 0 {
return fmt.Errorf("the connection string is invalid or malformed")
return errors.New("the connection string is invalid or malformed")
}
}
// Check here because if we use a connection string, we'd have an explicit "AuthType=aad" option
// We would otherwise catch this issue later, but here we can be more explicit with the error
if s.accessKey == "" && !useAAD {
return fmt.Errorf("missing AccessKey in the connection string")
return errors.New("missing AccessKey in the connection string")
}
}
@ -198,7 +198,7 @@ func (s *SignalR) parseMetadata(md map[string]string) (err error) {
// Check for required values
if s.endpoint == "" {
return fmt.Errorf("missing endpoint in the metadata or connection string")
return errors.New("missing endpoint in the metadata or connection string")
}
return nil
@ -333,7 +333,7 @@ func (s *SignalR) GetAadClientAccessToken(ctx context.Context, hub string, user
u := fmt.Sprintf("%s/api/hubs/%s/:generateToken?api-version=%s", s.endpoint, hub, apiVersion)
if user != "" {
u += fmt.Sprintf("&userId=%s", url.QueryEscape(user))
u += "&userId=" + url.QueryEscape(user)
}
body, err := s.sendRequestToSignalR(ctx, u, aadToken, nil)
@ -419,3 +419,7 @@ func (s *SignalR) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (s *SignalR) Close() error {
return nil
}
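The checks above walk a `Key=Value;Key2=Value2` connection string, splitting each entry on the first `=` only (so values such as base64 access keys may themselves contain `=`), and enforce that either an AccessKey or `AuthType=aad` is present. A simplified standalone sketch of that parsing (not the binding's full logic):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseConnectionString splits "Key=Value;Key2=Value2" pairs the way
// the SignalR binding walks its connection string, splitting each
// entry on the first '=' only so values may contain '=' themselves.
func parseConnectionString(cs string) (map[string]string, error) {
	out := map[string]string{}
	for _, part := range strings.Split(cs, ";") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		i := strings.Index(part, "=")
		if i <= 0 {
			return nil, errors.New("the connection string is invalid or malformed")
		}
		out[part[:i]] = part[i+1:]
	}
	if out["AccessKey"] == "" && !strings.EqualFold(out["AuthType"], "aad") {
		return nil, errors.New("missing AccessKey in the connection string")
	}
	return out, nil
}

func main() {
	m, err := parseConnectionString("Endpoint=https://x.service.signalr.net;AccessKey=abc==;Version=1.0")
	fmt.Println(m["Endpoint"], m["AccessKey"], err)
}
```

Note how `AccessKey=abc==` survives intact: only the first `=` separates key from value.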


@ -313,7 +313,7 @@ func TestWriteShouldFail(t *testing.T) {
t.Run("Missing hub should fail", func(t *testing.T) {
httpTransport.reset()
_, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
_, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Data: []byte("hello world"),
Metadata: map[string]string{},
})
@ -325,7 +325,7 @@ func TestWriteShouldFail(t *testing.T) {
httpTransport.reset()
httpErr := errors.New("fake error")
httpTransport.errToReturn = httpErr
_, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
_, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Data: []byte("hello world"),
Metadata: map[string]string{
hubKey: "testHub",
@ -339,7 +339,7 @@ func TestWriteShouldFail(t *testing.T) {
t.Run("SignalR call returns status != [200, 202]", func(t *testing.T) {
httpTransport.reset()
httpTransport.response.StatusCode = 401
_, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
_, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Data: []byte("hello world"),
Metadata: map[string]string{
hubKey: "testHub",
@ -364,7 +364,7 @@ func TestWriteShouldSucceed(t *testing.T) {
t.Run("Has authorization", func(t *testing.T) {
httpTransport.reset()
_, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
_, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Data: []byte("hello world"),
Metadata: map[string]string{
hubKey: "testHub",
@ -394,11 +394,10 @@ func TestWriteShouldSucceed(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
httpTransport.reset()
s.hub = tt.hubInMetadata
_, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
_, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Data: []byte("hello world"),
Metadata: map[string]string{
hubKey: tt.hubInWriteRequest,
@ -434,7 +433,7 @@ func TestGetShouldSucceed(t *testing.T) {
t.Run("Can get negotiate response with accessKey", func(t *testing.T) {
s.aadToken = nil
s.accessKey = "AAbbcCsGEQKoLEH6oodDR0jK104Fu1c39Qgk+AA8D+M="
res, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: map[string]string{
hubKey: "testHub",
},
@ -464,7 +463,7 @@ func TestGetShouldSucceed(t *testing.T) {
t.Run("Can get negotiate response with accessKey and userId", func(t *testing.T) {
s.aadToken = nil
s.accessKey = "AAbbcCsGEQKoLEH6oodDR0jK104Fu1c39Qgk+AA8D+M="
res, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: map[string]string{
hubKey: "testHub",
userKey: "user1",
@ -500,7 +499,7 @@ func TestGetShouldSucceed(t *testing.T) {
}
httpTransport.reset()
res, err := s.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := s.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: map[string]string{
hubKey: "testHub",
userKey: "user?1&2",


@ -220,7 +220,7 @@ func (d *AzureQueueHelper) Read(ctx context.Context, consumer *consumer) error {
}
return nil
} else {
return fmt.Errorf("could not delete message from queue: message ID or pop receipt is nil")
return errors.New("could not delete message from queue: message ID or pop receipt is nil")
}
}


@ -96,12 +96,12 @@ func TestWriteQueue(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{Data: []byte("This is my message")}
_, err = a.Invoke(context.Background(), &r)
_, err = a.Invoke(t.Context(), &r)
require.NoError(t, err)
require.NoError(t, a.Close())
@ -109,7 +109,7 @@ func TestWriteQueue(t *testing.T) {
func TestWriteWithTTLInQueue(t *testing.T) {
mm := new(MockHelper)
mm.On("Write", mock.AnythingOfTypeArgument("[]uint8"), mock.MatchedBy(func(in *time.Duration) bool {
mm.On("Write", mock.AnythingOfType("[]uint8"), mock.MatchedBy(func(in *time.Duration) bool {
return in != nil && *in == time.Second
})).Return(nil)
@ -118,12 +118,12 @@ func TestWriteWithTTLInQueue(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1", metadata.TTLMetadataKey: "1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{Data: []byte("This is my message")}
_, err = a.Invoke(context.Background(), &r)
_, err = a.Invoke(t.Context(), &r)
require.NoError(t, err)
require.NoError(t, a.Close())
@ -131,7 +131,7 @@ func TestWriteWithTTLInQueue(t *testing.T) {
func TestWriteWithTTLInWrite(t *testing.T) {
mm := new(MockHelper)
mm.On("Write", mock.AnythingOfTypeArgument("[]uint8"), mock.MatchedBy(func(in *time.Duration) bool {
mm.On("Write", mock.AnythingOfType("[]uint8"), mock.MatchedBy(func(in *time.Duration) bool {
return in != nil && *in == time.Second
})).Return(nil)
@ -140,7 +140,7 @@ func TestWriteWithTTLInWrite(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1", metadata.TTLMetadataKey: "1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{
@ -148,7 +148,7 @@ func TestWriteWithTTLInWrite(t *testing.T) {
Metadata: map[string]string{metadata.TTLMetadataKey: "1"},
}
_, err = a.Invoke(context.Background(), &r)
_, err = a.Invoke(t.Context(), &r)
require.NoError(t, err)
require.NoError(t, a.Close())
@ -162,7 +162,7 @@ func TestWriteWithTTLInWrite(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{Data: []byte("This is my message")}
@ -181,12 +181,12 @@ func TestReadQueue(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{Data: []byte("This is my message")}
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
_, err = a.Invoke(ctx, &r)
require.NoError(t, err)
@ -223,12 +223,12 @@ func TestReadQueueDecode(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1", "decodeBase64": "true"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{Data: []byte("VGhpcyBpcyBteSBtZXNzYWdl")}
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
_, err = a.Invoke(ctx, &r)
require.NoError(t, err)
@ -263,7 +263,7 @@ func TestReadQueueDecode(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
r := bindings.InvokeRequest{Data: []byte("This is my message")}
@ -294,10 +294,10 @@ func TestReadQueueNoMessage(t *testing.T) {
m := bindings.Metadata{}
m.Properties = map[string]string{"storageAccessKey": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "queue": "queue1", "storageAccount": "devstoreaccount1"}
err := a.Init(context.Background(), m)
err := a.Init(t.Context(), m)
require.NoError(t, err)
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
received := 0
handler := func(ctx context.Context, data *bindings.ReadResponse) ([]byte, error) {
received++


@ -75,7 +75,7 @@ func (q *CFQueues) Init(_ context.Context, metadata bindings.Metadata) error {
}
// Operations returns the supported operations for this binding.
func (q CFQueues) Operations() []bindings.OperationKind {
func (q *CFQueues) Operations() []bindings.OperationKind {
return []bindings.OperationKind{bindings.CreateOperation, "publish"}
}


@ -65,7 +65,7 @@ func (ct *Binding) Init(_ context.Context, metadata bindings.Metadata) error {
baseURLdomain := fmt.Sprintf("%s.%s.commercetools.com", commercetoolsM.Region, commercetoolsM.Provider)
authURL := fmt.Sprintf("https://auth.%s/oauth/token", baseURLdomain)
apiURL := fmt.Sprintf("https://api.%s", baseURLdomain)
apiURL := "https://api." + baseURLdomain
// Create the new client. When an empty value is passed it will use the CTP_*
// environment variables to get the value. The HTTPClient arg is optional,
@ -201,7 +201,7 @@ func (ct *Binding) Close() error {
}
// GetComponentMetadata returns the metadata of the component.
func (ct Binding) GetComponentMetadata() (metadataInfo contribMetadata.MetadataMap) {
func (ct *Binding) GetComponentMetadata() (metadataInfo contribMetadata.MetadataMap) {
metadataStruct := commercetoolsMetadata{}
contribMetadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, contribMetadata.BindingType)
return


@ -76,7 +76,7 @@ func (b *Binding) Init(ctx context.Context, meta bindings.Metadata) error {
return err
}
if m.Schedule == "" {
return fmt.Errorf("schedule not set")
return errors.New("schedule not set")
}
_, err = b.parser.Parse(m.Schedule)
if err != nil {


@ -85,7 +85,7 @@ func TestCronInitSuccess(t *testing.T) {
for _, test := range initTests {
c := getNewCron()
err := c.Init(context.Background(), getTestMetadata(test.schedule))
err := c.Init(t.Context(), getTestMetadata(test.schedule))
if test.errorExpected {
require.Errorf(t, err, "Got no error while initializing an invalid schedule: %s", test.schedule)
} else {
@ -100,16 +100,16 @@ func TestCronRead(t *testing.T) {
clk := clocktesting.NewFakeClock(time.Now())
c := getNewCronWithClock(clk)
schedule := "@every 1s"
require.NoErrorf(t, c.Init(context.Background(), getTestMetadata(schedule)), "error initializing valid schedule")
require.NoErrorf(t, c.Init(t.Context(), getTestMetadata(schedule)), "error initializing valid schedule")
expectedCount := int32(5)
var observedCount atomic.Int32
err := c.Read(context.Background(), func(ctx context.Context, res *bindings.ReadResponse) ([]byte, error) {
err := c.Read(t.Context(), func(ctx context.Context, res *bindings.ReadResponse) ([]byte, error) {
assert.NotNil(t, res)
observedCount.Add(1)
return nil, nil
})
// Check if cron triggers 5 times in 5 seconds
for i := int32(0); i < expectedCount; i++ {
for range expectedCount {
// Add time to mock clock in 1 second intervals using loop to allow cron go routine to run
clk.Step(time.Second)
runtime.Gosched()
@ -128,10 +128,10 @@ func TestCronReadWithContextCancellation(t *testing.T) {
clk := clocktesting.NewFakeClock(time.Now())
c := getNewCronWithClock(clk)
schedule := "@every 1s"
require.NoErrorf(t, c.Init(context.Background(), getTestMetadata(schedule)), "error initializing valid schedule")
require.NoErrorf(t, c.Init(t.Context(), getTestMetadata(schedule)), "error initializing valid schedule")
expectedCount := int32(5)
var observedCount atomic.Int32
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
err := c.Read(ctx, func(ctx context.Context, res *bindings.ReadResponse) ([]byte, error) {
assert.NotNil(t, res)
assert.LessOrEqualf(t, observedCount.Load(), expectedCount, "Invoke didn't stop the schedule")
@ -143,7 +143,7 @@ func TestCronReadWithContextCancellation(t *testing.T) {
return nil, nil
})
// Check if cron triggers only 5 times in 10 seconds since context should be cancelled after 5 triggers
for i := 0; i < 10; i++ {
for range 10 {
// Add time to mock clock in 1 second intervals using loop to allow cron go routine to run
clk.Step(time.Second)
runtime.Gosched()


@ -97,3 +97,7 @@ func (out *DubboOutputBinding) Operations() []bindings.OperationKind {
func (out *DubboOutputBinding) GetComponentMetadata() metadata.MetadataMap {
return metadata.MetadataMap{}
}
func (out *DubboOutputBinding) Close() error {
return nil
}


@ -78,7 +78,7 @@ func TestInvoke(t *testing.T) {
reqData := enc.Buffer()
// 3. invoke dapr dubbo output binding, get rsp bytes
rsp, err := output.Invoke(context.Background(), &bindings.InvokeRequest{
rsp, err := output.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: map[string]string{
metadataRPCProviderPort: dubboPort,
metadataRPCProviderHostname: localhostIP,


@ -25,9 +25,12 @@ import (
"net/url"
"reflect"
"strconv"
"sync"
"time"
"cloud.google.com/go/storage"
"github.com/google/uuid"
"go.uber.org/multierr"
"google.golang.org/api/googleapi"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
@ -36,18 +39,24 @@ import (
"github.com/dapr/components-contrib/metadata"
"github.com/dapr/kit/logger"
kitmd "github.com/dapr/kit/metadata"
"github.com/dapr/kit/utils"
"github.com/dapr/kit/strings"
)
const (
objectURLBase = "https://storage.googleapis.com/%s/%s"
metadataDecodeBase64 = "decodeBase64"
metadataEncodeBase64 = "encodeBase64"
metadataSignTTL = "signTTL"
metadataKey = "key"
maxResults = 1000
metadataKeyBC = "name"
metadataKeyBC = "name"
signOperation = "sign"
bulkGetOperation = "bulkGet"
copyOperation = "copy"
renameOperation = "rename"
moveOperation = "move"
)
// GCPStorage allows saving data to GCP bucket storage.
@ -73,6 +82,7 @@ type gcpMetadata struct {
Bucket string `json:"bucket" mapstructure:"bucket"`
DecodeBase64 bool `json:"decodeBase64,string" mapstructure:"decodeBase64"`
EncodeBase64 bool `json:"encodeBase64,string" mapstructure:"encodeBase64"`
SignTTL string `json:"signTTL" mapstructure:"signTTL" mdignore:"true"`
}
type listPayload struct {
@ -81,6 +91,10 @@ type listPayload struct {
Delimiter string `json:"delimiter"`
}
type signResponse struct {
SignURL string `json:"signURL"`
}
type createResponse struct {
ObjectURL string `json:"objectURL"`
}
@ -130,6 +144,11 @@ func (g *GCPStorage) Operations() []bindings.OperationKind {
bindings.GetOperation,
bindings.DeleteOperation,
bindings.ListOperation,
signOperation,
bulkGetOperation,
copyOperation,
renameOperation,
moveOperation,
}
}
@ -145,6 +164,16 @@ func (g *GCPStorage) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*
return g.delete(ctx, req)
case bindings.ListOperation:
return g.list(ctx, req)
case signOperation:
return g.sign(ctx, req)
case bulkGetOperation:
return g.bulkGet(ctx, req)
case copyOperation:
return g.copy(ctx, req)
case renameOperation:
return g.rename(ctx, req)
case moveOperation:
return g.move(ctx, req)
default:
return nil, fmt.Errorf("unsupported operation %s", req.Operation)
}
@ -154,7 +183,7 @@ func (g *GCPStorage) create(ctx context.Context, req *bindings.InvokeRequest) (*
var err error
metadata, err := g.metadata.mergeWithRequestMetadata(req)
if err != nil {
return nil, fmt.Errorf("gcp bucket binding error. error merge metadata : %w", err)
return nil, fmt.Errorf("gcp bucket binding error while merging metadata: %w", err)
}
var name string
@ -176,14 +205,23 @@ func (g *GCPStorage) create(ctx context.Context, req *bindings.InvokeRequest) (*
}
h := g.client.Bucket(g.metadata.Bucket).Object(name).NewWriter(ctx)
defer h.Close()
// Cannot use `defer h.Close()` because Close() flushes the remaining bytes and its error must be handled.
if _, err = io.Copy(h, r); err != nil {
return nil, fmt.Errorf("gcp bucket binding error. Uploading: %w", err)
cerr := h.Close()
if cerr != nil {
return nil, fmt.Errorf("gcp bucket binding error while uploading and closing: %w", errors.Join(err, cerr))
}
return nil, fmt.Errorf("gcp bucket binding error while uploading: %w", err)
}
err = h.Close()
if err != nil {
return nil, fmt.Errorf("gcp bucket binding error while flushing: %w", err)
}
objectURL, err := url.Parse(fmt.Sprintf(objectURLBase, g.metadata.Bucket, name))
if err != nil {
return nil, fmt.Errorf("gcp bucket binding error. error building url response: %w", err)
return nil, fmt.Errorf("gcp bucket binding error while building url response: %w", err)
}
resp := createResponse{
@ -192,7 +230,7 @@ func (g *GCPStorage) create(ctx context.Context, req *bindings.InvokeRequest) (*
b, err := json.Marshal(resp)
if err != nil {
return nil, fmt.Errorf("gcp binding error. error marshalling create response: %w", err)
return nil, fmt.Errorf("gcp bucket binding error while marshalling the create response: %w", err)
}
return &bindings.InvokeResponse{
@ -203,14 +241,14 @@ func (g *GCPStorage) create(ctx context.Context, req *bindings.InvokeRequest) (*
func (g *GCPStorage) get(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
metadata, err := g.metadata.mergeWithRequestMetadata(req)
if err != nil {
return nil, fmt.Errorf("gcp binding error. error merge metadata : %w", err)
return nil, fmt.Errorf("gcp binding error while merging metadata: %w", err)
}
var key string
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, fmt.Errorf("gcp bucket binding error: can't read key value")
return nil, errors.New("gcp bucket binding error: can't read key value")
}
var rc io.ReadCloser
@ -221,13 +259,13 @@ func (g *GCPStorage) get(ctx context.Context, req *bindings.InvokeRequest) (*bin
return nil, errors.New("object not found")
}
return nil, fmt.Errorf("gcp bucketgcp bucket binding error: error downloading bucket object: %w", err)
return nil, fmt.Errorf("gcp bucket binding error while downloading object: %w", err)
}
defer rc.Close()
data, err := io.ReadAll(rc)
if err != nil {
return nil, fmt.Errorf("gcp bucketgcp bucket binding error: io.ReadAll: %v", err)
return nil, fmt.Errorf("gcp bucket binding error while reading: %w", err)
}
if metadata.EncodeBase64 {
@ -246,7 +284,7 @@ func (g *GCPStorage) delete(ctx context.Context, req *bindings.InvokeRequest) (*
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, fmt.Errorf("gcp bucketgcp bucket binding error: can't read key value")
return nil, errors.New("gcp bucket binding error: can't read key value")
}
object := g.client.Bucket(g.metadata.Bucket).Object(key)
@ -289,7 +327,7 @@ func (g *GCPStorage) list(ctx context.Context, req *bindings.InvokeRequest) (*bi
jsonResponse, err := json.Marshal(result)
if err != nil {
return nil, fmt.Errorf("gcp bucketgcp bucket binding error. list operation. cannot marshal blobs to json: %w", err)
return nil, fmt.Errorf("gcp bucket binding error while listing: cannot marshal blobs to json: %w", err)
}
return &bindings.InvokeResponse{
@ -306,13 +344,15 @@ func (metadata gcpMetadata) mergeWithRequestMetadata(req *bindings.InvokeRequest
merged := metadata
if val, ok := req.Metadata[metadataDecodeBase64]; ok && val != "" {
merged.DecodeBase64 = utils.IsTruthy(val)
merged.DecodeBase64 = strings.IsTruthy(val)
}
if val, ok := req.Metadata[metadataEncodeBase64]; ok && val != "" {
merged.EncodeBase64 = utils.IsTruthy(val)
merged.EncodeBase64 = strings.IsTruthy(val)
}
if val, ok := req.Metadata[metadataSignTTL]; ok && val != "" {
merged.SignTTL = val
}
return merged, nil
}
@ -332,3 +372,249 @@ func (g *GCPStorage) GetComponentMetadata() (metadataInfo metadata.MetadataMap)
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (g *GCPStorage) sign(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
metadata, err := g.metadata.mergeWithRequestMetadata(req)
if err != nil {
return nil, fmt.Errorf("gcp binding error while merging metadata: %w", err)
}
var key string
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, errors.New("gcp bucket binding error: can't read key value")
}
if metadata.SignTTL == "" {
return nil, fmt.Errorf("gcp bucket binding error: required metadata '%s' missing", metadataSignTTL)
}
signURL, err := g.signObject(metadata.Bucket, key, metadata.SignTTL)
if err != nil {
return nil, fmt.Errorf("gcp bucket binding error: %w", err)
}
jsonResponse, err := json.Marshal(signResponse{
SignURL: signURL,
})
if err != nil {
return nil, fmt.Errorf("gcp bucket binding error while marshalling sign response: %w", err)
}
return &bindings.InvokeResponse{
Data: jsonResponse,
}, nil
}
func (g *GCPStorage) signObject(bucket, object, ttl string) (string, error) {
d, err := time.ParseDuration(ttl)
if err != nil {
return "", fmt.Errorf("gcp bucket binding error while parsing signTTL: %w", err)
}
opts := &storage.SignedURLOptions{
Scheme: storage.SigningSchemeV4,
Method: "GET",
Expires: time.Now().Add(d),
}
u, err := g.client.Bucket(bucket).SignedURL(object, opts)
if err != nil {
return "", fmt.Errorf("Bucket(%q).SignedURL: %w", bucket, err)
}
return u, nil
}
type objectData struct {
Name string `json:"name"`
Data []byte `json:"data"`
Attrs storage.ObjectAttrs `json:"attrs"`
}
func (g *GCPStorage) bulkGet(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
metadata, err := g.metadata.mergeWithRequestMetadata(req)
if err != nil {
return nil, fmt.Errorf("gcp binding error while merging metadata: %w", err)
}
if g.metadata.Bucket == "" {
return nil, errors.New("gcp bucket binding error: bucket is required")
}
var allObjs []*storage.ObjectAttrs
it := g.client.Bucket(g.metadata.Bucket).Objects(ctx, nil)
for {
attrs, err2 := it.Next()
if err2 == iterator.Done {
break
}
if err2 != nil {
return nil, fmt.Errorf("gcp bucket binding error while listing objects: %w", err2)
}
allObjs = append(allObjs, attrs)
}
var wg sync.WaitGroup
objectsCh := make(chan objectData, len(allObjs))
errCh := make(chan error, len(allObjs))
for i, obj := range allObjs {
wg.Add(1)
go func(idx int, object *storage.ObjectAttrs) {
defer wg.Done()
rc, err3 := g.client.Bucket(g.metadata.Bucket).Object(object.Name).NewReader(ctx)
if err3 != nil {
errCh <- err3
return
}
defer rc.Close()
data, readErr := io.ReadAll(rc)
if readErr != nil {
errCh <- readErr
return
}
if metadata.EncodeBase64 {
encoded := b64.StdEncoding.EncodeToString(data)
data = []byte(encoded)
}
objectsCh <- objectData{
Name: object.Name,
Data: data,
Attrs: *object,
}
}(i, obj)
}
wg.Wait()
// Close both channels so the ranges below terminate.
close(objectsCh)
close(errCh)
var multiErr error
for err := range errCh {
multierr.AppendInto(&multiErr, err)
}
if multiErr != nil {
return nil, multiErr
}
response := make([]objectData, 0, len(allObjs))
for obj := range objectsCh {
response = append(response, obj)
}
jsonResponse, err := json.Marshal(response)
if err != nil {
return nil, fmt.Errorf("gcp bucket binding error while marshalling bulk get response: %w", err)
}
return &bindings.InvokeResponse{
Data: jsonResponse,
}, nil
}
type movePayload struct {
DestinationBucket string `json:"destinationBucket"`
}
func (g *GCPStorage) move(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
var key string
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, errors.New("gcp bucket binding error: can't read key value")
}
var payload movePayload
err := json.Unmarshal(req.Data, &payload)
if err != nil {
return nil, errors.New("gcp bucket binding error: invalid move payload")
}
if payload.DestinationBucket == "" {
return nil, errors.New("gcp bucket binding error: required 'destinationBucket' missing")
}
src := g.client.Bucket(g.metadata.Bucket).Object(key)
dst := g.client.Bucket(payload.DestinationBucket).Object(key)
if _, err := dst.CopierFrom(src).Run(ctx); err != nil {
return nil, fmt.Errorf("gcp bucket binding error while copying object: %w", err)
}
if err := src.Delete(ctx); err != nil {
return nil, fmt.Errorf("gcp bucket binding error while deleting object: %w", err)
}
return &bindings.InvokeResponse{
Data: []byte(fmt.Sprintf("object %s moved to %s", key, payload.DestinationBucket)),
}, nil
}
type renamePayload struct {
NewName string `json:"newName"`
}
func (g *GCPStorage) rename(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
var key string
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, errors.New("gcp bucket binding error: can't read key value")
}
var payload renamePayload
err := json.Unmarshal(req.Data, &payload)
if err != nil {
return nil, errors.New("gcp bucket binding error: invalid rename payload")
}
if payload.NewName == "" {
return nil, errors.New("gcp bucket binding error: required 'newName' missing")
}
src := g.client.Bucket(g.metadata.Bucket).Object(key)
dst := g.client.Bucket(g.metadata.Bucket).Object(payload.NewName)
if _, err := dst.CopierFrom(src).Run(ctx); err != nil {
return nil, fmt.Errorf("gcp bucket binding error while copying object: %w", err)
}
if err := src.Delete(ctx); err != nil {
return nil, fmt.Errorf("gcp bucket binding error while deleting object: %w", err)
}
return &bindings.InvokeResponse{
Data: []byte(fmt.Sprintf("object %s renamed to %s", key, payload.NewName)),
}, nil
}
type copyPayload struct {
DestinationBucket string `json:"destinationBucket"`
}
func (g *GCPStorage) copy(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
var key string
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, errors.New("gcp bucket binding error: can't read key value")
}
var payload copyPayload
err := json.Unmarshal(req.Data, &payload)
if err != nil {
return nil, errors.New("gcp bucket binding error: invalid copy payload")
}
if payload.DestinationBucket == "" {
return nil, errors.New("gcp bucket binding error: required 'destinationBucket' missing")
}
src := g.client.Bucket(g.metadata.Bucket).Object(key)
dst := g.client.Bucket(payload.DestinationBucket).Object(key)
if _, err := dst.CopierFrom(src).Run(ctx); err != nil {
return nil, fmt.Errorf("gcp bucket binding error while copying object: %w", err)
}
return &bindings.InvokeResponse{
Data: []byte(fmt.Sprintf("object %s copied to %s", key, payload.DestinationBucket)),
}, nil
}


@ -14,7 +14,6 @@ limitations under the License.
package bucket
import (
"context"
"encoding/json"
"testing"
@ -40,6 +39,7 @@ func TestParseMetadata(t *testing.T) {
"projectID": "my_project_id",
"tokenURI": "my_token_uri",
"type": "my_type",
"signTTL": "15s",
}
gs := GCPStorage{logger: logger.NewLogger("test")}
meta, err := gs.parseMetadata(m)
@ -57,17 +57,18 @@ func TestParseMetadata(t *testing.T) {
assert.Equal(t, "my_project_id", meta.ProjectID)
assert.Equal(t, "my_token_uri", meta.TokenURI)
assert.Equal(t, "my_type", meta.Type)
assert.Equal(t, "15s", meta.SignTTL)
})
t.Run("Metadata is correctly marshalled to JSON", func(t *testing.T) {
json, err := json.Marshal(meta)
require.NoError(t, err)
assert.Equal(t,
assert.JSONEq(t,
"{\"type\":\"my_type\",\"project_id\":\"my_project_id\",\"private_key_id\":\"my_private_key_id\","+
"\"private_key\":\"my_private_key\",\"client_email\":\"my_email@mail.dapr\",\"client_id\":\"my_client_id\","+
"\"auth_uri\":\"my_auth_uri\",\"token_uri\":\"my_token_uri\",\"auth_provider_x509_cert_url\":\"my_auth_provider_x509\","+
"\"client_x509_cert_url\":\"my_client_x509\",\"bucket\":\"my_bucket\",\"decodeBase64\":\"false\","+
"\"encodeBase64\":\"false\"}", string(json))
"\"encodeBase64\":\"false\",\"signTTL\":\"15s\"}", string(json))
})
})
@ -238,7 +239,7 @@ func TestGetOption(t *testing.T) {
gs.metadata = &gcpMetadata{}
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := gs.get(context.TODO(), &r)
_, err := gs.get(t.Context(), &r)
require.Error(t, err)
})
}
@ -249,7 +250,127 @@ func TestDeleteOption(t *testing.T) {
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := gs.delete(context.TODO(), &r)
_, err := gs.delete(t.Context(), &r)
require.Error(t, err)
})
}
func TestBulkGetOption(t *testing.T) {
gs := GCPStorage{logger: logger.NewLogger("test")}
gs.metadata = &gcpMetadata{}
t.Run("return error if bucket is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := gs.bulkGet(t.Context(), &r)
require.Error(t, err)
})
}
func TestCopyOption(t *testing.T) {
gs := GCPStorage{logger: logger.NewLogger("test")}
gs.metadata = &gcpMetadata{}
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{}
_, err := gs.copy(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: can't read key value", err.Error())
})
t.Run("return error if data is not valid json", func(t *testing.T) {
r := bindings.InvokeRequest{
Metadata: map[string]string{
"key": "my_key",
},
}
_, err := gs.copy(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: invalid copy payload", err.Error())
})
t.Run("return error if destinationBucket is missing", func(t *testing.T) {
r := bindings.InvokeRequest{
Data: []byte(`{}`),
Metadata: map[string]string{
"key": "my_key",
},
}
_, err := gs.copy(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: required 'destinationBucket' missing", err.Error())
})
}
func TestRenameOption(t *testing.T) {
gs := GCPStorage{logger: logger.NewLogger("test")}
gs.metadata = &gcpMetadata{}
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{
Data: []byte(`{"newName": "my_new_name"}`),
}
_, err := gs.rename(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: can't read key value", err.Error())
})
t.Run("return error if data is not valid json", func(t *testing.T) {
r := bindings.InvokeRequest{
Metadata: map[string]string{
"key": "my_key",
},
}
_, err := gs.rename(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: invalid rename payload", err.Error())
})
t.Run("return error if newName is missing", func(t *testing.T) {
r := bindings.InvokeRequest{
Data: []byte(`{}`),
Metadata: map[string]string{
"key": "my_key",
},
}
_, err := gs.rename(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: required 'newName' missing", err.Error())
})
}
func TestMoveOption(t *testing.T) {
gs := GCPStorage{logger: logger.NewLogger("test")}
gs.metadata = &gcpMetadata{}
t.Run("return error if key is missing", func(t *testing.T) {
r := bindings.InvokeRequest{
Data: []byte(`{"destinationBucket": "my_bucket"}`),
}
_, err := gs.move(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: can't read key value", err.Error())
})
t.Run("return error if data is not valid json", func(t *testing.T) {
r := bindings.InvokeRequest{
Metadata: map[string]string{
"key": "my_key",
},
}
_, err := gs.move(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: invalid move payload", err.Error())
})
t.Run("return error if destinationBucket is missing", func(t *testing.T) {
r := bindings.InvokeRequest{
Data: []byte(`{}`),
Metadata: map[string]string{
"key": "my_key",
},
}
_, err := gs.move(t.Context(), &r)
require.Error(t, err)
assert.Equal(t, "gcp bucket binding error: required 'destinationBucket' missing", err.Error())
})
}


@ -23,6 +23,12 @@ metadata:
The bucket name.
example: '"mybucket"'
type: string
- name: signTTL
required: false
description: |
Specifies the duration that the signed URL should be valid.
example: '"15m, 1h"'
type: string
- name: decodeBase64
type: bool
required: false


@ -16,6 +16,7 @@ package graphql
import (
"context"
"encoding/json"
"errors"
"fmt"
"reflect"
"regexp"
@ -73,7 +74,7 @@ func (gql *GraphQL) Init(_ context.Context, meta bindings.Metadata) error {
}
if m.Endpoint == "" {
return fmt.Errorf("GraphQL Error: Missing GraphQL URL")
return errors.New("GraphQL Error: Missing GraphQL URL")
}
// Connect to GraphQL Server
@ -101,11 +102,11 @@ func (gql *GraphQL) Operations() []bindings.OperationKind {
// Invoke handles all invoke operations.
func (gql *GraphQL) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bindings.InvokeResponse, error) {
if req == nil {
return nil, fmt.Errorf("GraphQL Error: Invoke request required")
return nil, errors.New("GraphQL Error: Invoke request required")
}
if req.Metadata == nil {
return nil, fmt.Errorf("GraphQL Error: Metadata required")
return nil, errors.New("GraphQL Error: Metadata required")
}
gql.logger.Debugf("operation: %v", req.Operation)
@ -192,3 +193,7 @@ func (gql *GraphQL) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (gql *GraphQL) Close() error {
return nil
}

View File

@ -108,6 +108,6 @@ func TestGraphQlRequestHeadersAndVariables(t *testing.T) {
"variable:episode": "JEDI",
},
}
_, err = gql.Invoke(context.Background(), req)
_, err = gql.Invoke(t.Context(), req)
require.NoError(t, err)
}

View File

@ -34,7 +34,7 @@ import (
"github.com/dapr/components-contrib/metadata"
"github.com/dapr/kit/logger"
kitmd "github.com/dapr/kit/metadata"
"github.com/dapr/kit/utils"
kitstrings "github.com/dapr/kit/strings"
)
const (
@ -44,6 +44,7 @@ const (
TraceparentHeaderKey = "traceparent"
TracestateHeaderKey = "tracestate"
BaggageHeaderKey = "baggage"
TraceMetadataKey = "traceHeaders"
securityToken = "securityToken"
securityTokenHeader = "securityTokenHeader"
@ -99,6 +100,9 @@ func (h *HTTPSource) Init(_ context.Context, meta bindings.Metadata) error {
if err != nil {
return err
}
if tlsConfig == nil {
tlsConfig = &tls.Config{MinVersion: tls.VersionTLS12}
}
if h.metadata.MTLSClientCert != "" && h.metadata.MTLSClientKey != "" {
err = h.readMTLSClientCertificates(tlsConfig)
if err != nil {
@ -122,11 +126,10 @@ func (h *HTTPSource) Init(_ context.Context, meta bindings.Metadata) error {
dialer := &net.Dialer{
Timeout: 15 * time.Second,
}
netTransport := &http.Transport{
Dial: dialer.Dial,
TLSHandshakeTimeout: 15 * time.Second,
TLSClientConfig: tlsConfig,
}
netTransport := http.DefaultTransport.(*http.Transport).Clone()
netTransport.DialContext = dialer.DialContext
netTransport.TLSHandshakeTimeout = 15 * time.Second
netTransport.TLSClientConfig = tlsConfig
h.client = &http.Client{
Timeout: 0, // no time out here, we use request timeouts instead
@ -134,7 +137,7 @@ func (h *HTTPSource) Init(_ context.Context, meta bindings.Metadata) error {
}
if val := meta.Properties["errorIfNot2XX"]; val != "" {
h.errorIfNot2XX = utils.IsTruthy(val)
h.errorIfNot2XX = kitstrings.IsTruthy(val)
} else {
// Default behavior
h.errorIfNot2XX = true
@ -157,9 +160,6 @@ func (h *HTTPSource) readMTLSClientCertificates(tlsConfig *tls.Config) error {
if err != nil {
return fmt.Errorf("failed to load client certificate: %w", err)
}
if tlsConfig == nil {
tlsConfig = &tls.Config{MinVersion: tls.VersionTLS12}
}
tlsConfig.Certificates = []tls.Certificate{cert}
return nil
}
@ -252,7 +252,7 @@ func (h *HTTPSource) Invoke(parentCtx context.Context, req *bindings.InvokeReque
u = strings.TrimRight(u, "/") + "/" + strings.TrimLeft(req.Metadata["path"], "/")
}
if req.Metadata["errorIfNot2XX"] != "" {
errorIfNot2XX = utils.IsTruthy(req.Metadata["errorIfNot2XX"])
errorIfNot2XX = kitstrings.IsTruthy(req.Metadata["errorIfNot2XX"])
}
var body io.Reader
@ -319,6 +319,13 @@ func (h *HTTPSource) Invoke(parentCtx context.Context, req *bindings.InvokeReque
request.Header.Set(TracestateHeaderKey, ts)
}
if baggage, ok := req.Metadata[BaggageHeaderKey]; ok && baggage != "" {
if _, ok := request.Header[http.CanonicalHeaderKey(BaggageHeaderKey)]; ok {
h.logger.Warn("Tracing is enabled. A custom Baggage request header cannot be specified and is ignored.")
}
request.Header.Set(BaggageHeaderKey, baggage)
}
// Send the request
resp, err := h.client.Do(request)
@ -371,3 +378,7 @@ func (h *HTTPSource) GetComponentMetadata() (metadataInfo metadata.MetadataMap)
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (h *HTTPSource) Close() error {
return nil
}

View File

@ -330,7 +330,7 @@ func (h *HTTPHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
h.Path = req.URL.Path
if strings.TrimPrefix(h.Path, "/") == "large" {
// Write 5KB
for i := 0; i < 1<<10; i++ {
for range 1 << 10 {
fmt.Fprint(w, "12345")
}
return
@ -439,7 +439,7 @@ func TestSecurityTokenHeaderForwarded(t *testing.T) {
err: "",
statusCode: 200,
}.ToInvokeRequest()
_, err = hs.Invoke(context.Background(), &req)
_, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
assert.Equal(t, "12345", handler.Headers["X-Token"])
})
@ -455,7 +455,7 @@ func TestSecurityTokenHeaderForwarded(t *testing.T) {
err: "",
statusCode: 200,
}.ToInvokeRequest()
_, err = hs.Invoke(context.Background(), &req)
_, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
assert.Empty(t, handler.Headers["X-Token"])
})
@ -473,47 +473,51 @@ func TestTraceHeadersForwarded(t *testing.T) {
req := TestCase{
input: "GET",
operation: "get",
metadata: map[string]string{"path": "/", "traceparent": "12345", "tracestate": "67890"},
metadata: map[string]string{"path": "/", "traceparent": "12345", "tracestate": "67890", "baggage": "key1=value1"},
path: "/",
err: "",
statusCode: 200,
}.ToInvokeRequest()
_, err = hs.Invoke(context.Background(), &req)
_, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
assert.Equal(t, "12345", handler.Headers["Traceparent"])
assert.Equal(t, "67890", handler.Headers["Tracestate"])
assert.Equal(t, "key1=value1", handler.Headers["Baggage"])
})
t.Run("trace headers should not be forwarded if empty", func(t *testing.T) {
req := TestCase{
input: "GET",
operation: "get",
metadata: map[string]string{"path": "/", "traceparent": "", "tracestate": ""},
metadata: map[string]string{"path": "/", "traceparent": "", "tracestate": "", "baggage": ""},
path: "/",
err: "",
statusCode: 200,
}.ToInvokeRequest()
_, err = hs.Invoke(context.Background(), &req)
_, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
_, traceParentExists := handler.Headers["Traceparent"]
assert.False(t, traceParentExists)
_, traceStateExists := handler.Headers["Tracestate"]
assert.False(t, traceStateExists)
_, baggageExists := handler.Headers["Baggage"]
assert.False(t, baggageExists)
})
t.Run("trace headers override headers in request metadata", func(t *testing.T) {
req := TestCase{
input: "GET",
operation: "get",
metadata: map[string]string{"path": "/", "Traceparent": "abcde", "Tracestate": "fghijk", "traceparent": "12345", "tracestate": "67890"},
metadata: map[string]string{"path": "/", "Traceparent": "abcde", "Tracestate": "fghijk", "Baggage": "oldvalue", "traceparent": "12345", "tracestate": "67890", "baggage": "key1=value1"},
path: "/",
err: "",
statusCode: 200,
}.ToInvokeRequest()
_, err = hs.Invoke(context.Background(), &req)
_, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
assert.Equal(t, "12345", handler.Headers["Traceparent"])
assert.Equal(t, "67890", handler.Headers["Tracestate"])
assert.Equal(t, "key1=value1", handler.Headers["Baggage"])
})
}
@ -624,7 +628,7 @@ func TestHTTPSBinding(t *testing.T) {
err: "",
statusCode: 200,
}.ToInvokeRequest()
response, err := hs.Invoke(context.Background(), &req)
response, err := hs.Invoke(t.Context(), &req)
require.NoError(t, err)
peerCerts, err := strconv.Atoi(string(response.Data))
require.NoError(t, err)
@ -638,7 +642,7 @@ func TestHTTPSBinding(t *testing.T) {
err: "",
statusCode: 201,
}.ToInvokeRequest()
response, err = hs.Invoke(context.Background(), &req)
response, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
peerCerts, err = strconv.Atoi(string(response.Data))
require.NoError(t, err)
@ -657,7 +661,7 @@ func TestHTTPSBinding(t *testing.T) {
err: "",
statusCode: 200,
}.ToInvokeRequest()
_, err = hs.Invoke(context.Background(), &req)
_, err = hs.Invoke(t.Context(), &req)
require.Error(t, err)
})
@ -677,7 +681,7 @@ func TestHTTPSBinding(t *testing.T) {
err: "",
statusCode: 200,
}.ToInvokeRequest()
response, err := hs.Invoke(context.Background(), &req)
response, err := hs.Invoke(t.Context(), &req)
require.NoError(t, err)
peerCerts, err := strconv.Atoi(string(response.Data))
require.NoError(t, err)
@ -694,7 +698,7 @@ func TestHTTPSBinding(t *testing.T) {
err: "",
statusCode: 201,
}.ToInvokeRequest()
response, err = hs.Invoke(context.Background(), &req)
response, err = hs.Invoke(t.Context(), &req)
require.NoError(t, err)
peerCerts, err = strconv.Atoi(string(response.Data))
require.NoError(t, err)
@ -851,7 +855,7 @@ func verifyDefaultBehaviors(t *testing.T, hs bindings.OutputBinding, handler *HT
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
req := tc.ToInvokeRequest()
response, err := hs.Invoke(context.Background(), &req)
response, err := hs.Invoke(t.Context(), &req)
if tc.err == "" {
require.NoError(t, err)
assert.Equal(t, tc.path, handler.Path)
@ -915,7 +919,7 @@ func verifyNon2XXErrorsSuppressed(t *testing.T, hs bindings.OutputBinding, handl
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
req := tc.ToInvokeRequest()
response, err := hs.Invoke(context.Background(), &req)
response, err := hs.Invoke(t.Context(), &req)
if tc.err == "" {
require.NoError(t, err)
assert.Equal(t, tc.path, handler.Path)
@ -965,7 +969,7 @@ func verifyTimeoutBehavior(t *testing.T, hs bindings.OutputBinding, handler *HTT
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
req := tc.ToInvokeRequest()
response, err := hs.Invoke(context.Background(), &req)
response, err := hs.Invoke(t.Context(), &req)
if tc.err == "" {
require.NoError(t, err)
assert.Equal(t, tc.path, handler.Path)
@ -999,7 +1003,7 @@ func TestMaxBodySizeHonored(t *testing.T) {
}
req := tc.ToInvokeRequest()
response, err := hs.Invoke(context.Background(), &req)
response, err := hs.Invoke(t.Context(), &req)
require.NoError(t, err)
// Should have only read 1KB

View File

@ -75,3 +75,7 @@ metadata:
required: false
description: "The header name on an outgoing HTTP request for a security token"
example: '"X-Security-Token"'
- name: errorIfNot2XX
required: false
default: 'true'
description: "Return an error if a non-2XX status code is returned"

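The `errorIfNot2XX` flag is parsed with `kitstrings.IsTruthy` in the diffs above. A sketch of that kind of truthiness check — an assumption about the behavior, not dapr/kit's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// isTruthy mirrors the kind of check kitstrings.IsTruthy performs;
// sketch only, not dapr/kit's real code.
func isTruthy(v string) bool {
	switch strings.ToLower(strings.TrimSpace(v)) {
	case "y", "yes", "true", "t", "on", "1":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(isTruthy("True"), isTruthy("0")) // true false
}
```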
View File

@ -104,16 +104,16 @@ func (o *HuaweiOBS) parseMetadata(meta bindings.Metadata) (*obsMetadata, error)
}
if m.Bucket == "" {
return nil, fmt.Errorf("missing obs bucket name")
return nil, errors.New("missing obs bucket name")
}
if m.Endpoint == "" {
return nil, fmt.Errorf("missing obs endpoint")
return nil, errors.New("missing obs endpoint")
}
if m.AccessKey == "" {
return nil, fmt.Errorf("missing the huawei access key")
return nil, errors.New("missing the huawei access key")
}
if m.SecretKey == "" {
return nil, fmt.Errorf("missing the huawei secret key")
return nil, errors.New("missing the huawei secret key")
}
o.logger.Debugf("Huawei OBS metadata=[%s]", m)
@ -212,7 +212,7 @@ func (o *HuaweiOBS) get(ctx context.Context, req *bindings.InvokeRequest) (*bind
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, fmt.Errorf("obs binding error: can't read key value")
return nil, errors.New("obs binding error: can't read key value")
}
input := &obs.GetObjectInput{}
@ -252,7 +252,7 @@ func (o *HuaweiOBS) delete(ctx context.Context, req *bindings.InvokeRequest) (*b
if val, ok := req.Metadata[metadataKey]; ok && val != "" {
key = val
} else {
return nil, fmt.Errorf("obs binding error: can't read key value")
return nil, errors.New("obs binding error: can't read key value")
}
input := &obs.DeleteObjectInput{}
@ -338,3 +338,10 @@ func (o *HuaweiOBS) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (o *HuaweiOBS) Close() error {
if o.service != nil {
o.service.Close()
}
return nil
}

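The new `Close` method above guards against a nil service, so it is safe to call even when `Init` never ran. The pattern in isolation, with illustrative types rather than the binding's real ones:

```go
package main

import "fmt"

type service struct{}

func (s *service) shutdown() { fmt.Println("shutdown") }

// binding mimics the nil-guarded Close pattern the OBS binding adds.
type binding struct{ svc *service }

func (b *binding) Close() error {
	if b.svc != nil {
		b.svc.shutdown()
	}
	return nil
}

func main() {
	var empty binding // Init never ran: svc is nil, Close is still safe
	fmt.Println(empty.Close()) // <nil>
}
```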
View File

@ -27,6 +27,7 @@ type HuaweiOBSAPI interface {
GetObject(ctx context.Context, input *obs.GetObjectInput) (output *obs.GetObjectOutput, err error)
DeleteObject(ctx context.Context, input *obs.DeleteObjectInput) (output *obs.DeleteObjectOutput, err error)
ListObjects(ctx context.Context, input *obs.ListObjectsInput) (output *obs.ListObjectsOutput, err error)
Close()
}
// HuaweiOBSService is a service layer which wraps the actual OBS SDK client to provide the API functions
@ -54,3 +55,7 @@ func (s *HuaweiOBSService) DeleteObject(ctx context.Context, input *obs.DeleteOb
func (s *HuaweiOBSService) ListObjects(ctx context.Context, input *obs.ListObjectsInput) (output *obs.ListObjectsOutput, err error) {
return s.client.ListObjects(input, obs.WithRequestContext(ctx))
}
func (s *HuaweiOBSService) Close() {
s.client.Close()
}

View File

@ -17,7 +17,6 @@ import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"strings"
"testing"
@ -61,6 +60,8 @@ func (m *MockHuaweiOBSService) ListObjects(ctx context.Context, input *obs.ListO
return m.ListObjectsFn(ctx, input)
}
func (m *MockHuaweiOBSService) Close() {}
func TestParseMetadata(t *testing.T) {
obs := NewHuaweiOBS(logger.NewLogger("test")).(*HuaweiOBS)
@ -93,7 +94,7 @@ func TestInit(t *testing.T) {
"accessKey": "dummy-ak",
"secretKey": "dummy-sk",
}
err := obs.Init(context.Background(), m)
err := obs.Init(t.Context(), m)
require.NoError(t, err)
})
t.Run("Init with missing bucket name", func(t *testing.T) {
@ -103,9 +104,9 @@ func TestInit(t *testing.T) {
"accessKey": "dummy-ak",
"secretKey": "dummy-sk",
}
err := obs.Init(context.Background(), m)
err := obs.Init(t.Context(), m)
require.Error(t, err)
assert.Equal(t, err, fmt.Errorf("missing obs bucket name"))
assert.Equal(t, err, errors.New("missing obs bucket name"))
})
t.Run("Init with missing access key", func(t *testing.T) {
m := bindings.Metadata{}
@ -114,9 +115,9 @@ func TestInit(t *testing.T) {
"endpoint": "dummy-endpoint",
"secretKey": "dummy-sk",
}
err := obs.Init(context.Background(), m)
err := obs.Init(t.Context(), m)
require.Error(t, err)
assert.Equal(t, err, fmt.Errorf("missing the huawei access key"))
assert.Equal(t, err, errors.New("missing the huawei access key"))
})
t.Run("Init with missing secret key", func(t *testing.T) {
m := bindings.Metadata{}
@ -125,9 +126,9 @@ func TestInit(t *testing.T) {
"endpoint": "dummy-endpoint",
"accessKey": "dummy-ak",
}
err := obs.Init(context.Background(), m)
err := obs.Init(t.Context(), m)
require.Error(t, err)
assert.Equal(t, err, fmt.Errorf("missing the huawei secret key"))
assert.Equal(t, err, errors.New("missing the huawei secret key"))
})
t.Run("Init with missing endpoint", func(t *testing.T) {
m := bindings.Metadata{}
@ -136,9 +137,9 @@ func TestInit(t *testing.T) {
"accessKey": "dummy-ak",
"secretKey": "dummy-sk",
}
err := obs.Init(context.Background(), m)
err := obs.Init(t.Context(), m)
require.Error(t, err)
assert.Equal(t, err, fmt.Errorf("missing obs endpoint"))
assert.Equal(t, err, errors.New("missing obs endpoint"))
})
}
@ -177,7 +178,7 @@ func TestCreateOperation(t *testing.T) {
Data: []byte(`"Hello OBS"`),
}
out, err := mo.create(context.Background(), req)
out, err := mo.create(t.Context(), req)
require.NoError(t, err)
var data createResponse
@ -208,7 +209,7 @@ func TestCreateOperation(t *testing.T) {
Data: []byte(`"Hello OBS"`),
}
out, err := mo.create(context.Background(), req)
out, err := mo.create(t.Context(), req)
require.NoError(t, err)
var data createResponse
@ -241,7 +242,7 @@ func TestCreateOperation(t *testing.T) {
},
}
_, err := mo.create(context.Background(), req)
_, err := mo.create(t.Context(), req)
require.NoError(t, err)
})
@ -249,7 +250,7 @@ func TestCreateOperation(t *testing.T) {
mo := &HuaweiOBS{
service: &MockHuaweiOBSService{
PutObjectFn: func(ctx context.Context, input *obs.PutObjectInput) (output *obs.PutObjectOutput, err error) {
return nil, fmt.Errorf("error while creating object")
return nil, errors.New("error while creating object")
},
},
logger: logger.NewLogger("test"),
@ -266,7 +267,7 @@ func TestCreateOperation(t *testing.T) {
Data: []byte(`"Hello OBS"`),
}
_, err := mo.create(context.Background(), req)
_, err := mo.create(t.Context(), req)
require.Error(t, err)
})
}
@ -297,7 +298,7 @@ func TestUploadOperation(t *testing.T) {
Data: []byte(`{"sourceFile": "dummy-path"}`),
}
out, err := mo.upload(context.Background(), req)
out, err := mo.upload(t.Context(), req)
require.NoError(t, err)
var data createResponse
@ -328,7 +329,7 @@ func TestUploadOperation(t *testing.T) {
Data: []byte(`{"sourceFile": "dummy-path"}`),
}
out, err := mo.upload(context.Background(), req)
out, err := mo.upload(t.Context(), req)
require.NoError(t, err)
var data createResponse
@ -341,7 +342,7 @@ func TestUploadOperation(t *testing.T) {
mo := &HuaweiOBS{
service: &MockHuaweiOBSService{
PutFileFn: func(ctx context.Context, input *obs.PutFileInput) (output *obs.PutObjectOutput, err error) {
return nil, fmt.Errorf("error while creating object")
return nil, errors.New("error while creating object")
},
},
logger: logger.NewLogger("test"),
@ -358,7 +359,7 @@ func TestUploadOperation(t *testing.T) {
Data: []byte(`{"sourceFile": "dummy-path"}`),
}
_, err := mo.upload(context.Background(), req)
_, err := mo.upload(t.Context(), req)
require.Error(t, err)
})
}
@ -392,7 +393,7 @@ func TestGetOperation(t *testing.T) {
},
}
_, err := mo.get(context.Background(), req)
_, err := mo.get(t.Context(), req)
require.NoError(t, err)
})
@ -409,7 +410,7 @@ func TestGetOperation(t *testing.T) {
Operation: "get",
}
_, err := mo.get(context.Background(), req)
_, err := mo.get(t.Context(), req)
require.Error(t, err)
})
@ -417,7 +418,7 @@ func TestGetOperation(t *testing.T) {
mo := &HuaweiOBS{
service: &MockHuaweiOBSService{
GetObjectFn: func(ctx context.Context, input *obs.GetObjectInput) (output *obs.GetObjectOutput, err error) {
return nil, fmt.Errorf("error while getting object")
return nil, errors.New("error while getting object")
},
},
logger: logger.NewLogger("test"),
@ -433,7 +434,7 @@ func TestGetOperation(t *testing.T) {
},
}
_, err := mo.get(context.Background(), req)
_, err := mo.get(t.Context(), req)
require.Error(t, err)
})
@ -465,7 +466,7 @@ func TestGetOperation(t *testing.T) {
},
}
_, err := mo.get(context.Background(), req)
_, err := mo.get(t.Context(), req)
require.Error(t, err)
})
}
@ -495,7 +496,7 @@ func TestDeleteOperation(t *testing.T) {
},
}
out, err := mo.delete(context.Background(), req)
out, err := mo.delete(t.Context(), req)
require.NoError(t, err)
var data createResponse
@ -517,7 +518,7 @@ func TestDeleteOperation(t *testing.T) {
Operation: "delete",
}
_, err := mo.delete(context.Background(), req)
_, err := mo.delete(t.Context(), req)
require.Error(t, err)
})
@ -525,7 +526,7 @@ func TestDeleteOperation(t *testing.T) {
mo := &HuaweiOBS{
service: &MockHuaweiOBSService{
DeleteObjectFn: func(ctx context.Context, input *obs.DeleteObjectInput) (output *obs.DeleteObjectOutput, err error) {
return nil, fmt.Errorf("error while deleting object")
return nil, errors.New("error while deleting object")
},
},
logger: logger.NewLogger("test"),
@ -541,7 +542,7 @@ func TestDeleteOperation(t *testing.T) {
},
}
_, err := mo.delete(context.Background(), req)
_, err := mo.delete(t.Context(), req)
require.Error(t, err)
})
}
@ -572,7 +573,7 @@ func TestListOperation(t *testing.T) {
Data: []byte("{\"maxResults\": 10}"),
}
_, err := mo.list(context.Background(), req)
_, err := mo.list(t.Context(), req)
require.NoError(t, err)
})
@ -580,7 +581,7 @@ func TestListOperation(t *testing.T) {
mo := &HuaweiOBS{
service: &MockHuaweiOBSService{
ListObjectsFn: func(ctx context.Context, input *obs.ListObjectsInput) (output *obs.ListObjectsOutput, err error) {
return nil, fmt.Errorf("error while listing objects")
return nil, errors.New("error while listing objects")
},
},
logger: logger.NewLogger("test"),
@ -597,7 +598,7 @@ func TestListOperation(t *testing.T) {
Data: []byte("{\"maxResults\": 10}"),
}
_, err := mo.list(context.Background(), req)
_, err := mo.list(t.Context(), req)
require.Error(t, err)
})
@ -626,7 +627,7 @@ func TestListOperation(t *testing.T) {
Data: []byte("{\"key\": \"value\"}"),
}
_, err := mo.list(context.Background(), req)
_, err := mo.list(t.Context(), req)
require.NoError(t, err)
})
}
@ -653,7 +654,7 @@ func TestInvoke(t *testing.T) {
Operation: "create",
}
_, err := mo.Invoke(context.Background(), req)
_, err := mo.Invoke(t.Context(), req)
require.NoError(t, err)
})
@ -685,7 +686,7 @@ func TestInvoke(t *testing.T) {
},
}
_, err := mo.Invoke(context.Background(), req)
_, err := mo.Invoke(t.Context(), req)
require.NoError(t, err)
})
@ -713,7 +714,7 @@ func TestInvoke(t *testing.T) {
},
}
_, err := mo.Invoke(context.Background(), req)
_, err := mo.Invoke(t.Context(), req)
require.NoError(t, err)
})
@ -742,7 +743,7 @@ func TestInvoke(t *testing.T) {
Data: []byte("{\"maxResults\": 10}"),
}
_, err := mo.Invoke(context.Background(), req)
_, err := mo.Invoke(t.Context(), req)
require.NoError(t, err)
})
@ -759,7 +760,7 @@ func TestInvoke(t *testing.T) {
Operation: "unknown",
}
_, err := mo.Invoke(context.Background(), req)
_, err := mo.Invoke(t.Context(), req)
require.Error(t, err)
})
}

View File

@ -167,7 +167,7 @@ func (mr *MockQueryAPIMockRecorder) QueryRawWithParams(arg0, arg1, arg2, arg3 in
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "QueryRawWithParams", reflect.TypeOf((*MockQueryAPI)(nil).QueryRaw), arg0, arg1, arg2, arg3)
}
// QueryWithParams executes flux parametrized query on the InfluxDB server and returns QueryTableResult which parses streamed response into structures representing flux table parts
func (m *MockQueryAPI) QueryWithParams(ctx context.Context, query string, params interface{}) (*api.QueryTableResult, error) {
m.ctrl.T.Helper()
@ -181,4 +181,4 @@ func (m *MockQueryAPI) QueryWithParams(ctx context.Context, query string, params
func (mr *MockQueryAPIMockRecorder) QueryWithParams(arg0, arg1, arg2 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "QueryWithParams", reflect.TypeOf((*MockQueryAPI)(nil).QueryWithParams), arg0, arg1, arg2)
}
}

View File

@ -14,7 +14,6 @@ limitations under the License.
package influx
import (
"context"
"testing"
"github.com/golang/mock/gomock"
@ -55,7 +54,7 @@ func TestInflux_Init(t *testing.T) {
assert.Nil(t, influx.client)
m := bindings.Metadata{Base: metadata.Base{Properties: map[string]string{"Url": "a", "Token": "a", "Org": "a", "Bucket": "a"}}}
err := influx.Init(context.Background(), m)
err := influx.Init(t.Context(), m)
require.NoError(t, err)
assert.NotNil(t, influx.queryAPI)
@ -90,12 +89,12 @@ func TestInflux_Invoke_BindingCreateOperation(t *testing.T) {
defer ctrl.Finish()
w := NewMockWriteAPIBlocking(ctrl)
w.EXPECT().WriteRecord(gomock.Eq(context.TODO()), gomock.Eq("a,a a")).Return(nil)
w.EXPECT().WriteRecord(gomock.Eq(t.Context()), gomock.Eq("a,a a")).Return(nil)
influx := &Influx{
writeAPI: w,
}
for _, test := range tests {
resp, err := influx.Invoke(context.TODO(), test.request)
resp, err := influx.Invoke(t.Context(), test.request)
assert.Equal(t, test.want.resp, resp)
assert.Equal(t, test.want.err, err)
}
@ -117,7 +116,7 @@ func TestInflux_Invoke_BindingInvalidOperation(t *testing.T) {
}
for _, test := range tests {
resp, err := (*Influx)(nil).Invoke(context.TODO(), test.request)
resp, err := (*Influx)(nil).Invoke(t.Context(), test.request)
assert.Equal(t, test.want.resp, resp)
assert.Equal(t, test.want.err, err)
}
@ -153,13 +152,13 @@ func TestInflux_Invoke_BindingQueryOperation(t *testing.T) {
defer ctrl.Finish()
q := NewMockQueryAPI(ctrl)
q.EXPECT().QueryRaw(gomock.Eq(context.TODO()), gomock.Eq("a"), gomock.Eq(influxdb2.DefaultDialect())).Return("ok", nil)
q.EXPECT().QueryRaw(gomock.Eq(t.Context()), gomock.Eq("a"), gomock.Eq(influxdb2.DefaultDialect())).Return("ok", nil)
influx := &Influx{
queryAPI: q,
logger: logger.NewLogger("test"),
}
for _, test := range tests {
resp, err := influx.Invoke(context.TODO(), test.request)
resp, err := influx.Invoke(t.Context(), test.request)
assert.Equal(t, test.want.resp, resp)
assert.Equal(t, test.want.err, err)
}

View File

@ -15,7 +15,7 @@ package bindings
import (
"context"
"fmt"
"errors"
"io"
"github.com/dapr/components-contrib/health"
@ -43,6 +43,6 @@ func PingInpBinding(ctx context.Context, inputBinding InputBinding) error {
if inputBindingWithPing, ok := inputBinding.(health.Pinger); ok {
return inputBindingWithPing.Ping(ctx)
} else {
return fmt.Errorf("ping is not implemented by this input binding")
return errors.New("ping is not implemented by this input binding")
}
}

View File

@ -100,29 +100,26 @@ func (b *Binding) Read(ctx context.Context, handler bindings.Handler) error {
return nil
}
handlerConfig := kafka.SubscriptionHandlerConfig{
IsBulkSubscribe: false,
Handler: adaptHandler(handler),
}
for _, t := range b.topics {
b.kafka.AddTopicHandler(t, handlerConfig)
}
ctx, cancel := context.WithCancel(ctx)
b.wg.Add(1)
go func() {
defer b.wg.Done()
// Wait for context cancelation or closure.
select {
case <-ctx.Done():
case <-b.closeCh:
}
// Remove the topic handlers.
for _, t := range b.topics {
b.kafka.RemoveTopicHandler(t)
}
cancel()
b.wg.Done()
}()
return b.kafka.Subscribe(ctx)
handlerConfig := kafka.SubscriptionHandlerConfig{
IsBulkSubscribe: false,
Handler: adaptHandler(handler),
}
b.kafka.Subscribe(ctx, handlerConfig, b.topics...)
return nil
}
func adaptHandler(handler bindings.Handler) kafka.EventHandler {

View File

@ -14,6 +14,67 @@ binding:
operations:
- name: create
description: "Publish a new message in the topic."
# This auth profile has duplicate fields intentionally as we maintain backwards compatibility,
# but also move Kafka to utilize the normalized AWS fields in the builtin auth profiles.
# TODO: rm the duplicate aws prefixed fields in Dapr 1.17.
builtinAuthenticationProfiles:
- name: "aws"
metadata:
- name: authType
type: string
required: true
description: |
Authentication type.
This must be set to "awsiam" for this authentication profile.
example: '"awsiam"'
allowedValues:
- "awsiam"
- name: awsAccessKey
type: string
required: false
description: |
This maintains backwards compatibility with existing fields.
It will be deprecated as of Dapr 1.17. Use 'accessKey' instead.
If both fields are set, then 'accessKey' value will be used.
AWS access key associated with an IAM account.
example: '"AKIAIOSFODNN7EXAMPLE"'
- name: awsSecretKey
type: string
required: false
sensitive: true
description: |
This maintains backwards compatibility with existing fields.
It will be deprecated as of Dapr 1.17. Use 'secretKey' instead.
If both fields are set, then 'secretKey' value will be used.
The secret key associated with the access key.
example: '"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'
- name: awsSessionToken
type: string
sensitive: true
description: |
This maintains backwards compatibility with existing fields.
It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead.
If both fields are set, then 'sessionToken' value will be used.
AWS session token to use. A session token is only required if you are using temporary security credentials.
example: '"TOKEN"'
- name: awsIamRoleArn
type: string
required: false
description: |
This maintains backwards compatibility with existing fields.
It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead.
If both fields are set, then 'assumeRoleArn' value will be used.
IAM role that has access to MSK. This is another option to authenticate with MSK aside from the AWS Credentials.
example: '"arn:aws:iam::123456789:role/mskRole"'
- name: awsStsSessionName
type: string
description: |
This maintains backwards compatibility with existing fields.
It will be deprecated as of Dapr 1.17. Use 'sessionName' instead.
If both fields are set, then 'sessionName' value will be used.
Represents the session name for assuming a role.
example: '"MyAppSession"'
default: '"DaprDefaultSession"'
authenticationProfiles:
- title: "OIDC Authentication"
description: |
@ -139,55 +200,6 @@ authenticationProfiles:
example: '"none"'
allowedValues:
- "none"
- title: "AWS IAM"
description: "Authenticate using AWS IAM credentials or role for AWS MSK"
metadata:
- name: authType
type: string
required: true
description: |
Authentication type.
This must be set to "awsiam" for this authentication profile.
example: '"awsiam"'
allowedValues:
- "awsiam"
- name: awsRegion
type: string
required: true
description: |
The AWS Region where the MSK Kafka broker is deployed to.
example: '"us-east-1"'
- name: awsAccessKey
type: string
required: true
description: |
AWS access key associated with an IAM account.
example: '"AKIAIOSFODNN7EXAMPLE"'
- name: awsSecretKey
type: string
required: true
sensitive: true
description: |
The secret key associated with the access key.
example: '"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'
- name: awsSessionToken
type: string
sensitive: true
description: |
AWS session token to use. A session token is only required if you are using\ntemporary security credentials.
example: '"TOKEN"'
- name: awsIamRoleArn
type: string
required: true
description: |
IAM role that has access to MSK. This is another option to authenticate with MSK aside from the AWS Credentials.
example: '"arn:aws:iam::123456789:role/mskRole"'
- name: awsStsSessionName
type: string
description: |
Represents the session name for assuming a role.
example: '"MyAppSession"'
default: '"MSKSASLDefaultSession"'
metadata:
- name: topics
type: string
@ -222,6 +234,18 @@ metadata:
example: '"group1"'
binding:
input: true
- name: clientConnectionTopicMetadataRefreshInterval
type: duration
description: |
The interval at which the client connection's topic metadata is refreshed with the broker, expressed as a Go duration.
example: '4m'
default: '9m'
- name: clientConnectionKeepAliveInterval
type: duration
description: |
The max amount of time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely.
example: '4m'
default: '0'
- name: clientID
type: string
description: |
@ -258,6 +282,18 @@ metadata:
Disables consumer retry by setting this to "false".
example: '"true"'
default: '"false"'
- name: heartbeatInterval
type: duration
description: |
The interval between heartbeats to the consumer coordinator.
example: '"5s"'
default: '"3s"'
- name: sessionTimeout
type: duration
description: |
The maximum time between heartbeats before the consumer is considered inactive and times out.
example: '"20s"'
default: '"10s"'
- name: version
type: string
description: |
@ -307,4 +343,21 @@ metadata:
description: |
The TTL for schema caching when publishing a message with latest schema available.
example: '"5m"'
default: '"5m"'
default: '"5m"'
- name: escapeHeaders
type: bool
required: false
description: |
Enables URL escaping of the message header values.
It allows sending headers with special characters that are usually not allowed in HTTP headers.
example: "true"
default: "false"
- name: compression
type: string
required: false
description: |
Enables message compression.
There are five types of compression available: none, gzip, snappy, lz4, and zstd.
The default is none.
example: '"gzip"'
default: "none"

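The new `compression` option accepts one of five codec names. A sketch of the kind of normalization and validation such a field needs — a hypothetical helper; the real string-to-codec mapping lives in the shared Kafka code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateCompression checks a "compression" metadata value against the
// five codecs the component documents, defaulting to "none".
func validateCompression(v string) (string, error) {
	c := strings.ToLower(strings.TrimSpace(v))
	if c == "" {
		return "none", nil // documented default
	}
	switch c {
	case "none", "gzip", "snappy", "lz4", "zstd":
		return c, nil
	}
	return "", errors.New("unsupported compression: " + v)
}

func main() {
	c, err := validateCompression("GZIP")
	fmt.Println(c, err) // gzip <nil>
}
```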
View File

@ -83,3 +83,7 @@ func (out *kitexOutputBinding) Operations() []bindings.OperationKind {
func (out *kitexOutputBinding) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
return
}
func (out *kitexOutputBinding) Close() error {
return nil
}

View File

@ -14,7 +14,6 @@ limitations under the License.
package kitex
import (
"context"
"testing"
"time"
@ -61,7 +60,7 @@ func TestInvoke(t *testing.T) {
metadataRPCMethodName: MethodName,
}
resp, err := output.Invoke(context.Background(), &bindings.InvokeRequest{
resp, err := output.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: metadata,
Data: reqData,
Operation: bindings.GetOperation,

View File

@ -106,7 +106,7 @@ func Test_kubeMQ_Init(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
kubemq := NewKubeMQ(logger.NewLogger("test"))
err := kubemq.Init(context.Background(), tt.meta)
err := kubemq.Init(t.Context(), tt.meta)
if tt.wantErr {
require.Error(t, err)
} else {
@ -117,10 +117,10 @@ func Test_kubeMQ_Init(t *testing.T) {
}
func Test_kubeMQ_Invoke_Read_Single_Message(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
ctx, cancel := context.WithTimeout(t.Context(), time.Second*5)
defer cancel()
kubemq := NewKubeMQ(logger.NewLogger("test"))
err := kubemq.Init(context.Background(), getDefaultMetadata("test.read.single"))
err := kubemq.Init(t.Context(), getDefaultMetadata("test.read.single"))
require.NoError(t, err)
dataReadCh := make(chan []byte)
invokeRequest := &bindings.InvokeRequest{
@ -142,12 +142,12 @@ func Test_kubeMQ_Invoke_Read_Single_Message(t *testing.T) {
}
func Test_kubeMQ_Invoke_Read_Single_MessageWithHandlerError(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
ctx, cancel := context.WithTimeout(t.Context(), time.Second*10)
defer cancel()
kubemq := NewKubeMQ(logger.NewLogger("test"))
md := getDefaultMetadata("test.read.single.error")
md.Properties["autoAcknowledged"] = "false"
err := kubemq.Init(context.Background(), md)
err := kubemq.Init(t.Context(), md)
require.NoError(t, err)
invokeRequest := &bindings.InvokeRequest{
Data: []byte("test"),
@ -156,7 +156,7 @@ func Test_kubeMQ_Invoke_Read_Single_MessageWithHandlerError(t *testing.T) {
_, err = kubemq.Invoke(ctx, invokeRequest)
require.NoError(t, err)
firstReadCtx, firstReadCancel := context.WithTimeout(context.Background(), time.Second*3)
firstReadCtx, firstReadCancel := context.WithTimeout(t.Context(), time.Second*3)
defer firstReadCancel()
_ = kubemq.Read(firstReadCtx, func(ctx context.Context, req *bindings.ReadResponse) ([]byte, error) {
return nil, fmt.Errorf("handler error")
@ -164,7 +164,7 @@ func Test_kubeMQ_Invoke_Read_Single_MessageWithHandlerError(t *testing.T) {
<-firstReadCtx.Done()
dataReadCh := make(chan []byte)
secondReadCtx, secondReadCancel := context.WithTimeout(context.Background(), time.Second*3)
secondReadCtx, secondReadCancel := context.WithTimeout(t.Context(), time.Second*3)
defer secondReadCancel()
_ = kubemq.Read(secondReadCtx, func(ctx context.Context, req *bindings.ReadResponse) ([]byte, error) {
dataReadCh <- req.Data
@ -179,10 +179,10 @@ func Test_kubeMQ_Invoke_Read_Single_MessageWithHandlerError(t *testing.T) {
}
func Test_kubeMQ_Invoke_Error(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
ctx, cancel := context.WithTimeout(t.Context(), time.Second*5)
defer cancel()
kubemq := NewKubeMQ(logger.NewLogger("test"))
err := kubemq.Init(context.Background(), getDefaultMetadata("***test***"))
err := kubemq.Init(t.Context(), getDefaultMetadata("***test***"))
require.NoError(t, err)
invokeRequest := &bindings.InvokeRequest{

View File

@ -1,7 +1,7 @@
package kubemq
import (
"fmt"
"errors"
"strconv"
"strings"
@ -27,15 +27,15 @@ func parseAddress(address string) (string, int, error) {
var err error
hostPort := strings.Split(address, ":")
if len(hostPort) != 2 {
return "", 0, fmt.Errorf("invalid kubemq address, address format is invalid")
return "", 0, errors.New("invalid kubemq address, address format is invalid")
}
host = hostPort[0]
if len(host) == 0 {
return "", 0, fmt.Errorf("invalid kubemq address, host is empty")
return "", 0, errors.New("invalid kubemq address, host is empty")
}
port, err = strconv.Atoi(hostPort[1])
if err != nil {
return "", 0, fmt.Errorf("invalid kubemq address, port is invalid")
return "", 0, errors.New("invalid kubemq address, port is invalid")
}
return host, port, nil
}
@ -64,19 +64,19 @@ func createOptions(md bindings.Metadata) (*options, error) {
return nil, err
}
} else {
return nil, fmt.Errorf("invalid kubemq address, address is empty")
return nil, errors.New("invalid kubemq address, address is empty")
}
if result.Channel == "" {
return nil, fmt.Errorf("invalid kubemq channel, channel is empty")
return nil, errors.New("invalid kubemq channel, channel is empty")
}
if result.PollMaxItems < 1 {
return nil, fmt.Errorf("invalid kubemq pollMaxItems value, value must be greater than 0")
return nil, errors.New("invalid kubemq pollMaxItems value, value must be greater than 0")
}
if result.PollTimeoutSeconds < 1 {
return nil, fmt.Errorf("invalid kubemq pollTimeoutSeconds value, value must be greater than 0")
return nil, errors.New("invalid kubemq pollTimeoutSeconds value, value must be greater than 0")
}
return result, nil

View File

@ -342,3 +342,7 @@ func (ls *LocalStorage) GetComponentMetadata() (metadataInfo metadata.MetadataMa
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (ls *LocalStorage) Close() error {
return nil
}

View File

@ -95,7 +95,7 @@ func (m *MQTT) getProducer() (mqtt.Client, error) {
}
// mqtt broker allows only one connection at a given time from a clientID.
producerClientID := fmt.Sprintf("%s-producer", m.metadata.ClientID)
producerClientID := m.metadata.ClientID + "-producer"
p, err := m.connect(producerClientID, false)
if err != nil {
return nil, err
@ -170,7 +170,7 @@ func (m *MQTT) Read(ctx context.Context, handler bindings.Handler) error {
m.readHandler = handler
// mqtt broker allows only one connection at a given time from a clientID
consumerClientID := fmt.Sprintf("%s-consumer", m.metadata.ClientID)
consumerClientID := m.metadata.ClientID + "-consumer"
// Establish the connection
// This will also create the subscription in the OnConnect handler
@ -299,7 +299,7 @@ func (m *MQTT) handleMessage() func(client mqtt.Client, mqttMsg mqtt.Message) {
return func(client mqtt.Client, mqttMsg mqtt.Message) {
bo := m.backOff
if m.metadata.BackOffMaxRetries >= 0 {
bo = backoff.WithMaxRetries(bo, uint64(m.metadata.BackOffMaxRetries))
bo = backoff.WithMaxRetries(bo, uint64(m.metadata.BackOffMaxRetries)) //nolint:gosec
}
err := retry.NotifyRecover(

View File

@ -16,7 +16,6 @@ limitations under the License.
package mqtt
import (
"context"
"os"
"testing"
"time"
@ -50,7 +49,7 @@ func getConnectionString() string {
func TestInvokeWithTopic(t *testing.T) {
t.Parallel()
ctx := context.Background()
ctx := t.Context()
url := getConnectionString()
if url == "" {
@ -106,7 +105,7 @@ func TestInvokeWithTopic(t *testing.T) {
}()
// Test invoke with default topic configured for component.
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{Data: dataDefault})
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{Data: dataDefault})
require.NoError(t, err)
m := <-msgCh
@ -116,7 +115,7 @@ func TestInvokeWithTopic(t *testing.T) {
assert.Equal(t, topicDefault, mqttMessage.Topic())
// Test invoke with customized topic.
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{
Data: dataCustomized,
Metadata: map[string]string{
mqttTopic: topicCustomized,

View File

@ -114,7 +114,7 @@ func (m *Mysql) Init(ctx context.Context, md bindings.Metadata) error {
}
if meta.URL == "" {
return fmt.Errorf("missing MySql connection string")
return errors.New("missing MySql connection string")
}
m.db, err = initDB(meta.URL, meta.PemPath)
@ -281,7 +281,7 @@ func initDB(url, pemPath string) (*sql.DB, error) {
ok := rootCertPool.AppendCertsFromPEM(pem)
if !ok {
return nil, fmt.Errorf("failed to append PEM")
return nil, errors.New("failed to append PEM")
}
err = mysql.RegisterTLSConfig("custom", &tls.Config{

View File

@ -14,7 +14,6 @@ limitations under the License.
package mysql
import (
"context"
"encoding/json"
"fmt"
"os"
@ -60,13 +59,13 @@ func TestMysqlIntegration(t *testing.T) {
b := NewMysql(logger.NewLogger("test")).(*Mysql)
m := bindings.Metadata{Base: metadata.Base{Properties: map[string]string{connectionURLKey: url}}}
err := b.Init(context.Background(), m)
err := b.Init(t.Context(), m)
require.NoError(t, err)
defer b.Close()
t.Run("Invoke create table", func(t *testing.T) {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: execOperation,
Metadata: map[string]string{
commandSQLKey: `CREATE TABLE IF NOT EXISTS foo (
@ -81,7 +80,7 @@ func TestMysqlIntegration(t *testing.T) {
})
t.Run("Invoke delete", func(t *testing.T) {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: execOperation,
Metadata: map[string]string{
commandSQLKey: "DELETE FROM foo",
@ -91,8 +90,8 @@ func TestMysqlIntegration(t *testing.T) {
})
t.Run("Invoke insert", func(t *testing.T) {
for i := 0; i < 10; i++ {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
for i := range 10 {
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: execOperation,
Metadata: map[string]string{
commandSQLKey: fmt.Sprintf(
@ -106,8 +105,8 @@ func TestMysqlIntegration(t *testing.T) {
t.Run("Invoke update", func(t *testing.T) {
date := time.Now().Add(time.Hour)
for i := 0; i < 10; i++ {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
for i := range 10 {
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: execOperation,
Metadata: map[string]string{
commandSQLKey: fmt.Sprintf(
@ -122,8 +121,8 @@ func TestMysqlIntegration(t *testing.T) {
t.Run("Invoke update with parameters", func(t *testing.T) {
date := time.Now().Add(2 * time.Hour)
for i := 0; i < 10; i++ {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
for i := range 10 {
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: execOperation,
Metadata: map[string]string{
commandSQLKey: "UPDATE foo SET ts = ? WHERE id = ?",
@ -136,7 +135,7 @@ func TestMysqlIntegration(t *testing.T) {
})
t.Run("Invoke select", func(t *testing.T) {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: queryOperation,
Metadata: map[string]string{
commandSQLKey: "SELECT * FROM foo WHERE id < 3",
@ -167,7 +166,7 @@ func TestMysqlIntegration(t *testing.T) {
})
t.Run("Invoke select with parameters", func(t *testing.T) {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: queryOperation,
Metadata: map[string]string{
commandSQLKey: "SELECT * FROM foo WHERE id = ?",
@ -190,7 +189,7 @@ func TestMysqlIntegration(t *testing.T) {
})
t.Run("Invoke drop", func(t *testing.T) {
res, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
res, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: execOperation,
Metadata: map[string]string{
commandSQLKey: "DROP TABLE foo",
@ -200,7 +199,7 @@ func TestMysqlIntegration(t *testing.T) {
})
t.Run("Invoke close", func(t *testing.T) {
_, err := b.Invoke(context.Background(), &bindings.InvokeRequest{
_, err := b.Invoke(t.Context(), &bindings.InvokeRequest{
Operation: closeOperation,
})
require.NoError(t, err)

View File

@ -14,7 +14,6 @@ limitations under the License.
package mysql
import (
"context"
"encoding/json"
"errors"
"testing"
@ -39,7 +38,7 @@ func TestQuery(t *testing.T) {
AddRow(3, "value-3", time.Now().Add(2000))
mock.ExpectQuery("SELECT \\* FROM foo WHERE id < 4").WillReturnRows(rows)
ret, err := m.query(context.Background(), `SELECT * FROM foo WHERE id < 4`)
ret, err := m.query(t.Context(), `SELECT * FROM foo WHERE id < 4`)
require.NoError(t, err)
t.Logf("query result: %s", ret)
assert.Contains(t, string(ret), "\"id\":1")
@ -58,7 +57,7 @@ func TestQuery(t *testing.T) {
AddRow(2, 2.2, time.Now().Add(1000)).
AddRow(3, 3.3, time.Now().Add(2000))
mock.ExpectQuery("SELECT \\* FROM foo WHERE id < 4").WillReturnRows(rows)
ret, err := m.query(context.Background(), "SELECT * FROM foo WHERE id < 4")
ret, err := m.query(t.Context(), "SELECT * FROM foo WHERE id < 4")
require.NoError(t, err)
t.Logf("query result: %s", ret)
@ -85,7 +84,7 @@ func TestExec(t *testing.T) {
m, mock, _ := mockDatabase(t)
defer m.Close()
mock.ExpectExec("INSERT INTO foo \\(id, v1, ts\\) VALUES \\(.*\\)").WillReturnResult(sqlmock.NewResult(1, 1))
i, err := m.exec(context.Background(), "INSERT INTO foo (id, v1, ts) VALUES (1, 'test-1', '2021-01-22')")
i, err := m.exec(t.Context(), "INSERT INTO foo (id, v1, ts) VALUES (1, 'test-1', '2021-01-22')")
assert.Equal(t, int64(1), i)
require.NoError(t, err)
}
@ -102,7 +101,7 @@ func TestInvoke(t *testing.T) {
Metadata: metadata,
Operation: execOperation,
}
resp, err := m.Invoke(context.Background(), req)
resp, err := m.Invoke(t.Context(), req)
require.NoError(t, err)
assert.Equal(t, "1", resp.Metadata[respRowsAffectedKey])
})
@ -115,7 +114,7 @@ func TestInvoke(t *testing.T) {
Metadata: metadata,
Operation: execOperation,
}
resp, err := m.Invoke(context.Background(), req)
resp, err := m.Invoke(t.Context(), req)
assert.Nil(t, resp)
require.Error(t, err)
})
@ -133,7 +132,7 @@ func TestInvoke(t *testing.T) {
Metadata: metadata,
Operation: queryOperation,
}
resp, err := m.Invoke(context.Background(), req)
resp, err := m.Invoke(t.Context(), req)
require.NoError(t, err)
var data []any
err = json.Unmarshal(resp.Data, &data)
@ -149,7 +148,7 @@ func TestInvoke(t *testing.T) {
Metadata: metadata,
Operation: queryOperation,
}
resp, err := m.Invoke(context.Background(), req)
resp, err := m.Invoke(t.Context(), req)
assert.Nil(t, resp)
require.Error(t, err)
})
@ -159,7 +158,7 @@ func TestInvoke(t *testing.T) {
req := &bindings.InvokeRequest{
Operation: closeOperation,
}
resp, _ := m.Invoke(context.Background(), req)
resp, _ := m.Invoke(t.Context(), req)
assert.Nil(t, resp)
})
@ -169,7 +168,7 @@ func TestInvoke(t *testing.T) {
Metadata: map[string]string{},
Operation: "unsupported",
}
resp, err := m.Invoke(context.Background(), req)
resp, err := m.Invoke(t.Context(), req)
assert.Nil(t, resp)
require.Error(t, err)
})

View File

@ -15,7 +15,8 @@ package bindings
import (
"context"
"fmt"
"errors"
"io"
"github.com/dapr/components-contrib/health"
"github.com/dapr/components-contrib/metadata"
@ -28,6 +29,7 @@ type OutputBinding interface {
Init(ctx context.Context, metadata Metadata) error
Invoke(ctx context.Context, req *InvokeRequest) (*InvokeResponse, error)
Operations() []OperationKind
io.Closer
}
func PingOutBinding(ctx context.Context, outputBinding OutputBinding) error {
@ -35,6 +37,6 @@ func PingOutBinding(ctx context.Context, outputBinding OutputBinding) error {
if outputBindingWithPing, ok := outputBinding.(health.Pinger); ok {
return outputBindingWithPing.Ping(ctx)
} else {
return fmt.Errorf("ping is not implemented by this output binding")
return errors.New("ping is not implemented by this output binding")
}
}
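The `fmt.Errorf` → `errors.New` substitutions throughout this changeset follow standard Go guidance: use `errors.New` for constant messages, and reserve `fmt.Errorf` for messages that interpolate values or wrap an underlying error with `%w`. A minimal sketch of the pattern (the helper is illustrative, not code from this repository):

```go
package main

import (
	"errors"
	"fmt"
)

// A constant message: errors.New is the idiomatic choice.
var errInvalidAddress = errors.New("invalid kubemq address, address format is invalid")

// parsePort is a hypothetical helper showing where fmt.Errorf still belongs:
// when the message formats a value or wraps another error.
func parsePort(raw string) (int, error) {
	var port int
	if _, err := fmt.Sscanf(raw, "%d", &port); err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", raw, err)
	}
	return port, nil
}

func main() {
	fmt.Println(errInvalidAddress)
	if _, err := parsePort("abc"); err != nil {
		fmt.Println(err)
	}
}
```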

View File

@ -14,29 +14,49 @@ limitations under the License.
package postgres
import (
"errors"
"time"
"github.com/dapr/components-contrib/common/authentication/aws"
pgauth "github.com/dapr/components-contrib/common/authentication/postgresql"
kitmd "github.com/dapr/kit/metadata"
)
const (
defaultTimeout = 20 * time.Second // Default timeout for network requests
)
type psqlMetadata struct {
pgauth.PostgresAuthMetadata `mapstructure:",squash"`
aws.DeprecatedPostgresIAM `mapstructure:",squash"`
Timeout time.Duration `mapstructure:"timeout" mapstructurealiases:"timeoutInSeconds"`
}
func (m *psqlMetadata) InitWithMetadata(meta map[string]string) error {
// Reset the object
m.PostgresAuthMetadata.Reset()
m.Timeout = defaultTimeout
err := kitmd.DecodeMetadata(meta, &m)
if err != nil {
return err
}
opts := pgauth.InitWithMetadataOpts{
AzureADEnabled: true,
AWSIAMEnabled: true,
}
// Validate and sanitize input
// Azure AD auth is supported for this component
err = m.PostgresAuthMetadata.InitWithMetadata(meta, true)
// Azure AD & AWS IAM auth is supported for this component
err = m.PostgresAuthMetadata.InitWithMetadata(meta, opts)
if err != nil {
return err
}
if m.Timeout < 1*time.Second {
return errors.New("invalid value for 'timeout': must be at least 1s")
}
return nil
}

View File

@ -38,6 +38,41 @@ builtinAuthenticationProfiles:
example: |
"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=dapr_test sslmode=require"
type: string
- name: "aws"
metadata:
- name: useAWSIAM
required: true
type: bool
example: '"true"'
description: |
Must be set to `true` to enable the component to retrieve access tokens from AWS IAM.
This authentication method only works with AWS Relational Database Service for PostgreSQL databases.
- name: connectionString
required: true
sensitive: true
description: |
The connection string for the PostgreSQL database.
This must contain the user, corresponding to the name of the user created inside PostgreSQL that maps to the AWS IAM policy, and must not contain a password. Note that with AWS the database name field is denoted by dbname.
example: |
"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=dapr_test sslmode=require"
type: string
- name: awsAccessKey
type: string
required: false
description: |
Deprecated as of Dapr 1.17. Use 'accessKey' instead if using AWS IAM.
If both fields are set, the 'accessKey' value will be used.
AWS access key associated with an IAM account.
example: '"AKIAIOSFODNN7EXAMPLE"'
- name: awsSecretKey
type: string
required: false
sensitive: true
description: |
Deprecated as of Dapr 1.17. Use 'secretKey' instead if using AWS IAM.
If both fields are set, the 'secretKey' value will be used.
The secret key associated with the access key.
example: '"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'
authenticationProfiles:
- title: "Connection string"
description: "Authenticate using a Connection String"
@ -54,6 +89,12 @@ authenticationProfiles:
or "postgres://dapr:secret@dapr.example.com:5432/dapr?sslmode=verify-ca"
type: string
metadata:
- name: timeout
required: false
description: Timeout for all database operations.
example: "30s"
default: "20s"
type: duration
- name: maxConns
required: false
description: |
@ -82,4 +123,39 @@ metadata:
- "exec"
- "simple_protocol"
example: "cache_describe"
default: ""
default: ""
- name: host
required: false
description: The host of the PostgreSQL database
example: "localhost"
type: string
- name: hostaddr
required: false
description: The host address of the PostgreSQL database
example: "127.0.0.1"
type: string
- name: port
required: false
description: The port of the PostgreSQL database
example: "5432"
type: string
- name: database
required: false
description: The name of the PostgreSQL database to connect to
example: "postgres"
type: string
- name: user
required: false
description: The user of the PostgreSQL database
example: "postgres"
type: string
- name: password
required: false
description: The password of the PostgreSQL database
example: "password"
type: string
- name: sslRootCert
required: false
description: The path to the SSL root certificate file
example: "/path/to/ssl/root/cert.pem"
type: string
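Assuming the metadata above is consumed as ordinary Dapr component YAML, an AWS IAM–authenticated PostgreSQL binding using the new fields might be sketched as follows (host, user, and component names are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: postgres-binding   # hypothetical component name
spec:
  type: bindings.postgresql
  version: v1
  metadata:
  - name: useAWSIAM
    value: "true"
  # Per the description above: the user maps to the AWS IAM policy,
  # no password is present, and the database name field is dbname.
  - name: connectionString
    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=dapr_test sslmode=require"
  - name: timeout
    value: "30s"           # defaults to 20s; must be at least 1s
                           # (per the tests above, a bare number such as "42" is read as seconds)
```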

View File

@ -0,0 +1,88 @@
/*
Copyright 2023 The Dapr Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package postgres
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestMetadata(t *testing.T) {
t.Run("missing connection string", func(t *testing.T) {
m := psqlMetadata{}
props := map[string]string{}
err := m.InitWithMetadata(props)
require.Error(t, err)
require.ErrorContains(t, err, "connection string")
})
t.Run("has connection string", func(t *testing.T) {
m := psqlMetadata{}
props := map[string]string{
"connectionString": "foo=bar",
}
err := m.InitWithMetadata(props)
require.NoError(t, err)
})
t.Run("default timeout", func(t *testing.T) {
m := psqlMetadata{}
props := map[string]string{
"connectionString": "foo=bar",
}
err := m.InitWithMetadata(props)
require.NoError(t, err)
assert.Equal(t, 20*time.Second, m.Timeout)
})
t.Run("invalid timeout", func(t *testing.T) {
m := psqlMetadata{}
props := map[string]string{
"connectionString": "foo=bar",
"timeout": "NaN",
}
err := m.InitWithMetadata(props)
require.Error(t, err)
})
t.Run("positive timeout", func(t *testing.T) {
m := psqlMetadata{}
props := map[string]string{
"connectionString": "foo=bar",
"timeout": "42",
}
err := m.InitWithMetadata(props)
require.NoError(t, err)
assert.Equal(t, 42*time.Second, m.Timeout)
})
t.Run("zero timeout", func(t *testing.T) {
m := psqlMetadata{}
props := map[string]string{
"connectionString": "foo=bar",
"timeout": "0",
}
err := m.InitWithMetadata(props)
require.Error(t, err)
})
}

View File

@ -26,6 +26,8 @@ import (
"github.com/jackc/pgx/v5/pgxpool"
"github.com/dapr/components-contrib/bindings"
awsAuth "github.com/dapr/components-contrib/common/authentication/aws"
pgauth "github.com/dapr/components-contrib/common/authentication/postgresql"
"github.com/dapr/components-contrib/metadata"
"github.com/dapr/kit/logger"
)
@ -45,6 +47,11 @@ type Postgres struct {
logger logger.Logger
db *pgxpool.Pool
closed atomic.Bool
enableAzureAD bool
enableAWSIAM bool
awsAuthProvider awsAuth.Provider
}
// NewPostgres returns a new PostgreSQL output binding.
@ -59,25 +66,52 @@ func (p *Postgres) Init(ctx context.Context, meta bindings.Metadata) error {
if p.closed.Load() {
return errors.New("cannot initialize a previously-closed component")
}
opts := pgauth.InitWithMetadataOpts{
AzureADEnabled: p.enableAzureAD,
AWSIAMEnabled: p.enableAWSIAM,
}
m := psqlMetadata{}
err := m.InitWithMetadata(meta.Properties)
if err := m.InitWithMetadata(meta.Properties); err != nil {
return err
}
var err error
poolConfig, err := m.GetPgxPoolConfig()
if err != nil {
return err
}
poolConfig, err := m.GetPgxPoolConfig()
if err != nil {
return fmt.Errorf("error opening DB connection: %w", err)
if opts.AWSIAMEnabled && m.UseAWSIAM {
opts, validateErr := m.BuildAwsIamOptions(p.logger, meta.Properties)
if validateErr != nil {
return fmt.Errorf("failed to validate AWS IAM authentication fields: %w", validateErr)
}
var provider awsAuth.Provider
provider, err = awsAuth.NewProvider(ctx, *opts, awsAuth.GetConfig(*opts))
if err != nil {
return err
}
p.awsAuthProvider = provider
p.awsAuthProvider.UpdatePostgres(ctx, poolConfig)
}
// This context doesn't control the lifetime of the connection pool, and is
// only scoped to postgres creating resources at init.
p.db, err = pgxpool.NewWithConfig(ctx, poolConfig)
connCtx, connCancel := context.WithTimeout(ctx, m.Timeout)
defer connCancel()
p.db, err = pgxpool.NewWithConfig(connCtx, poolConfig)
if err != nil {
return fmt.Errorf("unable to connect to the DB: %w", err)
}
pingCtx, pingCancel := context.WithTimeout(ctx, m.Timeout)
defer pingCancel()
err = p.db.Ping(pingCtx)
if err != nil {
return fmt.Errorf("failed to ping the DB: %w", err)
}
return nil
}
@ -177,7 +211,11 @@ func (p *Postgres) Close() error {
}
p.db = nil
return nil
errs := make([]error, 1)
if p.awsAuthProvider != nil {
errs[0] = p.awsAuthProvider.Close()
}
return errors.Join(errs...)
}
func (p *Postgres) query(ctx context.Context, sql string, args ...any) (result []byte, err error) {

View File

@ -14,7 +14,7 @@ limitations under the License.
package postgres
import (
"context"
"errors"
"fmt"
"os"
"testing"
@ -62,10 +62,14 @@ func TestPostgresIntegration(t *testing.T) {
t.SkipNow()
}
t.Run("Test init configurations", func(t *testing.T) {
testInitConfiguration(t, url)
})
// live DB test
b := NewPostgres(logger.NewLogger("test")).(*Postgres)
m := bindings.Metadata{Base: metadata.Base{Properties: map[string]string{"connectionString": url}}}
if err := b.Init(context.Background(), m); err != nil {
if err := b.Init(t.Context(), m); err != nil {
t.Fatal(err)
}
@ -74,7 +78,7 @@ func TestPostgresIntegration(t *testing.T) {
Operation: execOperation,
Metadata: map[string]string{commandSQLKey: testTableDDL},
}
ctx := context.TODO()
ctx := t.Context()
t.Run("Invoke create table", func(t *testing.T) {
res, err := b.Invoke(ctx, req)
assertResponse(t, res, err)
@ -87,7 +91,7 @@ func TestPostgresIntegration(t *testing.T) {
})
t.Run("Invoke insert", func(t *testing.T) {
for i := 0; i < 10; i++ {
for i := range 10 {
req.Metadata[commandSQLKey] = fmt.Sprintf(testInsert, i, i, time.Now().Format(time.RFC3339))
res, err := b.Invoke(ctx, req)
assertResponse(t, res, err)
@ -95,7 +99,7 @@ func TestPostgresIntegration(t *testing.T) {
})
t.Run("Invoke update", func(t *testing.T) {
for i := 0; i < 10; i++ {
for i := range 10 {
req.Metadata[commandSQLKey] = fmt.Sprintf(testUpdate, time.Now().Format(time.RFC3339), i)
res, err := b.Invoke(ctx, req)
assertResponse(t, res, err)
@ -131,6 +135,46 @@ func TestPostgresIntegration(t *testing.T) {
})
}
// testInitConfiguration tests valid and invalid config settings.
func testInitConfiguration(t *testing.T, connectionString string) {
logger := logger.NewLogger("test")
tests := []struct {
name string
props map[string]string
expectedErr error
}{
{
name: "Empty",
props: map[string]string{},
expectedErr: errors.New("missing connection string"),
},
{
name: "Valid connection string",
props: map[string]string{"connectionString": connectionString},
expectedErr: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p := NewPostgres(logger).(*Postgres)
defer p.Close()
metadata := bindings.Metadata{
Base: metadata.Base{Properties: tt.props},
}
err := p.Init(t.Context(), metadata)
if tt.expectedErr == nil {
require.NoError(t, err)
} else {
require.Error(t, err)
assert.Equal(t, tt.expectedErr, err)
}
})
}
}
func assertResponse(t *testing.T, res *bindings.InvokeResponse, err error) {
require.NoError(t, err)
assert.NotNil(t, res)

View File

@ -104,7 +104,7 @@ func (p *Postmark) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bi
email.From = req.Metadata["emailFrom"]
}
if len(email.From) == 0 {
return nil, fmt.Errorf("error Postmark from email not supplied")
return nil, errors.New("error Postmark from email not supplied")
}
// Build email to address, this is required
@ -115,7 +115,7 @@ func (p *Postmark) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bi
email.To = req.Metadata["emailTo"]
}
if len(email.To) == 0 {
return nil, fmt.Errorf("error Postmark to email not supplied")
return nil, errors.New("error Postmark to email not supplied")
}
// Build email subject, this is required
@ -126,7 +126,7 @@ func (p *Postmark) Invoke(ctx context.Context, req *bindings.InvokeRequest) (*bi
email.Subject = req.Metadata["subject"]
}
if len(email.Subject) == 0 {
return nil, fmt.Errorf("error Postmark subject not supplied")
return nil, errors.New("error Postmark subject not supplied")
}
// Build email cc address, this is optional
@ -167,3 +167,7 @@ func (p *Postmark) GetComponentMetadata() (metadataInfo metadata.MetadataMap) {
metadata.GetMetadataInfoFromStructType(reflect.TypeOf(metadataStruct), &metadataInfo, metadata.BindingType)
return
}
func (p *Postmark) Close() error {
return nil
}

View File

@ -34,7 +34,7 @@ import (
"github.com/dapr/components-contrib/metadata"
"github.com/dapr/kit/logger"
kitmd "github.com/dapr/kit/metadata"
"github.com/dapr/kit/utils"
"github.com/dapr/kit/strings"
)
const (
@ -313,7 +313,7 @@ func (r *RabbitMQ) parseMetadata(meta bindings.Metadata) error {
}
if val, ok := meta.Properties[externalSasl]; ok && val != "" {
m.ExternalSasl = utils.IsTruthy(val)
m.ExternalSasl = strings.IsTruthy(val)
}
if val, ok := meta.Properties[caCert]; ok && val != "" {
@ -336,7 +336,7 @@ func (r *RabbitMQ) parseMetadata(meta bindings.Metadata) error {
}
if val, ok := meta.Properties[externalSasl]; ok && val != "" {
m.ExternalSasl = utils.IsTruthy(val)
m.ExternalSasl = strings.IsTruthy(val)
}
ttl, ok, err := metadata.TryGetTTL(meta.Properties)

View File

@ -86,7 +86,7 @@ func TestQueuesWithTTL(t *testing.T) {
logger := logger.NewLogger("test")
r := NewRabbitMQ(logger).(*RabbitMQ)
err := r.Init(context.Background(), metadata)
err := r.Init(t.Context(), metadata)
require.NoError(t, err)
// Assert that if waited too long, we won't see any message
@ -99,7 +99,7 @@ func TestQueuesWithTTL(t *testing.T) {
defer ch.Close()
const tooLateMsgContent = "too_late_msg"
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{Data: []byte(tooLateMsgContent)})
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{Data: []byte(tooLateMsgContent)})
require.NoError(t, err)
time.Sleep(time.Second + (ttlInSeconds * time.Second))
@ -110,7 +110,7 @@ func TestQueuesWithTTL(t *testing.T) {
// Getting before it is expired, should return it
const testMsgContent = "test_msg"
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{Data: []byte(testMsgContent)})
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{Data: []byte(testMsgContent)})
require.NoError(t, err)
msg, ok, err := getMessageWithRetries(ch, queueName, maxGetDuration)
@ -150,14 +150,14 @@ func TestQueuesReconnect(t *testing.T) {
logger := logger.NewLogger("test")
r := NewRabbitMQ(logger).(*RabbitMQ)
err := r.Init(context.Background(), metadata)
err := r.Init(t.Context(), metadata)
require.NoError(t, err)
err = r.Read(context.Background(), handler)
err = r.Read(t.Context(), handler)
require.NoError(t, err)
const tooLateMsgContent = "success_msg1"
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{Data: []byte(tooLateMsgContent)})
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{Data: []byte(tooLateMsgContent)})
require.NoError(t, err)
// perform a close connection with the rabbitmq server
@ -165,7 +165,7 @@ func TestQueuesReconnect(t *testing.T) {
time.Sleep(3 * defaultReconnectWait)
const testMsgContent = "reconnect_msg"
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{Data: []byte(testMsgContent)})
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{Data: []byte(testMsgContent)})
require.NoError(t, err)
time.Sleep(defaultReconnectWait)
@ -199,7 +199,7 @@ func TestPublishingWithTTL(t *testing.T) {
logger := logger.NewLogger("test")
rabbitMQBinding1 := NewRabbitMQ(logger).(*RabbitMQ)
err := rabbitMQBinding1.Init(context.Background(), metadata)
err := rabbitMQBinding1.Init(t.Context(), metadata)
require.NoError(t, err)
// Assert that if waited too long, we won't see any message
@ -219,7 +219,7 @@ func TestPublishingWithTTL(t *testing.T) {
},
}
_, err = rabbitMQBinding1.Invoke(context.Background(), &writeRequest)
_, err = rabbitMQBinding1.Invoke(t.Context(), &writeRequest)
require.NoError(t, err)
time.Sleep(time.Second + (ttlInSeconds * time.Second))
@ -230,7 +230,7 @@ func TestPublishingWithTTL(t *testing.T) {
// Getting before it is expired, should return it
rabbitMQBinding2 := NewRabbitMQ(logger).(*RabbitMQ)
err = rabbitMQBinding2.Init(context.Background(), metadata)
err = rabbitMQBinding2.Init(t.Context(), metadata)
require.NoError(t, err)
const testMsgContent = "test_msg"
@ -240,7 +240,7 @@ func TestPublishingWithTTL(t *testing.T) {
contribMetadata.TTLMetadataKey: strconv.Itoa(ttlInSeconds * 1000),
},
}
_, err = rabbitMQBinding2.Invoke(context.Background(), &writeRequest)
_, err = rabbitMQBinding2.Invoke(t.Context(), &writeRequest)
require.NoError(t, err)
msg, ok, err := getMessageWithRetries(ch, queueName, maxGetDuration)
@ -280,7 +280,7 @@ func TestExclusiveQueue(t *testing.T) {
logger := logger.NewLogger("test")
r := NewRabbitMQ(logger).(*RabbitMQ)
err := r.Init(context.Background(), metadata)
err := r.Init(t.Context(), metadata)
require.NoError(t, err)
// Assert that if waited too long, we won't see any message
@ -334,7 +334,7 @@ func TestPublishWithPriority(t *testing.T) {
logger := logger.NewLogger("test")
r := NewRabbitMQ(logger).(*RabbitMQ)
err := r.Init(context.Background(), metadata)
err := r.Init(t.Context(), metadata)
require.NoError(t, err)
// Assert that if waited too long, we won't see any message
@ -347,7 +347,7 @@ func TestPublishWithPriority(t *testing.T) {
defer ch.Close()
const middlePriorityMsgContent = "middle"
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: map[string]string{
contribMetadata.PriorityMetadataKey: "5",
},
@ -356,7 +356,7 @@ func TestPublishWithPriority(t *testing.T) {
require.NoError(t, err)
const lowPriorityMsgContent = "low"
_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{
_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{
Metadata: map[string]string{
contribMetadata.PriorityMetadataKey: "1",
},
 	require.NoError(t, err)
@@ -365,7 +365,7 @@ func TestPublishWithPriority(t *testing.T) {
 	const highPriorityMsgContent = "high"
-	_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{
+	_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata: map[string]string{
 			contribMetadata.PriorityMetadataKey: "10",
 		},
@@ -416,7 +416,7 @@ func TestPublishWithHeaders(t *testing.T) {
 	logger := logger.NewLogger("test")
 	r := NewRabbitMQ(logger).(*RabbitMQ)
-	err := r.Init(context.Background(), metadata)
+	err := r.Init(t.Context(), metadata)
 	require.NoError(t, err)
 	// Assert that if waited too long, we won't see any message
@@ -429,7 +429,7 @@ func TestPublishWithHeaders(t *testing.T) {
 	defer ch.Close()
 	const msgContent = "some content"
-	_, err = r.Invoke(context.Background(), &bindings.InvokeRequest{
+	_, err = r.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata: map[string]string{
 			"custom_header1": "some value",
 			"custom_header2": "some other value",


@@ -52,10 +52,19 @@ metadata:
     type: bool
     required: false
     description: |
-      If the Redis instance supports TLS with public certificates, can be
-      configured to be enabled or disabled.
+      If the Redis instance supports TLS; can be configured to be enabled or disabled.
     example: "true"
     default: "false"
+  - name: clientCert
+    required: false
+    description: Client certificate for Redis host. No Default. Can be secretKeyRef to use a secret reference
+    example: ""
+    type: string
+  - name: clientKey
+    required: false
+    description: Client key for Redis host. No Default. Can be secretKeyRef to use a secret reference
+    example: ""
+    type: string
   - name: redisMaxRetries
     type: number
     required: false
- name: redisMaxRetries
type: number
required: false
@@ -184,3 +193,18 @@ metadata:
       "-1" disables idle timeout check.
     default: "5m"
     example: "10m"
+builtinAuthenticationProfiles:
+  - name: "azuread"
+    metadata:
+      - name: useEntraID
+        required: false
+        default: "false"
+        example: "true"
+        type: bool
+        description: |
+          If set, enables authentication to Azure Cache for Redis using Microsoft EntraID. The Redis server must explicitly enable EntraID authentication. Note that
+          Azure Cache for Redis also requires the use of TLS, so `enableTLS` should be set. No username or password should be set.
+      - name: enableTLS
+        required: true
+        description: Must be set to true if using EntraID
+        example: "true"
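For reference, a Dapr component using the new `azuread` profile might look like the following. This is a hedged sketch: the component name and host are hypothetical, and only `useEntraID`/`enableTLS` come from the metadata schema above; the rest is standard Dapr component boilerplate.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-redis-binding        # hypothetical name
spec:
  type: bindings.redis
  version: v1
  metadata:
    - name: redisHost
      value: my-cache.redis.cache.windows.net:6380   # hypothetical host
    - name: useEntraID
      value: "true"
    - name: enableTLS           # required when useEntraID is set
      value: "true"
    # no redisPassword: EntraID token auth replaces username/password
```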


@@ -44,7 +44,7 @@ func NewRedis(logger logger.Logger) bindings.OutputBinding {
 // Init performs metadata parsing and connection creation.
 func (r *Redis) Init(ctx context.Context, meta bindings.Metadata) (err error) {
-	r.client, r.clientSettings, err = rediscomponent.ParseClientFromProperties(meta.Properties, metadata.BindingType)
+	r.client, r.clientSettings, err = rediscomponent.ParseClientFromProperties(meta.Properties, metadata.BindingType, ctx, &r.logger)
 	if err != nil {
 		return err
 	}


@@ -14,7 +14,6 @@ limitations under the License.
 package redis
 import (
-	"context"
 	"testing"
 	"time"
@@ -44,10 +43,10 @@ func TestInvokeCreate(t *testing.T) {
 		logger: logger.NewLogger("test"),
 	}
-	_, err := c.DoRead(context.Background(), "GET", testKey)
+	_, err := c.DoRead(t.Context(), "GET", testKey)
 	assert.Equal(t, redis.Nil, err)
-	bindingRes, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	bindingRes, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Data:      []byte(testData),
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.CreateOperation,
@@ -55,9 +54,9 @@ func TestInvokeCreate(t *testing.T) {
 	require.NoError(t, err)
 	assert.Nil(t, bindingRes)
-	getRes, err := c.DoRead(context.Background(), "GET", testKey)
+	getRes, err := c.DoRead(t.Context(), "GET", testKey)
 	require.NoError(t, err)
-	assert.Equal(t, testData, getRes)
+	assert.JSONEq(t, testData, getRes.(string))
 }

 func TestInvokeGetWithoutDeleteFlag(t *testing.T) {
@@ -69,24 +68,24 @@ func TestInvokeGetWithoutDeleteFlag(t *testing.T) {
 		logger: logger.NewLogger("test"),
 	}
-	err := c.DoWrite(context.Background(), "SET", testKey, testData)
+	err := c.DoWrite(t.Context(), "SET", testKey, testData)
 	require.NoError(t, err)
-	bindingRes, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	bindingRes, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.GetOperation,
 	})
 	require.NoError(t, err)
-	assert.Equal(t, testData, string(bindingRes.Data))
+	assert.JSONEq(t, testData, string(bindingRes.Data))
-	bindingResGet, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	bindingResGet, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.GetOperation,
 	})
 	require.NoError(t, err)
-	assert.Equal(t, testData, string(bindingResGet.Data))
+	assert.JSONEq(t, testData, string(bindingResGet.Data))
 }

 func TestInvokeGetWithDeleteFlag(t *testing.T) {
@@ -98,17 +97,17 @@ func TestInvokeGetWithDeleteFlag(t *testing.T) {
 		logger: logger.NewLogger("test"),
 	}
-	err := c.DoWrite(context.Background(), "SET", testKey, testData)
+	err := c.DoWrite(t.Context(), "SET", testKey, testData)
 	require.NoError(t, err)
-	bindingRes, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	bindingRes, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey, "delete": "true"},
 		Operation: bindings.GetOperation,
 	})
 	require.NoError(t, err)
-	assert.Equal(t, testData, string(bindingRes.Data))
+	assert.JSONEq(t, testData, string(bindingRes.Data))
-	bindingResGet, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	bindingResGet, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.GetOperation,
 	})
@@ -127,23 +126,23 @@ func TestInvokeDelete(t *testing.T) {
 		logger: logger.NewLogger("test"),
 	}
-	err := c.DoWrite(context.Background(), "SET", testKey, testData)
+	err := c.DoWrite(t.Context(), "SET", testKey, testData)
 	require.NoError(t, err)
-	getRes, err := c.DoRead(context.Background(), "GET", testKey)
+	getRes, err := c.DoRead(t.Context(), "GET", testKey)
 	require.NoError(t, err)
-	assert.Equal(t, testData, getRes)
+	assert.JSONEq(t, testData, getRes.(string))
-	_, err = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	_, err = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.DeleteOperation,
 	})
 	require.NoError(t, err)
-	rgetRep, err := c.DoRead(context.Background(), "GET", testKey)
+	rgetRep, err := c.DoRead(t.Context(), "GET", testKey)
 	assert.Equal(t, redis.Nil, err)
-	assert.Equal(t, nil, rgetRep)
+	assert.Nil(t, rgetRep)
 }

 func TestCreateExpire(t *testing.T) {
@@ -154,35 +153,35 @@ func TestCreateExpire(t *testing.T) {
 		client: c,
 		logger: logger.NewLogger("test"),
 	}
-	_, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	_, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey, metadata.TTLMetadataKey: "1"},
 		Operation: bindings.CreateOperation,
 		Data:      []byte(testData),
 	})
 	require.NoError(t, err)
-	rgetRep, err := c.DoRead(context.Background(), "TTL", testKey)
+	rgetRep, err := c.DoRead(t.Context(), "TTL", testKey)
 	require.NoError(t, err)
 	assert.Equal(t, int64(1), rgetRep)
-	res, err2 := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	res, err2 := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.GetOperation,
 	})
 	require.NoError(t, err2)
-	assert.Equal(t, res.Data, []byte(testData))
+	assert.JSONEq(t, testData, string(res.Data))
 	// wait for ttl to expire
 	s.FastForward(2 * time.Second)
-	res, err2 = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	res, err2 = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.GetOperation,
 	})
 	require.NoError(t, err2)
 	assert.Equal(t, []byte(nil), res.Data)
-	_, err = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	_, err = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": testKey},
 		Operation: bindings.DeleteOperation,
 	})
@@ -197,30 +196,30 @@ func TestIncrement(t *testing.T) {
 		client: c,
 		logger: logger.NewLogger("test"),
 	}
-	_, err := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	_, err := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": "incKey"},
 		Operation: IncrementOperation,
 	})
 	require.NoError(t, err)
-	res, err2 := bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	res, err2 := bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": "incKey"},
 		Operation: bindings.GetOperation,
 	})
-	assert.Nil(t, nil, err2)
+	require.NoError(t, err2)
 	assert.Equal(t, res.Data, []byte("1"))
-	_, err = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	_, err = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": "incKey", metadata.TTLMetadataKey: "5"},
 		Operation: IncrementOperation,
 	})
 	require.NoError(t, err)
-	rgetRep, err := c.DoRead(context.Background(), "TTL", "incKey")
+	rgetRep, err := c.DoRead(t.Context(), "TTL", "incKey")
 	require.NoError(t, err)
 	assert.Equal(t, int64(5), rgetRep)
-	res, err2 = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	res, err2 = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": "incKey"},
 		Operation: bindings.GetOperation,
 	})
@@ -230,14 +229,14 @@ func TestIncrement(t *testing.T) {
 	// wait for ttl to expire
 	s.FastForward(10 * time.Second)
-	res, err2 = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	res, err2 = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": "incKey"},
 		Operation: bindings.GetOperation,
 	})
 	require.NoError(t, err2)
 	assert.Equal(t, []byte(nil), res.Data)
-	_, err = bind.Invoke(context.TODO(), &bindings.InvokeRequest{
+	_, err = bind.Invoke(t.Context(), &bindings.InvokeRequest{
 		Metadata:  map[string]string{"key": "incKey"},
 		Operation: bindings.DeleteOperation,
 	})

Some files were not shown because too many files have changed in this diff.