From 196cf3df9d1fbfaf6cba7aa45556ce59d6514e91 Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Wed, 9 Apr 2025 13:31:41 -0500 Subject: [PATCH 01/20] Upped stalebot period from 5 to 30 days (#4610) Signed-off-by: Whit Waldo --- .github/workflows/stale-pr-monitor.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/stale-pr-monitor.yml b/.github/workflows/stale-pr-monitor.yml index ee56977ab..f08ac369f 100644 --- a/.github/workflows/stale-pr-monitor.yml +++ b/.github/workflows/stale-pr-monitor.yml @@ -18,4 +18,4 @@ jobs: stale-pr-message: 'Stale PR, paging all reviewers' stale-pr-label: 'stale' exempt-pr-labels: 'question,"help wanted",do-not-merge,waiting-on-code-pr' - days-before-stale: 5 + days-before-stale: 30 From 5435bd43a4d6702a93b2bd4469c66fbd1af0a9bc Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Wed, 9 Apr 2025 16:12:50 -0500 Subject: [PATCH 02/20] Added troubleshooting step to resolve port conflicts during `dapr init` on Windows (#4602) Signed-off-by: Whit Waldo Co-authored-by: Mark Fussell --- .../troubleshooting/common_issues.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/daprdocs/content/en/operations/troubleshooting/common_issues.md b/daprdocs/content/en/operations/troubleshooting/common_issues.md index 40281dd2f..8d6294f6b 100644 --- a/daprdocs/content/en/operations/troubleshooting/common_issues.md +++ b/daprdocs/content/en/operations/troubleshooting/common_issues.md @@ -291,3 +291,21 @@ kubectl config get-users ``` You may learn more about webhooks [here](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/). + +## Ports not available during `dapr init` +You might encounter the following error on Windows after attempting to execute `dapr init`: + +> PS C:\Users\You> dapr init +Making the jump to hyperspace... +Container images will be pulled from Docker Hub +Installing runtime version 1.14.4 +Downloading binaries and setting up components... +docker: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:52379 -> 0.0.0.0:0: listen tcp4 0.0.0.0:52379: bind: An attempt was made to access a socket in a way forbidden by its access permissions. 
+
+To resolve this error, open an elevated command prompt and run:
+
+```bash
+net stop winnat
+dapr init
+net start winnat
+```
\ No newline at end of file

From 4341935c3a1a48bb4ef30e8a6593ea6b3da45e73 Mon Sep 17 00:00:00 2001
From: Fabian Steinbach <63794579+fabistb@users.noreply.github.com>
Date: Thu, 10 Apr 2025 16:45:49 +0200
Subject: [PATCH 03/20] change application insights example from insights key to connection string (#4598)

Signed-off-by: fabistb
Co-authored-by: Whit Waldo
Co-authored-by: Mark Fussell
---
 .../otel-collector/open-telemetry-collector-appinsights.md | 4 ++--
 .../open-telemetry-collector-appinsights.yaml              | 3 +--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/daprdocs/content/en/operations/observability/tracing/otel-collector/open-telemetry-collector-appinsights.md b/daprdocs/content/en/operations/observability/tracing/otel-collector/open-telemetry-collector-appinsights.md
index c851ec8a4..fc104263b 100644
--- a/daprdocs/content/en/operations/observability/tracing/otel-collector/open-telemetry-collector-appinsights.md
+++ b/daprdocs/content/en/operations/observability/tracing/otel-collector/open-telemetry-collector-appinsights.md
@@ -11,7 +11,7 @@ Dapr integrates with [OpenTelemetry (OTEL) Collector](https://github.com/open-te
 ## Prerequisites
 
 - [Install Dapr on Kubernetes]({{< ref kubernetes >}})
-- [Set up an App Insights resource](https://docs.microsoft.com/azure/azure-monitor/app/create-new-resource) and make note of your App Insights instrumentation key.
+- [Set up an App Insights resource](https://docs.microsoft.com/azure/azure-monitor/app/create-new-resource) and make note of your App Insights connection string.
 
 ## Set up OTEL Collector to push to your App Insights instance
 
To push events to your App Insights instance, install the OTEL Collector to your
 
 1. Check out the [`open-telemetry-collector-appinsights.yaml`](/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml) file.
 
-1. Replace the `` placeholder with your App Insights instrumentation key.
+1. Replace the `` placeholder with your App Insights connection string.
 
 1.
Apply the configuration with: diff --git a/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml index 3b2688953..797b62829 100644 --- a/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml +++ b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml @@ -20,8 +20,7 @@ data: debug: verbosity: basic azuremonitor: - endpoint: "https://dc.services.visualstudio.com/v2/track" - instrumentation_key: "" + connection_string: "" # maxbatchsize is the maximum number of items that can be # queued before calling to the configured endpoint maxbatchsize: 100 From 58be5f36cea75b02476d6bde9b6caa218d8dbe00 Mon Sep 17 00:00:00 2001 From: Joey Freeland <30938344+jfreeland@users.noreply.github.com> Date: Thu, 10 Apr 2025 13:54:22 -0400 Subject: [PATCH 04/20] docs: bindings.cron every 15m (#4605) Signed-off-by: Joey Freeland <30938344+jfreeland@users.noreply.github.com> Co-authored-by: Mark Fussell Co-authored-by: Cassie Coyle --- .../reference/components-reference/supported-bindings/cron.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md b/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md index 6a046f781..72daed2dc 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md @@ -50,7 +50,7 @@ The Dapr cron binding supports following formats: For example: * `30 * * * * *` - every 30 seconds -* `0 15 * * * *` - every 15 minutes +* `0 */15 * * * *` - every 15 minutes * `0 30 3-6,20-23 * * *` - every hour on the half hour in the range 3-6am, 8-11pm * `CRON_TZ=America/New_York 0 30 04 * * *` - every day at 4:30am New York time From 04c9b586e01f828a3d2c531396a94a688f0a12de Mon Sep 17 00:00:00 2001 From: thrubovc <34124990+thrubovc@users.noreply.github.com> Date: Sat, 12 Apr 2025 22:39:32 +0200 Subject: [PATCH 05/20] fix: broken link due to typo (#4604) Signed-off-by: thrubovc <34124990+thrubovc@users.noreply.github.com> Co-authored-by: Mark Fussell --- .../en/operations/hosting/kubernetes/kubernetes-production.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md index fa64b4386..3f6addf1f 100644 --- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md @@ -93,7 +93,7 @@ For a new Dapr deployment, HA mode can be set with both: - The [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}), and - [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}) -For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}). +For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enable-high-availability-in-an-existing-dapr-deployment" >}}). 
### Individual service HA Helm configuration @@ -353,4 +353,4 @@ Watch this video for a deep dive into the best practices for running Dapr in pro ## Related links - [Deploy Dapr on Kubernetes]({{< ref kubernetes-deploy.md >}}) -- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}}) \ No newline at end of file +- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}}) From f83e3dc9ead48dab18d054d15942658f9fcc927e Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Sat, 12 Apr 2025 17:52:38 -0500 Subject: [PATCH 06/20] Updated .NET workflow method names (#4586) * Updated .NET workflow method names Signed-off-by: Whit Waldo * Fixed incorrect .NET method name for purging workflow instances Signed-off-by: Whit Waldo --------- Signed-off-by: Whit Waldo Co-authored-by: Mark Fussell --- .../workflow/howto-manage-workflow.md | 20 +++++++++---------- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md index c9e847ebe..b29667a0e 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md @@ -138,31 +138,29 @@ Manage your workflow within your code. In the `OrderProcessingWorkflow` example ```csharp string orderId = "exampleOrderId"; -string workflowComponent = "dapr"; -string workflowName = "OrderProcessingWorkflow"; OrderPayload input = new OrderPayload("Paperclips", 99.95); Dictionary workflowOptions; // This is an optional parameter -// Start the workflow. This returns back a "StartWorkflowResponse" which contains the instance ID for the particular workflow instance. -StartWorkflowResponse startResponse = await daprClient.StartWorkflowAsync(orderId, workflowComponent, workflowName, input, workflowOptions); +// Start the workflow using the orderId as our workflow ID. This returns a string containing the instance ID for the particular workflow instance, whether we provide it ourselves or not. +await daprWorkflowClient.ScheduleNewWorkflowAsync(nameof(OrderProcessingWorkflow), orderId, input, workflowOptions); // Get information on the workflow. This response contains information such as the status of the workflow, when it started, and more! -GetWorkflowResponse getResponse = await daprClient.GetWorkflowAsync(orderId, workflowComponent, eventName); +WorkflowState currentState = await daprWorkflowClient.GetWorkflowStateAsync(orderId, orderId); // Terminate the workflow -await daprClient.TerminateWorkflowAsync(orderId, workflowComponent); +await daprWorkflowClient.TerminateWorkflowAsync(orderId); -// Raise an event (an incoming purchase order) that your workflow will wait for. This returns the item waiting to be purchased. 
-await daprClient.RaiseWorkflowEventAsync(orderId, workflowComponent, workflowName, input); +// Raise an event (an incoming purchase order) that your workflow will wait for +await daprWorkflowClient.RaiseEventAsync(orderId, "incoming-purchase-order", input); // Pause -await daprClient.PauseWorkflowAsync(orderId, workflowComponent); +await daprWorkflowClient.SuspendWorkflowAsync(orderId); // Resume -await daprClient.ResumeWorkflowAsync(orderId, workflowComponent); +await daprWorkflowClient.ResumeWorkflowAsync(orderId); // Purge the workflow, removing all inbox and history information from associated instance -await daprClient.PurgeWorkflowAsync(orderId, workflowComponent); +await daprWorkflowClient.PurgeInstanceAsync(orderId); ``` {{% /codetab %}} From aa7a1155d01e8e023b4c29e4320df64d145b6d3b Mon Sep 17 00:00:00 2001 From: Anton Troshin Date: Sun, 13 Apr 2025 14:51:58 -0500 Subject: [PATCH 07/20] Update documentation for GCP Secret Manager and Object Store support of implicit authentication (#4592) Signed-off-by: Anton Troshin Co-authored-by: Mark Fussell --- .../supported-bindings/gcpbucket.md | 25 +++++++++++------- .../supported-pubsub/setup-gcp-pubsub.md | 2 +- .../gcp-secret-manager.md | 26 ++++++++++++------- 3 files changed, 32 insertions(+), 21 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/gcpbucket.md b/daprdocs/content/en/reference/components-reference/supported-bindings/gcpbucket.md index f2d14d320..4aff149d3 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/gcpbucket.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/gcpbucket.md @@ -58,19 +58,24 @@ The above example uses secrets as plain strings. It is recommended to use a secr | Field | Required | Binding support | Details | Example | |--------------------|:--------:|------------|-----|---------| | `bucket` | Y | Output | The bucket name | `"mybucket"` | -| `type` | Y | Output | Tge GCP credentials type | `"service_account"` | -| `project_id` | Y | Output | GCP project id| `projectId` -| `private_key_id` | Y | Output | GCP private key id | `"privateKeyId"` -| `private_key` | Y | Output | GCP credentials private key. Replace with x509 cert | `12345-12345` -| `client_email` | Y | Output | GCP client email | `"client@email.com"` -| `client_id` | Y | Output | GCP client id | `0123456789-0123456789` -| `auth_uri` | Y | Output | Google account OAuth endpoint | `https://accounts.google.com/o/oauth2/auth` -| `token_uri` | Y | Output | Google account token uri | `https://oauth2.googleapis.com/token` -| `auth_provider_x509_cert_url` | Y | Output | GCP credentials cert url | `https://www.googleapis.com/oauth2/v1/certs` -| `client_x509_cert_url` | Y | Output | GCP credentials project x509 cert url | `https://www.googleapis.com/robot/v1/metadata/x509/.iam.gserviceaccount.com` +| `project_id` | Y | Output | GCP project ID | `projectId` | +| `type` | N | Output | The GCP credentials type | `"service_account"` | +| `private_key_id` | N | Output | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"privateKeyId"` | +| `private_key` | N | Output | If using explicit credentials, this field should contain the `private_key` field from the service account json. 
Replace with x509 cert | `12345-12345` | +| `client_email` | N | Output | If using explicit credentials, this field should contain the `client_email` field from the service account json | `"client@email.com"` | +| `client_id` | N | Output | If using explicit credentials, this field should contain the `client_id` field from the service account json | `0123456789-0123456789` | +| `auth_uri` | N | Output | If using explicit credentials, this field should contain the `auth_uri` field from the service account json | `https://accounts.google.com/o/oauth2/auth` | +| `token_uri` | N | Output | If using explicit credentials, this field should contain the `token_uri` field from the service account json | `https://oauth2.googleapis.com/token`| +| `auth_provider_x509_cert_url` | N | Output | If using explicit credentials, this field should contain the `auth_provider_x509_cert_url` field from the service account json | `https://www.googleapis.com/oauth2/v1/certs`| +| `client_x509_cert_url` | N | Output | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/.iam.gserviceaccount.com`| | `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` | | `encodeBase64` | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` | +## GCP Credentials + +Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide. +Also, see how to [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc). + ## Binding support This component supports **output binding** with the following operations: diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index 088126b1c..849f11863 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -76,7 +76,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr | Field | Required | Details | Example | |--------------------|:--------:|---------|---------| -| projectId | Y | GCP project id| `myproject-123` +| projectId | Y | GCP project ID | `myproject-123` | endpoint | N | GCP endpoint for the component to use. Only used for local development (for example) with [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"` | `consumerID` | N | The Consumer ID organizes one or more consumers into a group. 
Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID | Can be set to string value (such as `"channel1"`) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}}) | identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"` diff --git a/daprdocs/content/en/reference/components-reference/supported-secret-stores/gcp-secret-manager.md b/daprdocs/content/en/reference/components-reference/supported-secret-stores/gcp-secret-manager.md index 24a1a155b..4557e436f 100644 --- a/daprdocs/content/en/reference/components-reference/supported-secret-stores/gcp-secret-manager.md +++ b/daprdocs/content/en/reference/components-reference/supported-secret-stores/gcp-secret-manager.md @@ -50,16 +50,22 @@ The above example uses secrets as plain strings. It is recommended to use a loca | Field | Required | Details | Example | |--------------------|:--------:|--------------------------------|---------------------| -| type | Y | The type of the account. | `"service_account"` | -| project_id | Y | The project ID associated with this component. | `"project_id"` | -| private_key_id | N | The private key ID | `"privatekey"` | -| client_email | Y | The client email address | `"client@example.com"` | -| client_id | N | The ID of the client | `"11111111"` | -| auth_uri | N | The authentication URI | `"https://accounts.google.com/o/oauth2/auth"` | -| token_uri | N | The authentication token URI | `"https://oauth2.googleapis.com/token"` | -| auth_provider_x509_cert_url | N | The certificate URL for the auth provider | `"https://www.googleapis.com/oauth2/v1/certs"` | -| client_x509_cert_url | N | The certificate URL for the client | `"https://www.googleapis.com/robot/v1/metadata/x509/.iam.gserviceaccount.com"`| -| private_key | Y | The private key for authentication | `"privateKey"` | +| `project_id` | Y | The project ID associated with this component. | `"project_id"` | +| `type` | N | The type of the account. | `"service_account"` | +| `private_key_id` | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"privateKeyId"`| +| `private_key` | N | If using explicit credentials, this field should contain the `private_key` field from the service account json. 
Replace with x509 cert | `12345-12345`| +| `client_email` | N | If using explicit credentials, this field should contain the `client_email` field from the service account json | `"client@email.com"`| +| `client_id` | N | If using explicit credentials, this field should contain the `client_id` field from the service account json | `0123456789-0123456789`| +| `auth_uri` | N | If using explicit credentials, this field should contain the `auth_uri` field from the service account json | `https://accounts.google.com/o/oauth2/auth`| +| `token_uri` | N | If using explicit credentials, this field should contain the `token_uri` field from the service account json | `https://oauth2.googleapis.com/token`| +| `auth_provider_x509_cert_url` | N | If using explicit credentials, this field should contain the `auth_provider_x509_cert_url` field from the service account json | `https://www.googleapis.com/oauth2/v1/certs`| +| `client_x509_cert_url` | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/.iam.gserviceaccount.com`| + + +## GCP Credentials + +Since the GCP Secret Manager component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide. +Also, see how to [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc). ## Optional per-request metadata properties From 89c3d6090fba17a33a8ab036667838d0e6f8dd43 Mon Sep 17 00:00:00 2001 From: Josh van Leeuwen Date: Sun, 13 Apr 2025 18:44:50 -0300 Subject: [PATCH 08/20] Adds Warning that Actor Reminder Partition is not relevant by default (#4561) * Adds Warning that Actor Reminder Partition is not relevant by default Updates the Actor Reminder Partition config page with a Warning that the feature is only relevant when using state store Actor Reminders which are no longer used by default. De-references this page from the actor rutime config page to softly hide it. Updates some verbiage around using Scheduler reminders & the feature gate as it's on by default. This should be merged in dapr/docs@v1.16. 
Signed-off-by: joshvanl

* Update daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md

Signed-off-by: Mark Fussell

* Update daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md

Signed-off-by: Mark Fussell

* Update kubernetes-persisting-scheduler.md

Co-authored-by: Cassie Coyle
Signed-off-by: Josh van Leeuwen

* Update daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md

Signed-off-by: Mark Fussell

---------

Signed-off-by: joshvanl
Signed-off-by: Mark Fussell
Signed-off-by: Josh van Leeuwen
Co-authored-by: Yaron Schneider
Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com>
Co-authored-by: Mark Fussell
Co-authored-by: Cassie Coyle
---
 .../building-blocks/actors/actors-runtime-config.md      | 6 +-----
 .../building-blocks/actors/howto-actors-partitioning.md  | 9 ++++++++-
 .../kubernetes/kubernetes-persisting-scheduler.md        | 6 +++---
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-runtime-config.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-runtime-config.md
index 99b080402..61a6e671f 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-runtime-config.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-runtime-config.md
@@ -195,12 +195,8 @@ func configHandler(w http.ResponseWriter, r *http.Request) {
 
 {{< /tabs >}}
 
-## Next steps
-
-{{< button text="Enable actor reminder partitioning >>" page="howto-actors-partitioning.md" >}}
-
 ## Related links
 
 - Refer to the [Dapr SDK documentation and examples]({{< ref "developing-applications/sdks/#sdk-languages" >}}).
 - [Actors API reference]({{< ref actors_api.md >}})
-- [Actors overview]({{< ref actors-overview.md >}})
\ No newline at end of file
+- [Actors overview]({{< ref actors-overview.md >}})
diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md b/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md
index 79c7303ab..843e656f0 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors-partitioning.md
@@ -8,6 +8,13 @@ aliases:
 - "/developing-applications/building-blocks/actors/actors-background"
 ---
 
+{{% alert title="Warning" color="warning" %}}
+This feature is only relevant when using state store actor reminders, which are no longer enabled by default.
+As of v1.15, Dapr uses the far more performant [Scheduler Actor Reminders]({{< ref "scheduler.md#actor-reminders" >}}) by default.
+This page is only relevant if you are using the legacy state store actor reminders, enabled via setting the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) to false.
+It is highly recommended that you use the Scheduler Actor Reminders feature.
+{{% /alert %}}
+
 [Actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) are persisted and continue to be triggered after sidecar restarts.
Applications with multiple reminders registered can experience the following issues: - Low throughput on reminders registration and de-registration @@ -193,4 +200,4 @@ Watch [this video for a demo of actor reminder partitioning](https://youtu.be/Zw ## Related links - [Actors API reference]({{< ref actors_api.md >}}) -- [Actors overview]({{< ref actors-overview.md >}}) \ No newline at end of file +- [Actors overview]({{< ref actors-overview.md >}}) diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md index d6c187b19..01f757756 100644 --- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md @@ -12,7 +12,7 @@ This means that there is no additional parameter required to run the scheduler s {{% alert title="Warning" color="warning" %}} The default storage size for the Scheduler is `1Gi`, which is likely not sufficient for most production deployments. -Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}) when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled, and the [Jobs API]({{< ref jobs_api.md >}}). +Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}), and the [Jobs API]({{< ref jobs_api.md >}}). You may want to consider reinstalling Dapr with a larger Scheduler storage of at least `16Gi` or more. For more information, see the [ETCD Storage Disk Size](#etcd-storage-disk-size) section below. {{% /alert %}} @@ -30,8 +30,8 @@ error running scheduler: etcdserver: mvcc: database space exceeded ``` Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and the data payload size of your application jobs. -The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) (with the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature enabled) transparently maps one to one to the usage of your applications. -Workflows (when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled) create a large number of jobs as Actor Reminders, however these jobs are short lived- matching the lifecycle of each workflow execution. +The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) transparently maps one to one to the usage of your applications. +Workflows create a large number of jobs as Actor Reminders, however these jobs are short lived- matching the lifecycle of each workflow execution. The data payload of jobs created by Workflows is typically empty or small. The Scheduler uses Etcd as its storage backend database. 
From 5521405b27cf11eb9cd60ba8ac57c34a29b124c8 Mon Sep 17 00:00:00 2001 From: Fabian Martinez <46371672+famarting@users.noreply.github.com> Date: Sun, 13 Apr 2025 23:49:33 +0200 Subject: [PATCH 09/20] add conversation API to allow list (#4581) Signed-off-by: Fabian Martinez <46371672+famarting@users.noreply.github.com> Co-authored-by: Mark Fussell --- daprdocs/content/en/operations/configuration/api-allowlist.md | 1 + 1 file changed, 1 insertion(+) diff --git a/daprdocs/content/en/operations/configuration/api-allowlist.md b/daprdocs/content/en/operations/configuration/api-allowlist.md index dffb8db39..eee4ad3b2 100644 --- a/daprdocs/content/en/operations/configuration/api-allowlist.md +++ b/daprdocs/content/en/operations/configuration/api-allowlist.md @@ -128,6 +128,7 @@ See this list of values corresponding to the different Dapr APIs: | [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)
`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)
`unlock` (`v1alpha1`) | | [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) | | [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0`) |`workflows` (`v1`) | +| [Conversation]({{< ref conversation_api.md >}}) | `conversation` (`v1.0-alpha1`) | `conversation` (`v1alpha1`) | | [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a | | Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) | From 2a9367884d709638dc5ad2da7a317bf1ba3f2c63 Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Sun, 13 Apr 2025 16:56:02 -0500 Subject: [PATCH 10/20] Added local echo conversation component (#4587) * Added local echo conversation component Signed-off-by: Whit Waldo * Update daprdocs/content/en/reference/components-reference/supported-conversation/local-echo.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update local-echo.md Fixed repeated text Signed-off-by: Mark Fussell * Update generic.yaml Marking Echo stable, since it is only used for local testing Signed-off-by: Mark Fussell --------- Signed-off-by: Whit Waldo Signed-off-by: Mark Fussell Co-authored-by: Alice Gibbons Co-authored-by: Mark Fussell --- .../supported-conversation/local-echo.md | 28 +++++++++++++++++++ .../data/components/conversation/generic.yaml | 5 ++++ 2 files changed, 33 insertions(+) create mode 100644 daprdocs/content/en/reference/components-reference/supported-conversation/local-echo.md diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/local-echo.md b/daprdocs/content/en/reference/components-reference/supported-conversation/local-echo.md new file mode 100644 index 000000000..e2a2770a3 --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/local-echo.md @@ -0,0 +1,28 @@ +--- +type: docs +title: "Local Testing" +linkTitle: "Echo" +description: Detailed information on the echo conversation component used for local testing +--- + +## Component format + +A Dapr `conversation.yaml` component file has the following structure: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: echo +spec: + type: conversation.echo + version: v1 +``` + +{{% alert title="Information" color="warning" %}} +This component is only meant for local validation and testing of a Conversation component implementation. It does not actually send the data to any LLM but rather echos the input back directly. 
+{{% /alert %}} + +## Related links + +- [Conversation API overview]({{< ref conversation-overview.md >}}) diff --git a/daprdocs/data/components/conversation/generic.yaml b/daprdocs/data/components/conversation/generic.yaml index b8961c868..48eab18af 100644 --- a/daprdocs/data/components/conversation/generic.yaml +++ b/daprdocs/data/components/conversation/generic.yaml @@ -23,3 +23,8 @@ state: Alpha version: v1 since: "1.15" +- component: Local echo + link: local-echo + state: Stable + version: v1 + since: "1.15" From 52f4c15ee3d26a2a9c78d7ae267754711705228f Mon Sep 17 00:00:00 2001 From: Siri Varma Vegiraju Date: Mon, 14 Apr 2025 03:31:13 +0530 Subject: [PATCH 11/20] Update conversation_api.md (#4589) Signed-off-by: Siri Varma Vegiraju Co-authored-by: Whit Waldo Co-authored-by: Mark Fussell --- daprdocs/content/en/reference/api/conversation_api.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/daprdocs/content/en/reference/api/conversation_api.md b/daprdocs/content/en/reference/api/conversation_api.md index 44fa52d28..415bd4faa 100644 --- a/daprdocs/content/en/reference/api/conversation_api.md +++ b/daprdocs/content/en/reference/api/conversation_api.md @@ -32,7 +32,7 @@ POST http://localhost:/v1.0-alpha1/conversation//converse | --------- | ----------- | | `inputs` | Inputs for the conversation. Multiple inputs at one time are supported. Required | | `cacheTTL` | A time-to-live value for a prompt cache to expire. Uses Golang duration format. Optional | -| `scrubPII` | A boolean value to enable obfuscation of sensitive information returning from the LLM. Optional | +| `scrubPII` | A boolean value to enable obfuscation of sensitive information returning from the LLM. Set this value if all PII (across contents) in the request needs to be scrubbed. Optional | | `temperature` | A float value to control the temperature of the model. Used to optimize for consistency and creativity. Optional | | `metadata` | [Metadata](#metadata) passed to conversation components. Optional | @@ -42,7 +42,7 @@ POST http://localhost:/v1.0-alpha1/conversation//converse | --------- | ----------- | | `content` | The message content to send to the LLM. Required | | `role` | The role for the LLM to assume. Possible values: 'user', 'tool', 'assistant' | -| `scrubPII` | A boolean value to enable obfuscation of sensitive information present in the content field. Optional | +| `scrubPII` | A boolean value to enable obfuscation of sensitive information present in the content field. Set this value if PII for this specific content needs to be scrubbed exclusively. 
Optional | ### Request content example @@ -89,4 +89,4 @@ RESPONSE = { ## Next steps - [Conversation API overview]({{< ref conversation-overview.md >}}) -- [Supported conversation components]({{< ref supported-conversation >}}) \ No newline at end of file +- [Supported conversation components]({{< ref supported-conversation >}}) From bb573c0a76b62bcc765f8ccc21aad36f60b6f77a Mon Sep 17 00:00:00 2001 From: Martin Cambal Date: Mon, 14 Apr 2025 17:59:40 +0200 Subject: [PATCH 12/20] Corrected typo namepsace to namespace (#4613) Signed-off-by: Martin Cambal Co-authored-by: Mark Fussell --- .../content/en/reference/resource-specs/component-schema.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/resource-specs/component-schema.md b/daprdocs/content/en/reference/resource-specs/component-schema.md index 875744c28..180eccae9 100644 --- a/daprdocs/content/en/reference/resource-specs/component-schema.md +++ b/daprdocs/content/en/reference/resource-specs/component-schema.md @@ -8,7 +8,7 @@ description: "The basic spec for a Dapr component" Dapr defines and registers components using a [resource specifications](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes. -Typically, components are restricted to a particular [namepsace]({{< ref isolation-concept.md >}}) and restricted access through scopes to any particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace through context with applying to Kubernetes. +Typically, components are restricted to a particular [namespace]({{< ref isolation-concept.md >}}) and restricted access through scopes to any particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace through context with applying to Kubernetes. {{% alert title="Note" color="primary" %}} The exception to this rule is in self-hosted mode, where daprd ingests component resources when the namespace field is omitted. However, the security profile is mute, as daprd has access to the manifest anyway, unlike in Kubernetes. 
From fcddd67d576672dbcc91f7c2ded00d69ea78e721 Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Mon, 21 Apr 2025 07:05:16 -0500 Subject: [PATCH 13/20] Fixed typo => "loggings" to "logging" (#4619) Signed-off-by: Whit Waldo --- daprdocs/content/en/operations/observability/logging/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/operations/observability/logging/_index.md b/daprdocs/content/en/operations/observability/logging/_index.md index 444f4cc9c..96998b55a 100644 --- a/daprdocs/content/en/operations/observability/logging/_index.md +++ b/daprdocs/content/en/operations/observability/logging/_index.md @@ -3,6 +3,6 @@ type: docs title: "Logging" linkTitle: "Logging" weight: 400 -description: "How to setup loggings for Dapr sidecar, and your application" +description: "How to setup logging for Dapr sidecar, and your application" --- From 44634df0b7850dd6fcbf9bdec000b8f6041f7409 Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Mon, 21 Apr 2025 07:12:19 -0500 Subject: [PATCH 14/20] Fixed typo -> "Fleuntd" to "Fluentd" (#4618) Signed-off-by: Whit Waldo Co-authored-by: Mark Fussell --- daprdocs/content/en/operations/observability/logging/logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/operations/observability/logging/logs.md b/daprdocs/content/en/operations/observability/logging/logs.md index 5d0d9492a..a3b18c6cd 100644 --- a/daprdocs/content/en/operations/observability/logging/logs.md +++ b/daprdocs/content/en/operations/observability/logging/logs.md @@ -127,7 +127,7 @@ If you are using the Azure Kubernetes Service, you can use [Azure Monitor for co ## References -- [How-to: Set up Fleuntd, Elastic search, and Kibana]({{< ref fluentd.md >}}) +- [How-to: Set up Fluentd, Elastic search, and Kibana]({{< ref fluentd.md >}}) - [How-to: Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}}) - [Configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}}) - [Configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}}) From 3c4960802192f5847316ce52d09463ac59910748 Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Mon, 21 Apr 2025 07:27:06 -0500 Subject: [PATCH 15/20] On the page covering binding inputs (triggers), the text erroneously suggested that the following examples demonstrated output bindings. (#4620) Signed-off-by: Whit Waldo Co-authored-by: Mark Fussell --- .../building-blocks/bindings/howto-triggers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md index 4a5f6d4dd..ec590c62f 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md +++ b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md @@ -113,7 +113,7 @@ Configure your application to receive incoming events. If you're using HTTP, you - Listen on a `POST` endpoint with the name of the binding, as specified in `metadata.name` in the `binding.yaml` file. - Verify your application allows Dapr to make an `OPTIONS` request for this endpoint. -Below are code examples that leverage Dapr SDKs to demonstrate an output binding. +Below are code examples that leverage Dapr SDKs to demonstrate an input binding. 
{{< tabs ".NET" Java Python Go JavaScript>}} From 16889638eec82550f3f1ed3a99239b468eb3ba9c Mon Sep 17 00:00:00 2001 From: Joe Bowbeer Date: Tue, 22 Apr 2025 05:42:04 -0700 Subject: [PATCH 16/20] Update kubernetes-production.md (#4616) Signed-off-by: Joe Bowbeer Co-authored-by: Mark Fussell --- .../en/operations/hosting/kubernetes/kubernetes-production.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md index 3f6addf1f..89a0c03a4 100644 --- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md @@ -159,7 +159,7 @@ spec: ## Deploy Dapr with Helm -[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}). +[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm" >}}). ### Parameters file From 101b0266bddd2082bfb2fb0aa51e3d70736b1472 Mon Sep 17 00:00:00 2001 From: Whit Waldo Date: Wed, 23 Apr 2025 15:41:32 -0500 Subject: [PATCH 17/20] Added .NET SDK examples to serialization document (#4596) * Added .NET SDK examples to serialization document + modernized it some Signed-off-by: Whit Waldo * Update sdk-serialization.md Updated formatting and grammar Signed-off-by: Mark Fussell * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Updated serialization package links + added reference to actor serialization documentation Signed-off-by: Whit Waldo * Added input binding examples for .NET Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Co-authored-by: Alice Gibbons Signed-off-by: Whit Waldo * Update daprdocs/content/en/developing-applications/local-development/sdk-serialization.md Signed-off-by: Mark Fussell * Updated to use the correct method for service invocation Signed-off-by: Whit Waldo --------- Signed-off-by: Whit Waldo Signed-off-by: Mark Fussell Co-authored-by: Mark Fussell Co-authored-by: Alice Gibbons --- .../local-development/sdk-serialization.md | 216 ++++++++++++++++-- 1 file changed, 200 insertions(+), 16 deletions(-) diff --git a/daprdocs/content/en/developing-applications/local-development/sdk-serialization.md b/daprdocs/content/en/developing-applications/local-development/sdk-serialization.md index 4e22d8b58..172fab4cc 100644 --- 
a/daprdocs/content/en/developing-applications/local-development/sdk-serialization.md +++ b/daprdocs/content/en/developing-applications/local-development/sdk-serialization.md @@ -8,16 +8,40 @@ aliases: - '/developing-applications/sdks/serialization/' --- -An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization. +Dapr SDKs provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both of these cases, a default serialization method is provided in each language SDK. + +| Language SDK | Default Serializer | +|------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [.NET]({{< ref dotnet >}}) | [DataContracts](https://learn.microsoft.com/dotnet/framework/wcf/feature-details/using-data-contracts) for remoted actors, [System.Text.Json](https://www.nuget.org/packages/System.Text.Json) otherwise. Read more about .NET serialization [here]({{< ref dotnet-actors-serialization >}}) | | +| [Java]({{< ref java >}}) | [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) for JSON serialization | +| [JavaScript]({{< ref js >}}) | JSON | ## Service invocation -```java - DaprClient client = (new DaprClientBuilder()).build(); - client.invokeService("myappid", "saySomething", "My Message", HttpExtension.POST).block(); +{{< tabs ".NET" "Java" >}} + + +{{% codetab %}} + +```csharp + using var client = (new DaprClientBuilder()).Build(); + await client.InvokeMethodAsync("myappid", "saySomething", "My Message"); ``` -In the example above, the app will receive a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON. +{{% /codetab %}} + + +{{% codetab %}} + +```java + DaprClient client = (new DaprClientBuilder()).build(); + client.invokeMethod("myappid", "saySomething", "My Message", HttpExtension.POST).block(); +``` + +{{% /codetab %}} + +In the example above, the app `myappid` receives a `POST` request for the `saySomething` method with the request payload as +`"My Message"` - quoted since the serializer will serialize the input String to JSON. ```text POST /saySomething HTTP/1.1 @@ -30,11 +54,35 @@ Content-Length: 12 ## State management +{{< tabs ".NET" "Java" >}} + + +{{% codetab %}} + +```csharp + using var client = (new DaprClientBuilder()).Build(); + var state = new Dictionary + { + { "key": "MyKey" }, + { "value": "My Message" } + }; + await client.SaveStateAsync("MyStateStore", "MyKey", state); +``` + +{{% /codetab %}} + + +{{% codetab %}} + ```java DaprClient client = (new DaprClientBuilder()).build(); client.saveState("MyStateStore", "MyKey", "My Message").block(); ``` -In this example, `My Message` will be saved. It is not quoted because Dapr's API will internally parse the JSON request object before saving it. + +{{% /codetab %}} + +In this example, `My Message` is saved. 
It is not quoted because Dapr's API internally parse the JSON request +object before saving it. ```JSON [ @@ -47,12 +95,45 @@ In this example, `My Message` will be saved. It is not quoted because Dapr's API ## PubSub +{{< tabs ".NET" "Java" >}} + + +{{% codetab %}} + +```csharp + using var client = (new DaprClientBuilder()).Build(); + await client.PublishEventAsync("MyPubSubName", "TopicName", "My Message"); +``` + +The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. The Dapr SDK also provides a built-in deserializer for `CloudEvent` object. + +```csharp +public async Task HandleMessage(string message) +{ + //ASP.NET Core automatically deserializes the UTF-8 encoded bytes to a string + return new Ok(); +} +``` + +or + +```csharp +app.MapPost("/TopicName", [Topic("MyPubSubName", "TopicName")] (string message) => { + return Results.Ok(); +} +``` + +{{% /codetab %}} + + +{{% codetab %}} + ```java DaprClient client = (new DaprClientBuilder()).build(); client.publishEvent("TopicName", "My Message").block(); ``` -The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber will receive it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. Dapr SDK also provides a built-in deserializer for `CloudEvent` object. +The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. The Dapr SDK also provides a built-in deserializer for `CloudEvent` objects. ```java @PostMapping(path = "/TopicName") @@ -62,9 +143,50 @@ The event is published and the content is serialized to `byte[]` and sent to Dap } ``` +{{% /codetab %}} + ## Bindings -In this case, the object is serialized to `byte[]` as well and the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type. +For output bindings the object is serialized to `byte[]` whereas the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type. + +{{< tabs ".NET" "Java" >}} + + +{{% codetab %}} + +* Output binding: +```csharp + using var client = (new DaprClientBuilder()).Build(); + await client.InvokeBindingAsync("sample", "My Message"); +``` + +* Input binding (controllers): +```csharp + [ApiController] + public class SampleController : ControllerBase + { + [HttpPost("propagate")] + public ActionResult GetValue([FromBody] int itemId) + { + Console.WriteLine($"Received message: {itemId}"); + return $"itemID:{itemId}"; + } + } + ``` + +* Input binding (minimal API): +```csharp +app.MapPost("value", ([FromBody] int itemId) => +{ + Console.WriteLine($"Received message: {itemId}"); + return ${itemID:{itemId}"; +}); +* ``` + +{{% /codetab %}} + + +{{% codetab %}} * Output binding: ```java @@ -80,15 +202,49 @@ In this case, the object is serialized to `byte[]` as well and the input binding System.out.println(message); } ``` + +{{% /codetab %}} + It should print: ``` My Message ``` ## Actor Method invocation -Object serialization and deserialization for invocation of Actor's methods are same as for the service method invocation, the only difference is that the application does not need to deserialize the request or serialize the response since it is all done transparently by the SDK. 
+Object serialization and deserialization for Actor method invocation are same as for the service method invocation, +the only difference is that the application does not need to deserialize the request or serialize the response since it +is all done transparently by the SDK. -For Actor's methods, the SDK only supports methods with zero or one parameter. +For Actor methods, the SDK only supports methods with zero or one parameter. + +{{< tabs ".NET" "Java" >}} + +The .NET SDK supports two different serialization types based on whether you're using strongly-typed (DataContracts) +or weakly-typed (DataContracts or System.Text.JSON) actor client. [This document]({{< ref dotnet-actors-serialization >}}) +can provide more information about the differences between each and additional considerations to keep in mind. + + +{{% codetab %}} + +* Invoking an Actor's method using the weakly-typed client and System.Text.JSON: +```csharp + var proxy = this.ProxyFactory.Create(ActorId.CreateRandom(), "DemoActor"); + await proxy.SayAsync("My message"); +``` + +* Implementing an Actor's method: +```csharp +public Task SayAsync(string message) +{ + Console.WriteLine(message); + return Task.CompletedTask; +} +``` + +{{% /codetab %}} + + +{{% codetab %}} * Invoking an Actor's method: ```java @@ -105,13 +261,37 @@ public String say(String something) { return "OK"; } ``` + +{{% /codetab %}} + It should print: ``` My Message ``` ## Actor's state management -Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state serializer and handle it transparently to the application. +Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state +serializer and handle it transparently to the application. + + +{{% codetab %}} + +```csharp +public Task SayAsync(string message) +{ + // Reads state from a key + var previousMessage = await this.StateManager.GetStateAsync("lastmessage"); + + // Sets the new state for the key after serializing it + await this.StateManager.SetStateAsync("lastmessage", message); + return previousMessage; +} +``` + +{{% /codetab %}} + + +{{% codetab %}} ```java public String actorMethod(String message) { @@ -124,12 +304,17 @@ public String actorMethod(String message) { } ``` +{{% /codetab %}} + ## Default serializer The default serializer for Dapr is a JSON serializer with the following expectations: -1. Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, boolean, null and another JSON object. Every complex property type in application's serializable objects (DateTime, for example), should be represented as one of the JSON's basic types. -2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The example below shows how a string and a JSON object would look like in a Redis store. +1. Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, +boolean, null and another JSON object. Every complex property type in application's serializable objects (DateTime, +for example), should be represented as one of the JSON's basic types. +2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. 
+The example below shows how a string and a JSON object would look like in a Redis store. ```bash redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message "This is a message to be saved and retrieved." @@ -140,7 +325,8 @@ redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928 ``` 3. Custom serializers must serialize object to `byte[]`. 4. Custom serializers must deserialize `byte[]` to object. -5. When user provides a custom serializer, it should be transferred or persisted as `byte[]`. When persisting, also encode as Base64 string. This is done natively by most JSON libraries. +5. When user provides a custom serializer, it should be transferred or persisted as `byte[]`. When persisting, also +encode as Base64 string. This is done natively by most JSON libraries. ```bash redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message "VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4=" @@ -149,5 +335,3 @@ redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928 redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata "eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0=" ``` - -*As of now, the [Java SDK](https://github.com/dapr/java-sdk/) is the only Dapr SDK that implements this specification. In the near future, other SDKs will also implement the same.* From a0027cb42b3e2ccadaf9c2890944f10f7fc13303 Mon Sep 17 00:00:00 2001 From: Fernando Rocha Date: Fri, 25 Apr 2025 01:26:50 -0700 Subject: [PATCH 18/20] apache ignite description on cassandra (#4630) Signed-off-by: Fernando Rocha --- .../supported-state-stores/setup-cassandra.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cassandra.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cassandra.md index b5aff291d..b16520cb8 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cassandra.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cassandra.md @@ -88,6 +88,10 @@ For example, if installing using the example above, the Cassandra DNS would be: {{< /tabs >}} +## Apache Ignite + +[Apache Ignite](https://ignite.apache.org/)'s integration with Cassandra as a caching layer is not supported by this component. 
+ ## Related links - [Basic schema for a Dapr component]({{< ref component-schema >}}) - Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components From 41ddb27be65817c1ba91d08ec80ccf1a28568b3d Mon Sep 17 00:00:00 2001 From: Fernando Rocha Date: Mon, 28 Apr 2025 10:25:57 -0700 Subject: [PATCH 19/20] Updating AKS Cluster creation command (#4632) * updating aks cluster creation command Signed-off-by: Fernando Rocha * adding missing space Signed-off-by: Fernando Rocha --------- Signed-off-by: Fernando Rocha --- .../en/operations/hosting/kubernetes/cluster/setup-aks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md index 79d29f274..2b3909232 100644 --- a/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md @@ -39,7 +39,7 @@ This guide walks you through installing an Azure Kubernetes Service (AKS) cluste 1. Create an AKS cluster. To use a specific version of Kubernetes, use `--kubernetes-version` (1.13.x or newer version required). ```bash - az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --node-count 2 --enable-addons http_application_routing --generate-ssh-keys + az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --location [region] --node-count 2 --enable-app-routing --generate-ssh-keys ``` 1. Get the access credentials for the AKS cluster. From 58ef38fa3cec12f07fb24cecad43399b3d592f59 Mon Sep 17 00:00:00 2001 From: Fernando Rocha Date: Tue, 29 Apr 2025 03:31:35 -0700 Subject: [PATCH 20/20] Argo CD Integration docs (#4629) * Argo CD Integration docs Signed-off-by: Fernando Rocha * page weight Signed-off-by: Fernando Rocha * Update daprdocs/content/en/developing-applications/integrations/argo-cd.md Signed-off-by: Mark Fussell * Update daprdocs/content/en/developing-applications/integrations/argo-cd.md Signed-off-by: Mark Fussell * Update daprdocs/content/en/developing-applications/integrations/argo-cd.md Signed-off-by: Mark Fussell * Update daprdocs/content/en/developing-applications/integrations/argo-cd.md Signed-off-by: Mark Fussell --------- Signed-off-by: Fernando Rocha Signed-off-by: Mark Fussell Co-authored-by: Mark Fussell --- .../integrations/argo-cd.md | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) create mode 100644 daprdocs/content/en/developing-applications/integrations/argo-cd.md diff --git a/daprdocs/content/en/developing-applications/integrations/argo-cd.md b/daprdocs/content/en/developing-applications/integrations/argo-cd.md new file mode 100644 index 000000000..0377a6099 --- /dev/null +++ b/daprdocs/content/en/developing-applications/integrations/argo-cd.md @@ -0,0 +1,17 @@ +--- +type: docs +title: "How to: Integrate with Argo CD" +linkTitle: "Argo CD" +weight: 9000 +description: "Integrate Dapr into your GitOps pipeline" +--- + +[Argo CD](https://argo-cd.readthedocs.io/en/stable/) is a declarative, GitOps continuous delivery tool for Kubernetes. It enables you to manage your Kubernetes deployments by tracking the desired application state in Git repositories and automatically syncing it to your clusters. + +## Integration with Dapr + +You can use Argo CD to manage the deployment of Dapr control plane components and Dapr-enabled applications. 
By adopting a GitOps approach, you ensure that Dapr's configurations and applications are consistently deployed, versioned, and auditable across your environments. Argo CD can be easily configured to deploy Helm charts, manifests, and Dapr components stored in Git repositories. + +## Sample code + +A sample project demonstrating Dapr deployment with Argo CD is available at [https://github.com/dapr/samples/tree/master/dapr-argocd](https://github.com/dapr/samples/tree/master/dapr-argocd).
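As an illustration of the GitOps flow described above, the following is a minimal sketch of an Argo CD `Application` manifest that installs the Dapr control plane from its Helm chart. The application name, target namespaces, and pinned chart version are assumed example values rather than settings taken from the sample repository; adjust them to your environment.

```yaml
# Minimal sketch (assumed values): an Argo CD Application that tracks the
# Dapr Helm chart and keeps the control plane synced to the declared state.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dapr                    # assumed application name
  namespace: argocd             # namespace where Argo CD runs (assumed)
spec:
  project: default
  source:
    repoURL: https://dapr.github.io/helm-charts/   # Dapr Helm chart repository
    chart: dapr
    targetRevision: 1.15.0      # assumed chart version; pin to the release you run
  destination:
    server: https://kubernetes.default.svc
    namespace: dapr-system      # assumed target namespace for the control plane
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```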