diff --git a/daprdocs/content/en/_index.md b/daprdocs/content/en/_index.md index 642107a7b..0e1251403 100644 --- a/daprdocs/content/en/_index.md +++ b/daprdocs/content/en/_index.md @@ -87,6 +87,13 @@ you tackle the challenges that come with building microservices and keeps your c
+ [New homepage card: "Roadmap", linking to the roadmap page with the text "Learn about Dapr's roadmap and change process."]
diff --git a/daprdocs/content/en/concepts/overview.md b/daprdocs/content/en/concepts/overview.md index 106a772e5..ff0f7ad31 100644 --- a/daprdocs/content/en/concepts/overview.md +++ b/daprdocs/content/en/concepts/overview.md @@ -108,7 +108,7 @@ Deploying and running a Dapr-enabled application into your Kubernetes cluster is ### Clusters of physical or virtual machines -The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses [Hashicorp Consul service]({{< ref setup-nr-consul >}}), also running in HA mode. +The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide an HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses multicast DNS by default, but can optionally use the [Hashicorp Consul service]({{< ref setup-nr-consul >}}). Architecture diagram of Dapr control plane and Consul deployed to VMs in high availability mode diff --git a/daprdocs/content/en/contributing/roadmap.md b/daprdocs/content/en/contributing/roadmap.md index d3a790935..6c1093ecb 100644 --- a/daprdocs/content/en/contributing/roadmap.md +++ b/daprdocs/content/en/contributing/roadmap.md @@ -2,47 +2,9 @@ type: docs title: "Dapr Roadmap" linkTitle: "Roadmap" -description: "The Dapr Roadmap is a tool to help with visibility into investments across the Dapr project" +description: "The Dapr Roadmap gives the community visibility into the different priorities of the project" weight: 30 no_list: true --- - -Dapr encourages the community to help with prioritization. A GitHub project board is available to view and provide feedback on proposed issues and track them across development. - -[Screenshot of the Dapr Roadmap board](https://aka.ms/dapr/roadmap) - -{{< button text="View the backlog" link="https://aka.ms/dapr/roadmap" color="primary" >}} -
- -Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see Dapr support. This will help the Dapr maintainers understand which features will provide the most value. - -Contributions from the community is also welcomed. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal. - -{{% alert title="Note" color="primary" %}} -The Dapr roadmap includes issues only from the v1.2 release and onwards. Issues closed and released prior to v1.2 are not included. -{{% /alert %}} - -## Stages - -The Dapr Roadmap progresses through the following stages: - -{{< cardpane >}} -{{< card title="**[📄 Backlog](https://github.com/orgs/dapr/projects/52#column-14691591)**" >}} - Issues (features) that need 👍 votes from the community to prioritize. Updated by Dapr maintainers. -{{< /card >}} -{{< card title="**[⏳ Planned (Committed)](https://github.com/orgs/dapr/projects/52#column-14561691)**" >}} - Issues with a proposal and/or targeted release milestone. This is where design proposals are discussed and designed. -{{< /card >}} -{{< card title="**[👩‍💻 In Progress (Development)](https://github.com/orgs/dapr/projects/52#column-14561696)**" >}} - Implementation specifics have been agreed upon and the feature is under active development. -{{< /card >}} -{{< /cardpane >}} -{{< cardpane >}} -{{< card title="**[☑ Done](https://github.com/orgs/dapr/projects/52#column-14561700)**" >}} - The feature capability has been completed and is scheduled for an upcoming release. -{{< /card >}} -{{< card title="**[✅ Released](https://github.com/orgs/dapr/projects/52#column-14659973)**" >}} - The feature is released and available for use. -{{< /card >}} -{{< /cardpane >}} +See [this document](https://github.com/dapr/community/blob/master/roadmap.md) to view the Dapr project's roadmap. diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md index bb96b9b23..7bb1bcf0e 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md @@ -104,7 +104,7 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a ### State -Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors. +Transactional state stores can be used to store actor state. Regardless of whether you intend to store any state in your actor, you must specify a value for property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. 
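For example, a state store component enabled for actors might look like the following sketch. The Redis store, component name, and connection values here are illustrative assumptions; the `actorStateStore` entry is the part the paragraph above describes.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  # Marks this transactional state store as the single store used for all actor state
  - name: actorStateStore
    value: "true"
```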
Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors. ### Actor timers and reminders diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md index 27dc31fc2..0a5dba3b1 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md +++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md @@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example: +#### HTTP + +When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at +`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job +events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is +triggered. For example: + +*Note: The following example is in Go but applies to any programming language.* + +```go + +func main() { + ... + http.HandleFunc("/job/", handleJob) + http.HandleFunc("/job/<job-name>", specificJob) + ... +} + +func specificJob(w http.ResponseWriter, r *http.Request) { + // Handle specific triggered job +} + +func handleJob(w http.ResponseWriter, r *http.Request) { + // Handle the triggered jobs +} +``` + +#### gRPC + +When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following +callback function: + +*Note: The following example is in Go but applies to any programming language with gRPC support.* + +```go +import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1" +... +func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) { + // Handle the triggered job +} +``` + +This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that +you register the callback server, which will invoke this function when a job is triggered: + +```go +... +js := &JobService{} +rtv1.RegisterAppCallbackAlphaServer(server, js) +``` + +In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly +through this gRPC method. + +#### SDKs + +For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the +event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this: + +```go +... +if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil { + log.Fatalf("failed to register job event handler: %v", err) +} +``` + +Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with +the triggered job data. Here’s an example of handling the triggered job: + ```go // ... @@ -103,11 +172,9 @@ func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error { if err := json.Unmarshal(job.Data, &jobData); err != nil { // ... } - decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value) - // ...
var jobPayload api.DBBackup - if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil { + if err := json.Unmarshal(job.Data, &jobPayload); err != nil { // ... } fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload) @@ -146,4 +213,4 @@ dapr run --app-id=distributed-scheduler \ ## Next steps - [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}}) -- [Jobs API reference]({{< ref jobs_api.md >}}) \ No newline at end of file +- [Jobs API reference]({{< ref jobs_api.md >}}) diff --git a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md index 28c3cb8f1..680b03611 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md +++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md @@ -38,11 +38,9 @@ The diagram below is an overview of how Dapr's service invocation works when inv Diagram showing the steps of service invocation to non-Dapr endpoints 1. Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar. -2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL. -3. Dapr forwards the message to Service B. -4. Service B runs its business logic code. -5. Service B sends a response to Service A's Dapr sidecar. -6. Service A receives the response. +2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL then forwards the message to Service B. +3. Service B sends a response to Service A's Dapr sidecar. +4. Service A receives the response. ## Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following: diff --git a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md index 6a6c27d4b..e630365db 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md @@ -10,8 +10,6 @@ State management is one of the most common needs of any new, legacy, monolith, o In this guide, you'll learn the basics of using the key/value state API to allow an application to save, get, and delete state. -## Example - The code example below _loosely_ describes an application that processes orders with an order processing service which has a Dapr sidecar. The order processing service uses Dapr to store state in a Redis state store. 
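As a rough sketch of what those state operations look like against the sidecar, the key/value state API can be exercised with plain `curl`; the port `3500`, the `statestore` component name, and the key and value below are illustrative assumptions.

```bash
# Save (or overwrite) state for key "order_1" in the "statestore" component
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order_1", "value": { "orderId": "1" } }]'

# Read the value back
curl http://localhost:3500/v1.0/state/statestore/order_1

# Delete the key
curl -X DELETE http://localhost:3500/v1.0/state/statestore/order_1
```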
Diagram showing state management of example service @@ -554,7 +552,7 @@ namespace EventService string DAPR_STORE_NAME = "statestore"; //Using Dapr SDK to retrieve multiple states using var client = new DaprClientBuilder().Build(); - IReadOnlyList<BulkStateItem> mulitpleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1); + IReadOnlyList<BulkStateItem> multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1); } } } diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md index b4fa5a443..ed9f747b8 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md @@ -106,8 +106,25 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi ## Limitations -- **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. -- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2. +- **State stores:** Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. +- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, it is recommended to use a maximum of two instances of Dapr per workflow application. This limitation is resolved in Dapr 1.14.x when enabling the scheduler service. + +To enable the scheduler service to work for Dapr Workflows, make sure you're using Dapr 1.14.x or later and assign the following configuration to your app: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: schedulerconfig +spec: + tracing: + samplingRate: "1" + features: + - name: SchedulerReminders + enabled: true +``` + +See more info about [enabling preview features]({{< ref preview-features.md >}}).
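How the configuration above gets assigned to the app depends on the hosting mode; on Kubernetes it is typically referenced through the `dapr.io/config` annotation on the application's deployment. A minimal sketch, assuming the `schedulerconfig` Configuration shown above and an illustrative application name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-processor            # illustrative application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-processor
  template:
    metadata:
      labels:
        app: order-processor
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "order-processor"
        dapr.io/config: "schedulerconfig"   # points the sidecar at the Configuration above
    spec:
      containers:
      - name: order-processor
        image: order-processor:latest       # illustrative image
```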
## Watch the demo diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md index fe6f69b63..ba3ab432f 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md @@ -749,7 +749,7 @@ def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus): ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!") next_sleep_interval = 5 # check more frequently when unhealthy - yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(seconds=next_sleep_interval)) + yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval)) # restart from the beginning with a new JobStatus input ctx.continue_as_new(job) @@ -896,7 +896,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { } if status == "healthy" { job.IsHealthy = true - sleepInterval = time.Second * 60 + sleepInterval = time.Minute * 60 } else { if job.IsHealthy { job.IsHealthy = false @@ -905,7 +905,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { return "", err } } - sleepInterval = time.Second * 5 + sleepInterval = time.Minute * 5 } if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil { return "", err diff --git a/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md b/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md index 554ca118a..c7504b56c 100644 --- a/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md +++ b/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md @@ -26,6 +26,4 @@ By studying past resource behavior, recommend application resource optimization The application graph facilitates collaboration between dev and ops by providing a dynamic overview of your services and infrastructure components. -Try out [Conductor Free](https://www.diagrid.io/pricing), ideal for individual developers building and testing Dapr applications on Kubernetes.
- {{< button text="Learn more about Diagrid Conductor" link="https://www.diagrid.io/conductor" >}} diff --git a/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md index 8b52eedb1..9435df194 100644 --- a/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md @@ -273,23 +273,20 @@ func deleteJob(ctx context.Context, in *common.InvocationEvent) (out *common.Con // Handler that handles job events func handleJob(ctx context.Context, job *common.JobEvent) error { - var jobData common.Job - if err := json.Unmarshal(job.Data, &jobData); err != nil { - return fmt.Errorf("failed to unmarshal job: %v", err) - } - decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value) - if err != nil { - return fmt.Errorf("failed to decode job payload: %v", err) - } - var jobPayload JobData - if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil { - return fmt.Errorf("failed to unmarshal payload: %v", err) - } + var jobData common.Job + if err := json.Unmarshal(job.Data, &jobData); err != nil { + return fmt.Errorf("failed to unmarshal job: %v", err) + } - fmt.Println("Starting droid:", jobPayload.Droid) - fmt.Println("Executing maintenance job:", jobPayload.Task) + var jobPayload JobData + if err := json.Unmarshal(job.Data, &jobPayload); err != nil { + return fmt.Errorf("failed to unmarshal payload: %v", err) + } - return nil + fmt.Println("Starting droid:", jobPayload.Droid) + fmt.Println("Executing maintenance job:", jobPayload.Task) + + return nil } ``` diff --git a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md index 56f758b57..da1ec1590 100644 --- a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md @@ -646,25 +646,24 @@ OrderPayload orderInfo = new OrderPayload(itemToPurchase, 15000, ammountToPurcha // Start the workflow Console.WriteLine("Starting workflow {0} purchasing {1} {2}", orderId, ammountToPurchase, itemToPurchase); -await daprClient.StartWorkflowAsync( - workflowComponent: DaprWorkflowComponent, - workflowName: nameof(OrderProcessingWorkflow), +await daprWorkflowClient.ScheduleNewWorkflowAsync( + name: nameof(OrderProcessingWorkflow), input: orderInfo, instanceId: orderId); // Wait for the workflow to start and confirm the input -GetWorkflowResponse state = await daprClient.WaitForWorkflowStartAsync( - instanceId: orderId, - workflowComponent: DaprWorkflowComponent); +WorkflowState state = await daprWorkflowClient.WaitForWorkflowStartAsync( + instanceId: orderId); -Console.WriteLine("Your workflow has started. 
Here is the status of the workflow: {0}", state.RuntimeStatus); +Console.WriteLine($"{nameof(OrderProcessingWorkflow)} (ID = {orderId}) started successfully with {state.ReadInputAs()}"); // Wait for the workflow to complete +using var ctx = new CancellationTokenSource(TimeSpan.FromSeconds(5)); state = await daprClient.WaitForWorkflowCompletionAsync( instanceId: orderId, - workflowComponent: DaprWorkflowComponent); + cancellation: ctx.Token); -Console.WriteLine("Workflow Status: {0}", state.RuntimeStatus); +Console.WriteLine("Workflow Status: {0}", state.ReadCustomStatusAs()); ``` #### `order-processor/Workflows/OrderProcessingWorkflow.cs` @@ -715,7 +714,7 @@ class OrderProcessingWorkflow : Workflow nameof(UpdateInventoryActivity), new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost)); } - catch (TaskFailedException) + catch (WorkflowTaskFailedException) { // Let them know their payment was processed await context.CallActivityAsync( diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md index bda1cae0c..7225fc11f 100644 --- a/daprdocs/content/en/operations/configuration/configuration-overview.md +++ b/daprdocs/content/en/operations/configuration/configuration-overview.md @@ -86,7 +86,7 @@ The `tracing` section under the `Configuration` spec contains the following prop tracing: samplingRate: "1" otel: - endpointAddress: "https://..." + endpointAddress: "otelcollector.observability.svc.cluster.local:4317" zipkin: endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans" ``` @@ -97,10 +97,10 @@ The following table lists the properties for tracing: |--------------|--------|-------------| | `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. | `stdout` | bool | True write more verbose information to the traces -| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to +| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to. This may or may not require the https:// or http:// depending on your OTEL provider. | `otel.isSecure` | bool | Is the connection to the endpoint address encrypted | `otel.protocol` | string | Set to `http` or `grpc` protocol -| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to +| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to. This should include the protocol (http:// or https://) on the endpoint. ##### `samplingRate` diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md index 9172a28fe..b4e8f02e6 100644 --- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md @@ -7,10 +7,123 @@ description: "Configure Scheduler to persist its database to make it resilient t --- The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution. -By default, the Scheduler service database writes this data to a Persistent Volume Claim of 1Gb of size using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). 
This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need additional configuration in some deployments or for a production environment. +By default, the Scheduler service database writes data to a Persistent Volume Claim volume of size `1Gb`, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). +This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need [additional configuration](#storage-class) if a default StorageClass is not available or when running a production environment. + +{{% alert title="Warning" color="warning" %}} +The default storage size for the Scheduler is `1Gi`, which is likely not sufficient for most production deployments. +Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}) when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled, and the [Jobs API]({{< ref jobs_api.md >}}). +You may want to consider reinstalling Dapr with a larger Scheduler storage of at least `16Gi` or more. +For more information, see the [ETCD Storage Disk Size](#etcd-storage-disk-size) section below. +{{% /alert %}} ## Production Setup +### ETCD Storage Disk Size + +The default storage size for the Scheduler is `1Gb`. +This size is likely not sufficient for most production deployments. +When the storage size is exceeded, the Scheduler will log an error similar to the following: + +``` +error running scheduler: etcdserver: mvcc: database space exceeded +``` + +Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and the data payload size of your application jobs. +The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) (with the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature enabled) transparently maps one to one to the usage of your applications. +Workflows (when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled) create a large number of jobs as Actor Reminders, however these jobs are short lived- matching the lifecycle of each workflow execution. +The data payload of jobs created by Workflows is typically empty or small. + +The Scheduler uses Etcd as its storage backend database. +By design, Etcd persists historical transactions and data in form of [Write-Ahead Logs (WAL) and snapshots](https://etcd.io/docs/v3.5/learning/persistent-storage-files/). +This means the actual disk usage of Scheduler will be higher than the current observable database state, often by a number of multiples. + +### Setting the Storage Size on Installation + +If you need to increase an **existing** Scheduler storage size, see the [Increase Scheduler Storage Size](#increase-existing-scheduler-storage-size) section below. 
+To increase the storage size (in this example- `16Gi`) for a **fresh** Dapr instalation, you can use the following command: + +{{< tabs "Dapr CLI" "Helm" >}} + +{{% codetab %}} + +```bash +dapr init -k --set dapr_scheduler.cluster.storageSize=16Gi --set dapr_scheduler.etcdSpaceQuota=16Gi +``` + +{{% /codetab %}} + + +{{% codetab %}} + +```bash +helm upgrade --install dapr dapr/dapr \ +--version={{% dapr-latest-version short="true" %}} \ +--namespace dapr-system \ +--create-namespace \ +--set dapr_scheduler.cluster.storageSize=16Gi \ +--set dapr_scheduler.etcdSpaceQuota=16Gi \ +--wait +``` + +{{% /codetab %}} +{{< /tabs >}} + +#### Increase existing Scheduler Storage Size + +{{% alert title="Warning" color="warning" %}} +Not all storage providers support dynamic volume expansion. +Please see your storage provider documentation to determine if this feature is supported, and what to do if it is not. +{{% /alert %}} + +By default, each Scheduler will create a Persistent Volume and Persistent Volume Claim of size `1Gi` against the [default `standard` storage class](#storage-class) for each Scheduler replica. +These will look similar to the following, where in this example we are running Scheduler in HA mode. + +``` +NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE +dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 Bound pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO standard 3m25s +dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-1 Bound pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO standard 3m25s +dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-2 Bound pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO standard 3m25s +``` + +``` +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-0 standard 4m24s +pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-2 standard 4m24s +pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-1 standard 4m24s +``` + +To expand the storage size of the Scheduler, follow these steps: + +1. First, ensure that the storage class supports volume expansion, and that the `allowVolumeExpansion` field is set to `true` if it is not already. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: standard +provisioner: my.driver +allowVolumeExpansion: true +... +``` + +2. Delete the Scheduler StatefulSet whilst preserving the Bound Persistent Volume Claims. + +```bash +kubectl delete sts -n dapr-system dapr-scheduler-server --cascade=orphan +``` + +3. Increase the size of the Persistent Volume Claims to the desired size by editing the `spec.resources.requests.storage` field. + Again in this case, we are assuming that the Scheduler is running in HA mode with 3 replicas. + +```bash +kubectl edit pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2 +``` + +4. Recreate the Scheduler StatefulSet by [installing Dapr with the desired storage size](#setting-the-storage-size-on-installation). + +### Storage Class + In case your Kubernetes deployment does not have a default storage class or you are configuring a production cluster, defining a storage class is required. 
A persistent volume is backed by a real disk that is provided by the hosted Cloud Provider or Kubernetes infrastructure platform. diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md index 3e7c090cb..78f0e2c75 100644 --- a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md +++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md @@ -138,6 +138,18 @@ services: command: ["./placement", "--port", "50006"] ports: - "50006:50006" + + scheduler: + image: "daprio/dapr" + command: ["./scheduler", "--port", "50007"] + ports: + - "50007:50007" + # WARNING - This is a tmpfs volume, your state will not be persisted across restarts + volumes: + - type: tmpfs + target: /data + tmpfs: + size: "10000" networks: hello-dapr: null @@ -147,6 +159,8 @@ services: To further learn how to run Dapr with Docker Compose, see the [Docker-Compose Sample](https://github.com/dapr/samples/tree/master/hello-docker-compose). +The above example also includes a scheduler definition that uses a non-persistent data store for testing and development purposes. + ## Run on Kubernetes If your deployment target is Kubernetes please use Dapr's first-class integration. Refer to the diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md index 55500ed60..fbba03b5f 100644 --- a/daprdocs/content/en/operations/support/support-release-policy.md +++ b/daprdocs/content/en/operations/support/support-release-policy.md @@ -24,7 +24,7 @@ A supported release means: From the 1.8.0 release onwards three (3) versions of Dapr are supported; the current and previous two (2) versions. Typically these are `MINOR`release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr you may have to do intermediate upgrades to get to a supported version. -There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading. +There will be at least 13 weeks (3 months) between major.minor version releases giving users at least a 9 month rolling window for upgrading from a non-supported version. For more details on the release process read [release cycle and cadence](https://github.com/dapr/community/blob/master/release-process.md) Patch support is for supported versions (current and previous). @@ -45,6 +45,10 @@ The table below shows the versions of Dapr releases that have been tested togeth | Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes | |--------------------|:--------:|:--------|---------|---------|---------|------------| +| September 16th 2024 | 1.14.4
| 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
+| September 13th 2024 | 1.14.3</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
+| September 6th 2024 | 1.14.2</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
+| August 14th 2024 | 1.14.1</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
| August 14th 2024 | 1.14.0</br> | 1.14.0 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
| May 29th 2024 | 1.13.4</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
@@ -139,7 +143,7 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h | 1.11.0 to 1.11.4 | N/A | 1.12.4 | | 1.12.0 to 1.12.4 | N/A | 1.13.5 | | 1.13.0 to 1.13.5 | N/A | 1.14.0 | -| 1.14.0 | N/A | 1.14.0 | +| 1.14.0 to 1.14.2 | N/A | 1.14.2 | ## Upgrade on Hosting platforms diff --git a/daprdocs/content/en/operations/support/support-security-issues.md b/daprdocs/content/en/operations/support/support-security-issues.md index 1ae3fce27..6e7b24a2d 100644 --- a/daprdocs/content/en/operations/support/support-security-issues.md +++ b/daprdocs/content/en/operations/support/support-security-issues.md @@ -52,7 +52,7 @@ The people who should have access to read your security report are listed in [`m code which allows the issue to be reproduced. Explain why you believe this to be a security issue in Dapr. 2. Put that information into an email. Use a descriptive title. -3. Send the email to [Dapr Maintainers (dapr@dapr.io)](mailto:dapr@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE) +3. Send an email to [Security (security@dapr.io)](mailto:security@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE) ## Response diff --git a/daprdocs/content/en/reference/api/jobs_api.md b/daprdocs/content/en/reference/api/jobs_api.md index 3a04ed1a9..454598676 100644 --- a/daprdocs/content/en/reference/api/jobs_api.md +++ b/daprdocs/content/en/reference/api/jobs_api.md @@ -32,7 +32,7 @@ At least one of `schedule` or `dueTime` must be provided, but they can also be p Parameter | Description --------- | ----------- `name` | Name of the job you're scheduling -`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data. +`data` | A JSON serialized value or object. `schedule` | An optional schedule at which the job is to be run. Details of the format are below. `dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601. `repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration.
@@ -43,9 +43,13 @@ Parameter | Description Systemd timer style cron accepts 6 fields: seconds | minutes | hours | day of month | month | day of week -0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat +--- | --- | --- | --- | --- | --- +0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat +##### Example 1 "0 30 * * * *" - every hour on the half hour + +##### Example 2 "0 15 3 * * *" - every day at 03:15 Period string expressions: @@ -63,13 +67,8 @@ Entry | Description | Equivalent ```json { - "job": { - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "\"someData\"" - }, - "dueTime": "30s" - } + "data": "some data", + "dueTime": "30s" } ``` @@ -88,20 +87,14 @@ The following example curl command creates a job, naming the job `jobforjabba` a ```bash $ curl -X POST \ http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \ - -H "Content-Type: application/json" + -H "Content-Type: application/json" \ -d '{ - "job": { - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "Running spice" - }, - "schedule": "@every 1m", - "repeats": 5 - } + "data": "{\"value\":\"Running spice\"}", + "schedule": "@every 1m", + "repeats": 5 }' ``` - ## Get job data Get a job from its name. @@ -137,10 +130,7 @@ $ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Typ "name": "jobforjabba", "schedule": "@every 1m", "repeats": 5, - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "Running spice" - } + "data": 123 } ``` ## Delete a job diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md index addfba98a..413e1893f 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md @@ -63,6 +63,8 @@ spec: value: true - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. value: 5m + - name: escapeHeaders # Optional. + value: false ``` ## Spec metadata fields @@ -99,6 +101,7 @@ spec: | `consumerFetchDefault` | N | Input/Output | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` | | `heartbeatInterval` | N | Input | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to `"3s"`. | `"5s"` | | `sessionTimeout` | N | Input | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to `"10s"`. | `"20s"` | +| `escapeHeaders` | N | Input | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` | #### Note The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka. 
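As a sketch of where that `version` field goes in a Kafka binding component targeting Azure Event Hubs, with the broker address, topic, and component name as placeholder assumptions and authentication metadata omitted for brevity:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs-kafka-binding       # illustrative name
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: "my-namespace.servicebus.windows.net:9093"   # placeholder Event Hubs Kafka endpoint
  - name: topics
    value: "my-topic"                                    # placeholder topic
  # Kafka broker version; must be 1.0.0 when using Azure Event Hubs' Kafka interface
  - name: version
    value: "1.0.0"
```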
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md index cafcee537..e6091d87e 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md @@ -63,6 +63,8 @@ spec: value: true - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. value: 5m + - name: escapeHeaders # Optional. + value: false ``` @@ -112,6 +114,7 @@ spec: | consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` | | heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` | | sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` | +| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` | The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the tls information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component. @@ -485,6 +488,39 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla }' ``` +## Receiving message headers with special characters + +The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors. +HTTP header values must follow specifications, making some characters not allowed. [Learn more about the protocols](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2). +In this case, you can enable `escapeHeaders` configuration setting, which uses URL escaping to encode header values on the consumer side. + +{{% alert title="Note" color="primary" %}} +When using this setting, the received message headers are URL escaped, and you need to URL "un-escape" it to get the original value. +{{% /alert %}} + +Set `escapeHeaders` to `true` to URL escape. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: kafka-pubsub-escape-headers +spec: + type: pubsub.kafka + version: v1 + metadata: + - name: brokers # Required. Kafka broker connection setting + value: "dapr-kafka.myapp.svc.cluster.local:9092" + - name: consumerGroup # Optional. Used for input bindings. + value: "group1" + - name: clientID # Optional. Used as client tracing ID by Kafka brokers. + value: "my-dapr-app-id" + - name: authType # Required. 
+ value: "none" + - name: escapeHeaders + value: "true" +``` + ## Avro Schema Registry serialization/deserialization You can configure pub/sub to publish or consume data encoded using [Avro binary serialization](https://avro.apache.org/docs/), leveraging an [Apache Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/) (for example, [Confluent Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/), [Apicurio](https://www.apicur.io/registry/)). @@ -597,6 +633,7 @@ To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](ht {{< /tabs >}} + ## Related links - [Basic schema for a Dapr component]({{< ref component-schema >}}) - Read [this guide]({{< ref "howto-publish-subscribe.md##step-1-setup-the-pubsub-component" >}}) for instructions on configuring pub/sub components diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md index 831f6aa72..cc357b5bc 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md @@ -83,8 +83,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10` | `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"` | `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10` -| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600` -| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Must be 300s or greater. Default set by server. | `10` +| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600` +| `maxDeliveryCount` | N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | `10` | `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30` | `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5` | `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. 
Default: `300` (5 minutes) | `600` diff --git a/daprdocs/content/en/reference/resource-specs/component-schema.md b/daprdocs/content/en/reference/resource-specs/component-schema.md index 349ff4923..875744c28 100644 --- a/daprdocs/content/en/reference/resource-specs/component-schema.md +++ b/daprdocs/content/en/reference/resource-specs/component-schema.md @@ -8,27 +8,33 @@ description: "The basic spec for a Dapr component" Dapr defines and registers components using a [resource specifications](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes. +Typically, components are restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. The namespace is either explicit in the component manifest itself, or set by the API server, which derives the namespace from context when the manifest is applied to Kubernetes. + +{{% alert title="Note" color="primary" %}} +The exception to this rule is in self-hosted mode, where daprd ingests component resources when the namespace field is omitted. However, the security profile is moot, as daprd has access to the manifest anyway, unlike in Kubernetes. +{{% /alert %}} + ## Format ```yaml apiVersion: dapr.io/v1alpha1 kind: Component auth: - secretstore: [SECRET-STORE-NAME] + secretstore: <SECRET-STORE-NAME> metadata: - name: [COMPONENT-NAME] - namespace: [COMPONENT-NAMESPACE] + name: <COMPONENT-NAME> + namespace: <COMPONENT-NAMESPACE> spec: - type: [COMPONENT-TYPE] + type: <COMPONENT-TYPE> version: v1 - initTimeout: [TIMEOUT-DURATION] - ignoreErrors: [BOOLEAN] + initTimeout: <TIMEOUT-DURATION> + ignoreErrors: <BOOLEAN> metadata: - - name: [METADATA-NAME] - value: [METADATA-VALUE] + - name: <METADATA-NAME> + value: <METADATA-VALUE> scopes: - - [APPID] - - [APPID] + - <APPID> + - <APPID> ``` ## Spec fields diff --git a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md index a85a25315..5e2b8f45d 100644 --- a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md +++ b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md @@ -10,6 +10,10 @@ aliases: The `HTTPEndpoint` is a Dapr resource that is used to enable the invocation of non-Dapr endpoints from a Dapr application. +{{% alert title="Note" color="primary" %}} +Any HTTPEndpoint resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. +{{% /alert %}} + ## Format ```yaml diff --git a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md index 32888adc7..06733d1d8 100644 --- a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md +++ b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md @@ -8,6 +8,10 @@ description: "The basic spec for a Dapr resiliency resource" The `Resiliency` Dapr resource allows you to define and apply fault tolerance resiliency policies. Resiliency specs are applied when the Dapr sidecar starts. +{{% alert title="Note" color="primary" %}} +Any resiliency resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications.
+{{% /alert %}} + ## Format ```yml diff --git a/daprdocs/content/en/reference/resource-specs/subscription-schema.md b/daprdocs/content/en/reference/resource-specs/subscription-schema.md index e1eb8ecc5..c047fd40f 100644 --- a/daprdocs/content/en/reference/resource-specs/subscription-schema.md +++ b/daprdocs/content/en/reference/resource-specs/subscription-schema.md @@ -6,7 +6,13 @@ weight: 2000 description: "The basic spec for a Dapr subscription" --- -The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. This guide demonstrates two subscription API versions: +The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. + +{{% alert title="Note" color="primary" %}} +Any subscription can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. +{{% /alert %}} + +This guide demonstrates two subscription API versions: - `v2alpha` (default spec) - `v1alpha1` (deprecated) diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html index c64a87827..79be56261 100644 --- a/daprdocs/layouts/shortcodes/dapr-latest-version.html +++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html @@ -1 +1 @@ -{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.0{{ else if .Get "cli" }}1.14.0{{ else }}1.14.0{{ end -}} +{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.4{{ else if .Get "cli" }}1.14.1{{ else }}1.14.1{{ end -}}
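To illustrate the namespacing and scoping called out in the notes above, a declarative subscription limited to one namespace and one application might look like the following sketch; the names, topic, and route are illustrative assumptions.

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
  namespace: production       # only loaded by sidecars running in this namespace
spec:
  pubsubname: pubsub
  topic: orders
  routes:
    default: /orders
scopes:
- order-processor             # only this app ID receives events from this subscription
```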