mirror of https://github.com/dapr/docs.git
commit 8526e2937a
Merge branch 'v1.14' into streaming-subscription-dotnet
@@ -87,6 +87,13 @@ you tackle the challenges that come with building microservices and keeps your c
         <a href="{{< ref contributing >}}" class="stretched-link"></a>
       </div>
     </div>
+    <div class="card">
+      <div class="card-body">
+        <h5 class="card-title"><b>Roadmap</b></h5>
+        <p class="card-text">Learn about Dapr's roadmap and change process.</p>
+        <a href="{{< ref roadmap.md >}}" class="stretched-link"></a>
+      </div>
+    </div>
   </div>
@@ -108,7 +108,7 @@ Deploying and running a Dapr-enabled application into your Kubernetes cluster is

 ### Clusters of physical or virtual machines

-The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses [Hashicorp Consul service]({{< ref setup-nr-consul >}}), also running in HA mode.
+The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide an HA control plane. To provide name resolution using DNS for the applications running in the cluster, Dapr uses multicast DNS by default, but can also optionally support the [Hashicorp Consul service]({{< ref setup-nr-consul >}}).

 <img src="/images/overview-vms-hosting.png" width=1200 alt="Architecture diagram of Dapr control plane and Consul deployed to VMs in high availability mode">
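For reference, a minimal sketch of opting into Consul name resolution through the application's `Configuration` resource (the resource name and the `selfRegister` option shown here are illustrative; see the Consul name resolution docs for the full schema):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig   # illustrative name
spec:
  nameResolution:
    component: "consul"
    configuration:
      selfRegister: true   # let Dapr register the app with the local Consul agent
```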
@@ -104,7 +104,7 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a

 ### State

-Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
+Transactional state stores can be used to store actor state. Regardless of whether you intend to store any state in your actor, you must set the `actorStateStore` property to `true` in the state store component's metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
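For illustration, a minimal sketch of a state store component enabled as the actor state store, assuming Redis (host and password values are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379   # placeholder
  - name: redisPassword
    value: ""               # placeholder
  - name: actorStateStore   # required for actors, even if the actor stores no state itself
    value: "true"
```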

 ### Actor timers and reminders
@@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul
 At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job. For example:

+#### HTTP
+
+When you create a job using Dapr's Jobs API, Dapr automatically assumes there is an endpoint available at
+`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
+events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
+triggered. For example:
+
+*Note: The following example is in Go but applies to any programming language.*
+
+```go
+func main() {
+	...
+	http.HandleFunc("/job/", handleJob)
+	http.HandleFunc("/job/<job-name>", specificJob)
+	...
+}
+
+func specificJob(w http.ResponseWriter, r *http.Request) {
+	// Handle a specific triggered job
+}
+
+func handleJob(w http.ResponseWriter, r *http.Request) {
+	// Handle all other triggered jobs
+}
+```
+
+#### gRPC
+
+When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
+callback function:
+
+*Note: The following example is in Go but applies to any programming language with gRPC support.*
+
+```go
+import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
+...
+func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
+	// Handle the triggered job
+}
+```
+
+This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
+you register the callback server, which will invoke this function when a job is triggered:
+
+```go
+...
+js := &JobService{}
+rtv1.RegisterAppCallbackAlphaServer(server, js)
+```
+
+In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
+through this gRPC method.
+
+#### SDKs
+
+For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr automatically routes the job to the
+event handler you set up during server initialization. For example, in Go, you'd register the event handler like this:
+
+```go
+...
+if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
+	log.Fatalf("failed to register job event handler: %v", err)
+}
+```
+
+Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with
+the triggered job data. Here's an example of handling the triggered job:
+
 ```go
 // ...
@@ -103,11 +172,9 @@ func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
 	if err := json.Unmarshal(job.Data, &jobData); err != nil {
 		// ...
 	}
-	decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
-	// ...

 	var jobPayload api.DBBackup
-	if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
+	if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
 		// ...
 	}
 	fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
@@ -38,11 +38,9 @@ The diagram below is an overview of how Dapr's service invocation works when inv
 <img src="/images/service-invocation-overview-non-dapr-endpoint.png" width=800 alt="Diagram showing the steps of service invocation to non-Dapr endpoints">

 1. Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar.
-2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL.
-3. Dapr forwards the message to Service B.
-4. Service B runs its business logic code.
-5. Service B sends a response to Service A's Dapr sidecar.
-6. Service A receives the response.
+2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL, then forwards the message to Service B.
+3. Service B sends a response to Service A's Dapr sidecar.
+4. Service A receives the response.

 ## Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints
 There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following:
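A minimal sketch of the `HTTPEndpoint` resource option (the resource name and base URL are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
  name: externalserviceb   # illustrative name
spec:
  baseUrl: "https://serviceb.example.com"   # placeholder URL
```

With this resource applied, Service A would invoke Service B through its sidecar at `http://localhost:3500/v1.0/invoke/externalserviceb/method/<method-name>`, assuming the default Dapr HTTP port of 3500.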
@@ -106,8 +106,25 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi

 ## Limitations

-- **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request.
-- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2.
+- **State stores:** Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single-operation item limit of only 100 states in a single request.
+- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, it is recommended to use a maximum of two instances of Dapr per workflow application. This limitation is resolved in Dapr 1.14.x when the scheduler service is enabled.
+
+To enable the scheduler service to work for Dapr Workflows, make sure you're using Dapr 1.14.x or later and assign the following configuration to your app:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Configuration
+metadata:
+  name: schedulerconfig
+spec:
+  tracing:
+    samplingRate: "1"
+  features:
+    - name: SchedulerReminders
+      enabled: true
+```
+
+See more info about [enabling preview features]({{< ref preview-features >}}).

 ## Watch the demo
@@ -749,7 +749,7 @@ def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus):
         ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!")
         next_sleep_interval = 5  # check more frequently when unhealthy

-    yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(seconds=next_sleep_interval))
+    yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval))

     # restart from the beginning with a new JobStatus input
     ctx.continue_as_new(job)
@@ -896,7 +896,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
 	}
 	if status == "healthy" {
 		job.IsHealthy = true
-		sleepInterval = time.Second * 60
+		sleepInterval = time.Minute * 60
 	} else {
 		if job.IsHealthy {
 			job.IsHealthy = false
@@ -905,7 +905,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
 				return "", err
 			}
 		}
-		sleepInterval = time.Second * 5
+		sleepInterval = time.Minute * 5
 	}
 	if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil {
 		return "", err
@@ -277,12 +277,9 @@ func handleJob(ctx context.Context, job *common.JobEvent) error {
 	if err := json.Unmarshal(job.Data, &jobData); err != nil {
 		return fmt.Errorf("failed to unmarshal job: %v", err)
 	}
-	decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
-	if err != nil {
-		return fmt.Errorf("failed to decode job payload: %v", err)
-	}

 	var jobPayload JobData
-	if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
+	if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
 		return fmt.Errorf("failed to unmarshal payload: %v", err)
 	}
@@ -24,7 +24,7 @@ A supported release means:

 From the 1.8.0 release onwards three (3) versions of Dapr are supported; the current and previous two (2) versions. Typically these are `MINOR` release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr you may have to do intermediate upgrades to get to a supported version.

-There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading.
+There will be at least 13 weeks (3 months) between major.minor version releases, giving users at least a 9-month rolling window for upgrading from a non-supported version (three supported versions, each separated by at least 13 weeks, spans roughly 39 weeks). For more details on the release process, read [release cycle and cadence](https://github.com/dapr/community/blob/master/release-process.md).

 Patch support is for supported versions (current and previous).
@@ -83,8 +83,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 | `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
 | `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
 | `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
-| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
-| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Must be 300s or greater. Default set by server. | `10`
+| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600`
+| `maxDeliveryCount` | N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | `10`
 | `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
 | `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
 | `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
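For context, a minimal sketch of how these subscription-tuning fields are set on a pubsub component (the component name and connection string are placeholders; prefer a secret store reference for the connection string):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicebus-pubsub   # illustrative name
spec:
  type: pubsub.azure.servicebus.topics
  version: v1
  metadata:
  - name: connectionString
    value: "Endpoint=sb://..."   # placeholder
  - name: autoDeleteOnIdleInSec
    value: "3600"                # must be 300s or greater
  - name: maxDeliveryCount
    value: "10"
```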