Merge branch 'v1.15' into cb
|
@ -22,7 +22,7 @@ Dapr provides the following building blocks:
|
||||||
|----------------|----------|-------------|
|
|----------------|----------|-------------|
|
||||||
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
|
||||||
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
|
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
|
||||||
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-beta1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
|
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
|
||||||
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
|
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
|
||||||
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
|
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
|
||||||
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
|
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
|
||||||
|
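As an illustration of the service invocation endpoint listed in the table above, the following is a minimal Python sketch that builds and calls the sidecar's `/v1.0/invoke` URL. The app ID `orders`, the method name `neworder`, and the helper names are hypothetical; only the endpoint shape comes from the table.

```python
import json
import urllib.request

def invoke_url(dapr_port: int, app_id: str, method: str) -> str:
    # Service invocation endpoint shape: /v1.0/invoke/<app-id>/method/<method>
    return f"http://localhost:{dapr_port}/v1.0/invoke/{app_id}/method/{method}"

def invoke(app_id: str, method: str, payload: dict, dapr_port: int = 3500) -> dict:
    # POST the payload to the target app through the local Dapr sidecar.
    req = urllib.request.Request(
        invoke_url(dapr_port, app_id, method),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling `invoke("orders", "neworder", {"id": 1})` would POST through a sidecar listening on the default HTTP port 3500.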
|
|
@ -336,14 +336,13 @@ Status | Description
|
||||||
`RETRY` | Message to be retried by Dapr
|
`RETRY` | Message to be retried by Dapr
|
||||||
`DROP` | Warning is logged and message is dropped
|
`DROP` | Warning is logged and message is dropped
|
||||||
|
|
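Based on the statuses above, a bulk subscribe handler replies with one status per delivered entry. The following Python sketch builds such a response; the `entries`/`statuses`/`entryId` field names follow the bulk subscribe request/response format in the pub/sub API reference, and the helper name is hypothetical.

```python
def build_bulk_response(entries, failed_ids=()):
    # One status per incoming entry: SUCCESS for processed entries,
    # RETRY for entries whose processing failed (DROP would log-and-discard).
    return {
        "statuses": [
            {
                "entryId": entry["entryId"],
                "status": "RETRY" if entry["entryId"] in failed_ids else "SUCCESS",
            }
            for entry in entries
        ]
    }
```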
||||||
Please refer [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on response.
|
Refer to [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on the response.
|
||||||
|
|
||||||
### Example
|
### Example
|
||||||
|
|
||||||
Please refer following code samples for how to use Bulk Subscribe:
|
The following code examples demonstrate how to use Bulk Subscribe.
|
||||||
|
|
||||||
{{< tabs "Java" "JavaScript" ".NET" >}}
|
|
||||||
|
|
||||||
|
{{< tabs "Java" "JavaScript" ".NET" "Python" >}}
|
||||||
{{% codetab %}}
|
{{% codetab %}}
|
||||||
|
|
||||||
```java
|
```java
|
||||||
|
@ -471,7 +470,50 @@ public class BulkMessageController : ControllerBase
|
||||||
|
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
|
{{% codetab %}}
|
||||||
|
Currently, you can only bulk subscribe in Python using an HTTP client.
|
||||||
|
|
||||||
|
```python
|
||||||
|
import json
|
||||||
|
from flask import Flask, request, jsonify
|
||||||
|
|
||||||
|
app = Flask(__name__)
|
||||||
|
|
||||||
|
@app.route('/dapr/subscribe', methods=['GET'])
|
||||||
|
def subscribe():
|
||||||
|
    # Define the bulk subscribe configuration
|
||||||
|
    subscriptions = [{
|
||||||
|
        "pubsubname": "pubsub",
|
||||||
|
        "topic": "TOPIC_A",
|
||||||
|
        "route": "/checkout",
|
||||||
|
        "bulkSubscribe": {
|
||||||
|
            "enabled": True,
|
||||||
|
            "maxMessagesCount": 3,
|
||||||
|
            "maxAwaitDurationMs": 40
|
||||||
|
        }
|
||||||
|
    }]
|
||||||
|
    print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions))
|
||||||
|
    return jsonify(subscriptions)
|
||||||
|
|
||||||
|
|
||||||
|
# Define the endpoint to handle incoming messages
|
||||||
|
@app.route('/checkout', methods=['POST'])
|
||||||
|
def checkout():
|
||||||
|
    messages = request.json
|
||||||
|
    print(messages)
|
||||||
|
    for message in messages.get('entries', []):
|
||||||
|
        print(f"Received message: {message}")
|
||||||
|
    return jsonify({'statuses': [{'entryId': entry['entryId'], 'status': 'SUCCESS'} for entry in messages.get('entries', [])]})
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
    app.run(port=5000)
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
{{% /codetab %}}
|
||||||
|
|
||||||
{{< /tabs >}}
|
{{< /tabs >}}
|
||||||
|
|
||||||
## How components handle publishing and subscribing to bulk messages
|
## How components handle publishing and subscribing to bulk messages
|
||||||
|
|
||||||
For event publish/subscribe, two kinds of network transfers are involved.
|
For event publish/subscribe, two kinds of network transfers are involved.
|
||||||
|
|
|
@ -821,7 +821,7 @@ func main() {
|
||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
|
|
||||||
// Start workflow test
|
// Start workflow test
|
||||||
respStart, err := daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{
|
respStart, err := daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
WorkflowName: "TestWorkflow",
|
WorkflowName: "TestWorkflow",
|
||||||
|
@ -835,7 +835,7 @@ func main() {
|
||||||
fmt.Printf("workflow started with id: %v\n", respStart.InstanceID)
|
fmt.Printf("workflow started with id: %v\n", respStart.InstanceID)
|
||||||
|
|
||||||
// Pause workflow test
|
// Pause workflow test
|
||||||
err = daprClient.PauseWorkflowBeta1(ctx, &client.PauseWorkflowRequest{
|
err = daprClient.PauseWorkflow(ctx, &client.PauseWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -844,7 +844,7 @@ func main() {
|
||||||
log.Fatalf("failed to pause workflow: %v", err)
|
log.Fatalf("failed to pause workflow: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
respGet, err := daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
|
respGet, err := daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -859,7 +859,7 @@ func main() {
|
||||||
fmt.Printf("workflow paused\n")
|
fmt.Printf("workflow paused\n")
|
||||||
|
|
||||||
// Resume workflow test
|
// Resume workflow test
|
||||||
err = daprClient.ResumeWorkflowBeta1(ctx, &client.ResumeWorkflowRequest{
|
err = daprClient.ResumeWorkflow(ctx, &client.ResumeWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -868,7 +868,7 @@ func main() {
|
||||||
log.Fatalf("failed to resume workflow: %v", err)
|
log.Fatalf("failed to resume workflow: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
|
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -886,7 +886,7 @@ func main() {
|
||||||
|
|
||||||
// Raise Event Test
|
// Raise Event Test
|
||||||
|
|
||||||
err = daprClient.RaiseEventWorkflowBeta1(ctx, &client.RaiseEventWorkflowRequest{
|
err = daprClient.RaiseEventWorkflow(ctx, &client.RaiseEventWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
EventName: "testEvent",
|
EventName: "testEvent",
|
||||||
|
@ -904,7 +904,7 @@ func main() {
|
||||||
|
|
||||||
fmt.Printf("stage: %d\n", stage)
|
fmt.Printf("stage: %d\n", stage)
|
||||||
|
|
||||||
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
|
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -915,7 +915,7 @@ func main() {
|
||||||
fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus)
|
fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus)
|
||||||
|
|
||||||
// Purge workflow test
|
// Purge workflow test
|
||||||
err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{
|
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -923,7 +923,7 @@ func main() {
|
||||||
log.Fatalf("failed to purge workflow: %v", err)
|
log.Fatalf("failed to purge workflow: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
|
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -936,7 +936,7 @@ func main() {
|
||||||
fmt.Printf("stage: %d\n", stage)
|
fmt.Printf("stage: %d\n", stage)
|
||||||
|
|
||||||
// Terminate workflow test
|
// Terminate workflow test
|
||||||
respStart, err = daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{
|
respStart, err = daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
WorkflowName: "TestWorkflow",
|
WorkflowName: "TestWorkflow",
|
||||||
|
@ -950,7 +950,7 @@ func main() {
|
||||||
|
|
||||||
fmt.Printf("workflow started with id: %s\n", respStart.InstanceID)
|
fmt.Printf("workflow started with id: %s\n", respStart.InstanceID)
|
||||||
|
|
||||||
err = daprClient.TerminateWorkflowBeta1(ctx, &client.TerminateWorkflowRequest{
|
err = daprClient.TerminateWorkflow(ctx, &client.TerminateWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -958,7 +958,7 @@ func main() {
|
||||||
log.Fatalf("failed to terminate workflow: %v", err)
|
log.Fatalf("failed to terminate workflow: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
|
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
@ -971,12 +971,12 @@ func main() {
|
||||||
|
|
||||||
fmt.Println("workflow terminated")
|
fmt.Println("workflow terminated")
|
||||||
|
|
||||||
err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{
|
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
|
||||||
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
|
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
|
||||||
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
|
||||||
WorkflowComponent: workflowComponent,
|
WorkflowComponent: workflowComponent,
|
||||||
})
|
})
|
||||||
|
|
|
@ -324,7 +324,7 @@ Manage your workflow using HTTP calls. The example below plugs in the properties
|
||||||
To start your workflow with an ID `12345678`, run:
|
To start your workflow with an ID `12345678`, run:
|
||||||
|
|
||||||
```http
|
```http
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
|
POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
|
||||||
```
|
```
|
||||||
|
|
||||||
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
|
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
|
||||||
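The naming rule above can be expressed as a small validation helper. This is a sketch; the function name is hypothetical and only the character rule (alphanumerics, underscores, dashes) comes from the note above.

```python
import re

# Workflow instance IDs may contain only alphanumeric characters,
# underscores, and dashes.
VALID_INSTANCE_ID = re.compile(r'[A-Za-z0-9_-]+')

def is_valid_instance_id(instance_id: str) -> bool:
    # fullmatch ensures the whole string, not just a prefix, is valid.
    return bool(VALID_INSTANCE_ID.fullmatch(instance_id))
```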
|
@ -334,7 +334,7 @@ Note that workflow instance IDs can only contain alphanumeric characters, unders
|
||||||
To terminate your workflow with an ID `12345678`, run:
|
To terminate your workflow with an ID `12345678`, run:
|
||||||
|
|
||||||
```http
|
```http
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate
|
POST http://localhost:3500/v1.0/workflows/dapr/12345678/terminate
|
||||||
```
|
```
|
||||||
|
|
||||||
### Raise an event
|
### Raise an event
|
||||||
|
@ -342,7 +342,7 @@ POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate
|
||||||
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.
|
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.
|
||||||
|
|
||||||
```http
|
```http
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
|
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
|
||||||
```
|
```
|
||||||
|
|
||||||
> An `eventName` can be any string.
|
> An `eventName` can be any string.
|
||||||
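As a sketch, the raise-event call above can be issued from Python. The component name, instance ID, and event name below are placeholders, and the helper names are hypothetical; only the URL shape comes from the HTTP example above.

```python
import urllib.request

def raise_event_url(component: str, instance_id: str, event_name: str,
                    dapr_port: int = 3500) -> str:
    # Mirrors: POST /v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
    return (f"http://localhost:{dapr_port}/v1.0/workflows/"
            f"{component}/{instance_id}/raiseEvent/{event_name}")

def raise_event(component: str, instance_id: str, event_name: str) -> None:
    # POST with an empty body delivers the named event to the instance.
    req = urllib.request.Request(
        raise_event_url(component, instance_id, event_name),
        data=b"", method="POST")
    urllib.request.urlopen(req)
```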
|
@ -352,13 +352,13 @@ POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanc
|
||||||
To plan for down-time, wait for inputs, and more, you can pause and then resume a workflow. To pause a workflow with an ID `12345678` until triggered to resume, run:
|
To plan for down-time, wait for inputs, and more, you can pause and then resume a workflow. To pause a workflow with an ID `12345678` until triggered to resume, run:
|
||||||
|
|
||||||
```http
|
```http
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/pause
|
POST http://localhost:3500/v1.0/workflows/dapr/12345678/pause
|
||||||
```
|
```
|
||||||
|
|
||||||
To resume a workflow with an ID `12345678`, run:
|
To resume a workflow with an ID `12345678`, run:
|
||||||
|
|
||||||
```http
|
```http
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/resume
|
POST http://localhost:3500/v1.0/workflows/dapr/12345678/resume
|
||||||
```
|
```
|
||||||
|
|
||||||
### Purge a workflow
|
### Purge a workflow
|
||||||
|
@ -368,7 +368,7 @@ The purge API can be used to permanently delete workflow metadata from the under
|
||||||
Only workflow instances in the COMPLETED, FAILED, or TERMINATED state can be purged. If the workflow is in any other state, calling purge returns an error.
|
Only workflow instances in the COMPLETED, FAILED, or TERMINATED state can be purged. If the workflow is in any other state, calling purge returns an error.
|
||||||
|
|
||||||
```http
|
```http
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/purge
|
POST http://localhost:3500/v1.0/workflows/dapr/12345678/purge
|
||||||
```
|
```
|
||||||
|
|
||||||
### Get information about a workflow
|
### Get information about a workflow
|
||||||
|
@ -376,7 +376,7 @@ POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/purge
|
||||||
To fetch workflow information (outputs and inputs) with an ID `12345678`, run:
|
To fetch workflow information (outputs and inputs) with an ID `12345678`, run:
|
||||||
|
|
||||||
```http
|
```http
|
||||||
GET http://localhost:3500/v1.0-beta1/workflows/dapr/12345678
|
GET http://localhost:3500/v1.0/workflows/dapr/12345678
|
||||||
```
|
```
|
||||||
|
|
||||||
Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).
|
Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).
|
||||||
|
|
|
@ -647,7 +647,7 @@ The Dapr workflow HTTP API supports the asynchronous request-reply pattern out-o
|
||||||
The following `curl` commands illustrate how the workflow APIs support this pattern.
|
The following `curl` commands illustrate how the workflow APIs support this pattern.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
curl -X POST http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}'
|
curl -X POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}'
|
||||||
```
|
```
|
||||||
|
|
||||||
The previous command will result in the following response JSON:
|
The previous command will result in the following response JSON:
|
||||||
|
@ -659,7 +659,7 @@ The previous command will result in the following response JSON:
|
||||||
The HTTP client can then construct the status query URL using the workflow instance ID and poll it repeatedly until it sees the "COMPLETED", "FAILED", or "TERMINATED" status in the payload.
|
The HTTP client can then construct the status query URL using the workflow instance ID and poll it repeatedly until it sees the "COMPLETED", "FAILED", or "TERMINATED" status in the payload.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
curl http://localhost:3500/v1.0-beta1/workflows/dapr/12345678
|
curl http://localhost:3500/v1.0/workflows/dapr/12345678
|
||||||
```
|
```
|
||||||
|
|
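The polling step can be sketched in Python. This is an assumption-laden sketch: the `runtimeStatus` field name, the base URL, and the terminal status values are taken from the surrounding examples, and the helper names are hypothetical.

```python
import json
import time
import urllib.request

# Terminal workflow states; polling stops when one of these is seen.
TERMINAL = {"COMPLETED", "FAILED", "TERMINATED"}

def is_terminal(status: str) -> bool:
    return status in TERMINAL

def wait_for_workflow(instance_id: str,
                      base: str = "http://localhost:3500/v1.0/workflows/dapr",
                      interval: float = 1.0) -> dict:
    # Poll the status endpoint until the workflow reaches a terminal state.
    while True:
        with urllib.request.urlopen(f"{base}/{instance_id}") as resp:
            payload = json.load(resp)
        if is_terminal(payload.get("runtimeStatus", "")):
            return payload
        time.sleep(interval)
```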
||||||
The following is an example of what an in-progress workflow status might look like.
|
The following is an example of what an in-progress workflow status might look like.
|
||||||
|
@ -1365,7 +1365,7 @@ func raiseEvent() {
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf("failed to initialize the client")
|
log.Fatalf("failed to initialize the client")
|
||||||
}
|
}
|
||||||
err = daprClient.RaiseEventWorkflowBeta1(context.Background(), &client.RaiseEventWorkflowRequest{
|
err = daprClient.RaiseEventWorkflow(context.Background(), &client.RaiseEventWorkflowRequest{
|
||||||
InstanceID: "instance_id",
|
InstanceID: "instance_id",
|
||||||
WorkflowComponent: "dapr",
|
WorkflowComponent: "dapr",
|
||||||
EventName: "approval_received",
|
EventName: "approval_received",
|
||||||
|
|
|
@ -18,7 +18,7 @@ Currently, you can experience this actors quickstart using the .NET SDK.
|
||||||
As a quick overview of the .NET actors quickstart:
|
As a quick overview of the .NET actors quickstart:
|
||||||
|
|
||||||
1. Using a `SmartDevice.Service` microservice, you host:
|
1. Using a `SmartDevice.Service` microservice, you host:
|
||||||
- Two `SmartDectectorActor` smoke alarm objects
|
- Two `SmokeDetectorActor` smoke alarm objects
|
||||||
- A `ControllerActor` object that commands and controls the smart devices
|
- A `ControllerActor` object that commands and controls the smart devices
|
||||||
1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
|
1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
|
||||||
1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
|
1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
|
||||||
|
@ -119,7 +119,7 @@ If you have Zipkin configured for Dapr locally on your machine, you can view the
|
||||||
|
|
||||||
When you ran the client app, a few things happened:
|
When you ran the client app, a few things happened:
|
||||||
|
|
||||||
1. Two `SmartDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
|
1. Two `SmokeDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
|
||||||
- `ActorProxy.Create<ISmartDevice>(actorId, actorType)`
|
- `ActorProxy.Create<ISmartDevice>(actorId, actorType)`
|
||||||
- `proxySmartDevice.SetDataAsync(data)`
|
- `proxySmartDevice.SetDataAsync(data)`
|
||||||
|
|
||||||
|
@ -177,7 +177,7 @@ When you ran the client app, a few things happened:
|
||||||
Console.WriteLine($"Device 2 state: {storedDeviceData2}");
|
Console.WriteLine($"Device 2 state: {storedDeviceData2}");
|
||||||
```
|
```
|
||||||
|
|
||||||
1. The [`DetectSmokeAsync` method of `SmartDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).
|
1. The [`DetectSmokeAsync` method of `SmokeDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).
|
||||||
|
|
||||||
```csharp
|
```csharp
|
||||||
public async Task DetectSmokeAsync()
|
public async Task DetectSmokeAsync()
|
||||||
|
@ -216,7 +216,7 @@ When you ran the client app, a few things happened:
|
||||||
await proxySmartDevice1.DetectSmokeAsync();
|
await proxySmartDevice1.DetectSmokeAsync();
|
||||||
```
|
```
|
||||||
|
|
||||||
1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmartDetectorActor 1` and `2` are called.
|
1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmokeDetectorActor 1` and `2` are called.
|
||||||
|
|
||||||
```csharp
|
```csharp
|
||||||
storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
|
storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
|
||||||
|
@ -234,9 +234,9 @@ When you ran the client app, a few things happened:
|
||||||
|
|
||||||
For full context of the sample, take a look at the following code:
|
For full context of the sample, take a look at the following code:
|
||||||
|
|
||||||
- [`SmartDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
|
- [`SmokeDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
|
||||||
- [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices
|
- [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices
|
||||||
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmartDetectorActor`
|
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmokeDetectorActor`
|
||||||
- [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor`
|
- [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor`
|
||||||
|
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
|
@ -127,7 +127,7 @@ See this list of values corresponding to the different Dapr APIs:
|
||||||
| [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) |
|
| [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) |
|
||||||
| [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)<br/>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br/>`unlock` (`v1alpha1`) |
|
| [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)<br/>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br/>`unlock` (`v1alpha1`) |
|
||||||
| [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
|
| [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
|
||||||
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0-alpha1`) |`workflows` (`v1alpha1`) |
|
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0`) |`workflows` (`v1`) |
|
||||||
| [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a |
|
| [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a |
|
||||||
| Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) |
|
| Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) |
|
||||||
|
|
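The HTTP-to-gRPC version naming in the table above follows a simple pattern (`v1.0` becomes `v1`, `v1.0-alpha1` becomes `v1alpha1`). A hypothetical converter illustrating that pattern:

```python
def http_to_grpc_version(v: str) -> str:
    # "v1.0" -> "v1"; "v1.0-alpha1" -> "v1alpha1"
    base, _, suffix = v.partition("-")   # split off "alpha1"/"beta1" if present
    major = base.split(".")[0]           # "v1.0" -> "v1"
    return major + suffix
```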
||||||
|
|
|
@ -16,6 +16,7 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
|
||||||
- [AWS CLI](https://aws.amazon.com/cli/)
|
- [AWS CLI](https://aws.amazon.com/cli/)
|
||||||
- [eksctl](https://eksctl.io/)
|
- [eksctl](https://eksctl.io/)
|
||||||
- [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
|
- [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
|
||||||
|
- [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/)
|
||||||
|
|
||||||
## Deploy an EKS cluster
|
## Deploy an EKS cluster
|
||||||
|
|
||||||
|
@ -25,20 +26,57 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
|
||||||
aws configure
|
aws configure
|
||||||
```
|
```
|
||||||
|
|
||||||
1. Create an EKS cluster. To use a specific version of Kubernetes, use `--version` (1.13.x or newer version required).
|
1. Create a new file called `cluster-config.yaml` and add the content below to it, replacing `[your_cluster_name]`, `[your_cluster_region]`, and `[your_k8s_version]` with the appropriate values:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
apiVersion: eksctl.io/v1alpha5
|
||||||
|
kind: ClusterConfig
|
||||||
|
|
||||||
|
metadata:
|
||||||
|
  name: [your_cluster_name]
|
||||||
|
  region: [your_cluster_region]
|
||||||
|
  version: [your_k8s_version]
|
||||||
|
  tags:
|
||||||
|
    karpenter.sh/discovery: [your_cluster_name]
|
||||||
|
|
||||||
|
iam:
|
||||||
|
  withOIDC: true
|
||||||
|
|
||||||
|
managedNodeGroups:
|
||||||
|
  - name: mng-od-4vcpu-8gb
|
||||||
|
    desiredCapacity: 2
|
||||||
|
    minSize: 1
|
||||||
|
    maxSize: 5
|
||||||
|
    instanceType: c5.xlarge
|
||||||
|
    privateNetworking: true
|
||||||
|
|
||||||
|
addons:
|
||||||
|
  - name: vpc-cni
|
||||||
|
    attachPolicyARNs:
|
||||||
|
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
|
||||||
|
  - name: coredns
|
||||||
|
    version: latest
|
||||||
|
  - name: kube-proxy
|
||||||
|
    version: latest
|
||||||
|
  - name: aws-ebs-csi-driver
|
||||||
|
    wellKnownPolicies:
|
||||||
|
      ebsCSIController: true
|
||||||
|
```
|
||||||
|
|
||||||
|
1. Create the cluster by running the following command:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
eksctl create cluster --name [your_eks_cluster_name] --region [your_aws_region] --version [kubernetes_version] --vpc-private-subnets [subnet_list_seprated_by_comma] --without-nodegroup
|
eksctl create cluster -f cluster.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
Change the values for `vpc-private-subnets` to meet your requirements. You can also add additional IDs. You must specify at least two subnet IDs. If you'd rather specify public subnets, you can change `--vpc-private-subnets` to `--vpc-public-subnets`.
|
1. Verify the kubectl context:
|
||||||
|
|
||||||
1. Verify kubectl context:
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl config current-context
|
kubectl config current-context
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Add Dapr requirements for sidecar access and default storage class
|
||||||
|
|
||||||
1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000.
|
1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000.
```bash
@ -49,11 +87,37 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
--source-group [your_security_group]
```
2. Add a default storage class if you don't have one:

```bash
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
## Install Dapr

Install Dapr on your cluster by running:

```bash
dapr init -k
```

You should see the following response:

```bash
⌛ Making the jump to hyperspace...
ℹ️ Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced

ℹ️ Container images will be pulled from Docker Hub
✅ Deploying the Dapr control plane with latest version to your cluster...
✅ Deploying the Dapr dashboard with latest version to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started
```
## Troubleshooting

### Access permissions

If you face any access permission issues, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile. More information is available [here](https://repost.aws/knowledge-center/eks-api-server-unauthorized-error):

```bash
aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]
```
@ -6,7 +6,7 @@ weight: 60
description: See and measure the message calls to components and between networked services
---

[The following overview video and demo](https://www.youtube.com/watch?v=0y7ne6teHT4&t=12652s) demonstrates how observability in Dapr works.

<iframe width="560" height="315" src="https://www.youtube.com/embed/0y7ne6teHT4?si=iURnLk57t2zN-7zP&start=12653" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
@ -49,6 +49,15 @@ The following retry options are configurable:
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.<br/>Valid values are of the form `200ms`, `15s`, `2m`, etc.<br/>Defaults to `5s`. |
| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow.<br/>Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc. |
| `maxRetries` | The maximum number of retries to attempt.<br/>`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).<br/>Defaults to `-1`. |
| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: `"429,501-503"`<br/>Default: empty string `""` or field is not set. Retries on all HTTP errors. |
| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: `"1,8-11,13,14"`<br/>Default: empty string `""` or field is not set. Retries on all gRPC errors. |

{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
The field values should follow the format specified in the field description or in Example 2 below.
An incorrectly formatted value produces an error log ("Could not read resiliency policy"), and the `daprd` startup sequence proceeds.
{{% /alert %}}
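The code-range matching described above can be sketched in Python. This is an illustrative sketch of the semantics only, not Dapr's implementation; the function names are hypothetical:

```python
def parse_code_ranges(spec: str) -> set[int]:
    """Parse a comma-separated list of codes and ranges, e.g. "429,500-599"."""
    codes: set[int] = set()
    for part in filter(None, spec.split(",")):
        if "-" in part:
            start, end = part.split("-", 1)
            codes.update(range(int(start), int(end) + 1))
        else:
            codes.add(int(part))
    return codes


def should_retry(status: int, spec: str = "") -> bool:
    """An empty spec retries on all errors; otherwise only the listed codes."""
    codes = parse_code_ranges(spec)
    return status in codes if codes else True


print(should_retry(503, "429,500-599"))  # True: 503 falls in the 500-599 range
print(should_retry(404, "429,500-599"))  # False: 404 is not listed
```

Note how an empty specification retries on all errors, matching the documented default.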

The exponential back-off window uses the following formula:
@ -77,7 +86,20 @@ spec:
maxRetries: -1 # Retry indefinitely
```

Example 2:

```yaml
spec:
  policies:
    retries:
      retry5xxOnly:
        policy: constant
        duration: 5s
        maxRetries: 3
        matches:
          httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
          gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
```

## Circuit Breakers
@ -68,6 +68,7 @@ After announcing a future breaking change, the change will happen in 2 releases
| Hazelcast PubSub Component | 1.9.0 | 1.11.0 |
| Twitter Binding Component | 1.10.0 | 1.11.0 |
| NATS Streaming PubSub Component | 1.11.0 | 1.13.0 |
| Workflows API Alpha1 `/v1.0-alpha1/workflows` being deprecated in favor of Workflow Client | 1.15.0 | 1.17.0 |

## Related links
@ -302,7 +302,7 @@ other | warning is logged and all messages to be retried
## Message envelope

Dapr pub/sub adheres to [version 1.0 of CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/spec.md).
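As a rough illustration of what that envelope looks like, the following is a sketch of a CloudEvents 1.0-style message as Dapr might wrap it; the specific field values (topic name, pubsub name, source, ID) are assumptions for the example:

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "checkout-service",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "pubsubname": "pubsub",
  "topic": "orders",
  "data": {
    "orderId": 100
  }
}
```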

## Related links
@ -17,7 +17,7 @@ Dapr provides users with the ability to interact with workflows and comes with a
|
||||||
Start a workflow instance with the given name and optionally, an instance ID.
|
Start a workflow instance with the given name and optionally, an instance ID.
|
||||||
|
|
||||||
```
|
```
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<workflowName>/start[?instanceID=<instanceID>]
|
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<workflowName>/start[?instanceID=<instanceID>]
|
||||||
```
|
```
|
||||||
|
|
||||||
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
|
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
|
||||||
|
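As an illustrative sketch (not part of Dapr's API), the documented instance ID constraint can be checked client-side before calling the start endpoint; the helper name is hypothetical:

```python
import re

# Workflow instance IDs may contain only alphanumerics, underscores, and dashes.
_INSTANCE_ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]+$")


def is_valid_instance_id(instance_id: str) -> bool:
    """Return True if the ID satisfies the documented character constraint."""
    return bool(_INSTANCE_ID_PATTERN.match(instance_id))


print(is_valid_instance_id("order-123_retry"))  # True
print(is_valid_instance_id("order#123"))        # False
```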
@ -57,7 +57,7 @@ The API call will provide a response similar to this:
|
||||||
Terminate a running workflow instance with the given name and instance ID.
|
Terminate a running workflow instance with the given name and instance ID.
|
||||||
|
|
||||||
```
|
```
|
||||||
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/terminate
|
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/terminate
|
||||||
```
|
```
|
||||||
|
|
||||||
{{% alert title="Note" color="primary" %}}
|
{{% alert title="Note" color="primary" %}}
|
||||||
|
@ -91,7 +91,7 @@ This API does not return any content.
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.

```
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
```

{{% alert title="Note" color="primary" %}}
@ -124,7 +124,7 @@ None.
Pause a running workflow instance.

```
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/pause
```

### URL parameters
@ -151,7 +151,7 @@ None.
Resume a paused workflow instance.

```
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/resume
```

### URL parameters
@ -178,7 +178,7 @@ None.
Purge the workflow state from your state store with the workflow's instance ID.

```
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/purge
```

{{% alert title="Note" color="primary" %}}
@ -209,7 +209,7 @@ None.
Get information about a given workflow instance.

```
GET http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>
```

### URL parameters
@ -36,6 +36,8 @@ spec:
    value: "namespace"
  - name: enableEntityManagement
    value: "false"
  - name: enableInOrderMessageDelivery
    value: "false"
  # The following four properties are needed only if enableEntityManagement is set to true
  - name: resourceGroupName
    value: "test-rg"
@ -71,7 +73,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `eventHub` | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Microsoft Entra ID authentication or if the connection string doesn't contain an `EntityPath` value | `mytopic` |
| `connectionString` | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | Input/Output | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"`
| `enableInOrderMessageDelivery` | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `resourceGroupName` | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
@ -468,7 +468,7 @@ Apache Kafka supports the following bulk metadata options:
When invoking the Kafka pub/sub, it's possible to provide an optional partition key by using the `metadata` query param in the request URL.

The param name can either be `partitionKey` or `__key`.

Example:
@ -484,7 +484,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti
### Message headers

All other metadata key/value pairs (that are not `partitionKey` or `__key`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.

```shell
curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1 \
@ -495,7 +495,51 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla
        }
}'
```
### Kafka Pubsub special message headers received on consumer side

When consuming messages, special message metadata is automatically passed as headers. These are:

- `__key`: the message key if available
- `__topic`: the topic for the message
- `__partition`: the partition number for the message
- `__offset`: the offset of the message in the partition
- `__timestamp`: the timestamp for the message

You can access them within the consumer endpoint as follows:

{{< tabs "Python (FastAPI)" >}}

{{% codetab %}}

```python
from typing import Annotated

from fastapi import APIRouter, Body, FastAPI, Header, Response, status

app = FastAPI()

router = APIRouter()


@router.get('/dapr/subscribe')
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'my-topic',
                      'route': 'my_topic_subscriber',
                      }]
    return subscriptions


@router.post('/my_topic_subscriber')
def my_topic_subscriber(
        key: Annotated[str, Header(alias="__key")],
        offset: Annotated[int, Header(alias="__offset")],
        event_data=Body()):
    # The special Kafka headers are bound to parameters via their header aliases.
    print(f"key={key} - offset={offset} - data={event_data}", flush=True)
    return Response(status_code=status.HTTP_200_OK)


app.include_router(router)
```

{{% /codetab %}}

{{< /tabs >}}
## Receiving message headers with special characters

The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors.
@ -33,6 +33,8 @@ spec:
    value: "channel1"
  - name: enableEntityManagement
    value: "false"
  - name: enableInOrderMessageDelivery
    value: "false"
  # The following four properties are needed only if enableEntityManagement is set to true
  - name: resourceGroupName
    value: "test-rg"
@ -65,11 +67,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"`
| `enableInOrderMessageDelivery` | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `storageAccountName` | Y | Storage account name to use for the checkpoint store. | `"myeventhubstorage"`
| `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`
| `storageContainerName` | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
| `resourceGroupName` | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`