This commit is contained in:
Josh van Leeuwen 2025-09-12 18:06:56 +00:00 committed by GitHub
commit 37d1df003c
7 changed files with 52 additions and 6 deletions


@@ -175,6 +175,20 @@ Similarly, if a state store imposes restrictions on the size of a batch transact
Workflow state can be purged from a state store, including all its history.
Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances.
#### State store record count
The number of records saved as history in the state store per workflow run is determined by the workflow's complexity or "shape": that is, the number of activities, timers, sub-workflows, and so on.
The following table shows a general guide to the number of records that are saved by different workflow tasks.
This number may be larger or smaller depending on retries or concurrency.
| Task type | Number of records saved |
| ----------|-------------------------|
| Start workflow | 5 records |
| Call activity | 3 records |
| Timer | 3 records |
| Raise event | 3 records |
| Start child workflow | 8 records |
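As a back-of-envelope aid, the per-task counts in the table above can be combined into a rough estimate of the history size for one run. The helper below is purely illustrative (not a Dapr API), and the real count may be larger or smaller with retries or concurrency:

```python
# Rough per-task record counts, taken from the table above.
# Retries and concurrency can push the real numbers higher.
RECORDS_PER_TASK = {
    "start_workflow": 5,
    "call_activity": 3,
    "timer": 3,
    "raise_event": 3,
    "start_child_workflow": 8,
}

def estimate_records(activities=0, timers=0, events=0, child_workflows=0):
    """Return an approximate history record count for one workflow run."""
    return (RECORDS_PER_TASK["start_workflow"]
            + activities * RECORDS_PER_TASK["call_activity"]
            + timers * RECORDS_PER_TASK["timer"]
            + events * RECORDS_PER_TASK["raise_event"]
            + child_workflows * RECORDS_PER_TASK["start_child_workflow"])

# A workflow with 10 activities and 2 timers:
print(estimate_records(activities=10, timers=2))  # 41
```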
## Workflow scalability
Because Dapr Workflows are internally implemented using actors, they have the same scalability characteristics as actors.


@@ -116,7 +116,9 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi
## Limitations
- **State stores:** Due to underlying limitations in some database choices, most commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single-operation item limit of only 100 states in a single request.
- **State stores:** You can only use state stores which support workflows, as [described here]({{% ref supported-state-stores %}}).
- Azure Cosmos DB has [payload and workflow complexity limitations]({{% ref "setup-azure-cosmosdb.md#workflow-limitations" %}}).
  - AWS DynamoDB has [workflow complexity limitations]({{% ref "setup-dynamodb.md#workflow-limitations" %}}).
## Watch the demo


@@ -225,6 +225,22 @@ This particular optimization only makes sense if you are saving large objects to
{{% /alert %}}
## Workflow Limitations
{{% alert title="Note" color="primary" %}}
As described below, CosmosDB has limitations that likely make it unsuitable for production environments.
There is currently no path for migrating Workflow data from CosmosDB to another state store, meaning exceeding these limits in production will result in failed workflows with no workaround.
{{% /alert %}}
The more complex a workflow is (number of activities, child workflows, etc.), the more state operations it performs per state store transaction.
All input and output values are saved to the workflow history, and each is part of an operation in these transactions.
CosmosDB has a [maximum document size of 2MB and a maximum transaction size of 100 operations](https://learn.microsoft.com/azure/cosmos-db/concepts-limits#per-request-limits).
Attempting to write to CosmosDB beyond these limits results in a `413` error code.
Because the workflow history must stay within these limits, CosmosDB is not suitable for workflows with large input/output values or for larger, more complex workflows.
A general guide to the number of records that are saved during a workflow execution can be found [here]({{% ref "workflow-architecture.md#state-store-record-count" %}}).
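Since all input and output values land in the workflow history, a simple pre-flight check of a payload's serialized size against the 2MB document limit can help catch oversized values before they reach the state store. This is an illustrative sketch (a hypothetical helper, not a Dapr API):

```python
import json

COSMOS_MAX_DOC_BYTES = 2 * 1024 * 1024  # CosmosDB's 2MB per-document limit
COSMOS_MAX_TX_OPS = 100                 # CosmosDB's per-transaction operation limit

def fits_cosmos_document(payload) -> bool:
    """Check whether a serialized input/output value stays under the
    2MB document limit (illustrative pre-flight check)."""
    return len(json.dumps(payload).encode("utf-8")) < COSMOS_MAX_DOC_BYTES

print(fits_cosmos_document({"order_id": 123}))          # True
print(fits_cosmos_document({"blob": "x" * 3_000_000}))  # False
```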
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})


@@ -158,6 +158,20 @@ $ aws dynamodb get-item \
}
```
## Workflow Limitations
{{% alert title="Note" color="primary" %}}
As described below, DynamoDB has limitations that likely make it unsuitable for production environments.
There is currently no path for migrating Workflow data from DynamoDB to another state store, meaning exceeding these limits in production will result in failed workflows with no workaround.
{{% /alert %}}
The more complex a workflow is (number of activities, child workflows, etc.), the more state operations it performs per state store transaction.
The maximum number of operations that can be performed by DynamoDB in a [single transaction is 100](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html).
This means that DynamoDB can only handle workflows of limited complexity, so it is not suitable for all workflow scenarios.
A general guide to the number of records that are saved during a workflow execution can be found [here]({{% ref "workflow-architecture.md#state-store-record-count" %}}).
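Combining the 100-operation transaction limit with the per-task record counts from the workflow architecture guide gives a back-of-envelope feel for how quickly the ceiling is reached. The helper and the assumption that a run's history lands in one transaction are illustrative only; the actual grouping of operations into transactions depends on the runtime:

```python
DYNAMODB_MAX_TX_OPS = 100  # DynamoDB's single-transaction operation limit

# Per-task record counts from the workflow architecture guide.
START_WORKFLOW_RECORDS = 5
CALL_ACTIVITY_RECORDS = 3

def within_transaction_limit(num_operations: int) -> bool:
    """True if a batch of state operations fits in one DynamoDB transaction."""
    return num_operations <= DYNAMODB_MAX_TX_OPS

# Back-of-envelope: if an entire run's history landed in a single
# transaction (a simplifying assumption), roughly (100 - 5) // 3 = 31
# activity calls would be the ceiling.
max_activities = (DYNAMODB_MAX_TX_OPS - START_WORKFLOW_RECORDS) // CALL_ACTIVITY_RECORDS
print(max_activities)  # 31
```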
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})


@@ -30,7 +30,7 @@
transactions: true
etag: true
ttl: true
workflow: false
workflow: true
- component: Azure Table Storage
link: setup-azure-tablestorage
state: Stable


@@ -52,7 +52,7 @@
transactions: true
etag: true
ttl: true
workflow: false
workflow: true
- component: Hashicorp Consul
link: setup-consul
state: Alpha
@@ -140,7 +140,7 @@
transactions: true
etag: true
ttl: true
workflow: false
workflow: true
- component: PostgreSQL v1
link: setup-postgresql-v1
state: Stable
@@ -195,7 +195,7 @@
transactions: true
etag: true
ttl: true
workflow: false
workflow: true
- component: Zookeeper
link: setup-zookeeper
state: Alpha


@@ -9,7 +9,7 @@
etag: true
ttl: true
query: false
workflow: false
workflow: true
- component: Coherence
link: setup-coherence
state: Alpha