mirror of https://github.com/dapr/docs.git
Adds section on the number of records that are saved by workflow shape
Signed-off-by: joshvanl <me@joshvanl.dev>
This commit is contained in:
parent
3451cd3346
commit
2c9382c905
@ -175,6 +175,20 @@ Similarly, if a state store imposes restrictions on the size of a batch transact
Workflow state can be purged from a state store, including all its history.
Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances.
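The purge operation is also available directly on the Dapr sidecar's workflow HTTP API. The snippet below is a minimal sketch in Go using only the standard library; it assumes the default sidecar HTTP port `3500`, the built-in `dapr` workflow component name, the stable `v1.0` workflow API path, and a hypothetical instance ID.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

// purgeWorkflowInstance removes all state and history records for a single
// workflow instance by calling the sidecar's workflow purge endpoint.
// Assumptions: default sidecar HTTP port 3500, the built-in "dapr" workflow
// component, and the stable v1.0 workflow API path.
func purgeWorkflowInstance(ctx context.Context, instanceID string) error {
	url := fmt.Sprintf("http://localhost:3500/v1.0/workflows/dapr/%s/purge", instanceID)

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
	if err != nil {
		return err
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Any 2xx response means the sidecar accepted the purge request.
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("purge of instance %q failed: %s", instanceID, resp.Status)
	}
	return nil
}

func main() {
	// "order-12345" is a hypothetical instance ID used only for illustration.
	if err := purgeWorkflowInstance(context.Background(), "order-12345"); err != nil {
		fmt.Println(err)
	}
}
```

In everyday use you would call the equivalent purge method in your SDK of choice rather than the raw endpoint.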
#### State store record count

The number of records saved as history in the state store per workflow run is determined by the workflow's complexity, or "shape": the number of activities, timers, child workflows, and so on.
The following table gives a general guide to the number of records saved by different workflow tasks.
This number may be larger or smaller depending on retries or concurrency.

| Task type | Number of records saved |
| ----------|-------------------------|
| Start workflow | 5 records |
| Call activity | 3 records |
| Timer | 3 records |
| Raise event | 3 records |
| Start child workflow | 8 records |
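
As a rough illustration of how the table translates into totals, the hypothetical helper below (written in Go, and not part of any Dapr SDK) simply multiplies out the per-task counts from the guide above. Real totals can differ because of retries and concurrency, as noted earlier.

```go
// estimateHistoryRecords gives a rough, illustrative estimate of how many
// history records a single workflow run writes, based on the guide table
// above. It is a hypothetical helper, not part of any Dapr SDK, and it
// ignores the effect that retries and concurrency have on the real count.
func estimateHistoryRecords(activities, timers, raisedEvents, childWorkflows int) int {
	const startWorkflow = 5 // records written for starting the workflow itself
	return startWorkflow +
		activities*3 + // each activity call
		timers*3 + // each timer
		raisedEvents*3 + // each raised event
		childWorkflows*8 // each child workflow start
}
```

For example, a run with two activities and one timer would be estimated at 5 + (2 × 3) + 3 = 14 records.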
## Workflow scalability
Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors.
@ -230,8 +230,9 @@ This particular optimization only makes sense if you are saving large objects to
The more complex a workflow is (number of activities, child workflows, etc.), the more state operations it performs per state store transaction.
All input and output values are saved to the workflow history, and each is written as an operation within these transactions.
CosmosDB has a [maximum document size of 2MB and a maximum transaction size of 100 operations](https://learn.microsoft.com/azure/cosmos-db/concepts-limits#per-request-limits).
Attempting to write to CosmosDB beyond these limits will result in an error code of `413`.
Because the workflow history must stay within these limits, CosmosDB is not suitable for workflows with large input/output values or for larger, more complex workflows.
A general guide to the number of records that are saved during a workflow execution can be found [here]({{% ref "workflow-architecture.md#state-store-record-count" %}}).
## Related links
@ -163,6 +163,7 @@ $ aws dynamodb get-item \
The more complex a workflow is (number of activities, child workflows, etc.), the more state operations it will perform per state store transaction.
The maximum number of operations that can be performed by DynamoDB in a [single transaction is 100](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transaction-apis.html).
This means that DynamoDB can only handle workflows of limited complexity, and is not suitable for all workflow scenarios.
A general guide to the number of records that are saved during a workflow execution can be found [here]({{% ref "workflow-architecture.md#state-store-record-count" %}}).
## Related links