Merge branch 'v1.13' into issue_3868

This commit is contained in:
Hannah Hunter 2024-02-05 12:43:48 -05:00 committed by GitHub
commit 664cff1b16
11 changed files with 189 additions and 45 deletions

View File

@ -97,9 +97,7 @@ Child workflows have many benefits:
The return value of a child workflow is its output. If a child workflow fails with an exception, then that exception is surfaced to the parent workflow, just like it is when an activity task fails with an exception. Child workflows also support automatic retry policies.
{{% alert title="Note" color="primary" %}}
Because child workflows are independent of their parents, terminating a parent workflow does not affect any child workflows. You must terminate each child workflow independently using its instance ID.
{{% /alert %}}
Terminating a parent workflow terminates all of the child workflows created by the workflow instance. You can disable this by setting the query parameter `non_recursive` to `true` while sending the terminate request to the parent workflow. See [the terminate workflow api]({{< ref "workflow_api.md#terminate-workflow-request" >}}) for more information.
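For example, a non-recursive terminate request against the workflow API might look like the following (the instance ID `12345678` is illustrative):
```
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate?non_recursive=true
```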
## Durable timers

View File

@ -41,7 +41,7 @@ With Dapr Workflow, you can write activities and then orchestrate those activiti
### Child workflows
In addition to activities, you can write workflows to schedule other workflows as child workflows. A child workflow is independent of the parent workflow that started it and supports automatic retry policies.
In addition to activities, you can write workflows to schedule other workflows as child workflows. A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it, except that terminating the parent workflow terminates all of the child workflows created by it. Child workflows also support automatic retry policies.
[Learn more about child workflows.]({{< ref "workflow-features-concepts.md#child-workflows" >}})

View File

@ -106,21 +106,27 @@ The `metrics` section under the `Configuration` spec contains the following prop
```yml
metrics:
enabled: true
rules: []
http:
increasedCardinality: true
```
The following table lists the properties for metrics:
| Property | Type | Description |
|--------------|--------|-------------|
| `enabled` | boolean | Whether metrics should be enabled. |
| `rules` | boolean | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. |
| `enabled` | boolean | When set to true, the default, enables metrics collection and the metrics endpoint. |
| `rules` | array | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. |
| `http.increasedCardinality` | boolean | When set to true, in the Dapr HTTP server each request path causes the creation of a new "bucket" of metrics. This can cause issues, including excessive memory consumption, when there are many different requested endpoints (such as when interacting with RESTful APIs).<br>In Dapr 1.13 the default value is `true` (to preserve the behavior of Dapr <= 1.12), but it will change to `false` in Dapr 1.14. |
To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}), you can set regular expressions for every metric exposed by the Dapr sidecar. For example:
To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}) with the HTTP server, you should set the `metrics.http.increasedCardinality` property to `false`.
Using rules, you can set regular expressions for every metric exposed by the Dapr sidecar. For example:
```yml
metric:
enabled: true
rules:
metrics:
enabled: true
rules:
- name: dapr_runtime_service_invocation_req_sent_total
labels:
- name: method

View File

@ -3,22 +3,24 @@ type: docs
title: "Configure metrics"
linkTitle: "Overview"
weight: 4000
description: "Enable or disable Dapr metrics "
description: "Enable or disable Dapr metrics"
---
By default, each Dapr system process emits Go runtime/process metrics and has its own [Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md).
## Prometheus endpoint
The Dapr sidecars exposes a [Prometheus](https://prometheus.io/) metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving.
The Dapr sidecar exposes a [Prometheus](https://prometheus.io/)-compatible metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving.
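For example, a minimal Prometheus scrape configuration for a sidecar running locally on the default metrics port (the job name and target are illustrative):
```yaml
scrape_configs:
  - job_name: "dapr-sidecar"        # illustrative job name
    static_configs:
      - targets: ["localhost:9090"] # daprd metrics endpoint, default port 9090
```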
## Configuring metrics using the CLI
The metrics application endpoint is enabled by default. You can disable it by passing the command line argument `--enable-metrics=false`.
The default metrics port is `9090`. You can override this by passing the command line argument `--metrics-port` to Daprd.
The default metrics port is `9090`. You can override this by passing the command line argument `--metrics-port` to daprd.
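As a sketch, both flags when launching daprd in self-hosted mode (the app ID is illustrative):
```
daprd --app-id myapp --metrics-port 9091
daprd --app-id myapp --enable-metrics=false
```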
## Configuring metrics in Kubernetes
You can also enable/disable the metrics for a specific application by setting the `dapr.io/enable-metrics: "false"` annotation on your application deployment. With the metrics exporter disabled, `daprd` does not open the metrics listening port.
You can also enable/disable the metrics for a specific application by setting the `dapr.io/enable-metrics: "false"` annotation on your application deployment. With the metrics exporter disabled, daprd does not open the metrics listening port.
The following Kubernetes deployment example shows how metrics are explicitly enabled with the port specified as "9090".
@ -54,10 +56,8 @@ spec:
```
## Configuring metrics using application configuration
You can also enable metrics via application configuration. To disable the metrics collection in the Dapr sidecars running in a specific namespace:
- Use the `metrics` spec configuration.
- Set `enabled: false` to disable the metrics in the Dapr runtime.
You can also enable metrics via application configuration. To disable the metrics collection in the Dapr sidecars by default, set `spec.metrics.enabled` to `false`.
```yaml
apiVersion: dapr.io/v1alpha1
@ -66,17 +66,25 @@ metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
metrics:
enabled: false
```
## High cardinality metrics
Depending on your use case, some metrics emitted by Dapr might contain values that have a high cardinality. This might cause increased memory usage for the Dapr process/container and incur expensive egress costs in certain cloud environments. To mitigate this issue, you can set regular expressions for every metric exposed by the Dapr sidecar. [See a list of all Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md).
When invoking Dapr using HTTP, the legacy behavior (and current default as of Dapr 1.13) is to create a separate "bucket" for each requested method. When working with RESTful APIs, this can cause very high cardinality, with potential negative impact on memory usage and CPU.
The following example shows how to apply a regular expression for the label `method` in the metric `dapr_runtime_service_invocation_req_sent_total`:
Dapr 1.13 introduces a new option for the Dapr Configuration resource `spec.metrics.http.increasedCardinality`: when set to `false`, it reports metrics for the HTTP server for each "abstract" method (for example, requesting from a state store) instead of creating a "bucket" for each concrete request path.
The default value of `spec.metrics.http.increasedCardinality` is `true` in Dapr 1.13, to maintain the same behavior as Dapr 1.12 and older. However, the value will change to `false` (low-cardinality metrics by default) in Dapr 1.14.
Setting `spec.metrics.http.increasedCardinality` to `false` is **recommended** for all Dapr users, to reduce resource consumption. The pre-1.13 behavior, which is used when the option is `true`, is considered legacy and is only maintained for users who have special requirements around backwards compatibility.
## Transform metrics with regular expressions
You can set regular expressions for every metric exposed by the Dapr sidecar to "transform" their values. [See a list of all Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md).
The name of the rule must match the name of the metric that is transformed. The following example shows how to apply a regular expression for the label `method` in the metric `dapr_runtime_service_invocation_req_sent_total`:
```yaml
apiVersion: dapr.io/v1alpha1
@ -84,9 +92,10 @@ kind: Configuration
metadata:
name: daprConfig
spec:
metric:
enabled: true
rules:
metrics:
enabled: true
increasedCardinality: true
rules:
- name: dapr_runtime_service_invocation_req_sent_total
labels:
- name: method
@ -94,14 +103,9 @@ spec:
"orders/": "orders/.+"
```
When this configuration is applied, a recorded metric with the `method` label of `orders/a746dhsk293972nz` will be replaced with `orders/`.
### Watch the demo
Watch [this video to walk through handling high cardinality metrics](https://youtu.be/pOT8teL6j_k?t=1524):
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/pOT8teL6j_k?start=1524" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
When this configuration is applied, a recorded metric with the `method` label of `orders/a746dhsk293972nz` is replaced with `orders/`.
Using regular expressions to reduce metrics cardinality is considered legacy. We encourage all users to set `spec.metrics.http.increasedCardinality` to `false` instead, which is simpler to configure and offers better performance.
## References

View File

@ -57,15 +57,23 @@ The API call will provide a response similar to this:
Terminate a running workflow instance with the given name and instance ID.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/terminate
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/terminate[?non_recursive=false]
```
{{% alert title="Note" color="primary" %}}
Terminating a workflow terminates all of the child workflows created by the workflow instance. You can disable this by setting the query parameter `non_recursive` to `true`.
Terminating a workflow has no effect on any in-flight activity executions that were started by the terminated instance.
{{% /alert %}}
### URL parameters
Parameter | Description
--------- | -----------
`workflowComponentName` | Use `dapr` for Dapr Workflows
`instanceId` | Unique value created for each run of a specific workflow
`non_recursive` | (Optional) Boolean that, when set to `true`, prevents Dapr from recursively terminating the child workflows created by the workflow instance. Default value is `false`.
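For example, a request that terminates only the parent instance and leaves its child workflows running (the instance ID is illustrative):
```
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate?non_recursive=true
```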
### HTTP response codes
@ -171,15 +179,21 @@ None.
Purge the workflow state from your state store with the workflow's instance ID.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/purge
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/purge[?non_recursive=false]
```
{{% alert title="Note" color="primary" %}}
Purging a workflow purges all of the child workflows created by the workflow instance. You can disable this by setting the query parameter `non_recursive` to `true`.
{{% /alert %}}
### URL parameters
Parameter | Description
--------- | -----------
`workflowComponentName` | Use `dapr` for Dapr Workflows
`instanceId` | Unique value created for each run of a specific workflow
`non_recursive` | (Optional) Boolean that, when set to `true`, prevents Dapr from recursively purging the child workflows created by the workflow instance. Default value is `false`.
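Similarly, a purge request that skips child workflows (the instance ID is illustrative):
```
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/purge?non_recursive=true
```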
### HTTP response codes

View File

@ -3,7 +3,7 @@ type: docs
title: "Name resolution provider component specs"
linkTitle: "Name resolution"
weight: 8000
description: The supported name resolution providers that interface with Dapr service invocation
description: The supported name resolution providers to enable Dapr service invocation
no_list: true
---

View File

@ -0,0 +1,54 @@
---
type: docs
title: "SQLite name resolution provider"
linkTitle: "SQLite"
description: Detailed information on the SQLite name resolution component
---
As an alternative to mDNS, the SQLite name resolution component can be used for running Dapr on single-node environments and for local development scenarios. Dapr sidecars that are part of the cluster store their information in a SQLite database on the local machine.
{{% alert title="Note" color="primary" %}}
This component is optimized to be used in scenarios where all Dapr instances are running on the same physical machine, where the database is accessed through the same locally mounted disk.
Using the SQLite name resolver with a database file accessed over the network (including via SMB/NFS) can lead to issues such as data corruption, and is **not supported**.
{{% /alert %}}
## Configuration format
Name resolution is configured via the [Dapr Configuration]({{< ref configuration-overview.md >}}).
Within the Configuration YAML, set the `spec.nameResolution.component` property to `"sqlite"`, then pass configuration options in the `spec.nameResolution.configuration` dictionary.
The following is a basic example of a Configuration resource:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "sqlite"
version: "v1"
configuration:
connectionString: "/home/user/.dapr/nr.db"
```
## Spec configuration fields
When using the SQLite name resolver component, the `spec.nameResolution.configuration` dictionary contains these options:
| Field | Required | Type | Details | Examples |
|--------------|:--------:|-----:|:---------|----------|
| `connectionString` | Y | `string` | The connection string for the SQLite database. Normally, this is the path to a file on disk, relative to the current working directory, or absolute. | `"nr.db"` (relative to the working directory), `"/home/user/.dapr/nr.db"` |
| `updateInterval` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`) | Interval for active Dapr sidecars to update their status in the database, which is used as a healthcheck.<br>Smaller intervals reduce the likelihood of stale data being returned if an application goes offline, but increase the load on the database.<br>Must be at least 1s greater than `timeout`. Values with fractions of seconds are truncated (for example, `1500ms` becomes `1s`). Default: `5s` | `"2s"` |
| `timeout` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`).<br>Must be at least 1s. | Timeout for operations on the database. Integers are interpreted as number of seconds. Defaults to `1s` | `"2s"`, `2` |
| `tableName` | N | `string` | Name of the table where the data is stored. If the table does not exist, the table is created by Dapr. Defaults to `hosts`. | `"hosts"` |
| `metadataTableName` | N | `string` | Name of the table used by Dapr to store metadata for the component. If the table does not exist, the table is created by Dapr. Defaults to `metadata`. | `"metadata"` |
| `cleanupInterval` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`) | Interval to remove stale records from the database. Default: `1h` (1 hour) | `"10m"` |
| `busyTimeout` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`) | Interval to wait in case the SQLite database is currently busy serving another request, before returning a "database busy" error. This is an advanced setting.<br>`busyTimeout` controls how locking works in SQLite. With SQLite, writes are exclusive, so the database is locked every time any app is writing. If another app tries to write, it waits up to `busyTimeout` before returning the "database busy" error. However, the `timeout` setting controls the timeout for the entire operation. For example, if a query "hangs" after the database has acquired the lock (that is, after the busy timeout has cleared), `timeout` comes into effect. Default: `800ms` (800 milliseconds) | `"100ms"` |
| `disableWAL` | N | `bool` | If set to true, disables Write-Ahead Logging for journaling of the SQLite database. This is for advanced scenarios only | `true`, `false` |
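As a sketch, a Configuration that sets some of the optional fields above (the values are illustrative, not recommendations):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "sqlite"
    version: "v1"
    configuration:
      connectionString: "/home/user/.dapr/nr.db"
      updateInterval: "2s"    # must be at least 1s greater than timeout
      timeout: "1s"
      cleanupInterval: "10m"  # remove stale records every 10 minutes
```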
## Related links
- [Service invocation building block]({{< ref service-invocation >}})

View File

@ -73,7 +73,7 @@ spec:
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored - a combination of the consumer group and a random unique identifier will be set for the `consumerID` instead. | `"channel1"`
| clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to `"namespace.appID"` for Kubernetes mode or `"appID"` for Self-Hosted mode. | `"my-namespace.my-dapr-app"`, `"my-dapr-app"`
| authRequired | N | *Deprecated* Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"`
| authType | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, or `oidc` | `"password"`, `"none"`
| authType | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, `oidc`, or `awsiam` | `"password"`, `"none"`
| saslUsername | N | The SASL username used for authentication. Only required if `authType` is set to `"password"`. | `"adminuser"`
| saslPassword | N | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authType` is set to `"password"`. | `""`, `"KeFg23!"`
| saslMechanism | N | The SASL authentication mechanism you wish to use. Only required if `authType` is set to `"password"`. Defaults to `PLAINTEXT` | `"SHA-512"`, `"SHA-256"`, `"PLAINTEXT"`
@ -92,6 +92,12 @@ spec:
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider. Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` |
| awsRegion | N | The AWS region where the Kafka cluster is deployed. Required when `authType` is set to `awsiam` | `"us-west-1"` |
| awsAccessKey | N | AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"`
| schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` |
| schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` |
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` |
@ -107,7 +113,17 @@ The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Ka
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, and OIDC/OAuth2. With the added authentication methods, the `authRequired` field has
been deprecated since the v1.6 release; the `authType` field should be used instead. If `authRequired` is set to `true`, Dapr will attempt to configure `authType` correctly
based on the value of `saslPassword`. There are five valid values for `authType`: `none`, `password`, `certificate`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
based on the value of `saslPassword`. The valid values for `authType` are:
- `none`
- `password`
- `certificate`
- `mtls`
- `oidc`
- `awsiam`
{{% alert title="Note" color="primary" %}}
`authType` is _authentication_ only. _Authorization_ is still configured within Kafka, except for `awsiam`, which can also drive authorization decisions configured in AWS IAM.
{{% /alert %}}
#### None
@ -276,6 +292,44 @@ spec:
value: 0.10.2.0
```
#### AWS IAM
Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses the AWS SDK to generate auth tokens to authenticate.
{{% alert title="Note" color="primary" %}}
The only required metadata field is `awsRegion`. If `awsAccessKey` and `awsSecretKey` are not provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
{{% /alert %}}
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-awsiam
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "awsiam"
- name: awsRegion # Required.
value: "us-west-1"
- name: awsAccessKey # Optional.
value: <AWS_ACCESS_KEY>
- name: awsSecretKey # Optional.
value: <AWS_SECRET_KEY>
- name: awsSessionToken # Optional.
value: <AWS_SESSION_KEY>
- name: awsIamRoleArn # Optional.
value: "arn:aws:iam::123456789:role/mskRole"
- name: awsStsSessionName # Optional.
value: "MSKSASLDefaultSession"
```
### Communication using TLS
By default, TLS is enabled to secure the transport layer to Kafka. To disable TLS, set `disableTls` to `true`. When TLS is enabled, you can

View File

@ -50,14 +50,14 @@ spec:
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `connectionString` | Y | The connection string for the SQLite database. See below for more details. | `"path/to/data.db"`, `"file::memory:?cache=shared"`
| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20` | `30`
| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. | `"state"`
| `metadataTableName` | N | Name of the table used by Dapr to store metadata for the component. Defaults to `metadata`. | `"metadata"`
| `cleanupInterval` | N | Interval, as a [Go duration](https://pkg.go.dev/time#ParseDuration), to clean up rows with an expired TTL. Setting this to values <=0 disables the periodic cleanup. Default: `0` (i.e. disabled) | `"2h"`, `"30m"`, `-1`
| `busyTimeout` | N | Interval, as a [Go duration](https://pkg.go.dev/time#ParseDuration), to wait in case the SQLite database is currently busy serving another request, before returning a "database busy" error. Default: `2s` | `"100ms"`, `"5s"`
| `disableWAL` | N | If set to true, disables Write-Ahead Logging for journaling of the SQLite database. You should set this to `false` if the database is stored on a network file system (e.g. a folder mounted as a SMB or NFS share). This option is ignored for read-only or in-memory databases. | `"100ms"`, `"5s"`
| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
| `connectionString` | Y | The connection string for the SQLite database. See below for more details. | `"path/to/data.db"`, `"file::memory:?cache=shared"` |
| `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. | `"state"` |
| `metadataTableName` | N | Name of the table used by Dapr to store metadata for the component. Defaults to `metadata`. | `"metadata"` |
| `cleanupInterval` | N | Interval, as a [Go duration](https://pkg.go.dev/time#ParseDuration), to clean up rows with an expired TTL. Setting this to values <=0 disables the periodic cleanup. Default: `0` (i.e. disabled) | `"2h"`, `"30m"`, `-1` |
| `busyTimeout` | N | Interval, as a [Go duration](https://pkg.go.dev/time#ParseDuration), to wait in case the SQLite database is currently busy serving another request, before returning a "database busy" error. Default: `2s` | `"100ms"`, `"5s"` |
| `disableWAL` | N | If set to true, disables Write-Ahead Logging for journaling of the SQLite database. You should set this to `false` if the database is stored on a network file system (for example, a folder mounted as a SMB or NFS share). This option is ignored for read-only or in-memory databases. | `"true"`, `"false"` |
| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` |
The **`connectionString`** parameter configures how to open the SQLite database.
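For illustration, a minimal component manifest using a file on disk (the component name and path are placeholders):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.sqlite
  version: v1
  metadata:
    - name: connectionString
      value: "path/to/data.db" # relative to the working directory
    - name: timeout
      value: "30s"             # optional; defaults to 20s
```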

View File

@ -27,8 +27,17 @@ spec:
stdout: true
otel:
endpointAddress: <REPLACE-WITH-ENDPOINT-ADDRESS>
isSecure: false
isSecure: <TRUE-OR-FALSE>
protocol: <HTTP-OR-GRPC>
metrics:
enabled: <TRUE-OR-FALSE>
rules:
- name: <METRIC-NAME>
labels:
- name: <LABEL-NAME>
regex: {}
http:
increasedCardinality: <TRUE-OR-FALSE>
httpPipeline: # for incoming http calls
handlers:
- name: <HANDLER-NAME>

View File

@ -3,3 +3,8 @@
state: Alpha
version: v1
since: "1.2"
- component: SQLite
link: nr-sqlite
state: Alpha
version: v1
since: "1.13"