mirror of https://github.com/dapr/docs.git
Merge branch 'v1.13' into v1.13
commit fe915beb5d
@@ -13,7 +13,7 @@ We welcome community members giving presentations on Dapr and spreading the word
 {{% alert color="primary" %}}
 If you're using the PowerPoint template with macOS, please install the Space Grotesk font to ensure the text is rendered properly:
 ```sh
-brew install --cask homebrew/cask-fonts/font-space-grotesk
+brew install --cask font-space-grotesk
 ```
 {{% /alert %}}
@@ -52,6 +52,12 @@ You would use Dapr Workflow when you need to define and orchestrate complex work
 [Learn more about Dapr Workflow and how to use workflows in your application.]({{< ref workflow-overview.md >}})
 
+## Actor types and actor IDs
+
+Actors are uniquely defined as an instance of an actor type, similar to how an object is an instance of a class. For example, you might have an actor type that implements the functionality of a calculator. There could be many actors of that type distributed across various nodes in a cluster.
+
+Each actor is uniquely identified by an actor ID. An actor ID can be _any_ string value you choose. If you do not provide an actor ID, Dapr generates a random string for you as an ID.
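
To make the type/ID pairing concrete, here is a minimal sketch of invoking an actor method through the Dapr HTTP API; the `calculator` actor type, `calc-1` actor ID, and `add` method are illustrative placeholders, not names from the docs:

```sh
# Invoke the "add" method on the actor of type "calculator" with ID "calc-1".
# Dapr activates the actor on first use if it does not already exist.
curl -X POST http://localhost:3500/v1.0/actors/calculator/calc-1/method/add \
  -H "Content-Type: application/json" \
  -d '{"operand": 5}'
```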
+
 ## Features
 
 ### Actor lifetime
@@ -70,7 +70,7 @@ There are two ways to invoke a non-Dapr endpoint when communicating either to Da
 ```
 
 ### Using appId when calling Dapr enabled applications
-AppIDs are always used to call Dapr applications with the `appID` and `my-method``. Read the [How-To: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}}) guide for more information. For example:
+AppIDs are always used to call Dapr applications with the `appID` and `my-method`. Read the [How-To: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}}) guide for more information. For example:
 
 ```sh
 localhost:3500/v1.0/invoke/<appID>/method/<my-method>
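
As a usage sketch, a concrete call through the sidecar might look like the following; the app ID `nodeapp`, method `neworder`, and request body are placeholders, not values from the docs:

```sh
# Call the "neworder" method on the application registered with app ID "nodeapp",
# routing through the local Dapr sidecar on port 3500.
curl -X POST http://localhost:3500/v1.0/invoke/nodeapp/method/neworder \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42"}'
```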
@@ -6,9 +6,6 @@ description: "Access Dapr capabilities from your Azure Functions runtime applica
 weight: 3000
 ---
 
-{{% alert title="Note" color="primary" %}}
-The Dapr extension for Azure Functions is currently in preview.
-{{% /alert %}}
 
 Dapr integrates with the [Azure Functions runtime](https://learn.microsoft.com/azure/azure-functions/functions-overview) via an extension that lets a function seamlessly interact with Dapr.
 - **Azure Functions** provides an event-driven programming model.
@@ -75,15 +75,14 @@ spec:
   type: bindings.azure.servicebusqueues
   version: v1
   metadata:
-  -name: connectionString
-  secretKeyRef:
+  - name: connectionString
+    secretKeyRef:
       name: asbNsConnString
       key: asbNsConnString
-  -name: queueName
-  value: servicec-inputq
+  - name: queueName
+    value: servicec-inputq
   auth:
     secretStore: <SECRET_STORE_NAME>
-
 ```
 The YAML above covers the "secret is a string" case: it tells Dapr to extract a connection string named `asbNsConnString` from the defined `secretStore` and assign its value to the `connectionString` field of the component. Because the secret from the `secretStore` is a plain string with no embedded key, the secret `name` and secret `key` must be identical.
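
To illustrate the identical name/key requirement, here is a hedged sketch of creating such a plain-string secret in Kubernetes; the connection string value is a placeholder:

```bash
# Both the secret name and the data key are "asbNsConnString", matching the
# secretKeyRef name/key pair referenced by the component above.
kubectl create secret generic asbNsConnString --from-literal=asbNsConnString="<CONNECTION_STRING>"
```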
@@ -95,7 +94,7 @@ The following example shows you how to create a Kubernetes secret to hold the co
 
 1. First, create the Kubernetes secret:

    ```bash
    kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
    ```

 2. Next, reference the secret in your binding:
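
A hedged sketch of that reference, following the `secretKeyRef` pattern shown earlier on this page; the surrounding component fields are assumed for illustration:

```yaml
spec:
  type: bindings.azure.eventhubs
  version: v1
  metadata:
  - name: connectionString
    secretKeyRef:
      name: eventhubs-secret   # the Kubernetes secret created in step 1
      key: connectionString    # the data key inside that secret
auth:
  secretStore: kubernetes      # the built-in Kubernetes secret store
```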
@@ -52,7 +52,7 @@ Since you are running Dapr in the same host as the component, verify that this f
 
 ### Component discovery and multiplexing
 
-A pluggable component accessible through a [Unix Domain Socket][UDS] (UDS) can host multiple distinct component APIs . During the components' initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The `my-component` pluggable component in the example above can contain both state store (`state`) and a pub/sub (`pubsub`) component APIs.
+A pluggable component accessible through a [Unix Domain Socket][UDS] (UDS) can host multiple distinct component APIs. During the components' initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The `my-component` pluggable component in the example above can contain both state store (`state`) and pub/sub (`pubsub`) component APIs.
 
 Typically, a pluggable component implements a single component API for packaging and deployment. However, at the expense of increasing its dependencies and broadening its security attack surface, a pluggable component can have multiple component APIs implemented. This could be done to ease the deployment and monitoring burden. Best practice for isolation, fault tolerance, and security is a single component API implementation for each pluggable component.
@@ -20,7 +20,7 @@ spec:
   tracing:
     samplingRate: "1"
     otel:
-      endpointAddress: "https://..."
+      endpointAddress: "myendpoint.cluster.local:4317"
     zipkin:
       endpointAddress: "https://..."
@@ -32,10 +32,10 @@ The following table lists the properties for tracing:
 |--------------|--------|-------------|
 | `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. |
 | `stdout` | bool | When `true`, writes more verbose information to the traces. |
-| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address. |
+| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) target hostname and optionally port. If this is used, you do not need to specify the `zipkin` section. |
 | `otel.isSecure` | bool | Is the connection to the endpoint address encrypted. |
 | `otel.protocol` | string | Set to `http` or `grpc` protocol. |
-| `zipkin.endpointAddress` | string | Set the Zipkin server address. If this is used, you do not need to specify the `otel` section. |
+| `zipkin.endpointAddress` | string | Set the Zipkin server URL. If this is used, you do not need to specify the `otel` section. |
 
 To enable tracing, use a configuration file (in self-hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (every span is sampled) and sends traces using the OTEL protocol to the OTEL server at localhost:4317.
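
The configuration object referred to falls outside this hunk; a minimal sketch consistent with the table above (the `tracing-config` name is a placeholder) might look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing-config        # placeholder name
spec:
  tracing:
    samplingRate: "1"         # every span is sampled
    otel:
      endpointAddress: "localhost:4317"  # OTEL collector hostname:port
      isSecure: false
      protocol: grpc
```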
@@ -66,7 +66,7 @@ turns on tracing for the sidecar.
 
 | Environment Variable | Description |
 |----------------------|-------------|
-| `OTEL_EXPORTER_OTLP_ENDPOINT` | Sets the Open Telemetry (OTEL) server address, turns on tracing |
+| `OTEL_EXPORTER_OTLP_ENDPOINT` | Sets the Open Telemetry (OTEL) server hostname and optionally port, turns on tracing |
 | `OTEL_EXPORTER_OTLP_INSECURE` | Sets the connection to the endpoint as unencrypted (true/false) |
 | `OTEL_EXPORTER_OTLP_PROTOCOL` | Transport protocol (`grpc`, `http/protobuf`, `http/json`) |
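
As a usage sketch, these variables are exported before launching the sidecar; the app ID and run command below are placeholders:

```sh
# Point the sidecar's OTLP exporter at a local collector and enable tracing.
export OTEL_EXPORTER_OTLP_ENDPOINT="localhost:4317"
export OTEL_EXPORTER_OTLP_INSECURE="true"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
dapr run --app-id myapp -- node app.js
```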
@@ -175,7 +175,9 @@ az cosmosdb sql role assignment create \
   --role-definition-id "$ROLE_ID"
 ```
 
-## Optimizing Cosmos DB for bulk operation write performance
+## Optimizations
+
+### Optimizing Cosmos DB for bulk operation write performance
 
 If you are building a system that only ever reads data from Cosmos DB via key (`id`), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems.
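
The policy the page applies for this is outside the hunk; as a sketch, an indexing policy that excludes all paths, per the paragraph above, looks like this in Cosmos DB's standard JSON policy format:

```json
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [],
  "excludedPaths": [
    { "path": "/*" }
  ]
}
```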
@@ -211,6 +213,18 @@ This optimization comes at the cost of queries against fields inside of document
 {{% /alert %}}
 
+### Optimizing Cosmos DB for cost savings
+
+If you intend to use Cosmos DB only as a key-value store, it may be in your interest to consider converting your state object to JSON and compressing it before persisting it to state, and subsequently decompressing it when reading it out of state. This is because Cosmos DB bills your usage based on the maximum number of RU/s used in a given time period (typically each hour). Furthermore, RU usage is calculated as 1 RU per 1 KB of data you read or write. Compression helps by reducing the size of the data stored in Cosmos DB and subsequently reducing RU usage.
+
+This savings is particularly significant for Dapr actors. While the Dapr State Management API does a base64 encoding of your object before saving, Dapr actor state is saved as raw, formatted JSON, meaning multiple lines with indentations for formatting. Compressing can significantly reduce the size of actor state objects. For example, if you have an actor state object that is 75KB in size when the actor is hydrated, you will use 75 RU/s to read that object out of state. If you then modify the state object and it grows to 100KB, you will use 100 RU/s to write that object to Cosmos DB, totalling 175 RU/s for the I/O operation. If your actors are concurrently handling 1,000 requests per second, you will need at least 175,000 RU/s to meet that load. With effective compression, the size reduction can be in the region of 90%, which means you will only need in the region of 17,500 RU/s to meet the load.
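
A minimal sketch of the compress-then-save idea using standard CLI tools; the JSON payload and file name are illustrative placeholders, not part of the docs:

```sh
# Serialize, compress, and base64-encode a state object before saving it.
echo '{"orderId": "42", "status": "shipped", "items": ["a", "b"]}' \
  | gzip | base64 > packed-state.txt

# Reading it back: reverse the encoding, then decompress.
base64 -d < packed-state.txt | gunzip
```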
+
+{{% alert title="Note" color="primary" %}}
+This particular optimization only makes sense if you are saving large objects to state. The performance and memory tradeoff of performing the compression and decompression on either end needs to make sense for your use case. Furthermore, once the data is saved to state, it is not human readable, nor is it queryable. You should only adopt this optimization if you are saving large state objects as key-value pairs.
+{{% /alert %}}
+
 ## Related links
 
 - [Basic schema for a Dapr component]({{< ref component-schema >}})