Merge branch 'v1.5' into feature/pubsub_pulsar0901

greenie-msft 2021-11-10 14:30:14 -08:00 committed by GitHub
commit a7007de419
70 changed files with 1268 additions and 233 deletions

View File

@ -16,6 +16,16 @@ jobs:
PYTHON_VER: 3.7
steps:
- uses: actions/checkout@v2
- name: Check Microsoft URLs do not pin localized versions
run: |
localized=$(find . -name '*.md' | xargs grep -ol "\.microsoft\.com/[[:alpha:]]\{2\}-[[:alpha:]]\{2\}/") || true
if [ -z "$localized" ]; then
echo "All Microsoft Docs links ok."
else
echo "The following files contain links to Microsoft Docs that pin a localized version:"
echo "$localized"
exit 1
fi
- name: Set up Python ${{ env.PYTHON_VER }}
uses: actions/setup-python@v2
with:
@ -27,3 +37,4 @@ jobs:
- name: Check Markdown Files
run: |
for name in `find . -name "*.md"`; do echo -e "------\n$name" ; mm.py -l $name || exit 1 ;done

View File

@ -25,7 +25,7 @@ Before making your first contribution, make sure to review the [contributing sec
## Overview
The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme, hosted on an [Azure Static Web App](https://docs.microsoft.com/en-us/azure/static-web-apps/overview).
The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme, hosted on an [Azure Static Web App](https://docs.microsoft.com/azure/static-web-apps/overview).
The [daprdocs](./daprdocs) directory contains the hugo project, markdown files, and theme configurations.

View File

@ -8,12 +8,12 @@ no_list: true
<div class="card-body">
<h5 class="card-title">
<img src="/images/daprcon.png" alt="DaprCon logo" width=40>
<b> Join us for DaprCon on October 19th-20th, 2021!</b>
<b> Watch DaprCon sessions on-demand!</b>
</h5>
<p class="card-text">
The first ever DaprCon will take place October 19th-20th, 2021 virtually! Tune in for free and attend technical sessions, panels and real world examples from the community on building applications with Dapr! <br></br><i><b>Learn more >></b></i>
The first ever DaprCon took place October 19th-20th, 2021. Read this recap and find links to all the on-demand content. <br></br><i><b>Learn more >></b></i>
</p>
<a href="https://blog.dapr.io/posts/2021/10/05/join-us-for-daprcon-october-19th-20th-2021/" class="stretched-link"></a>
<a href="https://blog.dapr.io/posts/2021/10/21/thanks-for-a-great-first-daprcon/" class="stretched-link"></a>
</div>
</div>
</div>

View File

@ -99,7 +99,7 @@ Dapr can be used from any developer framework. Here are some that have been inte
| Language | Frameworks | Description |
|----------|------------|-------------|
| [.NET]({{< ref dotnet >}}) | [ASP.NET]({{< ref dotnet-aspnet.md >}}) | Brings stateful routing controllers that respond to pub/sub events from other services. Can also take advantage of [ASP.NET Core gRPC Services](https://docs.microsoft.com/en-us/aspnet/core/grpc/).
| [.NET]({{< ref dotnet >}}) | [ASP.NET]({{< ref dotnet-aspnet.md >}}) | Brings stateful routing controllers that respond to pub/sub events from other services. Can also take advantage of [ASP.NET Core gRPC Services](https://docs.microsoft.com/aspnet/core/grpc/).
| [Java]({{< ref java >}}) | [Spring Boot](https://spring.io/)
| [Python]({{< ref python >}}) | [Flask]({{< ref python-flask.md >}})
| [Javascript](https://github.com/dapr/js-sdk) | [Express](http://expressjs.com/)

View File

@ -99,7 +99,7 @@ Dapr uses the configured authentication method to authenticate with the underlyi
When deploying on Kubernetes, you can use regular [Kubernetes RBAC]( https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to control access to management activities.
When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.
When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.
## Threat model
Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified and enumerated, and mitigations can be prioritized. The Dapr threat model is below.

View File

@ -18,4 +18,4 @@ This page details all of the common terms you may come across in the Dapr docs.
| Dapr control plane | A collection of services that are part of a Dapr installation on a hosting platform such as a Kubernetes cluster. This allows Dapr-enabled applications to run on the platform and handles Dapr capabilities such as actor placement, Dapr sidecar injection, or certificate issuance/rollover. | [Self-hosted overview]({{< ref self-hosted-overview >}})<br />[Kubernetes overview]({{< ref kubernetes-overview >}})
| Self-hosted | Windows/macOS/Linux machine(s) where you can run your applications with Dapr. Dapr provides the capability to run on machines in "self-hosted" mode. | [Self-hosted mode]({{< ref self-hosted-overview.md >}})
| Service | A running application or binary. This can refer to your application or to a Dapr application.
| Sidecar | A program that runs alongside your application as a separate process or container. | [Sidecar pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar)
| Sidecar | A program that runs alongside your application as a separate process or container. | [Sidecar pattern](https://docs.microsoft.com/azure/architecture/patterns/sidecar)

View File

@ -334,6 +334,8 @@ The shortcode would be:
To create a button in a webpage, use the `button` shortcode.
An optional "newtab" parameter indicates whether the page should open in a new tab. Options are "true" or "false". The default is "false", where the page opens in the same tab.
#### Link to an external page
```
@ -346,10 +348,10 @@ To create a button in a webpage, use the `button` shortcode.
You can also reference pages in your button as well:
```
{{</* button text="My Button" page="contributing" */>}}
{{</* button text="My Button" page="contributing" newtab="true" */>}}
```
{{< button text="My Button" page="contributing" >}}
{{< button text="My Button" page="contributing" newtab="true" >}}
#### Button colors

View File

@ -15,7 +15,7 @@ While your code processes a message, it can send one or more messages to other a
A large number of actors can execute simultaneously, and actors execute independently from each other.
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/en-us/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
### When to use actors

View File

@ -6,7 +6,7 @@ weight: 1000
description: "Use Dapr tracing to get visibility into your distributed applications"
---
Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due to the ubiquity of the Zipkin protocol, many backends are supported out of the box, for examples [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io), [New Relic](https://newrelic.com) and others. Combining with the OpenTelemetry Collector, Dapr can export traces to many other backends including but not limted to [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), Instana, [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due to the ubiquity of the Zipkin protocol, many backends are supported out of the box, for example [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io), [New Relic](https://newrelic.com) and others. Combined with the OpenTelemetry Collector, Dapr can export traces to many other backends including but not limited to [Azure Monitor](https://azure.microsoft.com/services/monitor/), [Datadog](https://www.datadoghq.com), Instana, [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
<img src="/images/tracing.png" width=600>

View File

@ -47,7 +47,7 @@ client.InvokeService(ctx, &pb.InvokeServiceRequest{
### Retrieve trace context in C#
#### For HTTP calls
To retrieve the trace context from HTTP response, you can use [.NET API](https://docs.microsoft.com/en-us/dotnet/api/system.net.http.headers.httpresponseheaders?view=netcore-3.1) :
To retrieve the trace context from an HTTP response, you can use the [.NET API](https://docs.microsoft.com/dotnet/api/system.net.http.headers.httpresponseheaders?view=netcore-3.1):
```csharp
// client is HttpClient. req is HttpRequestMessage
@ -75,7 +75,7 @@ var response = await call.ResponseAsync;
var headers = await call.ResponseHeadersAsync();
var tracecontext = headers.First(e => e.Key == "grpc-trace-bin");
```
Additional general details on calling gRPC services with .NET client [here](https://docs.microsoft.com/en-us/aspnet/core/grpc/client?view=aspnetcore-3.1).
Additional general details on calling gRPC services with the .NET client are available [here](https://docs.microsoft.com/aspnet/core/grpc/client?view=aspnetcore-3.1).
## How to propagate trace context in a request
`Note: There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use http/gRPC clients to propagate and retrieve trace headers through http headers and gRPC metadata.`
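For HTTP, Dapr propagates the W3C trace context in the `traceparent` header. As a minimal sketch (the `serviceb` app ID, the `neworder` method, and the header value are placeholders), forwarding a received trace context on an outgoing Dapr invocation looks like this:
```bash
# Forward the W3C trace context received on an inbound request.
# TRACEPARENT holds the placeholder value of the incoming "traceparent" header.
TRACEPARENT="00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
curl -X POST http://localhost:3500/v1.0/invoke/serviceb/method/neworder \
  -H "traceparent: $TRACEPARENT" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42"}'
```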
@ -109,7 +109,7 @@ You can then continue passing this Go context `ctx` in subsequent Dapr gRPC calls
### Pass trace context in C#
#### For HTTP calls
To pass trace context in HTTP request, you can use [.NET API](https://docs.microsoft.com/en-us/dotnet/api/system.net.http.headers.httprequestheaders?view=netcore-3.1) :
To pass trace context in an HTTP request, you can use the [.NET API](https://docs.microsoft.com/dotnet/api/system.net.http.headers.httprequestheaders?view=netcore-3.1):
```csharp
// client is HttpClient. req is HttpRequestMessage
@ -127,7 +127,7 @@ var headers = new Metadata();
headers.Add("grpc-trace-bin", tracecontext);
using var call = client.InvokeServiceAsync(req, headers);
```
Additional general details on calling gRPC services with .NET client [here](https://docs.microsoft.com/en-us/aspnet/core/grpc/client?view=aspnetcore-3.1).
Additional general details on calling gRPC services with the .NET client are available [here](https://docs.microsoft.com/aspnet/core/grpc/client?view=aspnetcore-3.1).
## How to create trace context
You can create a trace context using the recommended OpenCensus SDKs. OpenCensus supports several different programming languages.

View File

@ -486,6 +486,33 @@ If you want to use your own custom CloudEvent, make sure to specify the content
Read about content types [here](#content-types), and about the [Cloud Events message format]({{< ref "pubsub-overview.md#cloud-events-message-format" >}}).
#### Example
{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
Publish a custom CloudEvent to the `deathStarStatus` topic:
```bash
dapr publish --publish-app-id testpubsub --pubsub pubsub --topic deathStarStatus --data '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"status": "completed"}}'
```
{{% /codetab %}}
{{% codetab %}}
Publish a custom CloudEvent to the `deathStarStatus` topic:
```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/cloudevents+json" -d '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"status": "completed"}}'
```
{{% /codetab %}}
{{% codetab %}}
Publish a custom CloudEvent to the `deathStarStatus` topic:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"status": "completed"}}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
@ -494,4 +521,4 @@ Read about content types [here](#content-types), and about the [Cloud Events mes
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- List of [pub/sub components]({{< ref setup-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})
- Read the [API reference]({{< ref pubsub_api.md >}})

View File

@ -22,7 +22,7 @@ When message time-to-live has native support in the pub/sub component, Dapr simp
#### Azure Service Bus
Azure Service Bus supports [entity level time-to-live](https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-expiration). This means that messages have a default time-to-live but can also be set with a shorter timespan at publishing time. Dapr propagates the time-to-live metadata for the message and lets Azure Service Bus handle the expiration directly.
Azure Service Bus supports [entity level time-to-live](https://docs.microsoft.com/azure/service-bus-messaging/message-expiration). This means that messages have a default time-to-live but can also be set with a shorter timespan at publishing time. Dapr propagates the time-to-live metadata for the message and lets Azure Service Bus handle the expiration directly.
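For instance, as a sketch assuming a pub/sub component named `pubsub` and a topic named `orders`, the TTL is set at publish time through the `metadata.ttlInSeconds` query parameter:
```bash
# Publish a message that expires 120 seconds after being published.
curl -X POST "http://localhost:3500/v1.0/publish/pubsub/orders?metadata.ttlInSeconds=120" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "100"}'
```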
## Non-Dapr subscribers

View File

@ -34,7 +34,7 @@ Applications can use the secrets API to access secrets from a Kubernetes secret
<img src="/images/secrets-overview-kubernetes-store.png" width=600>
In Azure Dapr can be configured to use Managed Identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application.
In Azure, Dapr can be configured to use Managed Identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application.
<img src="/images/secrets-overview-azure-aks-keyvault.png" width=600>

View File

@ -0,0 +1,422 @@
---
type: docs
title: "How-To: Query state"
linkTitle: "How-To: Query state"
weight: 250
description: "API for querying state stores"
---
{{% alert title="alpha" color="warning" %}}
The state query API is in **alpha** stage.
{{% /alert %}}
## Introduction
The state query API provides a way of querying the key/value data stored in state store components. This query API is not a replacement for a complete query language, and is focused on retrieving, filtering and sorting key/value data that you have saved through the state management APIs.
Even though the state store is a key/value store, the `value` might be a JSON document with its own hierarchy, keys, and values.
The query API allows you to use those keys and values to retrieve corresponding documents.
This query API does not support querying of actor state stored in a state store. For that you need to use the query API for the specific database.
See [querying actor state]({{< ref "state-management-overview.md#querying-actor-state" >}}).
You can find additional information in the [related links]({{< ref "#related-links" >}}) section.
## Querying the state
You submit query requests via HTTP POST/PUT or gRPC.
The body of the request is a JSON map with three entries: `filter`, `sort`, and `pagination`.
The `filter` is an optional section. It specifies the query conditions in the form of a tree of key/value operations, where the key is the operator and the value holds the operands.
The following operations are supported:
| Operator | Operands | Description |
|----------|-------------|--------------|
| `EQ` | key:value | key == value |
| `IN` | key:[]value | key == value[0] OR key == value[1] OR ... OR key == value[n] |
| `AND` | []operation | operation[0] AND operation[1] AND ... AND operation[n] |
| `OR` | []operation | operation[0] OR operation[1] OR ... OR operation[n] |
If the `filter` section is omitted, the query returns all entries.
The `sort` is an optional section and is an ordered array of `key:order` pairs, where `key` is a key in the state store, and the `order` is an optional string indicating sorting order: `"ASC"` for ascending and `"DESC"` for descending. If omitted, ascending order is the default.
The `pagination` is an optional section containing `limit` and `token` parameters. `limit` sets the page size. `token` is an iteration token returned by the component, and is used in subsequent queries.
Behind the scenes, this query request is translated into the native query language and executed by the state store component.
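Putting the three sections together, a request skeleton looks like the sketch below (the filter, sort key, and page size are placeholder values):
```bash
# Skeleton of a state query request against the alpha query endpoint.
curl -s -X POST http://localhost:3500/v1.0-alpha1/state/statestore/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "filter": { "EQ": { "value.state": "CA" } },
      "sort": [ { "key": "value.person.id", "order": "DESC" } ],
      "pagination": { "limit": 10 }
    }
  }'
```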
## Example data and query
Let's look at some real examples, starting with simple and progressing towards more complex ones.
As a dataset, let's consider a [collection of employee records](../query-api-examples/dataset.json) containing employee ID, organization, state, and city.
Notice that this dataset is an array of key/value pairs where `key` is the unique ID, and the `value` is a JSON object with the employee record.
To better illustrate functionality, the organization name (org) and employee ID (id) are stored as a nested JSON `person` object.
First, you need to create an instance of MongoDB, which is your state store.
```bash
docker run -d --rm -p 27017:27017 --name mongodb mongo:5
```
Next, start a Dapr application. Refer to this [component configuration file](../query-api-examples/components/mongodb.yml), which instructs Dapr to use MongoDB as its state store.
```bash
dapr run --app-id demo --dapr-http-port 3500 --components-path query-api-examples/components
```
Now populate the state store with the employee dataset, so you can then query it later.
```bash
curl -X POST -H "Content-Type: application/json" -d @query-api-examples/dataset.json http://localhost:3500/v1.0/state/statestore
```
Once populated, you can examine the data in the state store. The image below shows a section of the MongoDB UI displaying employee records.
<table><tr><td>
<img src="/images/state-management-query-mongodb-dataset.png" width=500 alt="Sample dataset" class="center">
</td></tr></table>
Each entry has the `_id` member as a concatenated object key, and the `value` member containing the JSON record.
The query API allows you to select records from this JSON structure.
Now you can run the queries.
### Example 1
First, let's find all employees in the state of California and sort them by their employee ID in descending order.
This is the [query](../query-api-examples/query1.json):
```json
{
"query": {
"filter": {
"EQ": { "value.state": "CA" }
},
"sort": [
{
"key": "value.person.id",
"order": "DESC"
}
]
}
}
```
An equivalent of this query in SQL is:
```sql
SELECT * FROM c WHERE
value.state = "CA"
ORDER BY
value.person.id DESC
```
Execute the query with the following command:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
{{% codetab %}}
```bash
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query1.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query1.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
```
{{% /codetab %}}
{{< /tabs >}}
The query result is an array of matching key/value pairs in the requested order:
```json
{
"results": [
{
"key": "3",
"data": {
"person": {
"org": "Finance",
"id": 1071
},
"city": "Sacramento",
"state": "CA"
},
"etag": "44723d41-deb1-4c23-940e-3e6896c3b6f7"
},
{
"key": "7",
"data": {
"city": "San Francisco",
"state": "CA",
"person": {
"id": 1015,
"org": "Dev Ops"
}
},
"etag": "0e69e69f-3dbc-423a-9db8-26767fcd2220"
},
{
"key": "5",
"data": {
"state": "CA",
"person": {
"org": "Hardware",
"id": 1007
},
"city": "Los Angeles"
},
"etag": "f87478fa-e5c5-4be0-afa5-f9f9d75713d8"
},
{
"key": "9",
"data": {
"person": {
"org": "Finance",
"id": 1002
},
"city": "San Diego",
"state": "CA"
},
"etag": "f5cf05cd-fb43-4154-a2ec-445c66d5f2f8"
}
]
}
```
### Example 2
Let's now find all employees from the "Dev Ops" and "Hardware" organizations.
This is the [query](../query-api-examples/query2.json):
```json
{
"query": {
"filter": {
"IN": { "value.person.org": [ "Dev Ops", "Hardware" ] }
}
}
}
```
An equivalent of this query in SQL is:
```sql
SELECT * FROM c WHERE
value.person.org IN ("Dev Ops", "Hardware")
```
Execute the query with the following command:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
{{% codetab %}}
```bash
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query2.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query2.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
```
{{% /codetab %}}
{{< /tabs >}}
Similar to the previous example, the result is an array of matching key/value pairs.
### Example 3
In this example, let's find all employees from the "Dev Ops" department
and those employees from the "Finance" department residing in the states of Washington and California.
In addition, let's sort the results first by state in descending alphabetical order, and then by employee ID in ascending order.
Also, let's process up to 3 records at a time.
This is the [query](../query-api-examples/query3.json):
```json
{
"query": {
"filter": {
"OR": [
{
"EQ": { "value.person.org": "Dev Ops" }
},
{
"AND": [
{
"EQ": { "value.person.org": "Finance" }
},
{
"IN": { "value.state": [ "CA", "WA" ] }
}
]
}
]
},
"sort": [
{
"key": "value.state",
"order": "DESC"
},
{
"key": "value.person.id"
}
],
"pagination": {
"limit": 3
}
}
}
```
An equivalent of this query in SQL is:
```sql
SELECT * FROM c WHERE
value.person.org = "Dev Ops" OR
(value.person.org = "Finance" AND value.state IN ("CA", "WA"))
ORDER BY
value.state DESC,
value.person.id ASC
LIMIT 3
```
Execute the query with the following command:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
{{% codetab %}}
```bash
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query3.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query3.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
```
{{% /codetab %}}
{{< /tabs >}}
Upon successful execution, the state store returns a JSON object with a list of matching records and the pagination token:
```json
{
"results": [
{
"key": "1",
"data": {
"person": {
"org": "Dev Ops",
"id": 1036
},
"city": "Seattle",
"state": "WA"
},
"etag": "6f54ad94-dfb9-46f0-a371-e42d550adb7d"
},
{
"key": "4",
"data": {
"person": {
"org": "Dev Ops",
"id": 1042
},
"city": "Spokane",
"state": "WA"
},
"etag": "7415707b-82ce-44d0-bf15-6dc6305af3b1"
},
{
"key": "10",
"data": {
"person": {
"org": "Dev Ops",
"id": 1054
},
"city": "New York",
"state": "NY"
},
"etag": "26bbba88-9461-48d1-8a35-db07c374e5aa"
}
],
"token": "3"
}
```
The pagination token is used "as is" in the [subsequent query](../query-api-examples/query3-token.json) to get the next batch of records:
```json
{
"query": {
"filter": {
"OR": [
{
"EQ": { "value.person.org": "Dev Ops" }
},
{
"AND": [
{
"EQ": { "value.person.org": "Finance" }
},
{
"IN": { "value.state": [ "CA", "WA" ] }
}
]
}
]
},
"sort": [
{
"key": "value.state",
"order": "DESC"
},
{
"key": "value.person.id"
}
],
"pagination": {
"limit": 3,
"token": "3"
}
}
}
```
And the result of this query is:
```json
{
"results": [
{
"key": "9",
"data": {
"person": {
"org": "Finance",
"id": 1002
},
"city": "San Diego",
"state": "CA"
},
"etag": "f5cf05cd-fb43-4154-a2ec-445c66d5f2f8"
},
{
"key": "7",
"data": {
"city": "San Francisco",
"state": "CA",
"person": {
"id": 1015,
"org": "Dev Ops"
}
},
"etag": "0e69e69f-3dbc-423a-9db8-26767fcd2220"
},
{
"key": "3",
"data": {
"person": {
"org": "Finance",
"id": 1071
},
"city": "Sacramento",
"state": "CA"
},
"etag": "44723d41-deb1-4c23-940e-3e6896c3b6f7"
}
],
"token": "6"
}
```
That way you can update the pagination token in the query and iterate through the results until no more records are returned.
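As a sketch of that iteration (assuming `jq` is installed and the sidecar listens on port 3500), the loop below re-submits the query, injecting each returned token, until the component returns no further token or no results:
```bash
# Page through all query results by feeding the returned token back in.
token=""
while true; do
  body=$(jq --arg t "$token" \
    'if $t == "" then . else .query.pagination.token = $t end' \
    query-api-examples/query3.json)
  resp=$(curl -s -X POST -H "Content-Type: application/json" -d "$body" \
    http://localhost:3500/v1.0-alpha1/state/statestore/query)
  echo "$resp" | jq '[.results[].key]'   # print the keys on this page
  count=$(echo "$resp" | jq '.results | length')
  token=$(echo "$resp" | jq -r '.token // empty')
  if [ "$count" -eq 0 ] || [ -z "$token" ]; then break; fi
done
```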
## Related links
- [Query API reference]({{< ref "state_api.md#state-query" >}})
- [State store components that implement query support]({{< ref supported-state-stores.md >}})
- [State store query API implementation guide](https://github.com/dapr/components-contrib/blob/master/state/Readme.md#implementing-state-query-api)

View File

@ -0,0 +1,10 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.mongodb
version: v1
metadata:
- name: host
value: localhost:27017

View File

@ -0,0 +1,112 @@
[
{
"key": "1",
"value": {
"person": {
"org": "Dev Ops",
"id": 1036
},
"city": "Seattle",
"state": "WA"
}
},
{
"key": "2",
"value": {
"person": {
"org": "Hardware",
"id": 1028
},
"city": "Portland",
"state": "OR"
}
},
{
"key": "3",
"value": {
"person": {
"org": "Finance",
"id": 1071
},
"city": "Sacramento",
"state": "CA"
}
},
{
"key": "4",
"value": {
"person": {
"org": "Dev Ops",
"id": 1042
},
"city": "Spokane",
"state": "WA"
}
},
{
"key": "5",
"value": {
"person": {
"org": "Hardware",
"id": 1007
},
"city": "Los Angeles",
"state": "CA"
}
},
{
"key": "6",
"value": {
"person": {
"org": "Finance",
"id": 1094
},
"city": "Eugene",
"state": "OR"
}
},
{
"key": "7",
"value": {
"person": {
"org": "Dev Ops",
"id": 1015
},
"city": "San Francisco",
"state": "CA"
}
},
{
"key": "8",
"value": {
"person": {
"org": "Hardware",
"id": 1077
},
"city": "Redmond",
"state": "WA"
}
},
{
"key": "9",
"value": {
"person": {
"org": "Finance",
"id": 1002
},
"city": "San Diego",
"state": "CA"
}
},
{
"key": "10",
"value": {
"person": {
"org": "Dev Ops",
"id": 1054
},
"city": "New York",
"state": "NY"
}
}
]

View File

@ -0,0 +1,13 @@
{
"query": {
"filter": {
"EQ": { "value.state": "CA" }
},
"sort": [
{
"key": "value.person.id",
"order": "DESC"
}
]
}
}

View File

@ -0,0 +1,7 @@
{
"query": {
"filter": {
"IN": { "value.person.org": [ "Dev Ops", "Hardware" ] }
}
}
}

View File

@ -0,0 +1,34 @@
{
"query": {
"filter": {
"OR": [
{
"EQ": { "value.person.org": "Dev Ops" }
},
{
"AND": [
{
"EQ": { "value.person.org": "Finance" }
},
{
"IN": { "value.state": [ "CA", "WA" ] }
}
]
}
]
},
"sort": [
{
"key": "value.state",
"order": "DESC"
},
{
"key": "value.person.id"
}
],
"pagination": {
"limit": 3,
"token": "3"
}
}
}

View File

@ -0,0 +1,33 @@
{
"query": {
"filter": {
"OR": [
{
"EQ": { "value.person.org": "Dev Ops" }
},
{
"AND": [
{
"EQ": { "value.person.org": "Finance" }
},
{
"IN": { "value.state": [ "CA", "WA" ] }
}
]
}
]
},
"sort": [
{
"key": "value.state",
"order": "DESC"
},
{
"key": "value.person.id"
}
],
"pagination": {
"limit": 3
}
}
}

View File

@ -8,13 +8,13 @@ description: "Use Azure Cosmos DB as a backend state store"
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
> **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).
> **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/azure/cosmos-db/sql-query-getting-started).
## 1. Connect to Azure Cosmos DB
The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).
The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/azure/cosmos-db/mongodb-introduction).
> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The follow samples assume you've already connected to the right database and a collection named "states".
> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states".
## 2. List keys by App ID

View File

@ -8,39 +8,37 @@ description: "Overview of the state management building block"
## Introduction
Using state management, your application can store data as key/value pairs in the [supported state stores]({{< ref supported-state-stores.md >}}).
Using state management, your application can store and query data as key/value pairs in the [supported state stores]({{< ref supported-state-stores.md >}}). This enables you to build stateful, long-running applications that can save and retrieve their state, for example a shopping cart or a game's session state.
When using state management your application can leverage features that would otherwise be complicated and error-prone to build yourself such as:
When using state management, your application can leverage features that would otherwise be complicated and error-prone to build yourself such as:
- Distributed concurrency and data consistency
- Bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations
- Setting your choices for concurrency control and data consistency.
- Performing bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations, including multiple transactional operations.
- Querying and filtering the key/value data.
Your application can use Dapr's state management API to save and read key/value pairs using a state store component, as shown in the diagram below. For example, by using HTTP POST you can save key/value pairs and by using HTTP GET you can read a key and have its value returned.
Your application can use Dapr's state management API to save, read and query key/value pairs using a state store component, as shown in the diagram below. For example, by using HTTP POST you can save or query key/value pairs and by using HTTP GET you can read a specific key and have its value returned.
<img src="/images/state-management-overview.png" width=900>
## Features
These are the features available as part of the state management API:
### Pluggable state stores
Dapr data stores are modeled as components, which can be swapped out without any changes to your service code. See [supported state stores]({{< ref supported-state-stores >}}) for the list.
### Configurable state store behavior
Dapr allows developers to attach additional metadata to a state operation request that describes how the request is expected to be handled. You can attach:
### Configurable state store behaviors
Dapr allows you to include additional metadata in a state operation request that describes how the request is expected to be handled. You can attach:
- Concurrency requirements
- Consistency requirements
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern.
[Not all stores are created equal]({{< ref supported-state-stores.md >}}). To ensure portability of your application you can query the capabilities of the store and make your code adaptive to different store capabilities.
[Not all stores are created equal]({{< ref supported-state-stores.md >}}). To ensure portability of your application you can query the metadata capabilities of the store and make your code adaptive to different store capabilities.
### Concurrency
Dapr supports Optimistic Concurrency Control (OCC) using ETags. When a state value is requested, Dapr always attaches an ETag property to the returned state. When the user code tries to update or delete a state, its expected to attach the ETag either through the request body for updates or the `If-Match` header for deletes. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an ETag property to the returned state. When the user code tries to update or delete a state, it's expected to attach the ETag either through the request body for updates or the `If-Match` header for deletes. The write operation can succeed only when the provided ETag matches the ETag in the state store.
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a retry policy to compensate for such conflicts when using ETags.
Dapr chooses OCC because, in many applications, data update conflicts are rare: clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a retry policy in your code to compensate for such conflicts when using ETags.
If your application omits ETags in write requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
@ -50,14 +48,7 @@ For stores that don't natively support ETags, it's expected that the correspondi
Read the [API reference]({{< ref state_api.md >}}) to learn how to set concurrency options.
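As a sketch of first-write-wins with ETags (the key, value, and ETag are placeholders; a real ETag comes back from a prior read), a conditional update looks like this:
```bash
# Read the current value; the response includes an ETag header.
curl -i http://localhost:3500/v1.0/state/statestore/cart

# Conditionally update: the write succeeds only if the ETag still matches.
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{
    "key": "cart",
    "value": { "items": 4 },
    "etag": "1",
    "options": { "concurrency": "first-write", "consistency": "strong" }
  }]'
```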
### Automatic encryption
Dapr supports automatic client encryption of application state with support for key rotations. This is a preview feature and it is supported on all Dapr state stores.
For more info, read the [How-To: Encrypt application state]({{< ref howto-encrypt-state.md >}}) section.
### Consistency
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
@ -66,25 +57,40 @@ Read the [API reference]({{< ref state_api.md >}}) to learn how to set consisten
### Bulk operations
Dapr supports two types of bulk operations - **bulk** or **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
Dapr supports two types of bulk operations: **bulk** and **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits requests in bulk operations as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is then handled as an atomic transaction.
Read the [API reference]({{< ref state_api.md >}}) to learn how to use bulk and multi options.
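As a sketch (store name and keys are placeholders), a multi-operation goes to the transaction endpoint and either commits fully or rolls back fully:
```bash
# Atomically upsert one key and delete another in a single transaction.
curl -X POST http://localhost:3500/v1.0/state/statestore/transaction \
  -H "Content-Type: application/json" \
  -d '{
    "operations": [
      { "operation": "upsert", "request": { "key": "key1", "value": "newValue" } },
      { "operation": "delete", "request": { "key": "key2" } }
    ]
  }'
```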
### State encryption
Dapr supports automatic client encryption of application state with support for key rotations. This is supported on all Dapr state stores. For more info, read the [How-To: Encrypt application state]({{< ref howto-encrypt-state.md >}}) topic.
### Shared state between applications
Different applications might have different needs when it comes to sharing state. For example, in one scenario you may want to encapsulate all state within a given application and have Dapr manage the access for you. In a different scenario, you may need two applications working on the same state to be able to get and save the same keys. Dapr enables state to be isolated to an application, shared in a state store between applications, or shared by multiple applications across different state stores. For more details, read [How-To: Share state between applications]({{< ref howto-share-state.md >}}).
### Actor state
Transactional state stores can be used to store actor state. To specify which state store to be used for actors, specify value of property `actorStateStore` as `true` in the metadata section of the state store component. Actors state is stored with a specific scheme in transactional state stores, which allows for consistent querying. Only a single state store component can be used as the statestore for all actors. Read the [API reference]({{< ref state_api.md >}}) to learn more about state stores for actors and the [actors API reference]({{< ref actors_api.md >}})
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the `actorStateStore` property to `true` in the metadata section of the state store component. Actor state is stored with a specific scheme in transactional state stores, which allows for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [API reference]({{< ref state_api.md >}}) to learn more about state stores for actors and the [actors API reference]({{< ref actors_api.md >}})
### Query state store directly
### Querying state
There are two ways to query the state:
* Using the [state management query API]({{< ref "#state-query-api" >}}) provided in Dapr runtime.
* Querying state store [directly]({{< ref "#query-state-store-directly" >}}) with the store's native SDK.
#### Query API
The query API provides a way of querying the key/value data saved using state management, regardless of the underlying database or storage technology. It is an optional state management API. Using the state management query API you can filter, sort, and paginate the key/value data. For more details, read [How-To: Query state]({{< ref howto-state-query-api.md >}}).
#### Querying state store directly
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the [underlying state store]({{< ref query-state-store >}}).
For example, to get all state keys associated with an application ID "myApp" in Redis, use:
```bash
KEYS "myApp*"
```
#### Querying actor state
{{% alert title="Note on direct queries" color="primary" %}}
Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data, which are acceptable for read-only queries across multiple actors; however, writes should be done via the Dapr state management or actors APIs.
{{% /alert %}}
##### Querying actor state
If the data store supports SQL, you can query an actor's state using SQL queries. For example:
```sql
@ -97,20 +103,20 @@ You can also perform aggregate queries across actor instances, avoiding the comm
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
```
{{% alert title="Note on direct queries" color="primary" %}}
Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the Dapr state management or actors APIs.
{{% /alert %}}
### State Time-to-Live (TTL)
Dapr enables a per-request time-to-live (TTL) on state set operations. This means that applications can set a time-to-live for each state entry stored, and these entries cannot be retrieved after expiration.
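As a sketch (key and value are placeholders), the TTL travels as per-item `ttlInSeconds` metadata in the set request:
```bash
# Save a key that expires 60 seconds after being written.
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{
    "key": "sessionToken",
    "value": "abc123",
    "metadata": { "ttlInSeconds": "60" }
  }]'
```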
### State management API
The API for state management can be found in the [state management API reference]({{< ref state_api.md >}}) which describes how to retrieve, save and delete state values by providing keys.
The state management API can be found in the [state management API reference]({{< ref state_api.md >}}) which describes how to retrieve, save, delete and query state values by providing keys.
## Next steps
* Follow these guides on:
* [How-To: Save and get state]({{< ref howto-get-save-state.md >}})
* [How-To: Build a stateful service]({{< ref howto-stateful-service.md >}})
* [How-To: Share state between applications]({{< ref howto-share-state.md >}})
* [How-To: Query state]({{< ref howto-state-query-api.md >}})
* [How-To: Encrypt application state]({{< ref howto-encrypt-state.md >}})
* [State Time-to-Live]({{< ref state-store-ttl.md >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* List of [state store components]({{< ref supported-state-stores.md >}})
* Read the [state management API reference]({{< ref state_api.md >}})

View File

@ -6,6 +6,6 @@ description: "Publish APIs for Dapr services and components through Azure API Ma
weight: 6000
---
Azure API Management (APIM) is a way to create consistent and modern API gateways for back-end services, including as those built with Dapr. Dapr support can be enabled in self-hosted API Management gateways to allow them to forward requests to Dapr services, send messages to Dapr Pub/Sub topics, or trigger Dapr output bindings. For more information, read the guide on [API Management Dapr Integration policies](https://docs.microsoft.com/en-us/azure/api-management/api-management-dapr-policies) and try out the [Dapr & Azure API Management Integration Demo](https://github.com/dapr/samples/tree/master/dapr-apim-integration).
Azure API Management (APIM) is a way to create consistent and modern API gateways for back-end services, including those built with Dapr. Dapr support can be enabled in self-hosted API Management gateways to allow them to forward requests to Dapr services, send messages to Dapr Pub/Sub topics, or trigger Dapr output bindings. For more information, read the guide on [API Management Dapr Integration policies](https://docs.microsoft.com/azure/api-management/api-management-dapr-policies) and try out the [Dapr & Azure API Management Integration Demo](https://github.com/dapr/samples/tree/master/dapr-apim-integration).
{{< button text="Learn more" link="https://docs.microsoft.com/en-us/azure/api-management/api-management-dapr-policies" >}}
{{< button text="Learn more" link="https://docs.microsoft.com/azure/api-management/api-management-dapr-policies" >}}

View File

@ -206,7 +206,7 @@ Note that the value above is the ID of the **Service Principal** which is differ
Keep in mind that the Service Principal that was just created does not have access to any Azure resource by default. Access will need to be granted to each resource as needed, as documented in the docs for the components.
> Note: this step is different from the [official documentation](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) as the short-hand commands included there create a Service Principal that has broad read-write access to all Azure resources in your subscription.
> Note: this step is different from the [official documentation](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli) as the short-hand commands included there create a Service Principal that has broad read-write access to all Azure resources in your subscription.
> Not only would doing that grant your Service Principal more access than you are likely to desire, but it also applies only to the Azure management plane (Azure Resource Manager, or ARM), which is irrelevant for Dapr anyway (all Azure components are designed to interact with the data plane of various services, not ARM).
### Example usage in a Dapr component

View File

@ -6,7 +6,7 @@ description: "Learn how to build workflows using Dapr Workflows and Logic Apps"
weight: 4000
---
Dapr Workflows is a lightweight host that allows developers to run cloud-native workflows locally, on-premises or any cloud environment using the [Azure Logic Apps](https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-overview) workflow engine and Dapr.
Dapr Workflows is a lightweight host that allows developers to run cloud-native workflows locally, on-premises or any cloud environment using the [Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps/logic-apps-overview) workflow engine and Dapr.
## Benefits
@ -31,21 +31,21 @@ Once a workflow request comes in, Dapr Workflows uses the Logic Apps SDK to exec
### Supported actions and triggers
- [HTTP](https://docs.microsoft.com/en-us/azure/connectors/connectors-native-http)
- [Schedule](https://docs.microsoft.com/en-us/azure/logic-apps/concepts-schedule-automated-recurring-tasks-workflows)
- [Request / Response](https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres)
- [HTTP](https://docs.microsoft.com/azure/connectors/connectors-native-http)
- [Schedule](https://docs.microsoft.com/azure/logic-apps/concepts-schedule-automated-recurring-tasks-workflows)
- [Request / Response](https://docs.microsoft.com/azure/connectors/connectors-native-reqres)
### Supported control workflows
- [All control workflows](https://docs.microsoft.com/en-us/azure/connectors/apis-list#control-workflow)
- [All control workflows](https://docs.microsoft.com/azure/connectors/apis-list#control-workflow)
### Supported data manipulation
- [All data operations](https://docs.microsoft.com/en-us/azure/connectors/apis-list#manage-or-manipulate-data)
- [All data operations](https://docs.microsoft.com/azure/connectors/apis-list#manage-or-manipulate-data)
### Not supported
- [Managed connectors](https://docs.microsoft.com/en-us/azure/connectors/apis-list#managed-connectors)
- [Managed connectors](https://docs.microsoft.com/azure/connectors/apis-list#managed-connectors)
## Example
@ -67,7 +67,7 @@ Since Dapr supports many pluggable state stores and bindings, the workflow becom
Prerequisites:
1. Install the [Dapr CLI]({{< ref install-dapr-cli.md >}})
2. [Azure blob storage account](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-create-account-block-blob?tabs=azure-portal)
2. [Azure blob storage account](https://docs.microsoft.com/azure/storage/blobs/storage-blob-create-account-block-blob?tabs=azure-portal)
### Self-hosted

View File

@ -22,7 +22,7 @@ To make sure a component conforms to the standards set by Dapr, there are a set
The levels are as follows:
- [Alpha](#alpha)
- [Beta](#beta)
- [General availability (GA)](#general-availability-ga)
- [Stable](#stable)
### Alpha
@ -42,11 +42,16 @@ All components start at the Alpha stage.
- The component contains a record of the conformance test result reviewed and approved by Dapr maintainers with specific components-contrib version
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases
### General Availability (GA)
### Stable
- Has at least two different users using the component in production
- A GA component has a maintainer in the Dapr community or the Dapr maintainers
- The component is well documented, tested and maintained across multiple versions of components-contrib repo
- The component must have [certification tests](#certification-tests) validating functionality and resiliency
- The component is maintained by Dapr maintainers and supported by the community
- The component is well documented and tested
- A maintainer will address component security, core functionality and test issues according to the Dapr support policy and issue a patch release that includes the patched stable component
### Previous Generally Available (GA) components
Any component that was previously certified as GA is allowed into Stable even if the new requirements are not met.
## Conformance tests
@ -66,24 +71,45 @@ To understand more about them see the readme [here](https://github.com/dapr/comp
- The tests should validate the functional behavior and robustness of the component based on the component specification
- All the details needed to reproduce the tests are added as part of the component conformance test documentation
## Certification tests
Each stable component in the [components-contrib](https://github.com/dapr/components-contrib) repository must have a certification test plan and automated certification tests validating all features supported by the component via Dapr.
The test plan for stable components should include the following scenarios:
- Client reconnection: in case the client library cannot connect to the service for a moment, Dapr sidecar should not require a restart once the service is back online.
- Authentication options: validate the component can authenticate with all the supported options.
- Validate resource provisioning: validate if the component automatically provisions resources on initialization, if applicable.
- All scenarios relevant to the corresponding building block and component.
The test plan must be approved by a Dapr maintainer and be published in a `README.md` file along with the component code.
### Test requirements
- The tests should validate the functional behavior and robustness of the component based on the component specification, reflecting the scenarios from the test plan
- The tests must run successfully as part of the continuous integration of the [components-contrib](https://github.com/dapr/components-contrib) repository
## Component certification process
For a component to be certified tests are run in an environment maintained by the Dapr team.
In order for a component to be certified, tests are run in an environment maintained by the Dapr project.
### New component certification: Alpha->Beta or Beta->GA
### New component certification: Alpha->Beta
For a new component requiring a certification change from Alpha to Beta or Beta to GA, a request for component certification follows these steps:
- An issue is created with a request for certification of the component with the current and the new certification levels
- A user of a component submits a PR for integrating the component to run with the defined conformance test suite
- The user details the environment setup in the issue created, so that a Dapr maintainer can setup the service in a managed environment
- After the environment setup is complete, Dapr maintainers review the PR and if approved merges that PR
- Dapr maintainers review functional correctness with the test being run in an environment maintained by the Dapr team
- Dapr maintainers update the component status document categorized by Dapr Runtime version. This is done as part of the release process in the next release of Dapr runtime
### Existing GA certified component
For an existing GA certified component, conformance test should be run against any changes made to component code or the backing service version or the client version.
In the scenarios where a component version is updated, the component again starts from Alpha stage and then the new component certification is followed for that.
For a new component requiring a certification change from Alpha to Beta, a request for component certification follows these steps:
- Requestor creates an issue in the [components-contrib](https://github.com/dapr/components-contrib) repository for certification of the component with the current and the new certification levels
- Requestor submits a PR to integrate the component with the defined conformance test suite, if not already included
- The user details the environment setup in the issue created, so a Dapr maintainer can set up the service in a managed environment
- After the environment setup is complete, Dapr maintainers review the PR and, if approved, merge it
- Requestor submits a PR in the [docs](https://github.com/dapr/docs) repository, updating the component's certification level
### New component certification: Beta->Stable
For a new component requiring a certification change from Beta to Stable, a request for component certification follows these steps:
- Requestor creates an issue in the [components-contrib](https://github.com/dapr/components-contrib) repository for certification of the component with the current and the new certification levels
- Requestor submits a PR for the test plan as a `README.md` file in the component's source code directory
- The requestor details the test environment requirements in the created PR, including any manual steps or credentials needed
- A Dapr maintainer reviews the test plan, provides feedback or approves it, and eventually merges the PR
- Requestor submits a PR for the automated certification tests, including scripts to provision resources when applicable
- After the test environment setup is completed and credentials provisioned, Dapr maintainers review the PR and, if approved, merge it
- Requestor submits a PR in the [docs](https://github.com/dapr/docs) repository, updating the component's certification level

View File

@ -13,11 +13,11 @@ description: >
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest)
## Deploy an Azure Kubernetes Service cluster
This guide walks you through installing an Azure Kubernetes Service cluster. If you need more information, refer to [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough)
This guide walks you through installing an Azure Kubernetes Service cluster. If you need more information, refer to [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough)
1. Login to Azure

View File

@ -15,7 +15,7 @@ description: >
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Minikube](https://minikube.sigs.k8s.io/docs/start/)
> Note: For Windows, enable Virtualization in BIOS and [install Hyper-V](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v)
> Note: For Windows, enable Virtualization in BIOS and [install Hyper-V](https://docs.microsoft.com/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v)
## Start the Minikube cluster
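As a rough sketch, starting a cluster sized for Dapr might look like the following (the CPU and memory values are illustrative, not requirements):

```bash
# Start Minikube with enough resources for the Dapr control plane and apps
minikube start --cpus=4 --memory=4096

# Verify the node is ready
kubectl get nodes
```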

View File

@ -16,7 +16,7 @@ You will need a Kubernetes cluster with Windows nodes. Many Kubernetes providers
1. Follow your preferred provider's instructions for setting up a cluster with Windows enabled
- [Setting up Windows on Azure AKS](https://docs.microsoft.com/en-us/azure/aks/windows-container-cli)
- [Setting up Windows on Azure AKS](https://docs.microsoft.com/azure/aks/windows-container-cli)
- [Setting up Windows on AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html)
- [Setting up Windows on Google Cloud GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows)
@ -49,7 +49,7 @@ helm install dapr dapr/dapr --set global.daprControlPlaneOs=windows
## Installing Dapr applications
### Windows applications
In order to launch a Dapr application on Windows, you'll first need to create a Docker container with your application installed. For a step by step guide see [Get started: Prep Windows for containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment). Once you have a docker container with your application, create a deployment YAML file with node affinity set to kubernetes.io/os: windows.
In order to launch a Dapr application on Windows, you'll first need to create a Docker container with your application installed. For a step-by-step guide see [Get started: Prep Windows for containers](https://docs.microsoft.com/virtualization/windowscontainers/quick-start/set-up-environment). Once you have a Docker container with your application, create a deployment YAML file with node affinity set to kubernetes.io/os: windows.
1. Create a deployment YAML
@ -162,8 +162,8 @@ helm uninstall dapr
## Related links
- See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for examples of more advanced configuration via node affinity
- [Get started: Prep Windows for containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment)
- [Setting up a Windows enabled Kubernetes cluster on Azure AKS](https://docs.microsoft.com/en-us/azure/aks/windows-container-cli)
- [Get started: Prep Windows for containers](https://docs.microsoft.com/virtualization/windowscontainers/quick-start/set-up-environment)
- [Setting up a Windows enabled Kubernetes cluster on Azure AKS](https://docs.microsoft.com/azure/aks/windows-container-cli)
- [Setting up a Windows enabled Kubernetes cluster on AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html)
- [Setting up Windows on Google Cloud GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows)

View File

@ -38,7 +38,7 @@ Dapr works seamlessly with any user application container image, regardless of i
The Dapr control-plane and sidecar images come from the [daprio Docker Hub](https://hub.docker.com/u/daprio) container registry, which is a public registry.
For information about pulling your application images from a private registry, reference the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). If you are using Azure Container Registry with Azure Kubernetes Service, reference the [AKS documentation](https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration).
For information about pulling your application images from a private registry, reference the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). If you are using Azure Container Registry with Azure Kubernetes Service, reference the [AKS documentation](https://docs.microsoft.com/azure/aks/cluster-container-registry-integration).
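For example, attaching an Azure Container Registry to an AKS cluster can be done with a single CLI call (a sketch; the cluster and registry names are placeholders):

```bash
# Grant the AKS cluster pull access to an existing Azure Container Registry
az aks update --name myAKSCluster --resource-group myResourceGroup --attach-acr myContainerRegistry
```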
## Quickstart

View File

@ -49,14 +49,26 @@ Then proceed with the `dapr upgrade --runtime-version {{% dapr-latest-version lo
From version 1.0.0 onwards, upgrading Dapr using Helm is no longer a disruptive action since existing certificate values will automatically be re-used.
1. Upgrade Dapr from 1.0.0 (or newer) to any [NEW VERSION] > v1.0.0:
1. Upgrade Dapr from 1.0.0 (or newer) to any [NEW VERSION] > 1.0.0:
*Helm does not handle upgrading CRDs, so you need to perform that manually. CRDs are backward-compatible and should only be rolled forward, never downgraded.*
>Note: The Dapr version is included in the commands below.
For version {{% dapr-latest-version long="true" %}}:
```bash
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/components.yaml
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/configuration.yaml
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/subscription.yaml
```
```bash
helm repo update
```
```bash
helm upgrade dapr dapr/dapr --version [NEW VERSION] --namespace dapr-system --wait
helm upgrade dapr dapr/dapr --version {{% dapr-latest-version long="true" %}} --namespace dapr-system --wait
```
*If you're using a values file, remember to add the `--values` option when running the upgrade command.*

View File

@ -94,7 +94,7 @@ If you are using the Azure Kubernetes Service, you can use the default OMS Agent
If you use [Fluentd](https://www.fluentd.org/), we recommend using Elasticsearch and Kibana. This [how-to]({{< ref fluentd.md >}}) shows how to set up Elasticsearch and Kibana in your Kubernetes cluster.
If you are using the Azure Kubernetes Service, you can use [Azure monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview) without indstalling any additional monitoring tools. Also read [How to enable Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-onboard)
If you are using the Azure Kubernetes Service, you can use [Azure Monitor for containers](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-overview) without installing any additional monitoring tools. Also read [How to enable Azure Monitor for containers](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-onboard)
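If the add-on is not yet enabled, a sketch of turning it on from the CLI (the cluster and resource group names are placeholders):

```bash
# Enable the Azure Monitor for containers add-on on an existing AKS cluster
az aks enable-addons --addons monitoring --name myAKSCluster --resource-group myResourceGroup
```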
## References

View File

@ -8,8 +8,8 @@ description: "Enable Dapr metrics and logs with Azure Monitor for Azure Kubernet
## Prerequisites
- [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/)
- [Enable Azure Monitor For containers in AKS](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview)
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
- [Enable Azure Monitor For containers in AKS](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-overview)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Helm 3](https://helm.sh/)
@ -127,6 +127,6 @@ InsightsMetrics
# References
* [Configure scraping of Prometheus metrics with Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-prometheus-integration)
* [Configure agent data collection for Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-agent-config)
* [Azure Monitor Query](https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/query-language)
* [Configure scraping of Prometheus metrics with Azure Monitor for containers](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-prometheus-integration)
* [Configure agent data collection for Azure Monitor for containers](https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-agent-config)
* [Azure Monitor Query](https://docs.microsoft.com/azure/azure-monitor/log-query/query-language)

View File

@ -17,7 +17,7 @@ An installation of Dapr on Kubernetes.
### Setup Application Insights
1. First, you'll need an Azure account. See instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
2. Follow instructions [here](https://docs.microsoft.com/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights Instrumentation key from your Application Insights page.
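As an alternative to the portal, a sketch of these steps with the Azure CLI (the `application-insights` extension and the resource names below are assumptions, not part of this guide):

```bash
# The Application Insights commands live in a CLI extension
az extension add --name application-insights

# Create the resource, then read back its instrumentation key
az monitor app-insights component create --app dapr-tracing --location eastus --resource-group myResourceGroup
az monitor app-insights component show --app dapr-tracing --resource-group myResourceGroup --query instrumentationKey
```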
### Run OpenTelemetry Collector to push to your Application Insights instance

View File

@ -16,7 +16,7 @@ The main difference between the two flows is that the `Authorization Code Grant
Different authorization servers provide different application registration experiences. Here are some samples:
* [Azure AAD](https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-oauth-code)
* [Azure AAD](https://docs.microsoft.com/azure/active-directory/develop/v1-protocols-oauth-code)
* [Facebook](https://developers.facebook.com/apps)
* [Fitbit](https://dev.fitbit.com/build/reference/web-api/oauth2/)
* [GitHub](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/)

View File

@ -55,7 +55,7 @@ Components are implemented in the components-contrib repository and follow a `MA
The [components-contrib](https://github.com/dapr/components-contrib/) repo release is a flat version across all components inside. That is, a version for the components-contrib repo release is made up of all the schemas for the components inside it. A new version of Dapr does not mean there is a new release of components-contrib if there are no component changes.
Note: Components have a production usage lifecycle status: Alpha, Beta and GA (stable). These statuses are not related to their versioning. The tables of supported components shows both their versions and their status.
Note: Components have a production usage lifecycle status: Alpha, Beta and Stable. These statuses are not related to their versioning. The tables of supported components shows both their versions and their status.
* List of [state store components]({{< ref supported-state-stores.md >}})
* List of [pub/sub components]({{< ref supported-pubsub.md >}})
* List of [secret store components]({{< ref supported-secret-stores.md >}})

View File

@ -292,6 +292,132 @@ None.
curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "If-Match: xxxxxxx"
```
## Query state
This endpoint lets you query the key/value state.
{{% alert title="alpha" color="warning" %}}
This API is in alpha stage.
{{% /alert %}}
### HTTP Request
```
POST/PUT http://localhost:<daprPort>/v1.0-alpha/state/<storename>/query
```
#### URL Parameters
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user-configured state store component YAML. Refer to the Dapr state store configuration structure mentioned above.
metadata | (optional) metadata as query parameters to the state store
> Note, all URL parameters are case-sensitive.
#### Response Codes
Code | Description
---- | -----------
200 | State query successful
400 | State store is missing or misconfigured
500 | State query failed
#### Response Body
A JSON object containing an array of `results` and a pagination `token`, as shown in the example below
### Example
```shell
curl http://localhost:3500/v1.0-alpha/state/myStore/query \
-H "Content-Type: application/json" \
-d '{
"query": {
"filter": {
"OR": [
{
"EQ": { "value.person.org": "Dev Ops" }
},
{
"AND": [
{
"EQ": { "value.person.org": "Finance" }
},
{
"IN": { "value.state": [ "CA", "WA" ] }
}
]
}
]
},
"sort": [
{
"key": "value.state",
"order": "DESC"
},
{
"key": "value.person.id"
}
],
"pagination": {
"limit": 3
}
}
}'
```
> The above command returns an array of objects along with a token:
```json
{
"results": [
{
"key": "1",
"data": {
"person": {
"org": "Dev Ops",
"id": 1036
},
"city": "Seattle",
"state": "WA"
},
"etag": "6f54ad94-dfb9-46f0-a371-e42d550adb7d"
},
{
"key": "4",
"data": {
"person": {
"org": "Dev Ops",
"id": 1042
},
"city": "Spokane",
"state": "WA"
},
"etag": "7415707b-82ce-44d0-bf15-6dc6305af3b1"
},
{
"key": "10",
"data": {
"person": {
"org": "Dev Ops",
"id": 1054
},
"city": "New York",
"state": "NY"
},
"etag": "26bbba88-9461-48d1-8a35-db07c374e5aa"
}
],
"token": "3"
}
```
To pass metadata as a query parameter:
```
POST http://localhost:3500/v1.0-alpha/state/myStore/query?metadata.partitionKey=mypartitionKey
```
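For example, the metadata query parameter can be combined with a query body as follows (the partition key value and filter are illustrative):

```shell
curl -X POST 'http://localhost:3500/v1.0-alpha/state/myStore/query?metadata.partitionKey=mypartitionKey' \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "filter": {
        "EQ": { "value.state": "CA" }
      }
    }
  }'
```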
## State transactions
Persists the changes to the state store as a multi-item transaction.

View File

@ -27,6 +27,7 @@ This table is meant to help users understand the equivalent options for running
| `--enable-metrics` | not supported | | configuration spec | Enable prometheus metric (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | Enable profiling |
| `--unix-domain-socket` | `--unix-domain-socket` | `-u` | not supported | On Linux, when communicating with the Dapr sidecar, use unix domain sockets for lower latency and greater throughput compared to TCP ports. Not available on Windows OS |
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`

View File

@ -38,6 +38,7 @@ dapr run [flags] [command]
| `--log-level` | | `info` | The log verbosity. Valid values are: `debug`, `info`, `warn`, `error`, `fatal`, or `panic` |
| `--metrics-port` | `DAPR_METRICS_PORT` | `9090` | The port that Dapr sends its metrics information to |
| `--profile-port` | | `7777` | The port for the profile server to listen on |
| `--unix-domain-socket`, `-u` | | | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows OS |
| `--dapr-http-max-request-size` | | `4` | Max size of request body in MB. |
### Examples
@ -46,6 +47,9 @@ dapr run [flags] [command]
# Run a .NET application
dapr run --app-id myapp --app-port 5000 -- dotnet run
# Run a .NET application with unix domain sockets
dapr run --app-id myapp --app-port 5000 --unix-domain-socket /tmp -- dotnet run
# Run a Java application
dapr run --app-id myapp -- java -jar myapp.jar

View File

@ -17,7 +17,7 @@ Table captions:
> `Status`: [Component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [GA]({{<ref "certification-lifecycle.md#general-availability-ga">}})
- [Stable]({{<ref "certification-lifecycle.md#stable">}})
> `Since`: defines from which Dapr Runtime version, the component is in the current status
> `Component version`: defines the version of the component
@ -28,7 +28,7 @@ Table captions:
| [Apple Push Notifications (APN)]({{< ref apns.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [Cron (Scheduler)]({{< ref cron.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [GraphQL]({{< ref graghql.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [HTTP]({{< ref http.md >}}) | | ✅ | GA | v1 | 1.0 |
| [HTTP]({{< ref http.md >}}) | | ✅ | Stable | v1 | 1.0 |
| [InfluxDB]({{< ref influxdb.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [Kafka]({{< ref kafka.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Kubernetes Events]({{< ref "kubernetes-binding.md" >}}) | ✅ | | Alpha | v1 | 1.0 |
@ -76,11 +76,12 @@ Table captions:
|------|:----------------:|:-----------------:|--------| --------- | ---------- |
| [Azure Blob Storage]({{< ref blobstorage.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [Azure CosmosDB]({{< ref cosmosdb.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [Azure CosmosDBGremlinAPI]({{< ref cosmosdbgremlinapi.md >}}) | | ✅ | Alpha | v1 | 1.5 |
| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Azure Event Hubs]({{< ref eventhubs.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | GA | v1 | 1.0 |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Stable | v1 | 1.0 |
### Zeebe (Camunda Cloud)

View File

@ -32,6 +32,8 @@ spec:
value: <bool>
- name: getBlobRetryCount
value: <integer>
- name: publicAccessLevel
value: <publicAccessLevel>
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -46,6 +48,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| container | Y | Output | The name of the Blob Storage container to write to | `myexamplecontainer` |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to `10` | `1`, `2`
| publicAccessLevel | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to `none` | `blob`, `container`, `none`
## Binding support

View File

@ -48,7 +48,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| collection | Y | Output | The name of the container inside the database. | `"Orders"` |
| partitionKey | Y | Output | The name of the partitionKey to extract from the payload and is used in the container | `"OrderId"`, `"message"` |
For more information see [Azure Cosmos DB resource model](https://docs.microsoft.com/en-us/azure/cosmos-db/account-databases-containers-items).
For more information see [Azure Cosmos DB resource model](https://docs.microsoft.com/azure/cosmos-db/account-databases-containers-items).
## Binding support

View File

@ -0,0 +1,57 @@
---
type: docs
title: "Azure CosmosDBGremlinAPI binding spec"
linkTitle: "Azure CosmosDBGremlinAPI"
description: "Detailed documentation on the Azure CosmosDBGremlinAPI binding component"
---
## Component format
To set up the Azure CosmosDBGremlinAPI binding, create a component of type `bindings.azure.cosmosdb.gremlinapi`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: bindings.azure.cosmosdb.gremlinapi
version: v1
metadata:
- name: url
value: wss://******.gremlin.cosmos.azure.com:443/
- name: masterKey
value: *****
- name: username
value: *****
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|--------|---------|---------|
| url | Y | Output | The CosmosDBGremlinAPI url | `"wss://******.gremlin.cosmos.azure.com:443/"` |
| masterKey | Y | Output | The CosmosDBGremlinAPI account master key | `"masterKey"` |
| username | Y | Output | The username of the CosmosDBGremlinAPI database | `"username"` |
For more information see [Quickstart: Azure Cosmos Graph DB using Gremlin](https://docs.microsoft.com/azure/cosmos-db/graph/create-graph-console).
## Binding support
This component supports **output binding** with the following operations:
- `query`
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -59,6 +59,20 @@ For ease of use, the Dapr cron binding also supports a few shortcuts:
* `@every 15s` where `s` is seconds, `m` minutes, and `h` hours
* `@daily` or `@hourly` which runs at that period from the time the binding is initialized
## Listen to the cron binding
After setting up the cron binding, all you need to do is listen on an endpoint that matches the name of your component. Assuming the [NAME] is `scheduled`, the trigger will arrive as an HTTP `POST` request. The example below shows how a simple Node.js Express application can receive calls on the `/scheduled` endpoint and write a message to the console.
```js
app.post('/scheduled', async function(req, res){
console.log("scheduled endpoint called", req.body)
res.status(200).send()
});
```
When running this code, note that the `/scheduled` endpoint is called every five minutes by the Dapr sidecar.
## Binding support
This component supports both **input and output** binding interfaces.

View File

@ -11,7 +11,7 @@ aliases:
To set up the Azure Event Grid binding, create a component of type `bindings.azure.eventgrid`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [this](https://docs.microsoft.com/en-us/azure/event-grid/) for Azure Event Grid documentation.
See [this](https://docs.microsoft.com/azure/event-grid/) for Azure Event Grid documentation.
```yml
apiVersion: dapr.io/v1alpha1
@ -83,7 +83,7 @@ This component supports **output binding** with the following operations:
- `create`
## Additional information
Event Grid Binding creates an [event subscription](https://docs.microsoft.com/en-us/azure/event-grid/concepts#event-subscriptions) when Dapr initializes. Your Service Principal needs to have the RBAC permissions to enable this.
Event Grid Binding creates an [event subscription](https://docs.microsoft.com/azure/event-grid/concepts#event-subscriptions) when Dapr initializes. Your Service Principal needs to have the RBAC permissions to enable this.
```bash
# First ensure that Azure Resource Manager provider is registered for Event Grid
@ -137,7 +137,7 @@ helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yam
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'
```
If deploying to Azure Kubernetes Service, you can follow [the official MS documentation for rest of the steps](https://docs.microsoft.com/en-us/azure/aks/ingress-tls)
If deploying to Azure Kubernetes Service, you can follow [the official MS documentation for rest of the steps](https://docs.microsoft.com/azure/aks/ingress-tls)
- Add an A record to your DNS zone
- Install cert-manager
- Create a CA cluster issuer

View File

@ -11,7 +11,7 @@ aliases:
To set up the Azure Event Hubs binding, create a component of type `bindings.azure.eventhubs`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [this](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub.
See [this](https://docs.microsoft.com/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub.
```yaml
apiVersion: dapr.io/v1alpha1
@ -45,8 +45,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| connectionString | Y | Output | The [EventHubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature). Note that this is the EventHub itself and not the EventHubs namespace. Make sure to use the child EventHub shared access policy connection string | `"Endpoint=sb://****"` |
| consumerGroup | Y | Output | The name of an [EventHubs Consumer Group](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#consumer-groups) to listen on | `"group1"` |
| connectionString | Y | Output | The [EventHubs connection string](https://docs.microsoft.com/azure/event-hubs/authorize-access-shared-access-signature). Note that this is the EventHub itself and not the EventHubs namespace. Make sure to use the child EventHub shared access policy connection string | `"Endpoint=sb://****"` |
| consumerGroup | Y | Output | The name of an [EventHubs Consumer Group](https://docs.microsoft.com/azure/event-hubs/event-hubs-features#consumer-groups) to listen on | `"group1"` |
| storageAccountName | Y | Output | The name of the Azure Storage account to persist checkpoint data on | `"accountName"` |
| storageAccountKey | Y | Output | The account key for the Azure Storage account to persist checkpoint data on | `"accountKey"` |
| storageContainerName | Y | Output | The name of the container in the Azure Storage account to persist checkpoint data on | `"containerName"` |
@ -60,16 +60,16 @@ This component supports **output binding** with the following operations:
## Input Binding to Azure IoT Hub Events
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.
The device-to-cloud events created by Azure IoT Hub devices will contain additional [IoT Hub System Properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-construct#system-properties-of-d2c-iot-hub-messages), and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:
The device-to-cloud events created by Azure IoT Hub devices will contain additional [IoT Hub System Properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-construct#system-properties-of-d2c-iot-hub-messages), and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:
| System Property Name | Description & Routing Query Keyword |
|----------------------|:------------------------------------|
| `iothub-connection-auth-generation-id` | The **connectionDeviceGenerationId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-auth-generation-id` | The **connectionDeviceGenerationId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-auth-method` | The **connectionAuthMethod** used to authenticate the device that sent the message. |
| `iothub-connection-device-id` | The **deviceId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-module-id` | The **moduleId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-device-id` | The **deviceId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-module-id` | The **moduleId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-enqueuedtime` | The **enqueuedTime** in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
| `message-id` | The user-settable AMQP **messageId**. |

View File

@ -25,7 +25,7 @@ spec:
- name: namespace
value: <NAMESPACE>
- name: resyncPeriodInSec
vale: "<seconds>"
value: "<seconds>"
```
## Spec metadata fields

View File

@ -11,7 +11,6 @@ aliases:
To set up the MQTT binding, create a component of type `bindings.mqtt`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
@ -25,14 +24,17 @@ spec:
- name: url
value: "tcp://[username][:password]@host.domain[:port]"
- name: topic
value: "topic1"
value: "mytopic"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
value: "true"
- name: backOffMaxRetries
value: "0"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
@ -41,19 +43,20 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|---------|---------|---------|
| url | Y | Input/Output | Address of the MQTT broker | Use `**tcp://**` scheme for non-TLS communication. Use`**ssl://**` scheme for TLS communication. <br> "tcp://[username][:password]@host.domain[:port]"
| topic | Y | Input/Output | The topic to listen on or send events to | `"mytopic"` |
| qos | N | Input/Output | Indicates the Quality of Service Level (QoS) of the message. Default 0|`1`
| retain | N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Default `"false"` | `"true"`, `"false"`
| cleanSession | N | Input/Output | will set the "clean session" in the connect message when client connects to an MQTT broker. Default `"true"` | `"true"`, `"false"`
| caCert | Required for using TLS | Input/Output | Certificate authority certificate. Can be `secretKeyRef` to use a secret reference | `0123456789-0123456789`
| clientCert | Required for using TLS | Input/Output | Client certificate. Can be `secretKeyRef` to use a secret reference | `0123456789-0123456789`
| clientKey | Required for using TLS | Input/Output | Client key. Can be `secretKeyRef` to use a secret reference | `012345`
| url | Y | Input/Output | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
| topic | Y | Input/Output | The topic to listen on or send events to. | `"mytopic"` |
| consumerID | N | Input/Output | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | `"myMqttClientApp"`
| qos | N | Input/Output | Indicates the Quality of Service Level (QoS) of the message. Defaults to `0`. |`1`
| retain | N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
| cleanSession | N | Input/Output | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"`. Defaults to `"true"`. | `"true"`, `"false"`
| caCert | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| backOffMaxRetries | N | Input | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`
### Communication using TLS
To configure communication using TLS, ensure mosquitto broker is configured to support certificates.
Pre-requisite includes `certficate authority certificate`, `ca issued client certificate`, `client private key`.
Here is an example.
To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
```yaml
apiVersion: dapr.io/v1alpha1
@ -75,23 +78,31 @@ spec:
value: "false"
- name: cleanSession
value: "false"
- name: backoffMaxRetries
value: "0"
- name: caCert
value: ''
value: ${{ myLoadedCACert }}
- name: clientCert
value: ''
value: ${{ myLoadedClientCert }}
- name: clientKey
value: ''
secretKeyRef:
name: myMqttClientKey
key: myMqttClientKey
auth:
secretStore: <SECRET_STORE_NAME>
```
Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
### Consuming a shared topic
When consuming a shared topic, each consumer must have a unique identifier. By default, the application Id is used to uniquely identify each consumer and publisher. In self-hosted mode, running each Dapr run with a different application Id is sufficient to have them consume from the same shared topic. However on Kubernetes, a pod with multiple application instances shares the same application Id, prohibiting all instances from consuming the same topic. To overcome this, configure the component's `ConsumerID` metadata with a `{uuid}` tag, making each instance to have a randomly generated `ConsumerID` value on start up. For example:
When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each `dapr run` with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component's `consumerID` metadata with a `{uuid}` tag, which will give each instance a randomly generated `consumerID` value on start up. For example:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
name: mqtt-binding
namespace: default
spec:
type: bindings.mqtt
@ -109,6 +120,8 @@ spec:
value: "false"
- name: cleanSession
value: "false"
- name: backoffMaxRetries
value: "0"
```
{{% alert title="Warning" color="warning" %}}
@ -122,6 +135,7 @@ This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@ -119,7 +119,7 @@ You can use [Helm](https://helm.sh/) to quickly create a Redis instance in our K
{{% /codetab %}}
{{% codetab %}}
[Azure Redis](https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis)
[Azure Redis](https://docs.microsoft.com/azure/azure-cache-for-redis/quickstart-create-redis)
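As a sketch, such an instance can also be created from the Azure CLI (the name, resource group, and SKU below are illustrative):

```bash
# Create an Azure Cache for Redis instance
az redis create --name myDaprRedis --resource-group myResourceGroup --location eastus --sku Basic --vm-size c0
```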
{{% /codetab %}}
{{< /tabs >}}

View File

@ -73,7 +73,7 @@ Applications publishing to an Azure SignalR output binding should send a message
}
```
For more information on integration Azure SignalR into a solution check the [documentation](https://docs.microsoft.com/en-us/azure/azure-signalr/)
For more information on integrating Azure SignalR into a solution check the [documentation](https://docs.microsoft.com/azure/azure-signalr/)
## Related links

View File

@ -9,6 +9,16 @@ aliases:
- /developing-applications/middleware/supported-middleware/
---
Table captions:
> `Status`: [Component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [Stable]({{<ref "certification-lifecycle.md#stable">}})
> `Since`: defines from which Dapr Runtime version, the component is in the current status
> `Component version`: defines the version of the component
### HTTP
| Name | Description | Status | Component version |
@ -18,4 +28,4 @@ aliases:
| [OAuth2 client credentials]({{< ref middleware-oauth2clientcredentials.md >}}) | Enables the [OAuth2 Client Credentials Grant flow](https://tools.ietf.org/html/rfc6749#section-4.4) on a Web API | Alpha | v1|
| [Bearer]({{< ref middleware-bearer.md >}}) | Verifies a [Bearer Token](https://tools.ietf.org/html/rfc6750) using [OpenID Connect](https://openid.net/connect/) on a Web API | Alpha | v1|
| [Open Policy Agent]({{< ref middleware-opa.md >}}) | Applies [Rego/OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests | Alpha | v1|
| [Uppercase]({{< ref middleware-uppercase.md >}}) | Converts the body of the request to uppercase letters | GA (For local development) | v1|
| [Uppercase]({{< ref middleware-uppercase.md >}}) | Converts the body of the request to uppercase letters | Stable (for local development) | v1|

View File

@ -99,7 +99,7 @@ This middleware supplies a [`HTTPRequest`](#httprequest) as input.
### HTTPRequest
The `HTTPRequest` input contains all the relevant information about an incoming HTTP Request except it's body.
The `HTTPRequest` input contains all the relevant information about an incoming HTTP Request.
```go
type Input struct {
@ -123,6 +123,8 @@ type HTTPRequest struct {
headers map[string]string
// The request scheme (e.g. http, https)
scheme string
// The request body
body string
}
```

View File

@ -7,9 +7,17 @@ description: The supported name resolution providers that interface with Dapr se
no_list: true
---
## Supported name resolution components
The following components provide name resolution for the service invocation building block.
The following components provide name resolution for the service invocation building block
Table captions:
> `Status`: [Component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [Stable]({{<ref "certification-lifecycle.md#stable">}})
> `Since`: defines from which Dapr Runtime version, the component is in the current status
> `Component version`: defines the version of the component
### Generic
@ -21,19 +29,10 @@ The following components provide name resolution for the service invocation buil
| Name | Status | Component version | Since |
|------|:------:|:-----------------:|:-----:|
| [mDNS]({{< ref nr-mdns.md >}}) | GA | v1 | 1.0 |
| [mDNS]({{< ref nr-mdns.md >}}) | Stable | v1 | 1.0 |
### Kubernetes
| Name | Status | Component version | Since |
|------------|:------:|:-----------------:|:-----:|
| [Kubernetes]({{< ref nr-kubernetes.md >}}) | GA | v1 | 1.0 |
## Definitions
- **Status**: [component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [GA]({{<ref "certification-lifecycle.md#general-availability-ga">}})
- **Since**: defines from which Dapr Runtime version, the component is in the current status
- **Component version**: defines the version of the component
| [Kubernetes]({{< ref nr-kubernetes.md >}}) | Stable | v1 | 1.0 |

View File

@ -14,7 +14,7 @@ Table captions:
> `Status`: [Component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [GA]({{<ref "certification-lifecycle.md#general-availability-ga">}})
- [Stable]({{<ref "certification-lifecycle.md#stable">}})
> `Since`: defines from which Dapr Runtime version, the component is in the current status
> `Component version`: defines the version of the component
@ -22,15 +22,15 @@ Table captions:
| Name | Status | Component version | Since |
|-------------------------------------------------------|--------| -----| ------------- |
| [Apache Kafka]({{< ref setup-apache-kafka.md >}}) | Beta | v1 | 1.0 |
| [Apache Kafka]({{< ref setup-apache-kafka.md >}}) | Stable | v1 | 1.5 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | Alpha | v1 | 1.0 |
| [MQTT]({{< ref setup-mqtt.md >}}) | Alpha | v1 | 1.0 |
| [NATS Streaming]({{< ref setup-nats-streaming.md >}}) | Beta | v1 | 1.0 |
| [In Memory]({{< ref setup-inmemory.md >}}) | Alpha | v1 | 1.4 |
| [JetStream]({{< ref setup-jetstream.md >}}) | Alpha | v1 | 1.4 |
| [NATS Streaming]({{< ref setup-nats-streaming.md >}}) | Beta | v1 | 1.0 |
| [In Memory]({{< ref setup-inmemory.md >}}) | Alpha | v1 | 1.4 |
| [JetStream]({{< ref setup-jetstream.md >}}) | Alpha | v1 | 1.4 |
| [Pulsar]({{< ref setup-pulsar.md >}}) | Alpha | v1 | 1.0 |
| [RabbitMQ]({{< ref setup-rabbitmq.md >}}) | Alpha | v1 | 1.0 |
| [Redis Streams]({{< ref setup-redis-pubsub.md >}}) | GA | v1 | 1.0 |
| [Redis Streams]({{< ref setup-redis-pubsub.md >}}) | Stable | v1 | 1.0 |
### Amazon Web Services (AWS)
@ -49,4 +49,4 @@ Table captions:
| Name | Status | Component version | Since |
|-----------------------------------------------------------|--------| ----------------| -- |
| [Azure Event Hubs]({{< ref setup-azure-eventhubs.md >}}) | Alpha | v1 | 1.0 |
| [Azure Service Bus]({{< ref setup-azure-servicebus.md >}})| GA | v1 | 1.0 |
| [Azure Service Bus]({{< ref setup-azure-servicebus.md >}})| Stable | v1 | 1.0 |

View File

@ -39,6 +39,8 @@ spec:
value: 1024
- name: consumeRetryInterval # Optional.
value: 200ms
- name: version # Optional.
value: 0.10.2.0
```
## Spec metadata fields
@ -54,6 +56,65 @@ spec:
| initialOffset | N | The initial offset to use if no offset was previously committed. Should be "newest" or "oldest". Defaults to "newest". | `"oldest"`
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048`
| consumeRetryInterval | N | The interval between retries when attempting to consume topics. Treats numbers without suffix as milliseconds. Defaults to 100ms. | `200ms`
| version | N | Kafka cluster version. Defaults to 2.0.0.0 | `0.10.2.0`
| caCert | N | Certificate authority certificate, required for using TLS. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | N | Client certificate, required for using TLS. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | N | Client key, required for using TLS. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| skipVerify | N | Skip TLS verification; this is not recommended for use in production. Defaults to `"false"` | `"true"`, `"false"` |
### Communication using TLS
To configure communication using TLS, ensure the Kafka broker is configured to support certificates. Prerequisites include a certificate authority (CA) certificate, a CA-issued client certificate, and a client private key.
Below is an example of a Kafka pubsub component configured to use TLS:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub
namespace: default
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authRequired # Required.
value: "true"
- name: saslUsername # Required if authRequired is `true`.
value: "adminuser"
- name: consumeRetryInterval # Optional.
value: 200ms
- name: version # Optional.
value: 0.10.2.0
- name: saslPassword # Required if authRequired is `true`.
secretKeyRef:
name: kafka-secrets
key: saslPasswordSecret
- name: maxMessageBytes # Optional.
value: 1024
- name: caCert # Certificate authority certificate.
secretKeyRef:
name: kafka-tls
key: caCert
- name: clientCert # Client certificate.
secretKeyRef:
name: kafka-tls
key: clientCert
- name: clientKey # Client key.
secretKeyRef:
name: kafka-tls
key: clientKey
auth:
secretStore: <SECRET_STORE_NAME>
```
The `secretKeyRef` above references a [Kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the TLS information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component.
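For example, a Kubernetes secret with the key names used above could be created like this (a sketch; the local certificate file names are placeholders):

```bash
# Create the secret referenced by the component's secretKeyRef entries
kubectl create secret generic kafka-tls \
  --from-file=caCert=ca.pem \
  --from-file=clientCert=client.pem \
  --from-file=clientKey=client-key.pem
```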
## Per-call metadata fields

View File

@ -46,10 +46,10 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Create an Azure Event Hub
Follow the instructions [here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.
Since this implementation uses the Event Processor Host, you will also need an [Azure Storage Account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal). Follow the instructions [here](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage) to manage the storage account access keys.
Follow the instructions [here](https://docs.microsoft.com/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.
Since this implementation uses the Event Processor Host, you will also need an [Azure Storage Account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal). Follow the instructions [here](https://docs.microsoft.com/azure/storage/common/storage-account-keys-manage) to manage the storage account access keys.
See [here](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature) on how to get the Event Hubs connection string. Note this is not the Event Hubs namespace.
See [here](https://docs.microsoft.com/azure/event-hubs/authorize-access-shared-access-signature) on how to get the Event Hubs connection string. Note this is not the Event Hubs namespace.
### Create consumer groups for each subscriber
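For example, a consumer group per subscribing app could be created with the Azure CLI (a sketch; all names below are placeholders):

```bash
# Create one consumer group per Dapr app ID that subscribes
az eventhubs eventhub consumer-group create \
  --resource-group myResourceGroup \
  --namespace-name myEventHubsNamespace \
  --eventhub-name myEventHub \
  --name myDaprAppID
```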
@ -60,16 +60,16 @@ Note: Dapr passes the name of the Consumer group to the EventHub and so this is
## Subscribing to Azure IoT Hub Events
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.
The device-to-cloud events created by Azure IoT Hub devices will contain additional [IoT Hub System Properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-construct#system-properties-of-d2c-iot-hub-messages), and the Azure Event Hubs pubsub component for Dapr will return the following as part of the response metadata:
The device-to-cloud events created by Azure IoT Hub devices will contain additional [IoT Hub System Properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-construct#system-properties-of-d2c-iot-hub-messages), and the Azure Event Hubs pubsub component for Dapr will return the following as part of the response metadata:
| System Property Name | Description & Routing Query Keyword |
|----------------------|:------------------------------------|
| `iothub-connection-auth-generation-id` | The **connectionDeviceGenerationId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-auth-generation-id` | The **connectionDeviceGenerationId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-auth-method` | The **connectionAuthMethod** used to authenticate the device that sent the message. |
| `iothub-connection-device-id` | The **deviceId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-module-id` | The **moduleId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-device-id` | The **deviceId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-connection-module-id` | The **moduleId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
| `iothub-enqueuedtime` | The **enqueuedTime** in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
| `message-id` | The user-settable AMQP **messageId**. |

View File

@ -116,11 +116,11 @@ In addition to the [settable metadata listed above](#sending-a-message-with-meta
- `metadata.EnqueuedTimeUtc`
- `metadata.SequenceNumber`
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/en-us/rest/api/servicebus/message-headers-and-properties#message-headers).
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
## Create an Azure Service Bus
Follow the instructions [here](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
Follow the instructions [here](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
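As a sketch, the namespace and topic can also be created from the Azure CLI (the names are illustrative):

```bash
# Create a Service Bus namespace and a topic inside it
az servicebus namespace create --resource-group myResourceGroup --name myServiceBusNamespace --location eastus
az servicebus topic create --resource-group myResourceGroup --namespace-name myServiceBusNamespace --name orders
```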
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@ -28,26 +28,32 @@ spec:
- name: retain
value: "false"
- name: cleanSession
value: "false"
value: "true"
- name: backOffMaxRetries
value: "0"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| url | Y | Address of the MQTT broker | Use `**tcp://**` scheme for non-TLS communication. Use`**tcps://**` scheme for TLS communication. <br> "tcp://[username][:password]@host.domain[:port]"
| qos | N | Indicates the Quality of Service Level (QoS) of the message. Default 0|`1`
| retain | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Default `"false"` | `"true"`, `"false"`
| cleanSession | N | will set the "clean session" in the connect message when client connects to an MQTT broker. Default `"true"` | `"true"`, `"false"`
| caCert | Required for using TLS | Certificate authority certificate. Can be `secretKeyRef` to use a secret reference | `0123456789-0123456789`
| clientCert | Required for using TLS | Client certificate. Can be `secretKeyRef` to use a secret reference | `0123456789-0123456789`
| clientKey | Required for using TLS | Client key. Can be `secretKeyRef` to use a secret reference | `012345`
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"` which means the component will not retry processing the message. `"-1"` will retry indefinitely until the message is processed or the application is shutdown. And positive number is treated as the maximum retry count. The component will wait 5 seconds between retries. | `"3"`
| url | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
| consumerID | N | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | `"myMqttClientApp"`
| qos | N | Indicates the Quality of Service Level (QoS) of the message. Defaults to `0`. |`1`
| retain | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
| cleanSession | N | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"`. Defaults to `"true"`. | `"true"`, `"false"`
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`
### Communication using TLS
To configure communication using TLS, ensure mosquitto broker is configured to support certificates.
Pre-requisite includes `certficate authority certificate`, `ca issued client certificate`, `client private key`.
Here is an example.
To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
```yaml
apiVersion: dapr.io/v1alpha1
@ -60,30 +66,38 @@ spec:
version: v1
metadata:
- name: url
value: "tcps://host.domain[:port]"
value: "ssl://host.domain[:port]"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
- name: backoffMaxRetries
value: "0"
- name: caCert
value: ''
value: ${{ myLoadedCACert }}
- name: clientCert
value: ''
value: ${{ myLoadedClientCert }}
- name: clientKey
value: ''
secretKeyRef:
name: myMqttClientKey
key: myMqttClientKey
auth:
secretStore: <SECRET_STORE_NAME>
```
Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
### Consuming a shared topic
When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, running each `dapr run` with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, a pod with multiple application instances shares the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component's `ConsumerID` metadata with a `{uuid}` tag, so that each instance has a randomly generated `ConsumerID` value on startup. For example:
When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each `dapr run` with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component's `consumerID` metadata with a `{uuid}` tag, which will give each instance a randomly generated `consumerID` value on startup. For example:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
name: mqtt-pubsub
namespace: default
spec:
type: pubsub.mqtt
@@ -99,13 +113,14 @@ spec:
value: "false"
- name: cleanSession
value: "false"
- name: backOffMaxRetries
value: "0"
```
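The relevant entry is the `consumerID` metadata with the `{uuid}` tag; for clarity, a minimal sketch of just that entry:
```yaml
- name: consumerID
  value: "{uuid}"
```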
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Create an MQTT broker
{{< tabs "Self-Hosted" "Kubernetes">}}
@@ -116,6 +131,7 @@ You can run a MQTT broker [locally using Docker](https://hub.docker.com/_/eclips
```bash
docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6.9
```
You can then interact with the server using the client port: `mqtt://localhost:1883`
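If you prefer Docker Compose, a minimal equivalent might look like this (a sketch; same image tag as above, no TLS, service name is illustrative):
```yaml
version: "3"
services:
  mqtt:
    image: eclipse-mosquitto:1.6.9
    ports:
      - "1883:1883"   # MQTT client port
      - "9001:9001"   # WebSocket port
```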
{{% /codetab %}}
@@ -171,12 +187,14 @@ spec:
name: websocket
protocol: TCP
```
You can then interact with the server using the client port: `tcp://mqtt-broker.default.svc.cluster.local:1883`
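Applications in the same cluster can then point their component at the broker's service DNS name, for example (a sketch; the component name is illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt-pubsub
  namespace: default
spec:
  type: pubsub.mqtt
  version: v1
  metadata:
  - name: url
    value: "tcp://mqtt-broker.default.svc.cluster.local:1883"
```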
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
- [Pub/Sub building block]({{< ref pubsub >}})

View File

@@ -43,6 +43,12 @@ spec:
value: "100"
- name: backOffMaxRetries
value: "16"
- name: enableDeadLetter # Optional. Enable dead letter or not
value: "true"
- name: maxLen # Optional. Maximum message count in a queue
value: "3000"
- name: maxLenBytes # Optional. Maximum length in bytes of a queue
value: "10485760"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@@ -69,6 +75,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| backOffRandomizationFactor | N | Randomization factor, between 0 and 1, including 0 but not 1. Randomized interval = RetryInterval * (1 ± backOffRandomizationFactor). Defaults to `"0.5"`. | `"0.5"` |
| backOffMultiplier | N | Backoff multiplier for the policy. Increments the interval by multiplying it with the multiplier. Defaults to `"1.5"`. | `"1.5"` |
| backOffMaxElapsedTime | N | After MaxElapsedTime the ExponentialBackOff stops retrying. Two formats are valid: a number with a time unit suffix, or a bare number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"15m"` | `"15m"` |
| enableDeadLetter | N | Enable forwarding messages that cannot be handled to a dead-letter topic. Defaults to `"false"` | `"true"`, `"false"` |
| maxLen | N | The maximum number of messages in a queue and its dead-letter queue (if dead letter is enabled). If both `maxLen` and `maxLenBytes` are set, both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1000"` |
| maxLenBytes | N | The maximum length in bytes of a queue and its dead-letter queue (if dead letter is enabled). If both `maxLen` and `maxLenBytes` are set, both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1048576"` |
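Putting the backoff settings together: with a retry interval of 500ms and a randomization factor of 0.5, the first retry is delayed by a value drawn from 500ms * (1 ± 0.5), i.e. between 250ms and 750ms, and a multiplier of 1.5 then grows the base interval to 750ms for the next attempt. A sketch of the corresponding metadata entries (values illustrative):
```yaml
- name: backOffRandomizationFactor
  value: "0.5"    # randomized interval = RetryInterval * (1 ± 0.5)
- name: backOffMultiplier
  value: "1.5"    # base interval grows 1.5x after each retry
- name: backOffMaxElapsedTime
  value: "15m"    # stop retrying after 15 minutes in total
- name: backOffMaxRetries
  value: "16"
```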
### Backoff policy introduction

View File

@@ -117,7 +117,7 @@ You can use [Helm](https://helm.sh/) to quickly create a Redis instance in our K
{{% /codetab %}}
{{% codetab %}}
[Azure Redis](https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis)
[Azure Redis](https://docs.microsoft.com/azure/azure-cache-for-redis/quickstart-create-redis)
{{% /codetab %}}
{{< /tabs >}}

View File

@@ -14,7 +14,7 @@ Table captions:
> `Status`: [Component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [GA]({{<ref "certification-lifecycle.md#general-availability-ga">}})
- [Stable]({{<ref "certification-lifecycle.md#stable">}})
> `Since`: defines from which Dapr Runtime version, the component is in the current status
> `Component version`: defines the version of the component
@@ -23,10 +23,10 @@ Table captions:
| Name | Status | Component version | Since |
|-------------------------------------------------------------------|------------------------------| ---------------- |-- |
| [Local environment variables]({{< ref envvar-secret-store.md >}}) | Beta | v1 | 1.0 |
| [Local file]({{< ref file-secret-store.md >}}) | Beta | v1 | 1.0 |
| [Local environment variables]({{< ref envvar-secret-store.md >}}) | Beta | v1 | 1.0 |
| [Local file]({{< ref file-secret-store.md >}}) | Beta | v1 | 1.0 |
| [HashiCorp Vault]({{< ref hashicorp-vault.md >}}) | Alpha | v1 | 1.0 |
| [Kubernetes secrets]({{< ref kubernetes-secret-store.md >}}) | GA | v1 | 1.0 |
| [Kubernetes secrets]({{< ref kubernetes-secret-store.md >}}) | Stable | v1 | 1.0 |
### Amazon Web Services (AWS)
@@ -45,4 +45,4 @@ Table captions:
| Name | Status | Component version | Since |
|---------------------------------------------------------------------------------------|--------| ---- |--------------|
| [Azure Key Vault]({{< ref azure-keyvault.md >}}) | GA | v1 | 1.0 |
| [Azure Key Vault]({{< ref azure-keyvault.md >}}) | Stable | v1 | 1.0 |

View File

@@ -265,7 +265,7 @@ To use a **certificate**:
## References
- [Authenticating to Azure]({{< ref authenticating-azure.md >}})
- [Azure CLI: keyvault commands](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Azure CLI: keyvault commands](https://docs.microsoft.com/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})

View File

@@ -14,7 +14,7 @@ Table captions:
> `Status`: [Component certification]({{<ref "certification-lifecycle.md">}}) status
- [Alpha]({{<ref "certification-lifecycle.md#alpha">}})
- [Beta]({{<ref "certification-lifecycle.md#beta">}})
- [GA]({{<ref "certification-lifecycle.md#general-availability-ga">}})
- [Stable]({{<ref "certification-lifecycle.md#stable">}})
> `Since`: defines from which Dapr Runtime version, the component is in the current status
> `Component version`: defines the version of the component
@@ -26,38 +26,38 @@ The following stores are supported, at various levels, by the Dapr state managem
### Generic
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|----------------------------------------------------------------|------|---------------------|------|-----|------|--------| -------|------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | GA | v1 | 1.0 |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [RethinkDB]({{< ref setup-rethinkdb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name |CRUD|Transactional|ETag| [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | [Query]({{< ref howto-state-query-api.md >}}) | Status | Component version | Since |
|----------------------------------------------------|----|-------------|----|----|----|----|-------|----|-----|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | Stable | v1 | 1.0 |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | Stable | v1 | 1.0 |
| [RethinkDB]({{< ref setup-rethinkdb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
### Amazon Web Services (AWS)
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|-----|--------|-----|-----|-------|
| [AWS DynamoDB]({{< ref setup-dynamodb.md>}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name |CRUD|Transactional|ETag| [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | [Query]({{< ref howto-state-query-api.md >}}) | Status | Component version | Since |
|----------------------------------------------------|----|-------------|----|----|----|----|------|----|-----|
| [AWS DynamoDB]({{< ref setup-dynamodb.md>}}) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
### Google Cloud Platform (GCP)
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|-----|--------|-----|-----|-------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name |CRUD|Transactional|ETag| [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | [Query]({{< ref howto-state-query-api.md >}}) | Status | Component version | Since |
|----------------------------------------------------|------|---------------|----|----|----|----|------|----|-----|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
### Microsoft Azure
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|-----|--------|-----|-----|-------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | GA | v1 | 1.0 |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name |CRUD|Transactional|ETag| [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | [Query]({{< ref howto-state-query-api.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|----|-------------|----|----|----|----|-------|----|-----|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Stable | v1 | 1.0 |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Stable | v1 | 1.0 |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | Stable | v1 | 1.5 |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |

View File

@@ -50,7 +50,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Setup Azure Blob Storage
[Follow the instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
[Follow the instructions](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a container for Dapr to use, you can do so beforehand. However, the Blob Storage state provider will create one for you automatically if it doesn't exist.
@@ -155,7 +155,7 @@ This creates the blob file in the container with `key` as filename and `value` a
## Concurrency
Azure Blob Storage state concurrency is achieved by using `ETag`s according to [the Azure Blob Storage documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-concurrency#managing-concurrency-in-blob-storage).
Azure Blob Storage state concurrency is achieved by using `ETag`s according to [the Azure Blob Storage documentation](https://docs.microsoft.com/azure/storage/common/storage-concurrency#managing-concurrency-in-blob-storage).
## Related links

View File

@@ -54,7 +54,7 @@ If you wish to use CosmosDb as an actor store, append the following to the yaml.
## Setup Azure Cosmos DB
[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr can use it.
[Follow the instructions](https://docs.microsoft.com/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr can use it.
**Note: the partition key for the collection must be named "/partitionKey" (this is case-sensitive).**

View File

@@ -43,7 +43,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Setup Azure Table Storage
[Follow the instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
[Follow the instructions](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a table for Dapr to use, you can do so beforehand. However, the Table Storage state provider will create one for you automatically if it doesn't exist.
@@ -79,7 +79,7 @@ will create the following record in a table:
## Concurrency
Azure Table Storage state concurrency is achieved by using `ETag`s according to [the official documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-concurrency#managing-concurrency-in-table-storage).
Azure Table Storage state concurrency is achieved by using `ETag`s according to [the official documentation](https://docs.microsoft.com/azure/storage/common/storage-concurrency#managing-concurrency-in-table-storage).
## Related links

View File

@@ -15,6 +15,6 @@ The following table lists the environment variables used by the Dapr runtime, CL
| APP_API_TOKEN | Your application | The token used by the application to authenticate requests from Dapr API. Read [authenticate requests from Dapr using token authentication]({{< ref app-api-token >}}) for more information. |
| DAPR_HTTP_PORT | Your application | The HTTP port that the Dapr sidecar is listening on. Your application should use this variable to connect to the Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted, or injected by the dapr-sidecar-injector into all the containers in the pod. |
| DAPR_GRPC_PORT | Your application | The gRPC port that the Dapr sidecar is listening on. Your application should use this variable to connect to the Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted, or injected by the dapr-sidecar-injector into all the containers in the pod. |
| DAPR_METRICS_PORT | Your application | The HTTP [Prometheus]({{< ref prometheus >}}) port to which Dapr sends its metrics information. Your application can use this variable to send its application-specific metrics, so that Dapr metrics and application metrics are available together. See [metrics-port] ({{< ref arguments-annotations-overview>}}) for more information |
| DAPR_METRICS_PORT | Your application | The HTTP [Prometheus]({{< ref prometheus >}}) port to which Dapr sends its metrics information. Your application can use this variable to send its application-specific metrics, so that Dapr metrics and application metrics are available together. See [metrics-port]({{< ref arguments-annotations-overview>}}) for more information |
| DAPR_API_TOKEN | Dapr sidecar | The token used for Dapr API authentication for requests from the application. Read [enable API token authentication in Dapr]({{< ref api-token >}}) for more information. |
| NAMESPACE | Dapr sidecar | Used to specify a component's [namespace in self-hosted mode]({{< ref component-scopes >}}) |

View File

@@ -2,7 +2,8 @@
{{ $page := .Get "page" }}
{{ $link := .Get "link" | default "#" }}
{{ $text := .Get "text" }}
{{ $newtab := .Get "newtab" | default "false" }}
{{- if $page -}}{{- $link = ref . $page -}}{{- end -}}
<a class="btn btn-{{ $color }}" href="{{ $link }}" role="button">{{ $text }}</a>
<a class="btn btn-{{ $color }}" href="{{ $link }}" role="button" {{- if eq $newtab "true" }} target="_blank"{{- end }}>{{ $text }}</a>

Binary file not shown. (Before: 88 KiB | After: 191 KiB)

Binary file not shown. (After: 205 KiB)