Merge branch 'v1.11' into issue_1298

This commit is contained in:
Hannah Hunter 2023-06-08 13:19:26 -04:00 committed by GitHub
commit 22a5144f73
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
106 changed files with 3450 additions and 1215 deletions

View File

@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
name: Build and Deploy Job
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
submodules: recursive
fetch-depth: 0
@ -23,22 +23,19 @@ jobs:
run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
- name: Build And Deploy
id: builddeploy
uses: Azure/static-web-apps-deploy@v0.0.1-preview
uses: Azure/static-web-apps-deploy@v1
env:
HUGO_ENV: production
HUGO_VERSION: "0.100.2"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_PROUD_BAY_0E9E0E81E }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
skip_deploy_on_missing_secrets: true
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
action: "upload"
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
# For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
app_location: "/daprdocs" # App source code path
api_location: "api" # Api source code path - optional
output_location: "public" # Built app content directory - optional
app_build_command: "hugo"
###### End of Repository/Build Configurations ######
app_location: "/daprdocs"
app_build_command: "git config --global --add safe.directory /github/workspace && hugo"
output_location: "public"
skip_api_build: true
close_pull_request_job:
if: github.event_name == 'pull_request' && github.event.action == 'closed'
@ -47,8 +44,7 @@ jobs:
steps:
- name: Close Pull Request
id: closepullrequest
uses: Azure/static-web-apps-deploy@v0.0.1-preview
uses: Azure/static-web-apps-deploy@v1
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_PROUD_BAY_0E9E0E81E }}
skip_deploy_on_missing_secrets: true
action: "close"

View File

@ -27,6 +27,7 @@ The following are the building blocks provided by Dapr:
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern, which provides a single-threaded programming model where actors are garbage collected when not in use.
| [**Observability**]({{< ref "observability-concept.md" >}}) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | `/v1.0/secrets` | Dapr provides a secrets building block API and integrates with secret stores such as public cloud stores, local stores and Kubernetes to store the secrets. Services can call the secrets API to retrieve secrets, for example to get a connection string to a database.
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | `/v1.0-alpha1/configuration` | The Configuration API enables you to retrieve and subscribe to application configuration items for supported configuration stores. This enables an application to retrieve specific configuration information, for example, at start up or when configuration changes are made in the store.
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | `/v1.0/configuration` | The Configuration API enables you to retrieve and subscribe to application configuration items for supported configuration stores. This enables an application to retrieve specific configuration information, for example, at start up or when configuration changes are made in the store.
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and provide consistency guarantees.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-alpha1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | `/v1.0-alpha1/crypto` | The Cryptography API enables you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your application.
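
Each building block API above is served by the Dapr sidecar over HTTP or gRPC on the endpoint listed. As a minimal sketch, the following Go program calls the secrets API on a locally running sidecar; the store name `mysecretstore` and secret key `db-connstring` are hypothetical placeholders.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Building block APIs are exposed by the Dapr sidecar, by default on localhost:3500.
	// GET /v1.0/secrets/{storeName}/{key} retrieves a secret from the configured store.
	resp, err := http.Get("http://localhost:3500/v1.0/secrets/mysecretstore/db-connstring")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```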

View File

@ -108,6 +108,13 @@ A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that
<!--- [List of supported workflows]()
- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->
### Cryptography
[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform cryptographic operations, including encrypting and decrypting messages, without exposing keys to your application.
- [List of supported cryptography components]({{< ref supported-cryptography >}})
- [Cryptography implementations](https://github.com/dapr/components-contrib/tree/master/crypto)
### Middleware
Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into the HTTP request processing pipeline. Middleware can perform additional actions on an HTTP request (such as authentication, encryption, and message transformation) before the request is routed to the user code, or the response is returned to the client. The middleware components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block.

View File

@ -44,8 +44,8 @@ Each of these building block APIs is independent, meaning that you can use one,
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | The secrets management API integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | The configuration API enables you to retrieve and subscribe to application configuration items from configuration stores.
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | The distributed lock API enables your application to acquire a lock for any resource that gives it exclusive access until either the lock is released by the application, or a lease timeout occurs.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-alpha1/workflow` | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | The cryptography API provides an abstraction layer on top of security infrastructure such as key vaults. It contains APIs that allow you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your applications.
## Sidecar architecture

View File

@ -101,9 +101,6 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the value of the property `actorStateStore` to `true` in the state store component's metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
#### Time to Live (TTL) on state
You should always set the TTL metadata field (`ttlInSeconds`), or use the equivalent API call in your chosen SDK, when saving actor state to ensure that state is eventually removed. Read the [actors overview]({{< ref actors-overview.md >}}) for more information.
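For illustration, the following Go sketch saves a state item with the `ttlInSeconds` metadata field via the Go SDK's state API; actor SDKs expose an equivalent option when saving actor state. The store name and key here are placeholders.

```go
package main

import (
	"context"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The ttlInSeconds metadata field asks the state store to expire this item
	// after one hour, so the state is eventually removed even if never deleted.
	err = client.SaveState(context.Background(), "statestore", "mykey", []byte("value"), map[string]string{
		"ttlInSeconds": "3600",
	})
	if err != nil {
		panic(err)
	}
}
```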
### Actor timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.

View File

@ -8,7 +8,7 @@ aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---
[Actor reminders]({{< ref "howto-actors-partitioning.md#actor-reminders" >}}) are persisted and continue to be triggered after sidecar restarts. Applications with multiple reminders registered can experience the following issues:
[Actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) are persisted and continue to be triggered after sidecar restarts. Applications with multiple reminders registered can experience the following issues:
- Low throughput on reminders registration and de-registration
- Limited number of reminders registered based on the single record size limit on the state store

View File

@ -40,6 +40,11 @@ Want to put the Dapr configuration API to the test? Walk through the following q
Want to skip the quickstarts? Not a problem. You can try out the configuration building block directly in your application to read and manage configuration data. After [Dapr is installed]({{< ref "getting-started/_index.md" >}}), you can begin using the configuration API starting with [the configuration how-to guide]({{< ref howto-manage-configuration.md >}}).
## Watch the demo
Watch [this demo of using the Dapr Configuration building block](https://youtu.be/tNq-n1XQuLA?t=496)
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/tNq-n1XQuLA?start=496" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Next steps
Follow these guides on:

View File

@ -67,9 +67,11 @@ spec:
```
## Retrieve Configuration Items
### Get configuration items using Dapr SDKs
### Get configuration items
{{< tabs ".NET" Java Python>}}
The following example shows how to get a saved configuration item using the Dapr Configuration API.
{{< tabs ".NET" Java Python Go Javascript "HTTP API (BASH)" "HTTP API (Powershell)">}}
{{% codetab %}}
@ -87,7 +89,6 @@ namespace ConfigurationApi
{
private static readonly string CONFIG_STORE_NAME = "configstore";
[Obsolete]
public static async Task Main(string[] args)
{
using var client = new DaprClientBuilder().Build();
@ -105,7 +106,7 @@ namespace ConfigurationApi
```java
//dependencies
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprPreviewClient;
import io.dapr.client.DaprClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
@ -116,7 +117,7 @@ import reactor.core.publisher.Mono;
private static final String CONFIG_STORE_NAME = "configstore";
public static void main(String[] args) throws Exception {
try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
try (DaprClient client = (new DaprClientBuilder()).build()) {
List<String> keys = new ArrayList<>();
keys.add("orderId1");
keys.add("orderId2");
@ -150,186 +151,536 @@ with DaprClient() as d:
{{% /codetab %}}
{{% codetab %}}

```go
package main

import (
	"context"
	"fmt"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	ctx := context.Background()
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	items, err := client.GetConfigurationItems(ctx, "configstore", []string{"orderId1", "orderId2"})
	if err != nil {
		panic(err)
	}
	for key, item := range items {
		fmt.Printf("get config: key = %s value = %s version = %s", key, (*item).Value, (*item).Version)
	}
}
```

{{% /codetab %}}

{{% codetab %}}

```js
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";

// JS SDK does not support Configuration API over HTTP protocol yet
const protocol = CommunicationProtocolEnum.GRPC;
const host = process.env.DAPR_HOST ?? "localhost";
const port = process.env.DAPR_GRPC_PORT ?? 3500;

const DAPR_CONFIGURATION_STORE = "configstore";
const CONFIGURATION_ITEMS = ["orderId1", "orderId2"];

async function main() {
  const client = new DaprClient(host, port, protocol);
  // Get config items from the config store
  try {
    const config = await client.configuration.get(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
    Object.keys(config.items).forEach((key) => {
      console.log("Configuration for " + key + ":", JSON.stringify(config.items[key]));
    });
  } catch (error) {
    console.log("Could not get config item, err:" + error);
    process.exit(1);
  }
}

main().catch((e) => console.error(e));
```
{{% /codetab %}}
{{% codetab %}}
Launch a dapr sidecar:
```bash
dapr run --app-id orderprocessing --dapr-http-port 3601
```
In a separate terminal, get the configuration item saved earlier:
```bash
curl http://localhost:3601/v1.0/configuration/configstore?key=orderId1
```
{{% /codetab %}}
{{% codetab %}}
Launch a Dapr sidecar:
```bash
dapr run --app-id orderprocessing --dapr-http-port 3601
```
In a separate terminal, get the configuration item saved earlier:
```powershell
Invoke-RestMethod -Uri 'http://localhost:3601/v1.0/configuration/configstore?key=orderId1'
```
{{% /codetab %}}
{{< /tabs >}}
### Subscribe to configuration item updates
Below are code examples that leverage SDKs to subscribe to the keys `[orderId1, orderId2]` using the `configstore` component.
{{< tabs ".NET" "ASP.NET Core" Java Python Go Javascript>}}
{{% codetab %}}
```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;
const string DAPR_CONFIGURATION_STORE = "configstore";
var CONFIGURATION_ITEMS = new List<string> { "orderId1", "orderId2" };
var client = new DaprClientBuilder().Build();
string subscriptionId = string.Empty;
// Subscribe for configuration changes
SubscribeConfigurationResponse subscribe = await client.SubscribeConfiguration(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
// Print configuration changes
await foreach (var items in subscribe.Source)
{
// First invocation when app subscribes to config changes only returns subscription id
if (items.Keys.Count == 0)
{
Console.WriteLine("App subscribed to config changes with subscription id: " + subscribe.Id);
subscriptionId = subscribe.Id;
continue;
}
var cfg = System.Text.Json.JsonSerializer.Serialize(items);
Console.WriteLine("Configuration update " + cfg);
}
```
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
```bash
dapr run --app-id orderprocessing -- dotnet run
```
{{% /codetab %}}
{{% codetab %}}
```csharp
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Dapr.Client;
using Dapr.Extensions.Configuration;
using System.Collections.Generic;
using System.Threading;
namespace ConfigurationApi
{
public class Program
{
public static void Main(string[] args)
{
Console.WriteLine("Starting application.");
CreateHostBuilder(args).Build().Run();
Console.WriteLine("Closing application.");
}
/// <summary>
/// Creates WebHost Builder.
/// </summary>
/// <param name="args">Arguments.</param>
/// <returns>Returns IHostbuilder.</returns>
public static IHostBuilder CreateHostBuilder(string[] args)
{
var client = new DaprClientBuilder().Build();
return Host.CreateDefaultBuilder(args)
.ConfigureAppConfiguration(config =>
{
// Get the initial value and continue to watch it for changes.
config.AddDaprConfigurationStore("configstore", new List<string>() { "orderId1","orderId2" }, client, TimeSpan.FromSeconds(20));
config.AddStreamingDaprConfigurationStore("configstore", new List<string>() { "orderId1","orderId2" }, client, TimeSpan.FromSeconds(20));
})
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
}
}
}
```
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
```bash
dapr run --app-id orderprocessing -- dotnet run
```
{{% /codetab %}}
{{% codetab %}}

```java
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationResponse;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.ArrayList;
import java.util.List;

//code
private static final String CONFIG_STORE_NAME = "configstore";
private static String subscriptionId = null;

public static void main(String[] args) throws Exception {
    try (DaprClient client = (new DaprClientBuilder()).build()) {
        // Subscribe for config changes
        List<String> keys = new ArrayList<>();
        keys.add("orderId1");
        keys.add("orderId2");
        Flux<SubscribeConfigurationResponse> subscription = client.subscribeConfiguration(CONFIG_STORE_NAME, keys);

        // Read config changes for 20 seconds
        subscription.subscribe((response) -> {
            // First ever response contains the subscription id
            if (response.getItems() == null || response.getItems().isEmpty()) {
                subscriptionId = response.getSubscriptionId();
                System.out.println("App subscribed to config changes with subscription id: " + subscriptionId);
            } else {
                response.getItems().forEach((k, v) -> {
                    System.out.println("Configuration update for " + k + ": {'value':'" + v.getValue() + "'}");
                });
            }
        });
        Thread.sleep(20000);
    }
}
```
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
```bash
dapr run --app-id orderprocessing -- mvn spring-boot:run
```

{{% /codetab %}}
{{% codetab %}}
```python
#dependencies
from time import sleep
from dapr.clients import DaprClient
from dapr.clients.grpc._response import ConfigurationResponse

#code
def handler(id: str, resp: ConfigurationResponse):
    for key in resp.items:
        print(f"Subscribed item received key={key} value={resp.items[key].value} "
              f"version={resp.items[key].version} "
              f"metadata={resp.items[key].metadata}", flush=True)

def executeConfiguration():
    with DaprClient() as d:
        storeName = 'configstore'
        keys = ['orderId1', 'orderId2']
        id = d.subscribe_configuration(store_name=storeName, keys=keys,
                                       handler=handler, config_metadata={})
        print("Subscription ID is", id, flush=True)
        sleep(20)

executeConfiguration()
```
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
```bash
dapr run --app-id orderprocessing -- python3 OrderProcessingService.py
```
{{% /codetab %}}
{{% codetab %}}

```go
package main

import (
	"context"
	"fmt"
	"time"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	ctx := context.Background()
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	subscribeID, err := client.SubscribeConfigurationItems(ctx, "configstore", []string{"orderId1", "orderId2"}, func(id string, items map[string]*dapr.ConfigurationItem) {
		for k, v := range items {
			fmt.Printf("get updated config key = %s, value = %s version = %s \n", k, v.Value, v.Version)
		}
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("App subscribed to config changes with subscription id:", subscribeID)
	time.Sleep(20 * time.Second)
}
```

Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:

```bash
dapr run --app-id orderprocessing -- go run main.go
```

{{% /codetab %}}
{{% codetab %}}
```js
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
// JS SDK does not support Configuration API over HTTP protocol yet
const protocol = CommunicationProtocolEnum.GRPC;
const host = process.env.DAPR_HOST ?? "localhost";
const port = process.env.DAPR_GRPC_PORT ?? 3500;
const DAPR_CONFIGURATION_STORE = "configstore";
const CONFIGURATION_ITEMS = ["orderId1", "orderId2"];
async function main() {
const client = new DaprClient(host, port, protocol);
// Subscribe to config updates
try {
const stream = await client.configuration.subscribeWithKeys(
DAPR_CONFIGURATION_STORE,
CONFIGURATION_ITEMS,
(config) => {
console.log("Configuration update", JSON.stringify(config.items));
}
);
// Unsubscribe to config updates and exit app after 20 seconds
setTimeout(() => {
stream.stop();
console.log("App unsubscribed to config changes");
process.exit(0);
}, 20000);
} catch (error) {
console.log("Error subscribing to config updates, err:" + error);
process.exit(1);
}
}
main().catch((e) => console.error(e));
```
Navigate to the directory containing the above code, then run the following command to launch both a Dapr sidecar and the subscriber application:
```bash
dapr run --app-id orderprocessing --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
{{% /codetab %}}
{{< /tabs >}}
### Unsubscribe from configuration item updates
After you've subscribed to watch configuration items, you will receive updates for all of the subscribed keys. To stop receiving updates, you need to explicitly call the unsubscribe API.
The following code examples show how to unsubscribe from configuration updates using the unsubscribe API.
{{< tabs ".NET" Java Python Go Javascript "HTTP API (BASH)" "HTTP API (Powershell)">}}
{{% codetab %}}
```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapr.Client;
const string DAPR_CONFIGURATION_STORE = "configstore";
var client = new DaprClientBuilder().Build();
// Unsubscribe to config updates and exit the app
async Task unsubscribe(string subscriptionId)
{
try
{
await client.UnsubscribeConfiguration(DAPR_CONFIGURATION_STORE, subscriptionId);
Console.WriteLine("App unsubscribed from config changes");
Environment.Exit(0);
}
catch (Exception ex)
{
Console.WriteLine("Error unsubscribing from config updates: " + ex.Message);
}
}
```
{{% /codetab %}}
{{% codetab %}}
```java
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.DaprClient;
import io.dapr.client.domain.ConfigurationItem;
import io.dapr.client.domain.GetConfigurationRequest;
import io.dapr.client.domain.SubscribeConfigurationRequest;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
//code
private static final String CONFIG_STORE_NAME = "configstore";
private static String subscriptionId = null;
public static void main(String[] args) throws Exception {
try (DaprClient client = (new DaprClientBuilder()).build()) {
// Unsubscribe from config changes
UnsubscribeConfigurationResponse unsubscribe = client
.unsubscribeConfiguration(subscriptionId, CONFIG_STORE_NAME).block();
if (unsubscribe.getIsUnsubscribed()) {
System.out.println("App unsubscribed to config changes");
} else {
System.out.println("Error unsubscribing to config updates, err:" + unsubscribe.getMessage());
}
} catch (Exception e) {
System.out.println("Error unsubscribing to config updates," + e.getMessage());
System.exit(1);
}
}
```
{{% /codetab %}}
{{% codetab %}}
```python
#dependencies
from dapr.clients import DaprClient

#code
subscriptionID = ""

with DaprClient() as d:
    isSuccess = d.unsubscribe_configuration(store_name='configstore', id=subscriptionID)
    print(f"Unsubscribed successfully? {isSuccess}", flush=True)
```
{{% /codetab %}}
{{% codetab %}}
```go
package main
import (
	"context"
	"log"
	"time"

	dapr "github.com/dapr/go-sdk/client"
)
var DAPR_CONFIGURATION_STORE = "configstore"
var subscriptionID = ""
func main() {
client, err := dapr.NewClient()
if err != nil {
log.Panic(err)
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := client.UnsubscribeConfigurationItems(ctx, DAPR_CONFIGURATION_STORE, subscriptionID); err != nil {
panic(err)
}
}
```
{{% /codetab %}}
{{% codetab %}}
```js
import { CommunicationProtocolEnum, DaprClient } from "@dapr/dapr";
// JS SDK does not support Configuration API over HTTP protocol yet
const protocol = CommunicationProtocolEnum.GRPC;
const host = process.env.DAPR_HOST ?? "localhost";
const port = process.env.DAPR_GRPC_PORT ?? 3500;
const DAPR_CONFIGURATION_STORE = "configstore";
const CONFIGURATION_ITEMS = ["orderId1", "orderId2"];
async function main() {
const client = new DaprClient(host, port, protocol);
try {
const stream = await client.configuration.subscribeWithKeys(
DAPR_CONFIGURATION_STORE,
CONFIGURATION_ITEMS,
(config) => {
console.log("Configuration update", JSON.stringify(config.items));
}
);
setTimeout(() => {
// Unsubscribe to config updates
stream.stop();
console.log("App unsubscribed to config changes");
process.exit(0);
}, 20000);
} catch (error) {
console.log("Error subscribing to config updates, err:" + error);
process.exit(1);
}
}
main().catch((e) => console.error(e));
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl 'http://localhost:<DAPR_HTTP_PORT>/v1.0/configuration/configstore/<subscription-id>/unsubscribe'
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Uri 'http://localhost:<DAPR_HTTP_PORT>/v1.0/configuration/configstore/<subscription-id>/unsubscribe'
```
{{% /codetab %}}
{{< /tabs >}}
#### Stop watching configuration items
After you've subscribed to watch configuration items, the gRPC server stream starts. Because this stream does not close itself, you must explicitly send an `UnSubscribeConfigurationRequest` message to unsubscribe. This method accepts the following request object:
```proto
// UnSubscribeConfigurationRequest is the message to stop watching the key-value configuration.
message UnSubscribeConfigurationRequest {
// The name of configuration store.
string store_name = 1;
// Optional. The keys of the configuration item to stop watching.
// Store_name and keys should match previous SubscribeConfigurationRequest's keys and store_name.
// Once invoked, the subscription that is watching update for the key-value event is stopped
repeated string keys = 2;
}
```
Using this unsubscribe method, you can stop watching configuration update events. Dapr locates the subscription stream based on the `store_name` and any optional keys supplied and closes it.
## Next steps
* Read [configuration API overview]({{< ref configuration-api-overview.md >}})

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Cryptography"
linkTitle: "Cryptography"
weight: 110
description: "Perform cryptographic operations without exposing keys to your application"
---

View File

@ -0,0 +1,88 @@
---
type: docs
title: Cryptography overview
linkTitle: Overview
weight: 1000
description: "Overview of Dapr Cryptography"
---
With the cryptography building block, you can leverage cryptography in a safe and consistent way. Dapr exposes APIs that allow you to perform operations, such as encrypting and decrypting messages, within key vaults or the Dapr sidecar, without exposing cryptographic keys to your application.
## Why Cryptography?
Applications make extensive use of cryptography, which, when implemented correctly, can make solutions safer even when data is compromised. In certain cases, you may be required to use cryptography to comply with industry regulations (for example, in finance) or legal requirements (including privacy regulations such as GDPR).
However, leveraging cryptography correctly can be difficult. You need to:
- Pick the right algorithms and options
- Learn the proper way to manage and protect keys
- Navigate the operational complexities of limiting access to cryptographic key material
One important requirement for security is limiting access to your cryptographic keys, which is often referred to as "raw key material". Dapr can integrate with key vaults such as Azure Key Vault (with more components coming in the future) which store keys in secure enclaves and perform cryptographic operations in the vaults, without exposing keys to your application or Dapr.
Alternatively, you can configure Dapr to manage the cryptographic keys for you, performing operations within the sidecar, again without exposing raw key material to your application.
## Cryptography in Dapr
With Dapr, you can perform cryptographic operations without exposing cryptographic keys to your application.
<img src="/images/cryptography-overview.png" width=1000 style="padding-bottom:15px;" alt="Diagram showing how Dapr cryptography works with your app">
By using the cryptography building block, you can:
- More easily perform cryptographic operations in a safe way. Dapr provides safeguards against using unsafe algorithms, or using algorithms with unsafe options.
- Keep keys outside of applications. Applications never see the "raw key material", but can request the vault to perform operations with the keys. When using the cryptographic engine of Dapr, operations are performed safely within the Dapr sidecar.
- Experience greater separation of concerns. By using external vaults or cryptographic components, only authorized teams can access private key materials.
- Manage and rotate keys more easily. Keys are managed in the vault and outside of the application, and they can be rotated without needing the developers to be involved (or even without restarting the apps).
- Enable better audit logging to monitor when operations are performed with keys in a vault.
{{% alert title="Note" color="primary" %}}
While both HTTP and gRPC are supported in the alpha release, using the gRPC APIs with the supported Dapr SDKs is the recommended approach for cryptography.
{{% /alert %}}
## Features
### Cryptographic components
The Dapr cryptography building block includes two kinds of components:
- **Components that allow interacting with management services or vaults ("key vaults").**
Similar to how Dapr offers an "abstraction layer" on top of various secret stores or state stores, these components allow interacting with various key vaults such as Azure Key Vault (with more coming in future Dapr releases). With these components, cryptographic operations on the private keys are performed within the vaults and Dapr never sees your private keys.
- **Components based on Dapr's own cryptographic engine.**
When key vaults are not available, you can leverage components based on Dapr's own cryptographic engine. These components, which have `.dapr.` in the name, perform cryptographic operations within the Dapr sidecar, with keys stored on files, Kubernetes secrets, or other sources. Although the private keys are known by Dapr, they are still not available to your applications.
Both kinds of components, either those leveraging key vaults or those using the cryptographic engine in Dapr, offer the same abstraction layer. This allows your solution to switch between various vaults and/or cryptography components as needed. For example, you can use a locally-stored key during development, and a cloud vault in production.
### Cryptographic APIs
Cryptographic APIs allow encrypting and decrypting data using the [Dapr Crypto Scheme v1](https://github.com/dapr/kit/blob/main/schemes/enc/v1/README.md). This is an opinionated encryption scheme designed to use modern, safe cryptographic standards, and processes data (even large files) efficiently as a stream.
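As a minimal sketch of this streaming behavior, the following Go example (assuming a component named `mycryptocomponent` and a key named `mykey`, both placeholders) encrypts a file without buffering it entirely in memory:

```go
package main

import (
	"context"
	"io"
	"os"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Open the plaintext input; the Encrypt API consumes it as a stream.
	in, err := os.Open("plaintext.txt")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	// Encrypt returns a stream of ciphertext in the Dapr Crypto Scheme v1 format.
	enc, err := client.Encrypt(context.Background(), in, dapr.EncryptOptions{
		ComponentName: "mycryptocomponent", // hypothetical component name
		KeyName:       "mykey",
		Algorithm:     "RSA",
	})
	if err != nil {
		panic(err)
	}

	out, err := os.Create("ciphertext.enc")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Copy streams the ciphertext to disk without loading the whole file in memory.
	if _, err := io.Copy(out, enc); err != nil {
		panic(err)
	}
}
```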
## Try out cryptography
### Quickstarts and tutorials
Want to put the Dapr cryptography API to the test? Walk through the following quickstart and tutorials to see cryptography in action:
| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Cryptography quickstart]({{< ref cryptography-quickstart.md >}}) | Encrypt and decrypt messages and large files using RSA and AES keys with the cryptography API. |
### Start using cryptography directly in your app
Want to skip the quickstarts? Not a problem. You can try out the cryptography building block directly in your application to encrypt and decrypt data. After [Dapr is installed]({{< ref "getting-started/_index.md" >}}), you can begin using the cryptography API starting with [the cryptography how-to guide]({{< ref howto-cryptography.md >}}).
## Demo
Watch this [demo video of the Cryptography API from the Dapr Community Call #83](https://youtu.be/PRWYX4lb2Sg?t=1148):
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/PRWYX4lb2Sg?start=1148" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Next steps
{{< button text="Use the cryptography API >>" page="howto-cryptography.md" >}}
## Related links
- [Cryptography overview]({{< ref cryptography-overview.md >}})
- [Cryptography component specs]({{< ref supported-cryptography >}})

View File

@ -0,0 +1,192 @@
---
type: docs
title: "How to: Use the cryptography APIs"
linkTitle: "How to: Use cryptography"
weight: 2000
description: "Learn how to encrypt and decrypt files"
---
Now that you've read about [Cryptography as a Dapr building block]({{< ref cryptography-overview.md >}}), let's walk through using the cryptography APIs with the SDKs.
{{% alert title="Note" color="primary" %}}
Dapr cryptography is currently in alpha.
{{% /alert %}}
## Encrypt
{{< tabs "JavaScript" "Go" >}}
{{% codetab %}}
<!--JavaScript-->
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt data in a buffer or a string:
```js
// When passing data (a buffer or string), `encrypt` returns a Buffer with the encrypted message
const ciphertext = await client.crypto.encrypt(plaintext, {
// Name of the Dapr component (required)
componentName: "mycryptocomponent",
// Name of the key stored in the component (required)
keyName: "mykey",
// Algorithm used for wrapping the key, which must be supported by the key named above.
// Options include: "RSA", "AES"
keyWrapAlgorithm: "RSA",
});
```
The APIs can also be used with streams, to encrypt data more efficiently when it comes from a stream. The example below encrypts a file, writing to another file, using streams:
```js
// `encrypt` can be used as a Duplex stream
await pipeline(
fs.createReadStream("plaintext.txt"),
await client.crypto.encrypt({
// Name of the Dapr component (required)
componentName: "mycryptocomponent",
// Name of the key stored in the component (required)
keyName: "mykey",
// Algorithm used for wrapping the key, which must be supported by the key named above.
// Options include: "RSA", "AES"
keyWrapAlgorithm: "RSA",
}),
fs.createWriteStream("ciphertext.out"),
);
```
{{% /codetab %}}
{{% codetab %}}
<!--go-->
Using the Dapr SDK in your project, you can encrypt a stream of data, such as a file.
```go
out, err := sdkClient.Encrypt(context.Background(), rf, dapr.EncryptOptions{
// Name of the Dapr component (required)
ComponentName: "mycryptocomponent",
// Name of the key stored in the component (required)
KeyName: "mykey",
// Algorithm used for wrapping the key, which must be supported by the key named above.
// Options include: "RSA", "AES"
Algorithm: "RSA",
})
```
The following example puts the `Encrypt` API in context, with code that reads the file, encrypts it, then stores the result in another file.
```go
// Input file, clear-text
rf, err := os.Open("input")
if err != nil {
panic(err)
}
defer rf.Close()
// Output file, encrypted
wf, err := os.Create("output.enc")
if err != nil {
panic(err)
}
defer wf.Close()
// Encrypt the data using Dapr
out, err := sdkClient.Encrypt(context.Background(), rf, dapr.EncryptOptions{
// These are the 3 required parameters
ComponentName: "mycryptocomponent",
KeyName: "mykey",
Algorithm: "RSA",
})
if err != nil {
panic(err)
}
// Read the stream and copy it to the out file
n, err := io.Copy(wf, out)
if err != nil {
panic(err)
}
fmt.Println("Written", n, "bytes")
```
The following example uses the `Encrypt` API to encrypt a string.
```go
// Input string
rf := strings.NewReader("Amor, cha nullo amato amar perdona, mi prese del costui piacer sì forte, che, come vedi, ancor non mabbandona")
// Encrypt the data using Dapr
enc, err := sdkClient.Encrypt(context.Background(), rf, dapr.EncryptOptions{
ComponentName: "mycryptocomponent",
KeyName: "mykey",
Algorithm: "RSA",
})
if err != nil {
panic(err)
}
// Read the encrypted data into a byte slice
encBytes, err := io.ReadAll(enc)
if err != nil {
panic(err)
}
fmt.Println("Read", len(encBytes), "bytes of encrypted data")
```
{{% /codetab %}}
{{< /tabs >}}
## Decrypt
{{< tabs "JavaScript" "Go" >}}
{{% codetab %}}
<!--JavaScript-->
Using the Dapr SDK, you can decrypt data in a buffer or using streams.
```js
// When passing data as a buffer, `decrypt` returns a Buffer with the decrypted message
const plaintext = await client.crypto.decrypt(ciphertext, {
// Only required option is the component name
componentName: "mycryptocomponent",
});
// `decrypt` can also be used as a Duplex stream
await pipeline(
fs.createReadStream("ciphertext.out"),
await client.crypto.decrypt({
// Only required option is the component name
componentName: "mycryptocomponent",
}),
fs.createWriteStream("plaintext.out"),
);
```
{{% /codetab %}}
{{% codetab %}}
<!--go-->
To decrypt a file, add the `Decrypt` gRPC API to your project.
In the following example, `out` is a stream that can be written to file or read in memory, as in the examples above.
```go
out, err := sdkClient.Decrypt(context.Background(), rf, dapr.DecryptOptions{
// Only required option is the component name
ComponentName: "mycryptocomponent",
})
```
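
To put the `Decrypt` call in context, a sketch like the following (mirroring the `Encrypt` file example above, with the same hypothetical component name) reads an encrypted file and writes the decrypted output to another file:

```go
// Input file, encrypted
rf, err := os.Open("output.enc")
if err != nil {
	panic(err)
}
defer rf.Close()

// Output file, clear-text
wf, err := os.Create("output.out")
if err != nil {
	panic(err)
}
defer wf.Close()

// Decrypt the data using Dapr; the component name is the only required option
out, err := sdkClient.Decrypt(context.Background(), rf, dapr.DecryptOptions{
	ComponentName: "mycryptocomponent",
})
if err != nil {
	panic(err)
}

// Read the stream and copy it to the output file
n, err := io.Copy(wf, out)
if err != nil {
	panic(err)
}
fmt.Println("Written", n, "bytes")
```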
{{% /codetab %}}
{{< /tabs >}}
## Next steps
[Cryptography component specs]({{< ref supported-cryptography >}})

View File

@ -186,7 +186,7 @@ Place `subscription.yaml` in the same directory as your `pubsub.yaml` component.
Below are code examples that leverage Dapr SDKs to subscribe to the topic you defined in `subscription.yaml`.
{{< tabs Dotnet Java Python Go Javascript>}}
{{< tabs Dotnet Java Python Go JavaScript>}}
{{% codetab %}}

View File

@ -477,9 +477,15 @@ Some pub/sub brokers support sending and receiving multiple messages in a single
For components that do not have bulk publish or subscribe support, Dapr runtime uses the regular publish and subscribe APIs to send and receive messages one by one. This is still more efficient than directly using the regular publish or subscribe APIs, because applications can still send/receive multiple messages in a single request to/from Dapr.
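
For illustration, the following Go sketch publishes several messages in one request using the Go SDK's bulk publish API; the pub/sub component name `order-pub-sub`, the `orders` topic, and the `PublishEvents` helper are assumptions based on the SDK's alpha bulk API and should be verified against your SDK version:

```go
package main

import (
	"context"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Send multiple events to the broker in one request; Dapr falls back to
	// one-by-one publishing for components without native bulk support.
	events := []interface{}{"order 1", "order 2", "order 3"}
	res := client.PublishEvents(context.Background(), "order-pub-sub", "orders", events)
	if res.Error != nil {
		panic(res.Error)
	}
}
```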
## Watch the demo
## Demos
Watch [this video for a demo on bulk pub/sub](https://youtu.be/BxiKpEmchgQ?t=1170):
Watch the following demos and presentations about bulk pub/sub.
### [KubeCon Europe 2023 presentation](https://youtu.be/WMBAo-UNg6o)
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/WMBAo-UNg6o" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
### [Dapr Community Call #77 presentation](https://youtu.be/BxiKpEmchgQ?t=1170)
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/BxiKpEmchgQ?start=1170" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

View File

@ -11,22 +11,26 @@ To enable message routing and provide additional context with each message, Dapr
Dapr uses CloudEvents to provide additional context to the event payload, enabling features like:
- Tracing
- Deduplication by message Id
- Content-type for proper deserialization of event data
- Verification of sender application
## CloudEvents example
Dapr implements the following CloudEvents fields when creating a message topic.
A publish operation to Dapr results in a cloud event envelope containing the following fields:
- `id`
- `source`
- `specversion`
- `type`
- `traceparent`
- `traceid`
- `tracestate`
- `topic`
- `pubsubname`
- `time`
- `datacontenttype` (optional)
The following example demonstrates an `orders` topic message sent by Dapr that includes a W3C `traceid` unique to the message, the `data` and the fields for the CloudEvent where the data content is serialized as JSON.
The following example demonstrates a cloud event generated by Dapr for a publish operation to the `orders` topic that includes a W3C `traceid` unique to the message, the `data` and the fields for the CloudEvent where the data content is serialized as JSON.
```json
{
@ -65,6 +69,19 @@ As another example of a v1.0 CloudEvent, the following shows data as XML content
## Publish your own CloudEvent
If you want to use your own CloudEvent, make sure to specify the [`datacontenttype`]({{< ref "pubsub-overview.md#setting-message-content-types" >}}) as `application/cloudevents+json`.
If the CloudEvent that was authored by the app does not contain the [minimum required fields](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#required-attributes) in the CloudEvent specification, the message is rejected. Dapr adds the following fields to the CloudEvent if they are missing:
- `time`
- `traceid`
- `traceparent`
- `tracestate`
- `topic`
- `pubsubname`
- `source`
- `type`
- `specversion`
You can add additional fields to a custom CloudEvent that are not part of the official CloudEvent specification. Dapr will pass these fields as-is.
### Example
@ -102,6 +119,10 @@ Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body
{{< /tabs >}}
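
The same publish can be sketched with the Go SDK; the `PublishEventWithContentType` option and the `order-pub-sub` component name here are assumptions for illustration:

```go
package main

import (
	"context"
	"encoding/json"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// A minimal CloudEvent carrying the required attributes; Dapr fills in
	// missing fields such as time and the trace headers.
	event := map[string]interface{}{
		"specversion": "1.0",
		"type":        "com.example.order",
		"source":      "checkout",
		"id":          "5929aaac-a5e2-4ca1-859c-edfe73f11565",
		"data":        map[string]string{"orderId": "100"},
	}
	data, err := json.Marshal(event)
	if err != nil {
		panic(err)
	}

	// datacontenttype must be application/cloudevents+json for custom CloudEvents
	err = client.PublishEvent(context.Background(), "order-pub-sub", "orders", data,
		dapr.PublishEventWithContentType("application/cloudevents+json"))
	if err != nil {
		panic(err)
	}
}
```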
## Event deduplication
When using cloud events created by Dapr, the envelope contains an `id` field which can be used by the app to perform message deduplication. Dapr does not handle deduplication automatically. Dapr supports using message brokers that natively enable message deduplication.
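
A minimal sketch of app-side deduplication keyed on the envelope `id` (the in-memory map is illustrative only; a real subscriber would use a shared store):

```go
package main

import (
	"fmt"
	"sync"
)

// seenIDs tracks CloudEvent ids that have already been processed.
var (
	mu      sync.Mutex
	seenIDs = map[string]bool{}
)

// handleEvent skips duplicate deliveries by checking the envelope id.
func handleEvent(id string, data []byte) {
	mu.Lock()
	if seenIDs[id] {
		mu.Unlock()
		return // duplicate delivery; already processed
	}
	seenIDs[id] = true
	mu.Unlock()

	fmt.Printf("processing event %s: %s\n", id, data)
}

func main() {
	// Simulate a redelivered message: the second call is ignored.
	handleEvent("5929aaac-a5e2-4ca1-859c-edfe73f11565", []byte(`{"orderId":"100"}`))
	handleEvent("5929aaac-a5e2-4ca1-859c-edfe73f11565", []byte(`{"orderId":"100"}`))
}
```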
## Next steps
- Learn why you might [not want to use CloudEvents]({{< ref pubsub-raw.md >}})

View File

@ -30,10 +30,12 @@ The Dapr sidecar doesn't load any workflow definitions. Rather, the sidecar si
[Workflow activities]({{< ref "workflow-features-concepts.md#workflow-activities" >}}) are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process.
{{< tabs ".NET" >}}
{{< tabs ".NET" Python >}}
{{% codetab %}}
<!--csharp-->
Define the workflow activities you'd like your workflow to perform. Activities are class definitions and can take inputs and outputs. Activities also participate in dependency injection, such as binding to a Dapr client.
The activities called in the example below are:
@ -96,6 +98,24 @@ public class ProcessPaymentActivity : WorkflowActivity<PaymentRequest, object>
[See the full `ProcessPaymentActivity.cs` workflow activity example.](https://github.com/dapr/dotnet-sdk/blob/master/examples/Workflow/WorkflowConsoleApp/Activities/ProcessPaymentActivity.cs)
{{% /codetab %}}
{{% codetab %}}
<!--python-->
Define the workflow activities you'd like your workflow to perform. Activities are function definitions and can take inputs and outputs. The following example creates a counter (activity) called `hello_act` that notifies users of the current counter value. `hello_act` receives a `WorkflowActivityContext` object as its context parameter.
```python
def hello_act(ctx: WorkflowActivityContext, input):
    global counter
    counter += input
    print(f'New counter value is: {counter}!', flush=True)
```
[See the `hello_act` workflow activity in context.](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py#LL40C1-L43C59)
{{% /codetab %}}
{{< /tabs >}}
@ -104,10 +124,12 @@ public class ProcessPaymentActivity : WorkflowActivity<PaymentRequest, object>
Next, register and call the activities in a workflow.
{{< tabs ".NET" >}}
{{< tabs ".NET" Python >}}
{{% codetab %}}
<!--csharp-->
The `OrderProcessingWorkflow` class is derived from a base class called `Workflow` with input and output parameter types. It also includes a `RunAsync` method that does the heavy lifting of the workflow and calls the workflow activities.
```csharp
@ -144,6 +166,28 @@ The `OrderProcessingWorkflow` class is derived from a base class called `Workflo
[See the full workflow example in `OrderProcessingWorkflow.cs`.](https://github.com/dapr/dotnet-sdk/blob/master/examples/Workflow/WorkflowConsoleApp/Workflows/OrderProcessingWorkflow.cs)
{{% /codetab %}}
{{% codetab %}}
<!--python-->
The `hello_world_wf` function receives a `DaprWorkflowContext` object with input and output parameter types. It also includes `yield` statements that do the heavy lifting of the workflow by calling the workflow activities.
```python
def hello_world_wf(ctx: DaprWorkflowContext, input):
    print(f'{input}')
    yield ctx.call_activity(hello_act, input=1)
    yield ctx.call_activity(hello_act, input=10)
    yield ctx.wait_for_external_event("event1")
    yield ctx.call_activity(hello_act, input=100)
    yield ctx.call_activity(hello_act, input=1000)
```
[See the `hello_world_wf` workflow in context.](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py#LL32C1-L38C51)
{{% /codetab %}}
{{< /tabs >}}
@ -152,10 +196,12 @@ The `OrderProcessingWorkflow` class is derived from a base class called `Workflo
Finally, compose the application using the workflow.
{{< tabs ".NET" >}}
{{< tabs ".NET" Python >}}
{{% codetab %}}
<!--csharp-->
[In the following `Program.cs` example](https://github.com/dapr/dotnet-sdk/blob/master/examples/Workflow/WorkflowConsoleApp/Program.cs), for a basic ASP.NET order processing application using the .NET SDK, your project code would include:
- A NuGet package called `Dapr.Workflow` to receive the .NET SDK capabilities
@ -223,8 +269,97 @@ app.Run();
{{% /codetab %}}
{{< /tabs >}}
{{% codetab %}}
<!--python-->
[In the following example](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py), for a basic Python hello world application using the Python SDK, your project code would include:
- The Dapr Python SDK's `DaprClient` class to receive the Python SDK capabilities.
- A builder with extensions called:
- `WorkflowRuntime`: Allows you to register workflows and workflow activities
- `DaprWorkflowContext`: Allows you to [create workflows]({{< ref "#write-the-workflow" >}})
- `WorkflowActivityContext`: Allows you to [create workflow activities]({{< ref "#write-the-workflow-activities" >}})
- API calls. In the example below, these calls start, pause, resume, purge, and terminate the workflow.
```python
from dapr.ext.workflow import WorkflowRuntime, DaprWorkflowContext, WorkflowActivityContext
from dapr.clients import DaprClient
from dapr.conf import settings

# ...

def main():
    with DaprClient() as d:
        host = settings.DAPR_RUNTIME_HOST
        port = settings.DAPR_GRPC_PORT
        workflowRuntime = WorkflowRuntime(host, port)
        workflowRuntime.register_workflow(hello_world_wf)
        workflowRuntime.register_activity(hello_act)
        workflowRuntime.start()

        # Start workflow
        print("==========Start Counter Increase as per Input:==========")
        start_resp = d.start_workflow(instance_id=instanceId, workflow_component=workflowComponent,
                                      workflow_name=workflowName, input=inputData, workflow_options=workflowOptions)
        print(f"start_resp {start_resp.instance_id}")

        # ...

        # Pause workflow
        d.pause_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        getResponse = d.get_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        print(f"Get response from {workflowName} after pause call: {getResponse.runtime_status}")

        # Resume workflow
        d.resume_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        getResponse = d.get_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        print(f"Get response from {workflowName} after resume call: {getResponse.runtime_status}")

        sleep(1)

        # Raise workflow event
        d.raise_workflow_event(instance_id=instanceId, workflow_component=workflowComponent,
                               event_name=eventName, event_data=eventData)

        sleep(5)

        # Purge workflow
        d.purge_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        try:
            getResponse = d.get_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        except DaprInternalError as err:
            if nonExistentIDError in err._message:
                print("Instance Successfully Purged")

        # Kick off another workflow for termination purposes
        start_resp = d.start_workflow(instance_id=instanceId, workflow_component=workflowComponent,
                                      workflow_name=workflowName, input=inputData, workflow_options=workflowOptions)
        print(f"start_resp {start_resp.instance_id}")

        # Terminate workflow
        d.terminate_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        sleep(1)
        getResponse = d.get_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        print(f"Get response from {workflowName} after terminate call: {getResponse.runtime_status}")

        # Purge workflow
        d.purge_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        try:
            getResponse = d.get_workflow(instance_id=instanceId, workflow_component=workflowComponent)
        except DaprInternalError as err:
            if nonExistentIDError in err._message:
                print("Instance Successfully Purged")

        workflowRuntime.shutdown()

if __name__ == '__main__':
    main()
```
{{% /codetab %}}
{{< /tabs >}}
{{% alert title="Important" color="warning" %}}
@ -241,4 +376,6 @@ Now that you've authored a workflow, learn how to manage it.
## Related links
- [Workflow overview]({{< ref workflow-overview.md >}})
- [Workflow API reference]({{< ref workflow_api.md >}})
- [Try out the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- Try out the full SDK examples:
- [.NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- [Python example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow)

View File

@ -8,12 +8,12 @@ description: Manage and run workflows
Now that you've [authored the workflow and its activities in your application]({{< ref howto-author-workflow.md >}}), you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}).
{{< tabs ".NET SDK" HTTP >}}
{{< tabs ".NET" Python HTTP >}}
<!--NET-->
{{% codetab %}}
Manage your workflow within your code. In the `OrderProcessingWorkflow` example from the [Author a workflow]({{< ref "howto-author-workflow.md#write-the-workflow" >}}) guide, the workflow is registered in the code. You can now start, terminate, and get information about a running workflow:
Manage your workflow within your code. In the `OrderProcessingWorkflow` example from the [Author a workflow]({{< ref "howto-author-workflow.md#write-the-application" >}}) guide, the workflow is registered in the code. You can now start, terminate, and get information about a running workflow:
```csharp
string orderId = "exampleOrderId";
@ -46,6 +46,56 @@ await daprClient.PurgeWorkflowAsync(orderId, workflowComponent);
{{% /codetab %}}
<!--Python-->
{{% codetab %}}
Manage your workflow within your code. In the workflow example from the [Author a workflow]({{< ref "howto-author-workflow.md#write-the-application" >}}) guide, the workflow is registered in the code using the following APIs:
- **start_workflow**: Start an instance of a workflow
- **get_workflow**: Get information on the status of the workflow
- **pause_workflow**: Pauses or suspends a workflow instance that can later be resumed
- **resume_workflow**: Resumes a paused workflow instance
- **raise_workflow_event**: Raise an event on a workflow
- **purge_workflow**: Removes all metadata related to a specific workflow instance
- **terminate_workflow**: Terminate or stop a particular instance of a workflow
```python
from dapr.ext.workflow import WorkflowRuntime, DaprWorkflowContext, WorkflowActivityContext
from dapr.clients import DaprClient

# Example parameters (illustrative values)
instanceId = "exampleInstanceID"
workflowComponent = "dapr"
workflowName = "hello_world_wf"
eventName = "event1"
eventData = "eventData"
inputData = 100  # hypothetical workflow input for illustration
workflowOptions = {}  # hypothetical start options for illustration

# Create a Dapr client to call the workflow management APIs
d = DaprClient()
# Start the workflow
start_resp = d.start_workflow(instance_id=instanceId, workflow_component=workflowComponent,
workflow_name=workflowName, input=inputData, workflow_options=workflowOptions)
# Get info on the workflow
getResponse = d.get_workflow(instance_id=instanceId, workflow_component=workflowComponent)
# Pause the workflow
d.pause_workflow(instance_id=instanceId, workflow_component=workflowComponent)
# Resume the workflow
d.resume_workflow(instance_id=instanceId, workflow_component=workflowComponent)
# Raise an event on the workflow.
d.raise_workflow_event(instance_id=instanceId, workflow_component=workflowComponent,
event_name=eventName, event_data=eventData)
# Purge the workflow
d.purge_workflow(instance_id=instanceId, workflow_component=workflowComponent)
# Terminate the workflow
d.terminate_workflow(instance_id=instanceId, workflow_component=workflowComponent)
```
{{% /codetab %}}
<!--HTTP-->
{{% codetab %}}
@ -121,5 +171,7 @@ Learn more about these HTTP calls in the [workflow API reference guide]({{< ref
## Next steps
- [Try out the Workflow quickstart]({{< ref workflow-quickstart.md >}})
- Try out the full SDK examples:
- [.NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- [Python example](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py)
- [Workflow API reference]({{< ref workflow_api.md >}})


@ -14,7 +14,7 @@ For more information on how workflow state is managed, see the [workflow architecture guide]({{< ref workflow-architecture.md >}}).
## Workflows
Dapr Workflows are functions you write that define a series of tasks to be executed in a particular order. The Dapr Workflow engine takes care of scheduling and execution of the tasks, including managing failures and retries. If the app hosting your workflows is scaled out across multiple machines, the workflow engine may also load balance the execution of workflows and their tasks across multiple machines.
There are several different kinds of tasks that a workflow can schedule, including
- [Activities]({{< ref "workflow-features-concepts.md#workflow-activities" >}}) for executing custom logic
@ -24,7 +24,7 @@ There are several different kinds of tasks that a workflow can schedule, including
### Workflow identity
Each workflow you define has a type name, and individual executions of a workflow require a unique _instance ID_. Workflow instance IDs can be generated by your app code, which is useful when workflows correspond to business entities like documents or jobs, or can be auto-generated UUIDs. A workflow's instance ID is useful for debugging and also for managing workflows using the [Workflow APIs]({{< ref workflow_api.md >}}).
Only one workflow instance with a given ID can exist at any given time. However, if a workflow instance completes or fails, its ID can be reused by a new workflow instance. Note, however, that the new workflow instance effectively replaces the old one in the configured state store.
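For example, a caller can supply its own instance ID when starting a workflow through the workflow HTTP API. The sketch below assumes the default Dapr HTTP port and uses illustrative workflow and instance names:
```bash
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=order-12345" \
  -H "Content-Type: application/json" \
  -d '{"Name": "Paperclips", "Quantity": 1}'
```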
@ -36,7 +36,7 @@ When a workflow "awaits" a scheduled task, it unloads itself from memory until the task completes.
When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that already completed, instead of scheduling that task again, the workflow engine:
1. Returns the stored result of the completed task to the workflow.
1. Continues execution until the next "await" point.
This "replay" behavior continues until the workflow function completes or fails with an error.


@ -10,32 +10,32 @@ description: "Overview of Dapr Workflow"
Dapr Workflow is currently in alpha.
{{% /alert %}}
Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way. Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices. Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings.
The durable, resilient Dapr Workflow capability:
- Offers a built-in workflow runtime for driving Dapr Workflow execution.
- Provides SDKs for authoring workflows in code, using any language.
- Provides HTTP and gRPC APIs for managing workflows (start, query, suspend/resume, terminate).
- Integrates with any other workflow runtime via workflow components.
<img src="/images/workflow-overview/workflow-overview.png" width=800 alt="Diagram showing basics of Dapr Workflow">
Some example scenarios that Dapr Workflow can perform are:
- Order processing involving orchestration between inventory management, payment systems, and shipping services.
- HR onboarding workflows coordinating tasks across multiple departments and participants.
- Orchestrating the roll-out of digital menu updates in a national restaurant chain.
- Image processing workflows involving API-based classification and storage.
## Features
### Workflows and activities
With Dapr Workflow, you can write activities and then orchestrate those activities in a workflow. Workflow activities are:
- The basic unit of work in a workflow
- The tasks that get orchestrated in the business process
- Used for calling other (Dapr) services, interacting with state stores, and communicating with pub/sub brokers.
[Learn more about workflow activities.]({{< ref "workflow-features-concepts.md#workflow-activities" >}})
@ -81,6 +81,8 @@ You can use the following SDKs to author a workflow.
| Language stack | Package |
| - | - |
| .NET | [Dapr.Workflow](https://www.nuget.org/profiles/dapr.io) |
| Python | [dapr-ext-workflow](https://github.com/dapr/python-sdk/tree/master/ext/dapr-ext-workflow) |
## Try out workflows
@ -92,6 +94,8 @@ Want to put workflows to the test? Walk through the following quickstart and tutorials:
| ------------------- | ----------- |
| [Workflow quickstart]({{< ref workflow-quickstart.md >}}) | Run a .NET workflow application with four workflow activities to see Dapr Workflow in action |
| [Workflow .NET SDK example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | Learn how to create a Dapr Workflow and invoke it using ASP.NET Core web APIs. |
| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `DaprClient` package. |
### Start using workflows directly in your app
@ -110,4 +114,6 @@ Watch [this video for an overview on Dapr Workflow](https://youtu.be/s1p9MNl4VGo).
## Related links
- [Workflow API reference]({{< ref workflow_api.md >}})
- Try out the full SDK examples:
- [.NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- [Python example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow)


@ -12,10 +12,10 @@ Dapr Workflows simplify complex, stateful coordination requirements in microserv
In the task chaining pattern, multiple steps in a workflow are run in succession, and the output of one step may be passed as the input to the next step. Task chaining workflows typically involve creating a sequence of operations that need to be performed on some data, such as filtering, transforming, and reducing.
<img src="/images/workflow-overview/workflows-chaining.png" width=800 alt="Diagram showing how the task chaining workflow pattern works">
In some cases, the steps of the workflow may need to be orchestrated across multiple microservices. For increased reliability and scalability, you're also likely to use queues to trigger the various steps.
While the pattern is simple, there are many complexities hidden in the implementation. For example:
- What happens if one of the microservices are unavailable for an extended period of time?
@ -31,25 +31,27 @@ Dapr Workflow solves these complexities by allowing you to implement the task chaining pattern concisely as a simple function in the programming language of your choice, as shown in the following example.
```csharp
// Exponential backoff retry policy that survives long outages
var retryOptions = new WorkflowTaskOptions
{
    RetryPolicy = new WorkflowRetryPolicy(
        firstRetryInterval: TimeSpan.FromMinutes(1),
        backoffCoefficient: 2.0,
        maxRetryInterval: TimeSpan.FromHours(1),
        maxNumberOfAttempts: 10),
};

try
{
    var result1 = await context.CallActivityAsync<string>("Step1", wfInput, retryOptions);
    var result2 = await context.CallActivityAsync<byte[]>("Step2", result1, retryOptions);
    var result3 = await context.CallActivityAsync<long[]>("Step3", result2, retryOptions);
    var result4 = await context.CallActivityAsync<Guid[]>("Step4", result3, retryOptions);
    return string.Join(", ", result4);
}
catch (TaskFailedException) // Task failures are surfaced as TaskFailedException
{
    // Retries expired - apply custom compensation logic
    await context.CallActivityAsync<long[]>("MyCompensation", options: retryOptions);
    throw;
}
```
@ -262,6 +264,96 @@ A workflow implementing the monitor pattern can loop forever or it can terminate
This pattern can also be expressed using actors and reminders. The difference is that this workflow is expressed as a single function with inputs and state stored in local variables. Workflows can also execute a sequence of actions with stronger reliability guarantees, if necessary.
{{% /alert %}}
## External system interaction
In some cases, a workflow may need to pause and wait for an external system to perform some action. For example, a workflow may need to pause and wait for a payment to be received. In this case, a payment system might publish an event to a pub/sub topic on receipt of a payment, and a listener on that topic can raise an event to the workflow using the [raise event workflow API]({{< ref "howto-manage-workflow.md#raise-an-event" >}}).
Another very common scenario is when a workflow needs to pause and wait for a human, for example when approving a purchase order. Dapr Workflow supports this event pattern via the [external events]({{< ref "workflow-features-concepts.md#external-events" >}}) feature.
Here's an example workflow for a purchase order involving a human:
1. A workflow is triggered when a purchase order is received.
1. A rule in the workflow determines that a human needs to perform some action. For example, the purchase order cost exceeds a certain auto-approval threshold.
1. The workflow sends a notification requesting a human action. For example, it sends an email with an approval link to a designated approver.
1. The workflow pauses and waits for the human to either approve or reject the order by clicking on a link.
1. If the approval isn't received within the specified time, the workflow resumes and performs some compensation logic, such as canceling the order.
The following diagram illustrates this flow.
<img src="/images/workflow-overview/workflow-human-interaction-pattern.png" width=600 alt="Diagram showing how the external system interaction pattern works with a human involved"/>
The following example code shows how this pattern can be implemented using Dapr Workflow.
{{< tabs ".NET" >}}
{{% codetab %}}
```csharp
public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
{
// ...(other steps)...
// Require orders over a certain threshold to be approved
if (order.TotalCost > OrderApprovalThreshold)
{
try
{
// Request human approval for this order
await context.CallActivityAsync(nameof(RequestApprovalActivity), order);
// Pause and wait for a human to approve the order
ApprovalResult approvalResult = await context.WaitForExternalEventAsync<ApprovalResult>(
eventName: "ManagerApproval",
timeout: TimeSpan.FromDays(3));
if (approvalResult == ApprovalResult.Rejected)
{
// The order was rejected, end the workflow here
return new OrderResult(Processed: false);
}
}
catch (TaskCanceledException)
{
// An approval timeout results in automatic order cancellation
return new OrderResult(Processed: false);
}
}
// ...(other steps)...
// End the workflow with a success result
return new OrderResult(Processed: true);
}
```
{{% alert title="Note" color="primary" %}}
In the example above, `RequestApprovalActivity` is the name of a workflow activity to invoke and `ApprovalResult` is an enumeration defined by the workflow app. For brevity, these definitions were left out of the example code.
{{% /alert %}}
{{% /codetab %}}
{{< /tabs >}}
The code that delivers the event to resume the workflow execution is external to the workflow. Workflow events can be delivered to a waiting workflow instance using the [raise event]({{< ref "howto-manage-workflow.md#raise-an-event" >}}) workflow management API, as shown in the following example:
{{< tabs ".NET" >}}
{{% codetab %}}
```csharp
// Raise the workflow event to the waiting workflow
await daprClient.RaiseWorkflowEventAsync(
instanceId: orderId,
workflowComponent: "dapr",
eventName: "ManagerApproval",
eventData: ApprovalResult.Approved);
```
{{% /codetab %}}
{{< /tabs >}}
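For reference, the equivalent call with the Python client mirrors the `raise_workflow_event` usage shown in the management how-to; the order ID and event data below are illustrative:
```python
from dapr.clients import DaprClient

order_id = "exampleOrderId"

with DaprClient() as d:
    # Deliver the "ManagerApproval" event to the waiting workflow instance
    d.raise_workflow_event(instance_id=order_id, workflow_component="dapr",
                           event_name="ManagerApproval", event_data="Approved")
```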
External events don't have to be directly triggered by humans. They can also be triggered by other systems. For example, a workflow may need to pause and wait for a payment to be received. In this case, a payment system might publish an event to a pub/sub topic on receipt of a payment, and a listener on that topic can raise an event to the workflow using the raise event workflow API.
## Next steps
{{< button text="Workflow architecture >>" page="workflow-architecture.md" >}}


@ -8,43 +8,58 @@ aliases:
- /developing-applications/integrations/authenticating/authenticating-aws/
---
All Dapr components using various AWS services (DynamoDB, SQS, S3, etc) use a standardized set of attributes for configuration via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
Since you can configure the AWS SDK using the default provider chain, all of the following attributes are optional. Test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.
| Attribute | Description |
| --------- | ----------- |
| `region` | Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example), this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec. |
| `endpoint` | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). |
| `accessKey` | AWS Access key id. |
| `secretKey` | AWS Secret access key. Use together with `accessKey` to explicitly specify credentials. |
| `sessionToken` | AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required. |
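As an illustration, here is a sketch of how these attributes might appear in a DynamoDB state store component. The table name and credential values are placeholders, and (per the warning below) explicit keys should only be used when a provider-chain mechanism isn't available:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "Contracts"
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "[your_access_key]"
  - name: secretKey
    value: "[your_secret_key]"
```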
{{% alert title="Important" color="warning" %}}
You **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using:
- When running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes)
- If using a node/pod that has already been attached to an IAM policy defining access to AWS resources
{{% /alert %}}
## Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as:
- [Kiam](https://github.com/uswitch/kiam)
- [Kube2iam](https://github.com/jtblin/kube2iam)
If running on AWS EKS, you can [link an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html), which your pod can use.
All of these solutions solve the same problem: they allow the Dapr runtime process (or sidecar) to retrieve credentials dynamically, so that explicit credentials aren't needed. This provides several benefits, such as automated key rotation and not having to manage secrets.

Both Kiam and Kube2iam work by intercepting calls to the [instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).
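On EKS, linking an IAM role to a service account generally comes down to an annotation like the following sketch; the account ID and role name are placeholders:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-dapr-app
  annotations:
    # IAM role that pods using this service account can assume
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-dapr-app-role
```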
### Use an instance profile when running in stand-alone mode on AWS EC2
If running Dapr directly on an AWS EC2 instance in stand-alone mode, you can use instance profiles.
1. Configure an IAM role.
1. [Attach it to the instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) for the ec2 instance.
Dapr then authenticates to AWS without specifying credentials in the Dapr component manifest.
### Authenticate to AWS when running dapr locally in stand-alone mode
{{< tabs "Linux/MacOS" "Windows" >}}
<!-- linux -->
{{% codetab %}}
When running Dapr (or the Dapr runtime directly) in stand-alone mode, you can inject environment variables into the process, like the following example:
```bash
FOO=bar daprd --app-id myapp
```
If you have [configured named AWS profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) locally, you can tell Dapr (or the Dapr runtime) which profile to use by specifying the "AWS_PROFILE" environment variable:
```bash
AWS_PROFILE=myprofile dapr run...
@ -58,11 +73,27 @@ AWS_PROFILE=myprofile daprd...
You can use any of the [supported environment variables](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list) to configure Dapr in this manner.
{{% /codetab %}}
<!-- windows -->
{{% codetab %}}
On Windows, the environment variable needs to be set before starting the `dapr` or `daprd` command, doing it inline (like in Linux/MacOS) is not supported.
{{% /codetab %}}
{{< /tabs >}}
### Authenticate to AWS if using AWS SSO based profiles
If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials and "legacy" credentials, such as:
- [AwsHelper](https://pypi.org/project/awshelper/)
- [aws-sso-util](https://github.com/benkehoe/aws-sso-util)
{{< tabs "Linux/MacOS" "Windows" >}}
<!-- linux -->
{{% codetab %}}
If using AwsHelper, start Dapr like this:
@ -75,7 +106,21 @@ or
```bash
AWS_PROFILE=myprofile awshelper daprd...
```
{{% /codetab %}}
<!-- windows -->
{{% codetab %}}
On Windows, the environment variable needs to be set before starting the `awshelper` command; setting it inline (like in Linux/MacOS) is not supported.
{{% /codetab %}}
{{< /tabs >}}
## Next steps
{{< button text="Refer to AWS component specs >>" page="components-reference" >}}
## Related links
For more information, see [how the AWS SDK (which Dapr uses) handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).


@ -2,6 +2,6 @@
type: docs
title: "Integrations with Azure"
linkTitle: "Azure"
weight: 1000
description: "Dapr integrations with Azure services"
---


@ -1,401 +0,0 @@
---
type: docs
title: "Authenticating to Azure"
linkTitle: "Authenticating to Azure"
description: "How to authenticate Azure components using Azure AD and/or Managed Identities"
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity/"
- "/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity/"
weight: 1000
---
## Common Azure authentication layer
Certain Azure components for Dapr offer support for the *common Azure authentication layer*, which enables applications to access data stored in Azure resources by authenticating with Azure AD. Thanks to this, administrators can leverage all the benefits of fine-tuned permissions with RBAC (Role-Based Access Control), and applications running on certain Azure services such as Azure VMs, Azure Kubernetes Service, or many Azure platform services can leverage [Managed Service Identities (MSI)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
Some Azure components offer alternative authentication methods, such as systems based on "master keys" or "shared keys". Whenever possible, it is recommended that you authenticate your Dapr components using Azure AD for increased security and ease of management, as well as for the ability to leverage MSI if your app is running on supported Azure services.
> Currently, only a subset of Azure components for Dapr offer support for this authentication method. Over time, support will be expanded to all other Azure components for Dapr. You can track the progress of the work, component-by-component, on [this issue](https://github.com/dapr/components-contrib/issues/1103).
### About authentication with Azure AD
Azure AD is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services.
Azure AD is built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Key Vault, Cosmos DB, etc. In Azure terminology, an application is also called a "Service Principal".
Many of the services listed above also support authentication using other systems, such as "master keys" or "shared keys". Although those are always valid methods to authenticate your application (and Dapr continues to support them, as explained in each component's reference page), using Azure AD when possible offers various benefits, including:
- The ability to leverage Managed Service Identities, which allow your application to authenticate with Azure AD and obtain an access token to make requests to Azure services, without the need to use any credentials. When your application is running on a supported Azure service (such as Azure VMs, Azure Kubernetes Service, or Azure Web Apps), an identity for your application can be assigned at the infrastructure level.
  This way, your code does not have to deal with credentials of any kind. This removes the challenge of safely managing credentials, allows greater separation of concerns between development and operations teams, reduces the number of people with access to credentials, and simplifies operational aspects, especially when multiple environments are used.
- Using RBAC (Role-Based Access Control) with supported services (such as Azure Storage and Cosmos DB), permissions given to an application can be fine-tuned; for example, you can restrict access to a subset of data or make it read-only.
- Better auditing for access.
- Ability to authenticate using certificates (optional).
## Credentials metadata fields
To authenticate with Azure AD, you will need to add the following credentials as values in the metadata for your Dapr component (read the next section for how to create them). There are multiple options depending on the way you have chosen to pass the credentials to your Dapr service.
**Authenticating using client credentials:**
| Field | Required | Details | Example |
|---------------------|----------|--------------------------------------|----------------------------------------------|
| `azureTenantId` | Y | ID of the Azure AD tenant | `"cd4b2887-304c-47e1-b4d5-65447fdd542b"` |
| `azureClientId` | Y | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
| `azureClientSecret` | Y | Client secret (application password) | `"Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E"` |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
**Authenticating using a PFX certificate:**
| Field | Required | Details | Example |
|--------|--------|--------|--------|
| `azureTenantId` | Y | ID of the Azure AD tenant | `"cd4b2887-304c-47e1-b4d5-65447fdd542b"` |
| `azureClientId` | Y | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
| `azureCertificate` | One of `azureCertificate` and `azureCertificateFile` | Certificate and private key (in PFX/PKCS#12 format) | `"-----BEGIN PRIVATE KEY-----\n MIIEvgI... \n -----END PRIVATE KEY----- \n -----BEGIN CERTIFICATE----- \n MIICoTC... \n -----END CERTIFICATE-----` |
| `azureCertificateFile` | One of `azureCertificate` and `azureCertificateFile` | Path to the PFX/PKCS#12 file containing the certificate and private key | `"/path/to/file.pem"` |
| `azureCertificatePassword` | N | Password for the certificate if encrypted | `"password"` |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
**Authenticating with Managed Service Identities (MSI):**
| Field | Required | Details | Example |
|-----------------|----------|----------------------------|------------------------------------------|
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
Using MSI, you're not required to specify any value, although you may optionally pass `azureClientId` if needed.
### Aliases
For backwards-compatibility reasons, the following values in the metadata are supported as aliases, although their use is discouraged.
| Metadata key | Aliases (supported but deprecated) |
|----------------------------|------------------------------------|
| `azureTenantId` | `spnTenantId`, `tenantId` |
| `azureClientId` | `spnClientId`, `clientId` |
| `azureClientSecret` | `spnClientSecret`, `clientSecret` |
| `azureCertificate` | `spnCertificate` |
| `azureCertificateFile` | `spnCertificateFile` |
| `azureCertificatePassword` | `spnCertificatePassword` |
## Generating a new Azure AD application and Service Principal
To start, create a new Azure AD application, which will also be used as a Service Principal.
Prerequisites:
- Azure Subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- OpenSSL (included by default on all Linux and macOS systems, as well as on WSL)
- The scripts below are optimized for a bash or zsh shell
> If you haven't already, start by logging in to Azure using the Azure CLI:
>
> ```sh
> # Log in Azure
> az login
> # Set your default subscription
> az account set -s [your subscription id]
> ```
### Creating an Azure AD application
First, create the Azure AD application with:
```sh
# Friendly name for the application / Service Principal
APP_NAME="dapr-application"
# Create the app
APP_ID=$(az ad app create --display-name "${APP_NAME}" | jq -r .appId)
```
{{< tabs "Client secret" "Certificate">}}
{{% codetab %}}
To create a **client secret**, run the following command. This generates a random, 40-character password based on the base64 charset. The password will be valid for 2 years, after which it will need to be rotated:
```sh
az ad app credential reset \
--id "${APP_ID}" \
--years 2
```
The output of the command above will be similar to this:
```json
{
"appId": "c7dd251f-811f-4ba2-a905-acd4d3f8f08b",
"password": "Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E",
"tenant": "cd4b2887-304c-47e1-b4d5-65447fdd542b"
}
```
Take note of the values above, which you'll need to use in your Dapr components' metadata, to allow Dapr to authenticate with Azure:
- `appId` is the value for `azureClientId`
- `password` is the value for `azureClientSecret` (this was randomly-generated)
- `tenant` is the value for `azureTenantId`
{{% /codetab %}}
{{% codetab %}}
If you'd rather use a **PFX (PKCS#12) certificate**, run this command which will create a self-signed certificate:
```sh
az ad app credential reset \
--id "${APP_ID}" \
--create-cert
```
> Note: self-signed certificates are recommended for development only. For production, you should use certificates signed by a CA and imported with the `--cert` flag.
The output of the command above should look like:
```json
{
"appId": "c7dd251f-811f-4ba2-a905-acd4d3f8f08b",
"fileWithCertAndPrivateKey": "/Users/alessandro/tmpgtdgibk4.pem",
"password": null,
"tenant": "cd4b2887-304c-47e1-b4d5-65447fdd542b"
}
```
Take note of the values above, which you'll need to use in your Dapr components' metadata:
- `appId` is the value for `azureClientId`
- `tenant` is the value for `azureTenantId`
- The self-signed PFX certificate and private key are written in the file at the path specified in `fileWithCertAndPrivateKey`.
Use the contents of that file as `azureCertificate` (or write it to a file on the server and use `azureCertificateFile`)
> While the generated file has the `.pem` extension, it contains a certificate and private key encoded as PFX (PKCS#12).
{{% /codetab %}}
{{< /tabs >}}
### Creating a Service Principal
Once you have created an Azure AD application, create a Service Principal for that application, which will allow us to grant it access to Azure resources. Run:
```sh
SERVICE_PRINCIPAL_ID=$(az ad sp create \
--id "${APP_ID}" \
| jq -r .id)
echo "Service Principal ID: ${SERVICE_PRINCIPAL_ID}"
```
The output will be similar to:
```text
Service Principal ID: 1d0ccf05-5427-4b5e-8eb4-005ac5f9f163
```
Note that the value above is the ID of the **Service Principal**, which is different from the ID of the application in Azure AD (the client ID)! The former is defined within an Azure tenant and is used to grant access to Azure resources to an application. The client ID instead is used by your application to authenticate. To sum things up:
- You'll use the client ID in Dapr manifests to configure authentication with Azure services
- You'll use the Service Principal ID to grant permissions to an application to access Azure resources
Keep in mind that the Service Principal that was just created does not have access to any Azure resource by default. Access will need to be granted to each resource as needed, as documented in the docs for the components.
> Note: this step is different from the [official documentation](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli) as the short-hand commands included there create a Service Principal that has broad read-write access to all Azure resources in your subscription.
> Not only would doing that grant our Service Principal more access than you likely desire, but it also applies only to the Azure management plane (Azure Resource Manager, or ARM), which is irrelevant for Dapr anyway (all Azure components are designed to interact with the data plane of various services, not ARM).
### Example usage in a Dapr component
In this example, you will set up an Azure Key Vault secret store component that uses Azure AD to authenticate.
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the details from the above setup process:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value : "[your_client_secret]"
```
If you want to use a **certificate** saved on the local disk, instead, use:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
```
{{% /codetab %}}
{{% codetab %}}
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file.
To use a **client secret**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
```
- `[your_client_secret]` is the application's client secret as generated above
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
2. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
secretKeyRef:
name: "[your_k8s_secret_name]"
key: "[your_k8s_secret_key]"
auth:
secretStore: kubernetes
```
3. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
To use a **certificate**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
```
- `[pfx_certificate_file_fully_qualified_local_path]` is the path to the PFX file you obtained earlier
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
2. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificate
secretKeyRef:
name: "[your_k8s_secret_name]"
key: "[your_k8s_secret_key]"
auth:
secretStore: kubernetes
```
3. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## Using Managed Service Identities
Using MSI, authentication happens automatically by virtue of your application running on top of an Azure service that has an assigned identity. For example, when you create an Azure VM or an Azure Kubernetes Service cluster and choose to enable a managed identity for that, an Azure AD application is created for you and automatically assigned to the service. Your Dapr services can then leverage that identity to authenticate with Azure AD, transparently and without you having to specify any credential.
To get started with managed identities, first you need to assign an identity to a new or existing Azure resource. The instructions depend on the service in use. Below are links to the official documentation:
- [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/azure/aks/use-managed-identity)
- [Azure App Service](https://docs.microsoft.com/azure/app-service/overview-managed-identity) (including Azure Web Apps and Azure Functions)
- [Azure Virtual Machines (VM)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm)
- [Azure Virtual Machines Scale Sets (VMSS)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss)
- [Azure Container Instance (ACI)](https://docs.microsoft.com/azure/container-instances/container-instances-managed-identity)
Other Azure application services may offer support for MSI; please check the documentation for those services to understand how to configure them.
After assigning a managed identity to your Azure resource, you will have credentials such as:
```json
{
"principalId": "<object-id>",
"tenantId": "<tenant-id>",
"type": "SystemAssigned",
"userAssignedIdentities": null
}
```
From the list above, take note of **`principalId`** which is the ID of the Service Principal that was created. You'll need that to grant access to Azure resources to your Service Principal.
## Support for other Azure environments
By default, Dapr components are configured to interact with Azure resources in the "public cloud". If your application is deployed to another cloud, such as Azure China, Azure Government, or Azure Germany, you can enable that for supported components by setting the `azureEnvironment` metadata property to one of the supported values:
- Azure public cloud (default): `"AZUREPUBLICCLOUD"`
- Azure China: `"AZURECHINACLOUD"`
- Azure Government: `"AZUREUSGOVERNMENTCLOUD"`
- Azure Germany: `"AZUREGERMANCLOUD"`
## References
- [Azure AD app credential: Azure CLI reference](https://docs.microsoft.com/cli/azure/ad/app/credential)
- [Azure Managed Service Identity (MSI) overview](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})


@ -6,6 +6,11 @@ description: "Publish APIs for Dapr services and components through Azure API Management"
weight: 2000
---
[Azure API Management](https://learn.microsoft.com/azure/api-management/api-management-key-concepts) is a way to create consistent and modern API gateways for back-end services, including those built with Dapr. You can enable Dapr support in self-hosted API Management gateways to allow them to:
- Forward requests to Dapr services
- Send messages to Dapr Pub/Sub topics
- Trigger Dapr output bindings
{{< button text="Learn more" link="https://docs.microsoft.com/azure/api-management/api-management-dapr-policies" >}}
Try out the [Dapr & Azure API Management Integration sample](https://github.com/dapr/samples/tree/master/dapr-apim-integration).
{{< button text="Learn more about Dapr integration policies" link="https://docs.microsoft.com/azure/api-management/api-management-dapr-policies" >}}


@ -0,0 +1,7 @@
---
type: docs
title: "Authenticate to Azure"
linkTitle: "Authenticate to Azure"
weight: 1600
description: "Learn about authenticating Azure components using Azure Active Directory or Managed Service Identities"
---


@ -0,0 +1,273 @@
---
type: docs
title: "Authenticating to Azure"
linkTitle: "Overview"
description: "How to authenticate Azure components using Azure AD and/or Managed Identities"
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity/"
- "/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity/"
weight: 10000
---
Certain Azure components for Dapr offer support for the *common Azure authentication layer*, which enables applications to access data stored in Azure resources by authenticating with Azure Active Directory (Azure AD). Thanks to this:
- Administrators can leverage all the benefits of fine-tuned permissions with Role-Based Access Control (RBAC).
- Applications running on Azure services such as Azure Container Apps, Azure Kubernetes Service, Azure VMs, or any other Azure platform services can leverage [Managed Service Identities (MSI)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
## About authentication with Azure AD
Azure AD is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services.
Azure AD is built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Key Vault, Cosmos DB, etc.
> In Azure terminology, an application is also called a "Service Principal".
Some Azure components offer alternative authentication methods, such as systems based on "master keys" or "shared keys". Although both master keys and shared keys are valid and supported by Dapr, you should authenticate your Dapr components using Azure AD. Using Azure AD offers benefits like the following.
### Managed Service Identities
With Managed Service Identities (MSI), your application can authenticate with Azure AD and obtain an access token to make requests to Azure services. When your application is running on a supported Azure service, an identity for your application can be assigned at the infrastructure level.
When using MSI, your code doesn't have to deal with credentials, which:
- Removes the challenge of managing credentials safely
- Allows greater separation of concerns between development and operations teams
- Reduces the number of people with access to credentials
- Simplifies operational aspects, especially when multiple environments are used
### Role-based Access Control
When using Role-Based Access Control (RBAC) with supported services, permissions given to an application can be fine-tuned. For example, you can restrict access to a subset of data or make it read-only.
### Auditing
Using Azure AD provides an improved auditing experience for access.
### (Optional) Authenticate using certificates
While Azure AD allows you to use MSI or RBAC, you still have the option to authenticate using certificates.
## Support for other Azure environments
By default, Dapr components are configured to interact with Azure resources in the "public cloud". If your application is deployed to another cloud, such as Azure China, Azure Government, or Azure Germany, you can enable that for supported components by setting the `azureEnvironment` metadata property to one of the supported values:
- Azure public cloud (default): `"AZUREPUBLICCLOUD"`
- Azure China: `"AZURECHINACLOUD"`
- Azure Government: `"AZUREUSGOVERNMENTCLOUD"`
- Azure Germany: `"AZUREGERMANCLOUD"`
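For example, a component targeting Azure China would include an entry like the following in its `metadata` section (shown in isolation; the rest of the component spec is omitted):
```yaml
  - name: azureEnvironment
    value: "AZURECHINACLOUD"
```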
## Credentials metadata fields
To authenticate with Azure AD, you will need to add the following credentials as values in the metadata for your [Dapr component]({{< ref "#example-usage-in-a-dapr-component" >}}).
### Metadata options
Depending on how you've passed credentials to your Dapr services, you have multiple metadata options.
#### Authenticating using client credentials
| Field | Required | Details | Example |
|---------------------|----------|--------------------------------------|----------------------------------------------|
| `azureTenantId` | Y | ID of the Azure AD tenant | `"cd4b2887-304c-47e1-b4d5-65447fdd542b"` |
| `azureClientId` | Y | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
| `azureClientSecret` | Y | Client secret (application password) | `"Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E"` |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
#### Authenticating using a PFX certificate
| Field | Required | Details | Example |
|--------|--------|--------|--------|
| `azureTenantId` | Y | ID of the Azure AD tenant | `"cd4b2887-304c-47e1-b4d5-65447fdd542b"` |
| `azureClientId` | Y | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
| `azureCertificate` | One of `azureCertificate` and `azureCertificateFile` | Certificate and private key (in PFX/PKCS#12 format) | `"-----BEGIN PRIVATE KEY-----\n MIIEvgI... \n -----END PRIVATE KEY----- \n -----BEGIN CERTIFICATE----- \n MIICoTC... \n -----END CERTIFICATE-----` |
| `azureCertificateFile` | One of `azureCertificate` and `azureCertificateFile` | Path to the PFX/PKCS#12 file containing the certificate and private key | `"/path/to/file.pem"` |
| `azureCertificatePassword` | N | Password for the certificate if encrypted | `"password"` |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
#### Authenticating with Managed Service Identities (MSI)
| Field | Required | Details | Example |
|-----------------|----------|----------------------------|------------------------------------------|
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
Using MSI, you're not required to specify any value, although you may pass `azureClientId` if needed.
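As a sketch, an Azure Key Vault secret store relying on MSI can omit credentials entirely; the vault name below is a placeholder, and `azureClientId` is only needed to select a specific user-assigned identity:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  # Optional: pin a specific user-assigned identity
  # - name: azureClientId
  #   value: "[your_client_id]"
```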
### Aliases
For backwards-compatibility reasons, the following values in the metadata are supported as aliases. Their use is discouraged.
| Metadata key | Aliases (supported but deprecated) |
|----------------------------|------------------------------------|
| `azureTenantId` | `spnTenantId`, `tenantId` |
| `azureClientId` | `spnClientId`, `clientId` |
| `azureClientSecret` | `spnClientSecret`, `clientSecret` |
| `azureCertificate` | `spnCertificate` |
| `azureCertificateFile` | `spnCertificateFile` |
| `azureCertificatePassword` | `spnCertificatePassword` |
### Example usage in a Dapr component
In this example, you will set up an Azure Key Vault secret store component that uses Azure AD to authenticate.
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the details from the above setup process:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value : "[your_client_secret]"
```
If you want to use a **certificate** saved on the local disk, instead, use:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
```
{{% /codetab %}}
{{% codetab %}}
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file.
To use a **client secret**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
```
- `[your_client_secret]` is the application's client secret as generated above
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
secretKeyRef:
name: "[your_k8s_secret_name]"
key: "[your_k8s_secret_key]"
auth:
secretStore: kubernetes
```
1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
To use a **certificate**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
```
- `[pfx_certificate_file_fully_qualified_local_path]` is the path to the PFX file you obtained earlier
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificate
secretKeyRef:
name: "[your_k8s_secret_name]"
key: "[your_k8s_secret_key]"
auth:
secretStore: kubernetes
```
1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
{{< button text="Generate a new Azure AD application and Service Principal >>" page="howto-aad.md" >}}
## References
- [Azure AD app credential: Azure CLI reference](https://docs.microsoft.com/cli/azure/ad/app/credential)
- [Azure Managed Service Identity (MSI) overview](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})


@ -0,0 +1,147 @@
---
type: docs
title: "How to: Generate a new Azure AD application and Service Principal"
linkTitle: "How to: Generate Azure AD and Service Principal"
weight: 30000
description: "Learn how to generate an Azure Active Directory and use it as a Service Principal"
---
## Prerequisites
- [An Azure subscription](https://azure.microsoft.com/free/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- OpenSSL (included by default on all Linux and macOS systems, as well as on WSL)
- Make sure you're using a bash or zsh shell
## Log into Azure using the Azure CLI
In a new terminal, run the following command:
```sh
az login
az account set -s [your subscription id]
```
### Create an Azure AD application
Create the Azure AD application with:
```sh
# Friendly name for the application / Service Principal
APP_NAME="dapr-application"
# Create the app
APP_ID=$(az ad app create --display-name "${APP_NAME}" | jq -r .appId)
```
Select how you'd prefer to pass credentials.
{{< tabs "Client secret" "PFX certificate">}}
{{% codetab %}}
To create a **client secret**, run the following command.
```sh
az ad app credential reset \
--id "${APP_ID}" \
--years 2
```
This generates a random, 40-character password based on the `base64` charset. The password is valid for 2 years, after which you need to rotate it.
Save the output values returned; you'll need them for Dapr to authenticate with Azure. The expected output:
```json
{
"appId": "<your-app-id>",
"password": "<your-password>",
"tenant": "<your-azure-tenant>"
}
```
When adding the returned values to your Dapr component's metadata:
- `appId` is the value for `azureClientId`
- `password` is the value for `azureClientSecret` (this was randomly-generated)
- `tenant` is the value for `azureTenantId`
{{% /codetab %}}
{{% codetab %}}
For a **PFX (PKCS#12) certificate**, run the following command to create a self-signed certificate:
```sh
az ad app credential reset \
--id "${APP_ID}" \
--create-cert
```
> **Note:** Self-signed certificates are recommended for development only. For production, you should use certificates signed by a CA and imported with the `--cert` flag.
Save the output values returned; you'll need them for Dapr to authenticate with Azure. The expected output:
```json
{
"appId": "<your-app-id>",
"fileWithCertAndPrivateKey": "<file-path>",
"password": null,
"tenant": "<your-azure-tenant>"
}
```
When adding the returned values to your Dapr component's metadata:
- `appId` is the value for `azureClientId`
- `tenant` is the value for `azureTenantId`
- `fileWithCertAndPrivateKey` indicates the location of the self-signed PFX certificate and private key. Use the contents of that file as `azureCertificate` (or write it to a file on the server and use `azureCertificateFile`)
> **Note:** While the generated file has the `.pem` extension, it contains a certificate and private key encoded as _PFX (PKCS#12)_.
{{% /codetab %}}
{{< /tabs >}}
### Create a Service Principal
Once you have created an Azure AD application, create a Service Principal for that application. You'll then be able to grant the Service Principal access to Azure resources.
To create the Service Principal, run the following command:
```sh
SERVICE_PRINCIPAL_ID=$(az ad sp create \
--id "${APP_ID}" \
| jq -r .id)
echo "Service Principal ID: ${SERVICE_PRINCIPAL_ID}"
```
Expected output:
```text
Service Principal ID: 1d0ccf05-5427-4b5e-8eb4-005ac5f9f163
```
The returned value above is the **Service Principal ID**, which is different from the Azure AD application ID (client ID).
**The Service Principal ID** is:
- Defined within an Azure tenant
- Used to grant access to Azure resources to an application
You'll use the Service Principal ID to grant permissions to an application to access Azure resources.
Meanwhile, **the client ID** is used by your application to authenticate. You'll use the client ID in Dapr manifests to configure authentication with Azure services.
Keep in mind that the Service Principal you just created does not have access to any Azure resource by default. Grant access to each resource as needed, as documented in the reference docs for the components.
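For example, to let the Service Principal read secrets from a hypothetical Key Vault named `mykeyvault`, you might assign a role scoped to that vault; the role and all names below are illustrative assumptions:
```sh
az role assignment create \
  --assignee "${SERVICE_PRINCIPAL_ID}" \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/mykeyvault"
```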
{{% alert title="Note" color="primary" %}}
This step is different from the [official Azure documentation](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli). The shorthand commands included in the official documentation create a Service Principal that has broad `read-write` access to all Azure resources in your subscription, which:
- Grants your Service Principal more access than you likely desire.
- Applies _only_ to the Azure management plane (Azure Resource Manager, or ARM), which is irrelevant for Dapr components, as they are designed to interact with the data plane of various services.
{{% /alert %}}
## Next steps
{{< button text="Use MSI >>" page="howto-msi.md" >}}

View File

@ -0,0 +1,38 @@
---
type: docs
title: "How to: Use Managed Service Identities"
linkTitle: "How to: Use MSI"
weight: 40000
description: "Learn how to use Managed Service Identities"
---
Using MSI, authentication happens automatically by virtue of your application running on top of an Azure service that has an assigned identity.
For example, let's say you enable a managed service identity for an Azure VM, Azure Container App, or an Azure Kubernetes Service cluster. When you do, an Azure AD application is created for you and automatically assigned to the service. Your Dapr services can then leverage that identity to authenticate with Azure AD, transparently and without you having to specify any credential.
To get started with managed identities, you need to assign an identity to a new or existing Azure resource. The instructions depend on the service you use. Check the following official documentation for the most appropriate instructions:
- [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/azure/aks/use-managed-identity)
- [Azure Container Apps (ACA)](https://learn.microsoft.com/azure/container-apps/dapr-overview?tabs=bicep1%2Cyaml#using-managed-identity)
- [Azure App Service](https://docs.microsoft.com/azure/app-service/overview-managed-identity) (including Azure Web Apps and Azure Functions)
- [Azure Virtual Machines (VM)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm)
- [Azure Virtual Machines Scale Sets (VMSS)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss)
- [Azure Container Instance (ACI)](https://docs.microsoft.com/azure/container-instances/container-instances-managed-identity)
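For example, enabling a system-assigned identity on an existing VM is a single Azure CLI command (resource names are placeholders):
```sh
az vm identity assign --resource-group myResourceGroup --name myVM
```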
After assigning a managed identity to your Azure resource, you will have credentials such as:
```json
{
"principalId": "<object-id>",
"tenantId": "<tenant-id>",
"type": "SystemAssigned",
"userAssignedIdentities": null
}
```
From the returned values, take note of **`principalId`**, which is the Service Principal ID that was created. You'll use that to grant access to Azure resources to your Service Principal.
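For example, granting that principal read access to blobs in a hypothetical storage account might look like the following; the role and scope are illustrative assumptions:
```sh
az role assignment create \
  --assignee-object-id "<object-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"
```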
## Next steps
{{< button text="Refer to Azure component specs >>" page="components-reference" >}}

View File

@ -3,8 +3,16 @@ type: docs
title: "Dapr extension for Azure Functions runtime"
linkTitle: "Azure Functions extension"
description: "Access Dapr capabilities from your Azure Functions runtime application"
weight: 4000
weight: 3000
---
Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information read
[Azure Functions extension for Dapr](https://cloudblogs.microsoft.com/opensource/2020/07/01/announcing-azure-functions-extension-for-dapr/) and visit the [Azure Functions extension](https://github.com/dapr/azure-functions-extension) repo to try out the samples.
{{% alert title="Note" color="primary" %}}
The Dapr Functions extension is currently in preview.
{{% /alert %}}
Dapr integrates with the [Azure Functions runtime](https://learn.microsoft.com/azure/azure-functions/functions-overview) via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. The extension combines the two for serverless and event-driven apps.
Try out the [Dapr Functions extension](https://github.com/dapr/azure-functions-extension) samples.
{{< button text="Learn more about the Dapr Function extension in preview" link="https://cloudblogs.microsoft.com/opensource/2020/07/01/announcing-azure-functions-extension-for-dapr/" >}}

View File

@ -6,104 +6,12 @@ description: "Provision Dapr on your Azure Kubernetes Service (AKS) cluster with
weight: 4000
---
# Prerequisites
- Azure subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) and the ***aks-preview*** extension.
- [Azure Kubernetes Service (AKS) cluster](https://docs.microsoft.com/azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli)
## Install Dapr using the AKS Dapr extension
The recommended approach for installing Dapr on AKS is to use the AKS Dapr extension. The extension offers support for all native Dapr configuration capabilities through command-line arguments via the Azure CLI and offers the option of opting into automatic minor version upgrades of the Dapr runtime.
The recommended approach for installing Dapr on AKS is to use the AKS Dapr extension. The extension offers:
- Support for all native Dapr configuration capabilities through command-line arguments via the Azure CLI
- The option of opting into automatic minor version upgrades of the Dapr runtime
{{% alert title="Note" color="warning" %}}
If you install Dapr through the AKS extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
If you install Dapr through the AKS extension, best practice is to continue using the extension for future management of Dapr _instead of the Dapr CLI_. Combining the two tools can cause conflicts and result in undesired behavior.
{{% /alert %}}
### How the extension works
The Dapr extension works by provisioning the Dapr control plane on your AKS cluster through the Azure CLI. The Dapr control plane consists of:
- **dapr-operator**: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)
- **dapr-sidecar-injector**: Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT`. This enables user-defined applications to communicate with Dapr without the need to hard-code Dapr port values.
- **dapr-placement**: Used for actors only. Creates mapping tables that map actor instances to pods
- **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information read the security overview.
### Extension Prerequisites
In order to use the AKS Dapr extension, you must first enable the `AKS-ExtensionManager` and `AKS-Dapr` feature flags on your Azure subscription.
The below command will register the `AKS-ExtensionManager` and `AKS-Dapr` feature flags on your Azure subscription:
```bash
az feature register --namespace "Microsoft.ContainerService" --name "AKS-ExtensionManager"
az feature register --namespace "Microsoft.ContainerService" --name "AKS-Dapr"
```
After a few minutes, the status shows `Registered`. Confirm the registration status by using the `az feature list` command:
```bash
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-ExtensionManager')].{Name:name,State:properties.state}"
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-Dapr')].{Name:name,State:properties.state}"
```
Next, refresh the registration of the `Microsoft.KubernetesConfiguration` and `Microsoft.ContainerService` resource providers by using the az provider register command:
```bash
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ContainerService
```
#### Enable the Azure CLI extension for cluster extensions
You will also need the `k8s-extension` Azure CLI extension. Install this by running the following commands:
```bash
az extension add --name k8s-extension
```
If the `k8s-extension` extension is already present, you can update it to the latest version using the below command:
```bash
az extension update --name k8s-extension
```
#### Create the extension and install Dapr on your AKS cluster
After your subscription is registered to use Kubernetes extensions, install Dapr on your cluster by creating the Dapr extension. For example:
```bash
az k8s-extension create --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name myDaprExtension \
--extension-type Microsoft.Dapr
```
Additionally, Dapr can automatically update its minor version. To enable this, set the `--auto-upgrade-minor-version` parameter to true:
```bash
--auto-upgrade-minor-version true
```
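For example, the full create command with auto-upgrade enabled might look like:
```bash
az k8s-extension create --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name myDaprExtension \
--extension-type Microsoft.Dapr \
--auto-upgrade-minor-version true
```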
Once the k8s-extension finishes provisioning, you can confirm that the Dapr control plane is installed on your AKS cluster by running:
```bash
kubectl get pods -n dapr-system
```
In the example output below, note that the Dapr control plane is installed in high availability mode, which is enabled by default.
```
kubectl get pods -n dapr-system
NAME READY STATUS RESTARTS AGE
dapr-dashboard-5f49d48796-rnt5t 1/1 Running 0 1h
dapr-operator-98579b8b4-fpz7k 1/1 Running 0 1h
dapr-operator-98579b8b4-nn5vm 1/1 Running 0 1h
dapr-operator-98579b8b4-pplqr 1/1 Running 0 1h
dapr-placement-server-0 1/1 Running 0 1h
dapr-placement-server-1 1/1 Running 0 1h
dapr-placement-server-2 1/1 Running 0 1h
dapr-sentry-775bccdddb-htcl7 1/1 Running 0 1h
dapr-sentry-775bccdddb-vtfxj 1/1 Running 0 1h
dapr-sentry-775bccdddb-w4l8x 1/1 Running 0 1h
dapr-sidecar-injector-9555889bc-klb9g 1/1 Running 0 1h
dapr-sidecar-injector-9555889bc-rpjwl 1/1 Running 0 1h
dapr-sidecar-injector-9555889bc-rqjgt 1/1 Running 0 1h
```
For more information about configuration options and targeting specific Dapr versions, see the official [AKS Dapr Extension Docs](https://docs.microsoft.com/azure/aks/dapr).
{{< button text="Learn more about the Dapr extension for AKS" link="https://learn.microsoft.com/azure/aks/dapr" >}}

View File

@ -1,231 +0,0 @@
---
type: docs
title: "Build workflow applications with Logic Apps"
linkTitle: "Logic Apps workflows"
description: "Learn how to build workflows applications using Dapr Workflows and Logic Apps runtime"
weight: 3000
---
Dapr Workflows is a lightweight host that allows developers to run cloud-native workflows locally, on-premises or any cloud environment using the [Azure Logic Apps](https://docs.microsoft.com/azure/logic-apps/logic-apps-overview) workflow engine and Dapr.
## Benefits
By using a workflow engine, business logic can be defined in a declarative, no-code fashion so application code doesn't need to change when a workflow changes. Dapr Workflows allows you to use workflows in a distributed application along with these added benefits:
- **Run workflows anywhere**: on your local machine, on-premises, on Kubernetes or in the cloud
- **Built-in observability**: tracing, metrics and mTLS through Dapr
- **gRPC and HTTP endpoints** for your workflows
- Kick off workflows based on **Dapr bindings** events
- Orchestrate complex workflows by **calling back to Dapr** to save state, publish a message and more
<img src="/images/workflows-diagram.png" width=500 alt="Diagram of Dapr Workflows">
## How it works
Dapr Workflows hosts a gRPC server that implements the Dapr Client API.
This allows users to start workflows using gRPC and HTTP endpoints through Dapr, or start a workflow asynchronously using Dapr bindings.
Once a workflow request comes in, Dapr Workflows uses the Logic Apps SDK to execute the workflow.
## Supported workflow features
### Supported actions and triggers
- [HTTP](https://docs.microsoft.com/azure/connectors/connectors-native-http)
- [Schedule](https://docs.microsoft.com/azure/logic-apps/concepts-schedule-automated-recurring-tasks-workflows)
- [Request / Response](https://docs.microsoft.com/azure/connectors/connectors-native-reqres)
### Supported control workflows
- [All control workflows](https://docs.microsoft.com/azure/connectors/apis-list#control-workflow)
### Supported data manipulation
- [All data operations](https://docs.microsoft.com/azure/connectors/apis-list#manage-or-manipulate-data)
### Not supported
- [Managed connectors](https://docs.microsoft.com/azure/connectors/apis-list#managed-connectors)
## Example
Dapr Workflows can be used as the orchestrator for many otherwise complex activities. For example, invoking an external endpoint, saving the data to a state store, publishing the result to a different app or invoking a binding can all be done by calling back into Dapr from the workflow itself.
This is because Dapr runs as a sidecar next to the workflow host, just as it would for any other app.
Examine [workflow2.json](/code/workflow.json) as an example of a workflow that does the following:
1. Calls into Azure Functions to get a JSON response
2. Saves the result to a Dapr state store
3. Sends the result to a Dapr binding
4. Returns the result to the caller
Since Dapr supports many pluggable state stores and bindings, the workflow becomes portable between different environments (cloud, edge or on-premises) without the user changing the code - *because there is no code involved*.
## Get started
Prerequisites:
1. Install the [Dapr CLI]({{< ref install-dapr-cli.md >}})
2. [Azure blob storage account](https://docs.microsoft.com/azure/storage/blobs/storage-blob-create-account-block-blob?tabs=azure-portal)
### Self-hosted
1. Make sure you have the Dapr runtime initialized:
```bash
dapr init
```
1. Set up the environment variables containing the Azure Storage Account credentials:
{{< tabs Windows "macOS/Linux" >}}
{{% codetab %}}
```bash
set STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
set STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}
{{% codetab %}}
```bash
export STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
export STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}
{{< /tabs >}}
1. Move to the workflows directory and run the sample runtime:
```bash
cd src/Dapr.Workflows
dapr run --app-id workflows --protocol grpc --port 3500 --app-port 50003 -- dotnet run --workflows-path ../../samples
```
1. Invoke a workflow:
```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1
{"value":"Hello from Logic App workflow running with Dapr!"}
```
### Kubernetes
1. Make sure you have a running Kubernetes cluster and `kubectl` in your path.
1. Once you have the Dapr CLI installed, run:
```bash
dapr init --kubernetes
```
1. Wait until the Dapr pods have the status `Running`.
1. Create a Config Map for the workflow:
```bash
kubectl create configmap workflows --from-file ./samples/workflow1.json
```
1. Create a secret containing the Azure Storage Account credentials. Replace the account name and key values below with the actual credentials:
```bash
kubectl create secret generic dapr-workflows --from-literal=accountName=<YOUR-STORAGE-ACCOUNT-NAME> --from-literal=accountKey=<YOUR-STORAGE-ACCOUNT-KEY>
```
1. Deploy Dapr Workflows:
```bash
kubectl apply -f deploy/deploy.yaml
```
1. Create a port-forward to the dapr workflows container:
```bash
kubectl port-forward deploy/dapr-workflows-host 3500:3500
```
1. Invoke logic apps through Dapr:
```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1
{"value":"Hello from Logic App workflow running with Dapr!"}
```
## Invoking workflows using Dapr bindings
1. First, create any [Dapr binding]({{< ref components-reference >}}) of your choice. See [this]({{< ref howto-triggers >}}) How-To tutorial.
To let Dapr Workflows start a workflow from a Dapr binding event, simply name the binding after the workflow you want it to trigger.
Here's an example of a Kafka binding that will trigger a workflow named `workflow1`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: workflow1
spec:
type: bindings.kafka
metadata:
- name: topics
value: topic1
- name: brokers
value: localhost:9092
- name: consumerGroup
value: group1
- name: authRequired
value: "false"
```
1. Next, apply the Dapr component:
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
Place the binding yaml file above in a `components` directory at the root of your application.
{{% /codetab %}}
{{% codetab %}}
```bash
kubectl apply -f my_binding.yaml
```
{{% /codetab %}}
{{< /tabs >}}
1. Once an event is sent to the bindings component, check the logs of Dapr Workflows to see the output.
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
In standalone mode, the output will be printed to the local terminal.
{{% /codetab %}}
{{% codetab %}}
On Kubernetes, run the following command:
```bash
kubectl logs -l app=dapr-workflows-host -c host
```
{{% /codetab %}}
{{< /tabs >}}
## Watch the demo
Watch an example from the Dapr community call:
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/7fP-0Ixmi-w?start=116" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Additional resources
- [Blog announcement](https://cloudblogs.microsoft.com/opensource/2020/05/26/announcing-cloud-native-workflows-dapr-logic-apps/)
- [Repo](https://github.com/dapr/workflows)

View File

@ -67,6 +67,14 @@ You can also name each app directory's `.dapr` directory something other than `.
## Logs
The run template provides two log destination fields for each application and its associated daprd process:
1. `appLogDestination`: This field configures the log destination for the application. The possible values are `console`, `file`, and `fileAndConsole`. The default value is `fileAndConsole`, where application logs are written both to the console and to a file.
2. `daprdLogDestination`: This field configures the log destination for the `daprd` process. The possible values are `console`, `file`, and `fileAndConsole`. The default value is `file`, where the `daprd` logs are written to a file.
#### Log file format
Logs for the application and `daprd` are captured in separate files. These log files are created automatically in the `.dapr/logs` directory under each application directory (`appDirPath` in the template). The log file names follow the pattern seen below:
- `<appID>_app_<timestamp>.log` (file name format for `app` log)
@ -74,6 +82,7 @@ Logs for application and `daprd` are captured in separate files. These log files
Even if you've decided to rename your resources folder to something other than `.dapr`, the log files are written only to the `.dapr/logs` folder (created in the application directory).
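For example, to follow the latest application log of a hypothetical app with ID `webapp` and an `appDirPath` of `.dapr/webapp/`:
```bash
tail -f .dapr/webapp/.dapr/logs/webapp_app_*.log
```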
## Watch the demo
Watch [this video for an overview on Multi-App Run](https://youtu.be/s1p9MNl4VGo?t=2456):

View File

@ -76,12 +76,16 @@ common: # optional section for variables shared across apps
apps:
- appID: webapp # optional
appDirPath: .dapr/webapp/ # REQUIRED
resourcesPath: .dapr/resources # (optional) can be default by convention
resourcesPath: .dapr/resources # deprecated
resourcesPaths: .dapr/resources # comma-separated resource paths. (optional) can be left to default value by convention.
appChannelAddress: 127.0.0.1 # network address where the app listens on. (optional) can be left to default value by convention.
configFilePath: .dapr/config.yaml # (optional) can be default by convention too, ignore if file is not found.
appProtocol: http
appPort: 8080
appHealthCheckPath: "/healthz"
command: ["python3" "app.py"]
appLogDestination: file # (optional), can be file, console or fileAndConsole. default is fileAndConsole.
daprdLogDestination: file # (optional), can be file, console or fileAndConsole. default is file.
- appID: backend # optional
appDirPath: .dapr/backend/ # REQUIRED
appProtocol: grpc
@ -110,7 +114,9 @@ The properties for the Multi-App Run template align with the `dapr run` CLI flag
|--------------------------|:--------:|--------|---------|
| `appDirPath` | Y | Path to the your application code | `./webapp/`, `./backend/` |
| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
| `resourcesPath` | N | Path to your Dapr resources. Can be default by convention; ignore if directory isn't found | `./app/components`, `./webapp/components` |
| `resourcesPath` | N | **Deprecated**. Path to your Dapr resources. Can be default value by convention | `./app/components`, `./webapp/components` |
| `resourcesPaths` | N | Comma-separated paths to your Dapr resources. Can be default value by convention | `./app/components`, `./webapp/components` |
| `appChannelAddress` | N | The network address the application listens on. Can be left to the default value by convention. | `127.0.0.1`, `localhost` |
| `configFilePath` | N | Path to your application's configuration file | `./webapp/config.yaml` |
| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `http`, `grpc` |
| `appPort` | N | The port your application is listening on | `8080`, `3000` |
@ -137,6 +143,8 @@ The properties for the Multi-App Run template align with the `dapr run` CLI flag
| `enableApiLogging` | N | Enable the logging of all API calls from application to Dapr | |
| `runtimePath` | N | Dapr runtime install path | |
| `env` | N | Map to environment variable; environment variables applied per application will overwrite environment variables shared across applications | `DEBUG`, `DAPR_HOST_ADD` |
| `appLogDestination` | N | Log destination for outputting app logs; Its value can be file, console or fileAndConsole. Default is fileAndConsole | `file`, `console`, `fileAndConsole` |
| `daprdLogDestination` | N | Log destination for outputting daprd logs; Its value can be file, console or fileAndConsole. Default is file | `file`, `console`, `fileAndConsole` |
## Next steps

View File

@ -34,7 +34,7 @@ wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O
The following example shows how to install CLI version `{{% dapr-latest-version cli="true" %}}`. You can also install release candidates by specifying the version (for example, `1.10.0-rc.3`).
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash -s 1.9.1
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash -s {{% dapr-latest-version cli="true" %}}
```
@ -51,7 +51,7 @@ wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O
The following example shows how to install CLI version `{{% dapr-latest-version cli="true" %}}`. You can also install release candidates by specifying the version (for example, `1.10.0-rc.3`).
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash -s 1.9.1
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash -s {{% dapr-latest-version cli="true" %}}
```
{{% /codetab %}}
@ -73,7 +73,7 @@ powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master
The following example shows how to install CLI version `{{% dapr-latest-version cli="true" %}}`. You can also install release candidates by specifying the version (for example, `1.10.0-rc.3`).
```powershell
powershell -Command "$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList 1.9.1"
powershell -Command "$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList {{% dapr-latest-version cli="true" %}}"
```
#### Install without administrative rights
@ -91,7 +91,7 @@ The following example shows how to install CLI version `{{% dapr-latest-version
```powershell
$Env:DAPR_INSTALL_DIR = "<your_alt_install_dir_path>"
$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList "1.9.1", "$Env:DAPR_INSTALL_DIR"
$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList "{{% dapr-latest-version cli="true" %}}", "$Env:DAPR_INSTALL_DIR"
```
#### Install using winget
@ -136,7 +136,7 @@ curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh
The following example shows how to install CLI version `{{% dapr-latest-version cli="true" %}}`. You can also install release candidates by specifying the version (for example, `1.10.0-rc.3`).
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash -s 1.9.1
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash -s {{% dapr-latest-version cli="true" %}}
```
**For ARM64 Macs:**
@ -177,7 +177,7 @@ curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh
The following example shows how to install CLI version `{{% dapr-latest-version cli="true" %}}`. You can also install release candidates by specifying the version (for example, `1.10.0-rc.3`).
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | DAPR_INSTALL_DIR="$HOME/dapr" -s 1.9.1
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | DAPR_INSTALL_DIR="$HOME/dapr" -s {{% dapr-latest-version cli="true" %}}
```
{{% /codetab %}}

View File

@ -26,7 +26,9 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| [Service Invocation]({{< ref serviceinvocation-quickstart.md >}}) | Synchronous communication between two services using HTTP or gRPC. |
| [State Management]({{< ref statemanagement-quickstart.md >}}) | Store a service's data as key/value pairs in supported state stores. |
| [Bindings]({{< ref bindings-quickstart.md >}}) | Work with external systems using input bindings to respond to events and output bindings to call operations. |
| [Actors]({{< ref actors-quickstart.md >}}) | Run a microservice and a simple console client to demonstrate stateful object patterns in Dapr Actors. |
| [Secrets Management]({{< ref secrets-quickstart.md >}}) | Securely fetch secrets. |
| [Configuration]({{< ref configuration-quickstart.md >}}) | Get configuration items and subscribe for configuration updates. |
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |
| [Workflow]({{< ref workflow-quickstart.md >}}) | Orchestrate business workflow activities in long running, fault-tolerant, stateful applications. |
| [Cryptography]({{< ref cryptography-quickstart.md >}}) | Encrypt and decrypt data using Dapr's cryptographic APIs. |

View File

@ -0,0 +1,257 @@
---
type: docs
title: "Quickstart: Actors"
linkTitle: "Actors"
weight: 75
description: "Get started with Dapr's Actors building block"
---
Let's take a look at Dapr's [Actors building block]({{< ref actors >}}). In this Quickstart, you will run a smart device microservice and a simple console client to demonstrate the stateful object patterns in Dapr Actors.
Currently, you can experience this actors quickstart using the .NET SDK.
{{< tabs ".NET" >}}
<!-- .NET -->
{{% codetab %}}
As a quick overview of the .NET actors quickstart:
1. Using a `SmartDevice.Service` microservice, you host:
- Two `SmartDetectorActor` smoke alarm objects
- A `ControllerActor` object that commands and controls the smart devices
1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
<img src="/images/actors-quickstart/actors-quickstart.png" width=800 style="padding-bottom:15px;">
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/actors).
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Step 2: Run the service app
In a new terminal window, navigate to the `actors/csharp/sdk/service` directory and restore dependencies:
```bash
cd actors/csharp/sdk/service
dotnet build
```
Run the `SmartDevice.Service`, which starts the service itself and the Dapr sidecar:
```bash
dapr run --app-id actorservice --app-port 5001 --dapr-http-port 3500 --resources-path ../../../resources -- dotnet run --urls=http://localhost:5001/
```
Expected output:
```bash
== APP == info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
== APP == Request starting HTTP/1.1 GET http://127.0.0.1:5001/healthz - -
== APP == info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
== APP == Executing endpoint 'Dapr Actors Health Check'
== APP == info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
== APP == Executed endpoint 'Dapr Actors Health Check'
== APP == info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
== APP == Request finished HTTP/1.1 GET http://127.0.0.1:5001/healthz - - - 200 - text/plain 5.2599ms
```
### Step 3: Run the client app
In a new terminal instance, navigate to the `actors/csharp/sdk/client` directory and install the dependencies:
```bash
cd ./actors/csharp/sdk/client
dotnet build
```
Run the `SmartDevice.Client` app:
```bash
dapr run --app-id actorclient -- dotnet run
```
Expected output:
```bash
== APP == Startup up...
== APP == Calling SetDataAsync on SmokeDetectorActor:1...
== APP == Got response: Success
== APP == Calling GetDataAsync on SmokeDetectorActor:1...
== APP == Device 1 state: Location: First Floor, Status: Ready
== APP == Calling SetDataAsync on SmokeDetectorActor:2...
== APP == Got response: Success
== APP == Calling GetDataAsync on SmokeDetectorActor:2...
== APP == Device 2 state: Location: Second Floor, Status: Ready
== APP == Registering the IDs of both Devices...
== APP == Registered devices: 1, 2
== APP == Detecting smoke on Device 1...
== APP == Device 1 state: Location: First Floor, Status: Alarm
== APP == Device 2 state: Location: Second Floor, Status: Alarm
== APP == Sleeping for 16 seconds before checking status again to see reminders fire and clear alarms
== APP == Device 1 state: Location: First Floor, Status: Ready
== APP == Device 2 state: Location: Second Floor, Status: Ready
```
### (Optional) Step 4: View in Zipkin
If you have Zipkin configured for Dapr locally on your machine, you can view the actor's interaction with the client in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/actors-quickstart/actor-client-interaction-zipkin.png" width=800 style="padding-bottom:15px;">
### What happened?
When you ran the client app, a few things happened:
1. Two `SmartDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state using:
- `ActorProxy.Create<ISmartDevice>(actorId, actorType)`
- `proxySmartDevice.SetDataAsync(data)`
These objects are re-entrant and hold the state, as shown by `proxySmartDevice.GetDataAsync()`.
```csharp
// Actor Ids and types
var deviceId1 = "1";
var deviceId2 = "2";
var smokeDetectorActorType = "SmokeDetectorActor";
var controllerActorType = "ControllerActor";
Console.WriteLine("Startup up...");
// An ActorId uniquely identifies the first actor instance for the first device
var deviceActorId1 = new ActorId(deviceId1);
// Create a new instance of the data class that will be stored in the first actor
var deviceData1 = new SmartDeviceData(){
Location = "First Floor",
Status = "Ready",
};
// Create the local proxy by using the same interface that the service implements.
var proxySmartDevice1 = ActorProxy.Create<ISmartDevice>(deviceActorId1, smokeDetectorActorType);
// Now you can use the actor interface to call the actor's methods.
Console.WriteLine($"Calling SetDataAsync on {smokeDetectorActorType}:{deviceActorId1}...");
var setDataResponse1 = await proxySmartDevice1.SetDataAsync(deviceData1);
Console.WriteLine($"Got response: {setDataResponse1}");
Console.WriteLine($"Calling GetDataAsync on {smokeDetectorActorType}:{deviceActorId1}...");
var storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
Console.WriteLine($"Device 1 state: {storedDeviceData1}");
// Create a second actor for second device
var deviceActorId2 = new ActorId(deviceId2);
// Create a new instance of the data class that will be stored in the first actor
var deviceData2 = new SmartDeviceData(){
Location = "Second Floor",
Status = "Ready",
};
// Create the local proxy by using the same interface that the service implements.
var proxySmartDevice2 = ActorProxy.Create<ISmartDevice>(deviceActorId2, smokeDetectorActorType);
// Now you can use the actor interface to call the second actor's methods.
Console.WriteLine($"Calling SetDataAsync on {smokeDetectorActorType}:{deviceActorId2}...");
var setDataResponse2 = await proxySmartDevice2.SetDataAsync(deviceData2);
Console.WriteLine($"Got response: {setDataResponse2}");
Console.WriteLine($"Calling GetDataAsync on {smokeDetectorActorType}:{deviceActorId2}...");
var storedDeviceData2 = await proxySmartDevice2.GetDataAsync();
Console.WriteLine($"Device 2 state: {storedDeviceData2}");
```
1. The [`DetectSmokeAsync` method of `SmartDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).
```csharp
public async Task DetectSmokeAsync()
{
var controllerActorId = new ActorId("controller");
var controllerActorType = "ControllerActor";
var controllerProxy = ProxyFactory.CreateActorProxy<IController>(controllerActorId, controllerActorType);
await controllerProxy.TriggerAlarmForAllDetectors();
}
```
1. The [`TriggerAlarmForAllDetectors` method of `ControllerActor` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs#L54). The `ControllerActor` internally triggers all alarms when smoke is detected.
```csharp
public async Task TriggerAlarmForAllDetectors()
{
var deviceIds = await ListRegisteredDeviceIdsAsync();
foreach (var deviceId in deviceIds)
{
var actorId = new ActorId(deviceId);
var proxySmartDevice = ProxyFactory.CreateActorProxy<ISmartDevice>(actorId, "SmokeDetectorActor");
await proxySmartDevice.SoundAlarm();
}
// Register a reminder to refresh and clear alarm state every 15 seconds
await this.RegisterReminderAsync("AlarmRefreshReminder", null, TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(15));
}
```
The console [prints a message indicating that smoke has been detected](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs#L65).
```csharp
// Smoke is detected on device 1 that triggers an alarm on all devices.
Console.WriteLine($"Detecting smoke on Device 1...");
proxySmartDevice1 = ActorProxy.Create<ISmartDevice>(deviceActorId1, smokeDetectorActorType);
await proxySmartDevice1.DetectSmokeAsync();
```
1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmartDetectorActor 1` and `2` are called.
```csharp
storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
Console.WriteLine($"Device 1 state: {storedDeviceData1}");
storedDeviceData2 = await proxySmartDevice2.GetDataAsync();
Console.WriteLine($"Device 2 state: {storedDeviceData2}");
```
1. The `ControllerActor` also creates a durable reminder to call `ClearAlarm` after 15 seconds using `RegisterReminderAsync`.
```csharp
// Register a reminder to refresh and clear alarm state every 15 seconds
await this.RegisterReminderAsync("AlarmRefreshReminder", null, TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(15));
```
For full context of the sample, take a look at the following code:
- [`SmartDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
- [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmartDetectorActor`
- [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor`
{{% /codetab %}}
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
## Next steps
Learn more about [the Actor building block]({{< ref actors >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -2,7 +2,7 @@
type: docs
title: "Quickstart: Configuration"
linkTitle: Configuration
weight: 76
weight: 77
description: Get started with Dapr's Configuration building block
---
@ -620,6 +620,12 @@ case <-ctx.Done():
{{< /tabs >}}
## Demo
Watch this video [demoing the Configuration API quickstart](https://youtu.be/EcE6IGuX9L8?t=94):
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/EcE6IGuX9L8?start=94" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?

View File

@ -0,0 +1,289 @@
---
type: docs
title: "Quickstart: Cryptography"
linkTitle: Cryptography
weight: 79
description: Get started with the Dapr Cryptography building block
---
{{% alert title="Alpha" color="warning" %}}
The cryptography building block is currently in **alpha**.
{{% /alert %}}
Let's take a look at the Dapr [cryptography building block]({{< ref cryptography >}}). In this Quickstart, you'll create an application that encrypts and decrypts data using the Dapr cryptography APIs. You'll:
- Encrypt and then decrypt a short string (using an RSA key), reading the result in-memory, in a Go byte slice.
- Encrypt and then decrypt a large file (using an AES key), storing the encrypted and decrypted data to files using streams.
<img src="/images/crypto-quickstart.png" width=800 style="padding-bottom:15px;">
{{% alert title="Note" color="primary" %}}
This example uses the Dapr SDK, which leverages gRPC and is **strongly** recommended when using cryptographic APIs to encrypt and decrypt messages.
{{% /alert %}}
Currently, you can experience the cryptography API using the Go SDK.
{{< tabs "Go" >}}
<!-- Go -->
{{% codetab %}}
> This quickstart includes a Go application called `crypto-quickstart`.
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
- [OpenSSL](https://www.openssl.org/source/) available on your system
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/cryptography)
```bash
git clone https://github.com/dapr/quickstarts.git
```
In the terminal, from the root directory, navigate to the cryptography sample.
```bash
cd cryptography/go/sdk
```
### Step 2: Run the application with Dapr
Navigate into the folder with the source code:
```bash
cd ./crypto-quickstart
```
The application code defines two required keys:
- Private RSA key
- A 256-bit symmetric (AES) key
Generate two keys, an RSA key and an AES key, using OpenSSL, and write them to two files:
```bash
mkdir -p keys
# Generate a private RSA key, 4096-bit keys
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out keys/rsa-private-key.pem
# Generate a 256-bit key for AES
openssl rand -out keys/symmetric-key-256 32
```
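Optionally, you can sanity-check the generated keys before running the app (the exact output varies by OpenSSL version):
```bash
# Confirm the RSA private key parses, and print its summary line
openssl pkey -in keys/rsa-private-key.pem -noout -text | head -n 1
# Confirm the symmetric key is exactly 32 bytes (256 bits)
wc -c keys/symmetric-key-256
```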
Run the Go service app with Dapr:
```bash
dapr run --app-id crypto-quickstart --resources-path ../../../components/ -- go run .
```
**Expected output**
```
== APP == dapr client initializing for: 127.0.0.1:52407
== APP == Encrypted the message, got 856 bytes
== APP == Decrypted the message, got 24 bytes
== APP == The secret is "passw0rd"
== APP == Wrote decrypted data to encrypted.out
== APP == Wrote decrypted data to decrypted.out.jpg
```
### What happened?
#### `local-storage.yaml`
Earlier, you created a directory inside `crypto-quickstart` called `keys`. In [the `local-storage` component YAML](https://github.com/dapr/quickstarts/tree/master/cryptography/components/local-storage.yaml), the `path` metadata maps to the newly created `keys` directory.
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: localstorage
spec:
type: crypto.dapr.localstorage
version: v1
metadata:
- name: path
# Path is relative to the folder where the example is located
value: ./keys
```
#### `app.go`
[The application file](https://github.com/dapr/quickstarts/tree/master/cryptography/go/sdk/crypto-quickstart/app.go) encrypts and decrypts messages and files using the RSA and AES keys that you generated. The application creates a new Dapr SDK client:
```go
func main() {
// Create a new Dapr SDK client
client, err := dapr.NewClient()
//...
// Step 1: encrypt a string using the RSA key, then decrypt it and show the output in the terminal
encryptDecryptString(client)
// Step 2: encrypt a large file and then decrypt it, using the AES key
encryptDecryptFile(client)
}
```
##### Encrypting and decrypting a string using the RSA key
Once the client is created, the application encrypts a message:
```go
func encryptDecryptString(client dapr.Client) {
// ...
// Encrypt the message
encStream, err := client.Encrypt(context.Background(),
strings.NewReader(message),
dapr.EncryptOptions{
ComponentName: CryptoComponentName,
// Name of the key to use
// Since this is a RSA key, we specify that as key wrapping algorithm
KeyName: RSAKeyName,
KeyWrapAlgorithm: "RSA",
},
)
// ...
// The method returns a readable stream, which we read in full in memory
encBytes, err := io.ReadAll(encStream)
// ...
fmt.Printf("Encrypted the message, got %d bytes\n", len(encBytes))
```
The application then decrypts the message:
```go
// Now, decrypt the encrypted data
decStream, err := client.Decrypt(context.Background(),
bytes.NewReader(encBytes),
dapr.DecryptOptions{
// We just need to pass the name of the component
ComponentName: CryptoComponentName,
// Passing the name of the key is optional
KeyName: RSAKeyName,
},
)
// ...
// The method returns a readable stream, which we read in full in memory
decBytes, err := io.ReadAll(decStream)
// ...
// Print the message on the console
fmt.Printf("Decrypted the message, got %d bytes\n", len(decBytes))
fmt.Println(string(decBytes))
}
```
##### Encrypt and decrypt a large file using the AES key
Next, the application encrypts a large image file:
```go
func encryptDecryptFile(client dapr.Client) {
const fileName = "liuguangxi-66ouBTTs_x0-unsplash.jpg"
// Get a readable stream to the input file
plaintextF, err := os.Open(fileName)
// ...
defer plaintextF.Close()
// Encrypt the file
encStream, err := client.Encrypt(context.Background(),
plaintextF,
dapr.EncryptOptions{
ComponentName: CryptoComponentName,
// Name of the key to use
// Since this is a symmetric key, we specify AES as key wrapping algorithm
KeyName: SymmetricKeyName,
KeyWrapAlgorithm: "AES",
},
)
// ...
// Write the encrypted data to a file "encrypted.out"
encryptedF, err := os.Create("encrypted.out")
// ...
encryptedF.Close()
fmt.Println("Wrote decrypted data to encrypted.out")
```
The application then decrypts the large image file:
```go
// Now, decrypt the encrypted data
// First, open the file "encrypted.out" again, this time for reading
encryptedF, err = os.Open("encrypted.out")
// ...
defer encryptedF.Close()
// Now, decrypt the encrypted data
decStream, err := client.Decrypt(context.Background(),
encryptedF,
dapr.DecryptOptions{
// We just need to pass the name of the component
ComponentName: CryptoComponentName,
// Passing the name of the key is optional
KeyName: SymmetricKeyName,
},
)
// ...
// Write the decrypted data to a file "decrypted.out.jpg"
decryptedF, err := os.Create("decrypted.out.jpg")
// ...
decryptedF.Close()
fmt.Println("Wrote decrypted data to decrypted.out.jpg")
}
```
{{% /codetab %}}
{{< /tabs >}}
## Watch the demo
Watch this [demo video of the cryptography API from the Dapr Community Call #83](https://youtu.be/PRWYX4lb2Sg?t=1148):
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/PRWYX4lb2Sg?start=1148" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
## Next steps
- Walk through [more examples of encrypting and decrypting using the cryptography API]({{< ref howto-cryptography.md >}})
- Learn more about [cryptography as a Dapr building block]({{< ref cryptography-overview.md >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -2,7 +2,7 @@
type: docs
title: "Quickstart: Secrets Management"
linkTitle: "Secrets Management"
weight: 75
weight: 76
description: "Get started with Dapr's Secrets Management building block"
---

View File

@ -180,12 +180,12 @@ The `order-processor` service writes, reads, and deletes an `orderId` key/value
const client = new DaprClient()
// Save state into a state store
await client.state.save(DAPR_STATE_STORE_NAME, state)
await client.state.save(DAPR_STATE_STORE_NAME, order)
console.log("Saving Order: ", order)
// Get state from a state store
const savedOrder = await client.state.get(DAPR_STATE_STORE_NAME, order.orderId)
console.log("Getting Order: ", savedOrd)
console.log("Getting Order: ", savedOrder)
// Delete state from the state store
await client.state.delete(DAPR_STATE_STORE_NAME, order.orderId)

View File

@ -2,7 +2,7 @@
type: docs
title: "Quickstart: Workflow"
linkTitle: Workflow
weight: 77
weight: 78
description: Get started with the Dapr Workflow building block
---
@ -28,7 +28,7 @@ In this guide, you'll:
Currently, you can experience the Dapr Workflow using the .NET SDK.
{{< tabs ".NET" >}}
{{< tabs ".NET" "Python" >}}
<!-- .NET -->
{{% codetab %}}
@ -254,8 +254,234 @@ The `Activities` directory holds the four workflow activities used by the workfl
- `ProcessPaymentActivity.cs`
- `UpdateInventoryActivity.cs`
{{% /codetab %}}
<!-- Python -->
{{% codetab %}}
### Step 1: Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a new terminal window, navigate to the `order-processor` directory:
```bash
cd workflows/python/sdk/order-processor
```
Install the Dapr Python SDK package:
```bash
pip3 install -r requirements.txt
```
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar:
```bash
dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py
```
> **Note:** Since `python3` is not a recognized command on Windows, you may need to use `python app.py` instead of `python3 app.py`.
This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.
Expected output:
```bash
== APP == Starting order workflow, purchasing 10 of cars
== APP == 2023-06-06 09:35:52.945 durabletask-worker INFO: Successfully connected to 127.0.0.1:65406. Waiting for work items...
== APP == INFO:NotifyActivity:Received order f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars at $150000 !
== APP == INFO:VerifyInventoryActivity:Verifying inventory for order f4e1926e-3721-478d-be8a-f5bebd1995da of 10 cars
== APP == INFO:VerifyInventoryActivity:There are 100 Cars available for purchase
== APP == INFO:RequestApprovalActivity:Requesting approval for payment of 165000 USD for 10 cars
== APP == 2023-06-06 09:36:05.969 durabletask-worker INFO: f4e1926e-3721-478d-be8a-f5bebd1995da Event raised: manager_approval
== APP == INFO:NotifyActivity:Payment for order f4e1926e-3721-478d-be8a-f5bebd1995da has been approved!
== APP == INFO:ProcessPaymentActivity:Processing payment: f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars at 150000 USD
== APP == INFO:ProcessPaymentActivity:Payment for request ID f4e1926e-3721-478d-be8a-f5bebd1995da processed successfully
== APP == INFO:UpdateInventoryActivity:Checking inventory for order f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars
== APP == INFO:UpdateInventoryActivity:There are now 90 cars left in stock
== APP == INFO:NotifyActivity:Order f4e1926e-3721-478d-be8a-f5bebd1995da has completed!
== APP == 2023-06-06 09:36:06.106 durabletask-worker INFO: f4e1926e-3721-478d-be8a-f5bebd1995da: Orchestration completed with status: COMPLETED
== APP == Workflow completed! Result: Completed
== APP == Purchase of item is Completed
```
### (Optional) Step 4: View in Zipkin
If you have Zipkin configured for Dapr locally on your machine, you can view the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/workflow-trace-spans-zipkin-python.png" width=900 style="padding-bottom:15px;">
### What happened?
When you ran `dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py`:
1. A unique order ID for the workflow is generated (in the above example, `f4e1926e-3721-478d-be8a-f5bebd1995da`) and the workflow is scheduled.
1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
1. The `VerifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
1. Your workflow starts and notifies you of its status.
1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `f4e1926e-3721-478d-be8a-f5bebd1995da` and confirms if successful.
1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
1. The `NotifyActivity` workflow activity sends a notification saying that order `f4e1926e-3721-478d-be8a-f5bebd1995da` has completed.
1. The workflow terminates as completed.
#### `order-processor/app.py`
In the application's program file:
- The unique workflow order ID is generated
- The workflow is scheduled
- The workflow status is retrieved
- The workflow and the workflow activities it invokes are registered
```python
class WorkflowConsoleApp:
def main(self):
# Register workflow and activities
workflowRuntime = WorkflowRuntime(settings.DAPR_RUNTIME_HOST, settings.DAPR_GRPC_PORT)
workflowRuntime.register_workflow(order_processing_workflow)
workflowRuntime.register_activity(notify_activity)
workflowRuntime.register_activity(requst_approval_activity)
workflowRuntime.register_activity(verify_inventory_activity)
workflowRuntime.register_activity(process_payment_activity)
workflowRuntime.register_activity(update_inventory_activity)
workflowRuntime.start()
print("==========Begin the purchase of item:==========", flush=True)
item_name = default_item_name
order_quantity = 10
total_cost = int(order_quantity) * baseInventory[item_name].per_item_cost
order = OrderPayload(item_name=item_name, quantity=int(order_quantity), total_cost=total_cost)
# Start Workflow
print(f'Starting order workflow, purchasing {order_quantity} of {item_name}', flush=True)
start_resp = daprClient.start_workflow(workflow_component=workflow_component,
workflow_name=workflow_name,
input=order)
_id = start_resp.instance_id
def prompt_for_approval(daprClient: DaprClient):
daprClient.raise_workflow_event(instance_id=_id, workflow_component=workflow_component,
event_name="manager_approval", event_data={'approval': True})
approval_seeked = False
start_time = datetime.now()
while True:
time_delta = datetime.now() - start_time
state = daprClient.get_workflow(instance_id=_id, workflow_component=workflow_component)
if not state:
print("Workflow not found!") # not expected
elif state.runtime_status == "Completed" or\
state.runtime_status == "Failed" or\
state.runtime_status == "Terminated":
print(f'Workflow completed! Result: {state.runtime_status}', flush=True)
break
if time_delta.total_seconds() >= 10:
state = daprClient.get_workflow(instance_id=_id, workflow_component=workflow_component)
                if total_cost > 50000 and (
                    state.runtime_status != "Completed" and
                    state.runtime_status != "Failed" and
                    state.runtime_status != "Terminated"
                ) and not approval_seeked:
                    approval_seeked = True
                    # Pass the callable and its argument separately so the prompt runs on a background thread
                    threading.Thread(target=prompt_for_approval, args=(daprClient,), daemon=True).start()
print("Purchase of item is ", state.runtime_status, flush=True)
def restock_inventory(self, daprClient: DaprClient, baseInventory):
for key, item in baseInventory.items():
print(f'item: {item}')
item_str = f'{{"name": "{item.item_name}", "quantity": {item.quantity},\
"per_item_cost": {item.per_item_cost}}}'
daprClient.save_state("statestore-actors", key, item_str)
if __name__ == '__main__':
app = WorkflowConsoleApp()
app.main()
```
#### `order-processor/workflow.py`
In `workflow.py`, the workflow is defined as a class with all of its associated tasks (determined by workflow activities).
```python
def order_processing_workflow(ctx: DaprWorkflowContext, order_payload_str: OrderPayload):
"""Defines the order processing workflow.
When the order is received, the inventory is checked to see if there is enough inventory to
fulfill the order. If there is enough inventory, the payment is processed and the inventory is
updated. If there is not enough inventory, the order is rejected.
If the total order is greater than $50,000, the order is sent to a manager for approval.
"""
order_id = ctx.instance_id
order_payload=json.loads(order_payload_str)
yield ctx.call_activity(notify_activity,
input=Notification(message=('Received order ' +order_id+ ' for '
+f'{order_payload["quantity"]}' +' ' +f'{order_payload["item_name"]}'
+' at $'+f'{order_payload["total_cost"]}' +' !')))
result = yield ctx.call_activity(verify_inventory_activity,
input=InventoryRequest(request_id=order_id,
item_name=order_payload["item_name"],
quantity=order_payload["quantity"]))
if not result.success:
yield ctx.call_activity(notify_activity,
input=Notification(message='Insufficient inventory for '
+f'{order_payload["item_name"]}'+'!'))
return OrderResult(processed=False)
if order_payload["total_cost"] > 50000:
yield ctx.call_activity(requst_approval_activity, input=order_payload)
approval_task = ctx.wait_for_external_event("manager_approval")
timeout_event = ctx.create_timer(timedelta(seconds=200))
winner = yield when_any([approval_task, timeout_event])
if winner == timeout_event:
yield ctx.call_activity(notify_activity,
input=Notification(message='Payment for order '+order_id
+' has been cancelled due to timeout!'))
return OrderResult(processed=False)
approval_result = yield approval_task
if approval_result["approval"]:
yield ctx.call_activity(notify_activity, input=Notification(
message=f'Payment for order {order_id} has been approved!'))
else:
yield ctx.call_activity(notify_activity, input=Notification(
message=f'Payment for order {order_id} has been rejected!'))
return OrderResult(processed=False)
yield ctx.call_activity(process_payment_activity, input=PaymentRequest(
request_id=order_id, item_being_purchased=order_payload["item_name"],
amount=order_payload["total_cost"], quantity=order_payload["quantity"]))
try:
yield ctx.call_activity(update_inventory_activity,
input=PaymentRequest(request_id=order_id,
item_being_purchased=order_payload["item_name"],
amount=order_payload["total_cost"],
quantity=order_payload["quantity"]))
except Exception:
yield ctx.call_activity(notify_activity,
input=Notification(message=f'Order {order_id} Failed!'))
return OrderResult(processed=False)
yield ctx.call_activity(notify_activity, input=Notification(
message=f'Order {order_id} has completed!'))
return OrderResult(processed=True)
```
{{% /codetab %}}

View File

@ -16,17 +16,7 @@ In this tutorial, you will create a component definition file to interact with t
## Step 1: Create a JSON secret store
Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}}), but for this tutorial, create a local JSON file named `mysecrets.json` with the following secret:
```json
{
"my-secret" : "I'm Batman"
}
```
## Step 2: Create a secret store Dapr component
1. Create a new directory named `my-components` to hold the new component file:
1. Create a new directory named `my-components` to hold the new secret and component file:
```bash
mkdir my-components
@ -38,6 +28,16 @@ Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}})
cd my-components
```
1. Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}}), but for this tutorial, create a local JSON file named `mysecrets.json` with the following secret:
```json
{
"my-secret" : "I'm Batman"
}
```
## Step 2: Create a secret store Dapr component
1. Create a new file `localSecretStore.yaml` with the following contents:
```yaml
@ -51,7 +51,7 @@ Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}})
version: v1
metadata:
- name: secretsFile
value: <PATH TO SECRETS FILE>/mysecrets.json
value: ./mysecrets.json
- name: nestedSeparator
value: ":"
```
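Assembled, the component file looks like the following sketch (the component name `my-secret-store` is an assumption; use whatever name you gave yours):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-secret-store
spec:
  type: secretstores.local.file
  version: v1
  metadata:
  - name: secretsFile
    value: ./mysecrets.json
  - name: nestedSeparator
    value: ":"
```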
@ -65,7 +65,7 @@ In the above file definition:
Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`:
```bash
dapr run --app-id myapp --dapr-http-port 3500 --resources-path ./my-components
dapr run --app-id myapp --dapr-http-port 3500 --resources-path .
```
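With the sidecar running, you can check the store through the secrets API; a sketch, assuming the component is named `my-secret-store`:

```bash
curl http://localhost:3500/v1.0/secrets/my-secret-store/my-secret
# expected response: {"my-secret":"I'm Batman"}
```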
{{% alert title="Tip" color="primary" %}}

View File

@ -6,14 +6,18 @@ weight: 4500
description: "Choose which Dapr sidecar APIs are available to the app"
---
In certain scenarios such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.
In certain scenarios, such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.
Dapr allows developers to control which APIs are accessible to the application by setting an API allow list using a [Dapr Configuration]({{<ref "configuration-overview.md">}}).
Dapr allows developers to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{<ref "configuration-overview.md">}}).
### Default behavior
If an API allow list section is not specified, the default behavior is to allow access to all Dapr APIs.
Once an allow list is set, only the specified APIs are accessible.
If no API allowlist or denylist is specified, the default behavior is to allow access to all Dapr APIs.
- If only a denylist is defined, all Dapr APIs are allowed except those defined in the denylist
- If only an allowlist is defined, only the Dapr APIs listed in the allowlist are allowed
- If both an allowlist and a denylist are defined, the allowed APIs are those defined in the allowlist, unless they are also included in the denylist. In other words, the denylist overrides the allowlist for APIs that are defined in both.
- If neither is defined, all APIs are allowed.
For example, the following configuration enables all APIs for both HTTP and gRPC:
@ -28,9 +32,11 @@ spec:
samplingRate: "1"
```
### Enabling specific HTTP APIs
### Using an allowlist
The following example enables the state `v1.0` HTTP API and block all the rest:
#### Enabling specific HTTP APIs
The following example enables the state `v1.0` HTTP API and blocks all other HTTP APIs:
```yaml
apiVersion: dapr.io/v1alpha1
@ -46,9 +52,9 @@ spec:
protocol: http
```
### Enabling specific gRPC APIs
#### Enabling specific gRPC APIs
The following example enables the state `v1` gRPC API and block all the rest:
The following example enables the state `v1` gRPC API and blocks all other gRPC APIs:
```yaml
apiVersion: dapr.io/v1alpha1
@ -64,18 +70,62 @@ spec:
protocol: grpc
```
### Using a denylist
#### Disabling specific HTTP APIs
The following example disables the state `v1.0` HTTP API, allowing all other HTTP APIs:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: myappconfig
namespace: default
spec:
api:
denied:
- name: state
version: v1.0
protocol: http
```
#### Disabling specific gRPC APIs
The following example disables the state `v1` gRPC API, allowing all other gRPC APIs:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: myappconfig
namespace: default
spec:
api:
denied:
- name: state
version: v1
protocol: grpc
```
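When both lists are defined, the denylist wins for any API present in both. A sketch illustrating the combination (only the state API ends up accessible):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    allowed:
      - name: state
        version: v1.0
        protocol: http
      - name: publish
        version: v1.0
        protocol: http
    denied:
      # publish is in both lists, so the denylist overrides the allowlist and blocks it
      - name: publish
        version: v1.0
        protocol: http
```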
### List of Dapr APIs
The `name` field takes the name of the Dapr API you would like to enable.
See this list of values corresponding to the different Dapr APIs:
| Name | Dapr API |
| ------------- | ------------- |
| state | [State]({{< ref state_api.md>}})|
| invoke | [Service Invocation]({{< ref service_invocation_api.md >}}) |
| secrets | [Secrets]({{< ref secrets_api.md >}})|
| bindings | [Output Bindings]({{< ref bindings_api.md >}}) |
| publish | [Pub/Sub]({{< ref pubsub.md >}}) |
| actors | [Actors]({{< ref actors_api.md >}}) |
| metadata | [Metadata]({{< ref metadata_api.md >}}) |
| API group | HTTP API | [gRPC API](https://github.com/dapr/dapr/blob/master/pkg/grpc/endpoints.go) |
| ----- | ----- | ----- |
| [Service Invocation]({{< ref service_invocation_api.md >}}) | `invoke` (`v1.0`) | `invoke` (`v1`) |
| [State]({{< ref state_api.md>}})| `state` (`v1.0` and `v1.0-alpha1`) | `state` (`v1` and `v1alpha1`) |
| [Pub/Sub]({{< ref pubsub.md >}}) | `publish` (`v1.0` and `v1.0-alpha1`) | `publish` (`v1` and `v1alpha1`) |
| [(Output) Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) |
| [Secrets]({{< ref secrets_api.md >}})| `secrets` (`v1.0`) | `secrets` (`v1`) |
| [Actors]({{< ref actors_api.md >}}) | `actors` (`v1.0`) |`actors` (`v1`) |
| [Metadata]({{< ref metadata_api.md >}}) | `metadata` (`v1.0`) |`metadata` (`v1`) |
| [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) |
| [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)<br/>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br/>`unlock` (`v1alpha1`) |
| Cryptography | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0-alpha1`) |`workflows` (`v1alpha1`) |
| [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a |
| Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) |

View File

@ -58,6 +58,18 @@ dapr init -k
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
```
To run the dashboard, run:
```bash
dapr dashboard -k
```
If you installed Dapr in a non-default namespace, run:
```bash
dapr dashboard -k -n <your-namespace>
```
### Install Dapr (from a private Dapr Helm chart)
There are some scenarios where it's necessary to install Dapr from a private Helm chart, such as:
- needing more granular control of the Dapr Helm chart
@ -125,7 +137,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
### Add and install Dapr Helm chart
1. Make sure [Helm 3](https://github.com/helm/helm/releases) is installed on your machine
2. Add Helm repo and update
1. Add Helm repo and update
```bash
// Add the official Dapr Helm chart.
@ -134,10 +146,11 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
helm repo add dapr http://helm.custom-domain.com/dapr/dapr/ \
--username=xxx --password=xxx
helm repo update
# See which chart versions are available
// See which chart versions are available
helm search repo dapr --devel --versions
```
3. Install the Dapr chart on your cluster in the `dapr-system` namespace.
1. Install the Dapr chart on your cluster in the `dapr-system` namespace.
```bash
helm upgrade --install dapr dapr/dapr \
@ -158,8 +171,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
--wait
```
See [Guidelines for production ready deployments on Kubernetes]({{<ref kubernetes-production.md>}}) for more information on installing and upgrading Dapr using Helm.
See [Guidelines for production ready deployments on Kubernetes]({{< ref kubernetes-production.md >}}) for more information on installing and upgrading Dapr using Helm.
### Uninstall Dapr on Kubernetes
@ -172,6 +184,22 @@ helm uninstall dapr --namespace dapr-system
- Read [this guide]({{< ref kubernetes-production.md >}}) for recommended Helm chart values for production setups
- See [this page](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr Helm charts.
## Installing the Dapr dashboard as part of the control plane
If you want to install the Dapr dashboard, use this Helm chart with the additional settings of your choice:
`helm install dapr-dashboard dapr/dapr-dashboard --namespace dapr-system`
For example:
```bash
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
kubectl create namespace dapr-system
# Install the Dapr dashboard
helm install dapr-dashboard dapr/dapr-dashboard --namespace dapr-system
```
## Verify installation
Once the installation is complete, verify that the dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace:
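A sketch of the verification command:

```bash
kubectl get pods --namespace dapr-system
```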

View File

@ -75,4 +75,4 @@ By default, tailing is set to /var/log/containers/*.log. To change this setting,
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [New Relic Logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/)

View File

@ -40,4 +40,4 @@ This document explains how to install it in your cluster, either using a Helm ch
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [New Relic Prometheus OpenMetrics Integration](https://github.com/newrelic/helm-charts/tree/master/charts/nri-prometheus)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/)

View File

@ -101,7 +101,7 @@ And the exact same dashboard templates from Dapr can be imported to visualize Da
## New Relic Alerts
All the data that is collected from Dapr, Kubernetes or any services that run on top of can be used to set-up alerts and notifications into the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/).
All the data collected from Dapr, Kubernetes, or any services that run on top of them can be used to set up alerts and notifications to the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/).
## Related Links/References
@ -111,4 +111,4 @@ All the data that is collected from Dapr, Kubernetes or any services that run on
* [New Relic Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/introduction-trace-api/)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/)

View File

@ -0,0 +1,16 @@
---
type: docs
title: "Alpha APIs"
linkTitle: "Alpha APIs"
weight: 5000
description: "List of current alpha APIs"
---
| Building block/API | gRPC | HTTP | Description | Documentation | Version introduced |
| ------------------ | ---- | ---- | ----------- | ------------- | ------------------ |
| Query State | [Query State proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L44) | `v1.0-alpha1/state/statestore/query` | The state query API enables you to retrieve, filter, and sort the key/value data stored in state store components. | [Query State API]({{< ref "howto-state-query-api.md" >}}) | v1.5 |
| Distributed Lock | [Lock proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L112) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource. | [Distributed Lock API]({{< ref "distributed-lock-api-overview.md" >}}) | v1.8 |
| Workflow | [Workflow proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L151) | `/v1.0-alpha1/workflow` | The workflow API enables you to define long running, persistent processes or data flows. | [Workflow API]({{< ref "workflow-overview.md" >}}) | v1.10 |
| Bulk Publish | [Bulk publish proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L59) | `v1.0-alpha1/publish/bulk` | The bulk publish API allows you to publish multiple messages to a topic in a single request. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Bulk Subscribe | [Bulk subscribe proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/appcallback.proto#L57) | N/A | The bulk subscribe application callback receives multiple messages from a topic in a single call. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Cryptography | [Crypto proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L118) | `v1.0-alpha1/crypto` | The cryptography API enables you to perform **high level** cryptography operations for encrypting and decrypting messages. | [Cryptography API]({{< ref "cryptography-overview.md" >}}) | v1.11 |
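All alpha endpoints share the `v1.0-alpha1` URL segment shown above. As an illustration, a minimal sketch of acquiring a lock through the alpha Distributed Lock API (the store name `lockstore` and the body values are assumptions for the example):

```bash
curl -X POST http://localhost:3500/v1.0-alpha1/lock/lockstore \
  -H "Content-Type: application/json" \
  -d '{"resourceId": "my-resource", "lockOwner": "app1", "expiryInSeconds": 60}'
```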

View File

@ -0,0 +1,70 @@
---
type: docs
title: "Breaking changes and deprecations"
linkTitle: "Breaking changes and deprecations"
weight: 2500
description: "Handling of breaking changes and deprecations"
---
## Breaking changes
Breaking changes are defined as a change to any of the following that causes compilation errors or undesirable runtime behavior in an existing 3rd party consumer application or script after upgrading to the next stable minor version of a Dapr artifact (SDK, CLI, runtime, etc.):
- Code behavior
- Schema
- Default configuration value
- Command line argument
- Published metric
- Kubernetes CRD template
- Publicly accessible API
- Publicly visible SDK interface, method, class, or attribute
Breaking changes can be applied right away to the following cases:
- Projects versioned at 0.x.y
- Preview feature
- Alpha API
- Preview or Alpha interface, class, method or attribute in SDK
- Dapr Component in Alpha or Beta
- Components-Contrib interface
- URLs in Docs and Blog
- An **exceptional** case where it is **required** to fix a critical bug or security vulnerability.
### Process for applying breaking changes
There is a process for applying breaking changes:
1. A deprecation notice must be posted as part of a release.
1. The breaking changes are applied two (2) releases after the release in which the deprecation was announced.
- For example, feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.
## Deprecations
Deprecations can apply to
1. APIs, including alpha APIs
1. Preview features
1. Components
1. CLI
1. Features that could result in security vulnerabilities
Deprecations appear in release notes under a section named “Deprecations”, which indicates:
- The point in the future when the now-deprecated feature will no longer be supported (for example, release x.y.z). This is at least two (2) releases after the deprecation announcement.
- Any steps the user must take to modify their code, operations, etc., if applicable.
After a future breaking change is announced, the change happens in two (2) releases or six (6) months, whichever is greater. Deprecated features should respond with a warning but otherwise do nothing.
## Announced deprecations
| Feature | Deprecation announcement | Removal |
|-----------------------|-----------------------|------------------------- |
| GET /v1.0/shutdown API (Users should use [POST API]({{< ref kubernetes-job.md >}}) instead) | 1.2.0 | 1.4.0 |
| Java domain builder classes deprecated (Users should use [setters](https://github.com/dapr/java-sdk/issues/587) instead) | Java SDK 1.3.0 | Java SDK 1.5.0 |
| Service invocation will no longer provide a default content type header of `application/json` when no content-type is specified. You must explicitly [set a content-type header]({{< ref "service_invocation_api.md#request-contents" >}}) for service invocation if your invoked apps rely on this header. | 1.7.0 | 1.9.0 |
| gRPC service invocation using `invoke` method is deprecated. Use proxy mode service invocation instead. See [How-To: Invoke services using gRPC ]({{< ref howto-invoke-services-grpc.md >}}) to use the proxy mode.| 1.9.0 | 1.10.0 |
| The CLI flag `--app-ssl` (in both the Dapr CLI and daprd) has been deprecated in favor of using `--app-protocol` with values `https` or `grpcs`. [daprd:6158](https://github.com/dapr/dapr/issues/6158) [cli:1267](https://github.com/dapr/cli/issues/1267)| 1.11.0 | 1.13.0 |
## Related links
- Read the [Versioning Policy]({{< ref support-versioning.md >}})
- Read the [Supported Releases]({{< ref support-release-policy.md >}})

View File

@ -20,7 +20,9 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{<ref "components-concept#pluggable-components" >}})| v1.9 |
| **Multi-App Run** | Configure multiple Dapr applications from a single configuration file and run from a single command | `dapr run -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.10 |
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
| **Cryptography** | Encrypt or decrypt data without having to manage secrets keys | N/A | [Cryptography concept]({{< ref "components-concept#cryptography" >}})| v1.11 |
| **Service invocation for non-Dapr endpoints** | Allow the invocation of non-Dapr endpoints by Dapr using the [Service invocation API]({{< ref service_invocation_api.md >}}). Read ["How-To: Invoke Non-Dapr Endpoints using HTTP"]({{< ref howto-invoke-non-dapr-endpoints.md >}}) for more information. | N/A | [Service invocation API]({{< ref service_invocation_api.md >}}) | v1.11 |
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients. Read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
### Streaming for HTTP service invocation
@ -43,7 +45,7 @@ Important notes:
- `ServiceInvocationStreaming` needs to be applied on caller sidecars only.
In the example above, streams are used for HTTP service invocation if `ServiceInvocationStreaming` is applied to the configuration of "app A" and its Dapr sidecar, regardless of whether the feature flag is enabled for "app B" and its sidecar.
- When `ServiceInvocationStreaming` is enabled, you should make sure that all services your app invokes using Dapr ("app B") are updated to Dapr 1.10, even if `ServiceInvocationStreaming` is not enabled for those sidecars.
Invoking an app using Dapr 1.9 or older is still possible, but those calls may fail if you have applied a Dapr Resiliency policy with retries enabled.
- When `ServiceInvocationStreaming` is enabled, you should make sure that all services your app invokes using Dapr ("app B") are updated to Dapr 1.10 or higher, even if `ServiceInvocationStreaming` is not enabled for those sidecars.
Invoking an app using Dapr 1.9 or older is still possible, but those calls may fail unless you have applied a Dapr Resiliency policy with retries enabled.
> Full support for streaming for HTTP service invocation will be completed in a future Dapr version.

View File

@ -28,13 +28,26 @@ There will be at least 6 weeks between major.minor version releases giving users
Patch support is for supported versions (current and previous).
## Build variations
Dapr's sidecar image is published to both [GitHub Container Registry](https://github.com/dapr/dapr/pkgs/container/daprd) and [Docker Registry](https://hub.docker.com/r/daprio/daprd/tags). The default image contains all components. From version 1.11, Dapr also offers a variation of the sidecar image containing only stable components.
* Default sidecar images: `daprio/daprd:<version>` or `ghcr.io/dapr/daprd:<version>` (for example `ghcr.io/dapr/daprd:1.11.0`)
* Sidecar images for stable components: `daprio/daprd:<version>-stable` or `ghcr.io/dapr/daprd:<version>-stable` (for example `ghcr.io/dapr/daprd:1.11.0-stable`)
On Kubernetes, the sidecar image can be overridden for the application Deployment resource with the `dapr.io/sidecar-image` annotation. See more about [Dapr's arguments and annotations]({{<ref "arguments-annotations-overview.md" >}}). The default `daprio/daprd:latest` image is used if not specified.
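For example, a minimal sketch of a Deployment that pins the sidecar to the stable-components image via the annotation (the app name `myapp`, the version tag, and the container image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        # Override the default sidecar image with the stable-components variation
        dapr.io/sidecar-image: "daprio/daprd:1.11.0-stable"
    spec:
      containers:
        - name: myapp
          image: myregistry/myapp:latest
```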
Learn more about [Dapr components' certification lifecycle]({{<ref "certification-lifecycle.md" >}}).
## Supported versions
The table below shows the versions of Dapr releases that have been tested together and form a "packaged" release. Any other combinations of releases are not supported.
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|--------------------|:--------:|:--------|---------|---------|---------|
| April 13 2023 | 1.10.5</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported (current) |
| May 15th 2023 | 1.10.7</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported (current) |
| May 12th 2023 | 1.10.6</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported (current) |
| April 13 2023 | 1.10.5</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported (current) |
| March 16 2023 | 1.10.4</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
| March 14 2023 | 1.10.3</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
| February 24 2023 | 1.10.2</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
@ -92,70 +105,17 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| | 1.6.2 | 1.7.5 |
| | 1.7.5 | 1.8.6 |
| | 1.8.6 | 1.9.6 |
| | 1.9.6 | 1.10.5 |
| | 1.9.6 | 1.10.7 |
| 1.6.0 to 1.6.2 | N/A | 1.7.5 |
| | 1.7.5 | 1.8.6 |
| | 1.8.6 | 1.9.6 |
| | 1.9.6 | 1.10.5 |
| | 1.9.6 | 1.10.7 |
| 1.7.0 to 1.7.5 | N/A | 1.8.6 |
| | 1.8.6 | 1.9.6 |
| | 1.9.6 | 1.10.5 |
| | 1.9.6 | 1.10.7 |
| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
| 1.9.0 | N/A | 1.9.6 |
| 1.10.0 | N/A | 1.10.5 |
## Breaking changes and deprecations
### Breaking changes
Breaking changes are defined as a change to any of the following that causes compilation errors or undesirable runtime behavior in an existing 3rd party consumer application or script after upgrading to the next stable minor version of a Dapr artifact (SDK, CLI, runtime, etc.):
- Code behavior
- Schema
- Default configuration value
- Command line argument
- Published metric
- Kubernetes CRD template
- Publicly accessible API
- Publicly visible SDK interface, method, class, or attribute
Breaking changes can be applied right away to the following cases:
- Projects versioned at 0.x.y
- Preview feature
- Alpha API
- Preview or Alpha interface, class, method or attribute in SDK
- Dapr Component in Alpha or Beta
- Components-Contrib interface
- URLs in Docs and Blog
- An **exceptional** case where it is **required** to fix a critical bug or security vulnerability.
#### Process for applying breaking changes
There is a process for applying breaking changes:
1. A deprecation notice must be posted as part of a release.
1. The breaking changes are applied two (2) releases after the release in which the deprecation was announced.
- For example, feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.
### Deprecations
Deprecations appear in release notes under a section named “Deprecations”, which indicates:
- The point in the future the now-deprecated feature will no longer be supported. For example release x.y.z. This is at least two (2) releases prior.
- Document any steps the user must take to modify their code, operations, etc if applicable in the release notes.
After announcing a future breaking change, the change will happen in 2 releases or 6 months, whichever is greater. Deprecated features should respond with warning but do nothing otherwise.
### Announced deprecations
| Feature | Deprecation announcement | Removal |
|-----------------------|-----------------------|------------------------- |
| GET /v1.0/shutdown API (Users should use [POST API]({{< ref kubernetes-job.md >}}) instead) | 1.2.0 | 1.4.0 |
| Java domain builder classes deprecated (Users should use [setters](https://github.com/dapr/java-sdk/issues/587) instead) | Java SDK 1.3.0 | Java SDK 1.5.0 |
| Service invocation will no longer provide a default content type header of `application/json` when no content-type is specified. You must explicitly [set a content-type header]({{< ref "service_invocation_api.md#request-contents" >}}) for service invocation if your invoked apps rely on this header. | 1.7.0 | 1.9.0 |
| gRPC service invocation using `invoke` method is deprecated. Use proxy mode service invocation instead. See [How-To: Invoke services using gRPC ]({{< ref howto-invoke-services-grpc.md >}}) to use the proxy mode.| 1.9.0 | 1.10.0 |
| 1.10.0 | N/A | 1.10.7 |
## Upgrade on Hosting platforms
@ -173,4 +133,5 @@ Below is a list of software that the latest version of Dapr (v{{% dapr-latest-ve
## Related links
- Read the [Versioning policy]({{< ref support-versioning.md >}})
- Read the [Versioning Policy]({{< ref support-versioning.md >}})
- Read the [Breaking Changes and Deprecation Policy]({{< ref breaking-changes-and-deprecations.md >}})

View File

@ -58,8 +58,12 @@ The [components-contrib](https://github.com/dapr/components-contrib/) repo relea
Note: Components have a production usage lifecycle status: Alpha, Beta, and Stable. These statuses are not related to their versioning. The tables of supported components show both their versions and their status.
* List of [state store components]({{< ref supported-state-stores.md >}})
* List of [pub/sub components]({{< ref supported-pubsub.md >}})
* List of [secret store components]({{< ref supported-secret-stores.md >}})
* List of [binding components]({{< ref supported-bindings.md >}})
* List of [secret store components]({{< ref supported-secret-stores.md >}})
* List of [configuration store components]({{< ref supported-configuration-stores.md >}})
* List of [lock components]({{< ref supported-locks.md >}})
* List of [cryptography components]({{< ref supported-cryptography.md >}})
* List of [middleware components]({{< ref supported-middleware.md >}})
For more information on component versioning read [Version 2 and beyond of a component](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md#version-2-and-beyond-of-a-component)
@ -96,6 +100,8 @@ The version for a component implementation is determined by the `.spec.version`
### Component deprecations
Deprecations of components are announced two (2) releases ahead. Deprecation of a component results in a major version update of the component version. After two (2) releases, the component is unregistered from the Dapr runtime, and trying to load it throws a fatal exception.
Component deprecations and removal are announced in the release notes.
## Quickstarts and Samples
Quickstarts in the [Quickstarts repo](https://github.com/dapr/quickstarts) are versioned with the runtime, where a table of corresponding versions is on the front page of the samples repo. Users should only use Quickstarts corresponding to the version of the runtime being run.
@ -103,3 +109,4 @@ Samples in the [Samples repo](https://github.com/dapr/samples) are each versione
## Related links
* Read the [Supported releases]({{< ref support-release-policy.md >}})
* Read the [Breaking Changes and Deprecation Policy]({{< ref breaking-changes-and-deprecations.md >}})

View File

@ -75,10 +75,15 @@ Persists the change to the state for an actor as a multi-item transaction.
***Note that this operation depends on using a state store component that supports multi-item transactions.***
When putting state, _always_ set the `ttlInSeconds` field in the
metadata for each value, unless there is a state clean-up process out of band of
Dapr. Omitting this field causes the underlying actor state store to
grow indefinitely.
#### TTL
With the [`ActorStateTTL` feature enabled]({{< ref
"support-preview-features.md" >}}), actor clients can set the `ttlInSeconds`
field in the transaction metadata to have the state expire after that many
seconds. If the `ttlInSeconds` field is not set, the state will not expire.
Keep the following in mind when building actor applications with this feature enabled:
Currently, all actor SDKs preserve the actor state in their local cache even after the state has expired. This means that the actor state is not removed from the local cache when its TTL expires, until the actor is restarted or deactivated. This behavior will be changed in a future release.
See the Dapr Community Call 80 recording for more details on actor state TTL.
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/kVpQYkGemRc?start=28" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
@ -109,6 +114,8 @@ Parameter | Description
#### Examples
> Note: the following example uses the `ttlInSeconds` field, which requires the [`ActorStateTTL` feature enabled]({{< ref "support-preview-features.md" >}}).
```shell
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/state \
-H "Content-Type: application/json" \
@ -187,7 +194,7 @@ Creates a persistent reminder for an actor.
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### Request Body
#### Reminder request body
A JSON object with the following fields:
@ -351,7 +358,8 @@ Creates a timer for an actor.
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
Body:
#### Timer request body
The format for the timer request body is the same as for [actor reminders]({{< ref "#reminder-request-body" >}}). For example:
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
@ -484,6 +492,16 @@ Parameter | Description
`maxStackDepth` | A value in the reentrancy configuration that controls how many reentrant calls can be made to the same actor.
`entitiesConfig` | Array of entity configurations that allow per actor type settings. Any configuration defined here must have an entity that maps back into the root level entities.
{{% alert title="Note" color="primary" %}}
Actor settings in configuration for timeouts and intervals use [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. You can use string formats to represent durations. For example:
- `1h30m` or `1.5h`: A duration of 1 hour and 30 minutes
- `36h`: A duration of 36 hours (1 day and 12 hours; `time.ParseDuration` does not support a day unit)
- `500ms`: A duration of 500 milliseconds
- `-30m`: A negative duration of 30 minutes
{{% /alert %}}
```json
{
"entities":["actorType1", "actorType2"],

View File

@ -13,7 +13,7 @@ This endpoint lets you get configuration from a store.
### HTTP Request
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/<storename>
GET http://localhost:<daprPort>/v1.0/configuration/<storename>
```
#### URL Parameters
@ -29,13 +29,13 @@ If no query parameters are provided, all configuration items are returned.
To specify the keys of the configuration items to get, use one or more `key` query parameters. For example:
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/mystore?key=config1&key=config2
GET http://localhost:<daprPort>/v1.0/configuration/mystore?key=config1&key=config2
```
To retrieve all configuration items:
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/mystore
GET http://localhost:<daprPort>/v1.0/configuration/mystore
```
#### Request Body
@ -59,7 +59,7 @@ JSON-encoded value of key/value pairs for each configuration item.
### Example
```shell
curl -X GET 'http://localhost:3500/v1.0-alpha1/configuration/mystore?key=myConfigKey'
curl -X GET 'http://localhost:3500/v1.0/configuration/mystore?key=myConfigKey'
```
> The above command returns the following JSON:
@ -75,7 +75,7 @@ This endpoint lets you subscribe to configuration changes. Notifications happen
### HTTP Request
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/<storename>/subscribe
GET http://localhost:<daprPort>/v1.0/configuration/<storename>/subscribe
```
#### URL Parameters
@ -91,13 +91,13 @@ If no query parameters are provided, all configuration items are subscribed to.
To specify the keys of the configuration items to subscribe to, use one or more `key` query parameters. For example:
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/mystore/subscribe?key=config1&key=config2
GET http://localhost:<daprPort>/v1.0/configuration/mystore/subscribe?key=config1&key=config2
```
To subscribe to all changes:
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/mystore/subscribe
GET http://localhost:<daprPort>/v1.0/configuration/mystore/subscribe
```
#### Request Body
@ -121,7 +121,7 @@ JSON-encoded value
### Example
```shell
curl -X GET 'http://localhost:3500/v1.0-alpha1/configuration/mystore/subscribe?key=myConfigKey'
curl -X GET 'http://localhost:3500/v1.0/configuration/mystore/subscribe?key=myConfigKey'
```
> The above command returns the following JSON:
@ -141,7 +141,7 @@ This endpoint lets you unsubscribe to configuration changes.
### HTTP Request
```
GET http://localhost:<daprPort>/v1.0-alpha1/configuration/<storename>/<subscription-id>/unsubscribe
GET http://localhost:<daprPort>/v1.0/configuration/<storename>/<subscription-id>/unsubscribe
```
#### URL Parameters
@ -181,7 +181,7 @@ Code | Description
### Example
```shell
curl -X GET 'http://localhost:3500/v1.0-alpha1/configuration/mystore/bf3aa454-312d-403c-af95-6dec65058fa2/unsubscribe'
curl -X GET 'http://localhost:3500/v1.0/configuration/mystore/bf3aa454-312d-403c-af95-6dec65058fa2/unsubscribe'
```
## Optional application (user code) routes

View File

@ -12,8 +12,8 @@ Dapr provides users with the ability to interact with workflows and comes with a
Start a workflow instance with the given name and optionally, an instance ID.
```http
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/start[?instanceId=<instanceId>]
```
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/start[?instanceID=<instanceID>]
```
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
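For example, a sketch of starting a workflow over HTTP (the workflow name `OrderProcessingWorkflow`, the instance ID, and the input payload are illustrative):

```bash
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678" \
  -H "Content-Type: application/json" \
  -d '{"item_name": "cars", "quantity": 10}'
```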
@ -24,7 +24,7 @@ Parameter | Description
--------- | -----------
`workflowComponentName` | Use `dapr` for Dapr Workflows
`workflowName` | Identify the workflow type
`instanceId` | (Optional) Unique value created for each run of a specific workflow
`instanceID` | (Optional) Unique value created for each run of a specific workflow
### Request content
@ -52,8 +52,8 @@ The API call will provide a response similar to this:
Terminate a running workflow instance with the given name and instance ID.
```http
POST http://localhost:3500/v1.0-alpha1/workflows/<instanceId>/terminate
```
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
```
### URL parameters
@ -79,7 +79,7 @@ This API does not return any content.
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.
```http
```
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
```
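For example, a sketch that raises the `manager_approval` event used by the order-processing workflow earlier in this document (the instance ID and payload are illustrative):

```bash
curl -X POST http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/raiseEvent/manager_approval \
  -H "Content-Type: application/json" \
  -d '{"approval": true}'
```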
@ -112,7 +112,7 @@ None.
Pause a running workflow instance.
```http
```
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/pause
```
@ -139,7 +139,7 @@ None.
Resume a paused workflow instance.
```http
```
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/resume
```
@ -166,7 +166,7 @@ None.
Purge the workflow state from your state store with the workflow's instance ID.
```http
```
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/purge
```
@ -193,7 +193,7 @@ None.
Get information about a given workflow instance.
```http
```
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>
```

View File

@ -23,7 +23,7 @@ This table is meant to help users understand the equivalent options for running
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
| `--dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increase the max size of the request body, in MB, for the HTTP and gRPC servers to handle uploading of big files. Default is `4` MB |
| `--dapr-http-read-buffer-size` | --dapr-http-read-buffer-size | | `dapr.io/http-read-buffer-size` | Increase the max size of the HTTP header read buffer, in KB, to handle multi-KB headers. Default is `4` KB. When sending HTTP headers larger than the default 4 KB, set this to a larger value, for example `16` (for 16 KB) |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is `daprio/daprd:latest`. Use this when building your own custom image of Dapr and the Dapr sidecar will use this image instead of the default image of Dapr |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is `daprio/daprd:latest`. The Dapr sidecar uses this image instead of the default image. Use this when building your own custom image of Dapr or when [using an alternative stable Dapr image]({{<ref "support-release-policy.md#build-variations" >}}) |
| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
| `--enable-metrics` | not supported | | configuration spec | Enable prometheus metric (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |

View File

@ -28,6 +28,7 @@ dapr run [flags] [command]
| `--app-port`, `-p` | `APP_PORT` | | The port your application is listening on |
| `--app-protocol`, `-P` | | `http` | The protocol Dapr uses to talk to the application. Valid values are: `http`, `grpc`, `https` (HTTP with TLS), `grpcs` (gRPC with TLS), `h2c` (HTTP/2 Cleartext) |
| `--resources-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | The path for components directory |
| `--app-channel-address` | | `127.0.0.1` | The network address the application listens on |
| `--runtime-path` | | | Dapr runtime install path |
| `--config`, `-c` | | Linux/Mac: `$HOME/.dapr/config.yaml` <br/>Windows: `%USERPROFILE%\.dapr\config.yaml` | Dapr configuration file |
| `--dapr-grpc-port`, `-G` | `DAPR_GRPC_PORT` | `50001` | The gRPC port for Dapr to listen on |

View File

@ -25,11 +25,11 @@ spec:
- name: url
value: http://something.com
- name: MTLSRootCA
value: /Users/somepath/root.pem # OPTIONAL <path to root CA> or <pem encoded string>
value: /Users/somepath/root.pem # OPTIONAL Secret store ref, <path to root CA>, or <pem encoded string>
- name: MTLSClientCert
value: /Users/somepath/client.pem # OPTIONAL <path to client cert> or <pem encoded string>
value: /Users/somepath/client.pem # OPTIONAL Secret store ref, <path to client cert>, or <pem encoded string>
- name: MTLSClientKey
value: /Users/somepath/client.key # OPTIONAL <path to client key> or <pem encoded string>
value: /Users/somepath/client.key # OPTIONAL Secret store ref, <path to client key>, or <pem encoded string>
- name: MTLSRenegotiation
value: RenegotiateOnceAsClient # OPTIONAL one of: RenegotiateNever, RenegotiateOnceAsClient, RenegotiateFreelyAsClient
- name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
@ -45,13 +45,43 @@ spec:
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|--------|--------|---------|
| url | Y | Output |The base URL of the HTTP endpoint to invoke | `http://host:port/path`, `http://myservice:8000/customers`
| MTLSRootCA | N | Output |Path to root ca certificate or pem encoded string |
| MTLSClientCert | N | Output |Path to client certificate or pem encoded string |
| MTLSClientKey | N | Output |Path client private key or pem encoded string |
| MTLSRootCA | N | Output |Secret store reference, path to root CA certificate, or PEM-encoded string |
| MTLSClientCert | N | Output |Secret store reference, path to client certificate, or PEM-encoded string |
| MTLSClientKey | N | Output |Secret store reference, path to client private key, or PEM-encoded string |
| MTLSRenegotiation | N | Output |Type of TLS renegotiation to be used | `RenegotiateOnceAsClient`
| securityToken | N | Output |The value of a token to be added to an HTTP request as a header. Used together with `securityTokenHeader` |
| securityTokenHeader| N | Output |The name of the header to set for `securityToken` on an HTTP request |
### How to configure mTLS-related fields in metadata
The values for **MTLSRootCA**, **MTLSClientCert** and **MTLSClientKey** can be provided in three ways:
1. Secret store reference
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.http
version: v1
metadata:
- name: url
value: http://something.com
- name: MTLSRootCA
secretKeyRef:
name: mysecret
key: myrootca
auth:
secretStore: <NAME_OF_SECRET_STORE_COMPONENT>
```
2. Path to the file: The absolute path to the file can be provided as a value for the field.
3. PEM encoded string: The PEM encoded string can also be provided as a value for the field.
{{% alert title="Note" color="primary" %}}
Metadata fields **MTLSRootCA**, **MTLSClientCert** and **MTLSClientKey** are used to configure (m)TLS authentication.
To use mTLS authentication, you must provide all three fields. See [mTLS]({{< ref "#using-mtls-or-enabling-client-tls-authentication-along-with-https" >}}) for more details. You can also provide only **MTLSRootCA** to enable an **HTTPS** connection. See the [HTTPS]({{< ref "#install-the-ssl-certificate-in-the-sidecar" >}}) section for more details.
{{% /alert %}}
## Binding support
This component supports **output binding** with the following [HTTP methods/verbs](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html):
@ -316,6 +346,10 @@ curl -d '{ "operation": "get" }' \
{{< /tabs >}}
{{% alert title="Note" color="primary" %}}
HTTPS binding support can also be configured using the **MTLSRootCA** metadata option. This will add the specified certificate to the list of trusted certificates for the binding. There's no specific preference for either method. While the **MTLSRootCA** option is easy to use and doesn't require any changes to the sidecar, it accepts only one certificate. If you need to trust multiple certificates, you need to [install them in the sidecar by following the steps above]({{< ref "#install-the-ssl-certificate-in-the-sidecar" >}}).
{{% /alert %}}
## Using mTLS or enabling client TLS authentication along with HTTPS
You can configure the HTTP binding to use mTLS or client TLS authentication along with HTTPS by providing the `MTLSRootCA`, `MTLSClientCert`, and `MTLSClientKey` metadata fields in the binding component.

View File

@ -67,6 +67,10 @@ spec:
| oidcClientID | N | Input/Output | The OAuth2 client ID that has been provisioned in the identity provider. Required when `authType` is set to `oidc` | `dapr-kafka` |
| oidcClientSecret | N | Input/Output | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Input/Output | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| version | N | Input/Output | Kafka cluster version. Defaults to `2.0.0`. Note that this must be set to `1.0.0` when using Azure Event Hubs with Kafka. | `1.0.0` |
#### Note
The metadata `version` must be set to `1.0.0` when using Azure Event Hubs with Kafka.
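A sketch of the relevant component metadata when targeting Event Hubs (the broker address is illustrative):

```yaml
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: "mynamespace.servicebus.windows.net:9093"
  - name: version
    value: "1.0.0" # must be 1.0.0 for the Event Hubs Kafka endpoint
```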
## Binding support

View File

@ -1,7 +1,7 @@
---
type: docs
title: "MySQL binding spec"
linkTitle: "MySQL"
title: "MySQL & MariaDB binding spec"
linkTitle: "MySQL & MariaDB"
description: "Detailed documentation on the MySQL binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/mysql/"
@ -9,7 +9,9 @@ aliases:
## Component format
To setup MySQL binding create a component of type `bindings.mysql`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
The MySQL binding allows connecting to both MySQL and MariaDB databases. In this document, we refer to "MySQL" to indicate both databases.
To set up a MySQL binding, create a component of type `bindings.mysql`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
The MySQL binding uses [Go-MySQL-Driver](https://github.com/go-sql-driver/mysql) internally.

View File

@ -100,7 +100,7 @@ The Azure App Configuration store component supports the following optional `lab
The label can be populated using query parameters in the request URL:
```bash
GET curl http://localhost:<daprPort>/v1.0-alpha1/configuration/<store-name>?key=<key name>&metadata.label=<label value>
GET curl http://localhost:<daprPort>/v1.0/configuration/<store-name>?key=<key name>&metadata.label=<label value>
```
## Related links

View File

@ -39,7 +39,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| connectionString | Y | The connection string for PostgreSQL. Default pool_max_conns = 5 | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test pool_max_conns=10"`
| table | Y | table name for configuration information. | `configTable`
| table | Y | Table name for configuration information, must be lowercased. | `configtable`
## Set up PostgreSQL as Configuration Store
@ -96,7 +96,7 @@ notification = json_build_object(
6. Since this is a generic created trigger, map this trigger to `configuration table`
```console
CREATE TRIGGER config
AFTER INSERT OR UPDATE OR DELETE ON configTable
AFTER INSERT OR UPDATE OR DELETE ON configtable
FOR EACH ROW EXECUTE PROCEDURE notify_event();
```
7. In the subscribe request, add an additional metadata field with the key `pgNotifyChannel` and the value set to the same channel name passed to `pg_notify`. In the above example, it should be set to `config`

View File

@ -0,0 +1,117 @@
---
type: docs
title: "PostgreSQL"
linkTitle: "PostgreSQL"
description: Detailed information on the PostgreSQL configuration store component
aliases:
- "/operations/components/setup-configuration-store/supported-configuration-stores/setup-postgresql/"
- "/operations/components/setup-configuration-store/supported-configuration-stores/setup-postgres/"
---
## Component format
To set up a PostgreSQL configuration store, create a component of type `configuration.postgresql`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: configuration.postgresql
version: v1
metadata:
- name: connectionString
value: "host=localhost user=postgres password=example port=5432 connect_timeout=10 database=config"
- name: table # name of the table which holds configuration information
value: "[your_configuration_table_name]"
  - name: connMaxIdleTime # max idle time for connections before they are closed
    value: "15s"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| connectionString | Y | The connection string for PostgreSQL. Default pool_max_conns = 5 | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test pool_max_conns=10"`
| table | Y | Table name for configuration information, must be lowercased. | `configtable`
## Set up PostgreSQL as Configuration Store
1. Start PostgreSQL Database
2. Connect to the PostgreSQL database and set up a configuration table with the following schema (a sample row insert is sketched at the end of this section):
| Field | Datatype | Nullable |Details |
|--------------------|:--------:|---------|---------|
| KEY | VARCHAR | N |Holds `"Key"` of the configuration attribute |
| VALUE | VARCHAR | N |Holds Value of the configuration attribute |
| VERSION | VARCHAR | N | Holds version of the configuration attribute
| METADATA | JSON | Y | Holds Metadata as JSON
```console
CREATE TABLE IF NOT EXISTS table_name (
KEY VARCHAR NOT NULL,
VALUE VARCHAR NOT NULL,
VERSION VARCHAR NOT NULL,
METADATA JSON );
```
3. Create a TRIGGER on the configuration table. An example function to create a TRIGGER is as follows:
```console
CREATE OR REPLACE FUNCTION configuration_event() RETURNS TRIGGER AS $$
DECLARE
data json;
notification json;
BEGIN
IF (TG_OP = 'DELETE') THEN
data = row_to_json(OLD);
ELSE
data = row_to_json(NEW);
END IF;
notification = json_build_object(
'table',TG_TABLE_NAME,
'action', TG_OP,
'data', data);
PERFORM pg_notify('config',notification::text);
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```
4. Create the trigger with the data encapsulated in the field labeled `data`:
```ps
notification = json_build_object(
'table',TG_TABLE_NAME,
'action', TG_OP,
'data', data);
```
5. The channel passed as an attribute to `pg_notify` should be used when subscribing to configuration notifications
6. Since this is a generically created trigger, map it to the configuration table:
```console
CREATE TRIGGER config
AFTER INSERT OR UPDATE OR DELETE ON configtable
FOR EACH ROW EXECUTE PROCEDURE configuration_event();
```
7. In the subscribe request, add an additional metadata field with the key `pgNotifyChannel` and the value set to the same channel name passed to `pg_notify`. In the above example, it should be set to `config`
{{% alert title="Note" color="primary" %}}
When calling the `subscribe` API, use `metadata.pgNotifyChannel` to specify the name of the channel to listen on for notifications from the PostgreSQL configuration store.
Any number of keys can be added to a subscription request. Each subscription uses an exclusive database connection, so it is strongly recommended to subscribe to multiple keys within a single subscription; this helps optimize the number of connections to the database.
Example of a subscribe HTTP API call:
```bash
curl --location --request GET 'http://<host>:<dapr-http-port>/configuration/mypostgresql/subscribe?key=<keyname1>&key=<keyname2>&metadata.pgNotifyChannel=<channel name>'
```
{{% /alert %}}
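To verify the end-to-end flow, modify a row in the configuration table: the trigger fires and publishes a notification on the `config` channel, which subscribed applications receive. A minimal sketch, assuming the `configtable` name used above:
```console
INSERT INTO configtable (KEY, VALUE, VERSION, METADATA)
VALUES ('myconfigkey', 'myconfigvalue', '1.0', NULL);

-- Updates and deletes fire the trigger as well
UPDATE configtable SET VALUE = 'mynewvalue' WHERE KEY = 'myconfigkey';
```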
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Configuration building block]({{< ref configuration-api-overview >}})

View File

@ -20,19 +20,11 @@ spec:
version: v1
metadata:
- name: redisHost
value: <HOST>
value: <address>:6379
- name: redisPassword
value: <PASSWORD>
value: **************
- name: enableTLS
value: <bool> # Optional. Allowed: true, false.
- name: failover
value: <bool> # Optional. Allowed: true, false.
- name: sentinelMasterName
value: <string> # Optional
- name: maxRetries
value: # Optional
- name: maxRetryBackoff
value: # Optional
value: <bool>
```
@ -45,14 +37,26 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| redisHost | Y | Connection-string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
| redisPassword | Y | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinalMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `""`, `"127.0.0.1:6379"`
| redisHost | Y | The Redis host address | `"localhost:6379"` |
| redisPassword | Y | The Redis password | `"password"` |
| redisUsername | N | Username for the Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that the ACL rule has been created correctly. | `"username"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redisType | N | The type of Redis. There are two valid values: `"node"` for single-node mode and `"cluster"` for Redis Cluster mode. Defaults to `"node"`. | `"cluster"`
| redisDB | N | Database selected after connecting to Redis. If `"redisType"` is `"cluster"`, this option is ignored. Defaults to `"0"`. | `"0"`
| redisMaxRetries | N | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | `"5"`
| redisMinRetryInterval | N | Minimum backoff for Redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
| redisMaxRetryInterval | N | Maximum backoff for Redis commands between each retry. Default is `"512ms"`; `"-1"` disables backoff. | `"5s"`
| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
| readTimeout | N | Timeout for socket reads. If reached, Redis commands fail with a timeout instead of blocking. Defaults to `"3s"`; `"-1"` for no timeout. | `"3s"`
| writeTimeout | N | Timeout for socket writes. If reached, Redis commands fail with a timeout instead of blocking. Default is `readTimeout`. | `"3s"`
| poolSize | N | Maximum number of socket connections. Default is 10 connections per CPU as reported by `runtime.NumCPU`. | `"20"`
| poolTimeout | N | Amount of time the client waits for a connection if all connections are busy before returning an error. Default is `readTimeout` + 1 second. | `"5s"`
| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Frequency of idle checks made by the idle connections reaper. Default is `"1m"`; `"-1"` disables the reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than the server's timeout. Default is `"5m"`; `"-1"` disables the idle timeout check. | `"10m"`
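As a sketch, here is a component exercising a few of the optional fields above; the component type is assumed to be `configuration.redis`, and the values are illustrative:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: configstore
spec:
  type: configuration.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-cluster.default.svc.cluster.local:6379
  - name: redisPassword
    value: "**************"
  - name: redisType
    value: "cluster"   # connect in Redis Cluster mode
  - name: redisMaxRetries
    value: "5"         # retry failed commands up to 5 times
```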
## Setup Redis

View File

@ -0,0 +1,12 @@
---
type: docs
title: "Cryptography component specs"
linkTitle: "Cryptography"
weight: 7000
description: The supported cryptography components that interface with Dapr
no_list: true
---
{{< partial "components/description.html" >}}
{{< partial "components/cryptography.html" >}}

View File

@ -0,0 +1,52 @@
---
type: docs
title: "Azure Key Vault"
linkTitle: "Azure Key Vault"
description: Detailed information on the Azure Key Vault cryptography component
---
## Component format
A Dapr `crypto.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
spec:
type: crypto.azure.keyvault
metadata:
- name: vaultName
value: mykeyvault
# See authentication section below for all options
- name: azureTenantId
value: ${{AzureKeyVaultTenantId}}
- name: azureClientId
value: ${{AzureKeyVaultServicePrincipalClientId}}
- name: azureClientSecret
value: ${{AzureKeyVaultServicePrincipalClientSecret}}
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Authenticating with Azure AD
The Azure Key Vault cryptography component supports authentication with Azure AD only. Before you enable this component:
1. Read the [Authenticating to Azure]({{< ref "authenticating-azure.md" >}}) document.
1. Create an [Azure AD application]({{< ref "howto-aad.md" >}}) (also called a Service Principal).
1. Alternatively, create a [managed identity]({{< ref "howto-msi.md" >}}) for your application platform.
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `vaultName` | Y | Azure Key Vault name | `"mykeyvault"` |
| Auth metadata | Y | See [Authenticating to Azure]({{< ref "authenticating-azure.md" >}}) for more information | |
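Once configured, the component can be used through Dapr's cryptography API. The following is a sketch of the alpha HTTP API as of Dapr 1.11; the key name `myrsakey` is a placeholder for a key in your vault, and the header values shown are assumptions:
```bash
# Encrypt a local file using a key stored in Azure Key Vault (sketch)
curl -X PUT http://localhost:3500/v1.0-alpha1/crypto/azurekeyvault/encrypt \
  -H "dapr-key-name: myrsakey" \
  -H "dapr-key-wrap-algorithm: RSA" \
  -H "Content-Type: application/octet-stream" \
  --data-binary "@plaintext.txt" \
  --output encrypted.bin
```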
## Related links
- [Cryptography building block]({{< ref cryptography >}})
- [Authenticating to Azure]({{< ref azure-authentication >}})

View File

@ -0,0 +1,79 @@
---
type: docs
title: "JSON Web Key Sets (JWKS)"
linkTitle: "JSON Web Key Sets (JWKS)"
description: Detailed information on the JWKS cryptography component
---
## Component format
The purpose of this component is to load keys from a JSON Web Key Set ([RFC 7517](https://www.rfc-editor.org/rfc/rfc7517)). These are JSON documents that contain one or more keys in JWK (JSON Web Key) format; the keys can be public, private, or shared.
This component supports loading a JWKS:
- From a local file; in this case, Dapr watches for changes to the file on disk and reloads it automatically.
- From an HTTP(S) URL, which is periodically refreshed.
- By passing the actual JWKS in the `jwks` metadata property, as a string (optionally, base64-encoded).
{{% alert title="Note" color="primary" %}}
This component uses the cryptographic engine in Dapr to perform operations. Although keys are never exposed to your application, Dapr has access to the raw key material.
{{% /alert %}}
A Dapr `crypto.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: jwks
spec:
type: crypto.dapr.jwks
version: v1
metadata:
# Example 1: load JWKS from file
- name: "jwks"
value: "fixtures/crypto/jwks/jwks.json"
# Example 2: load JWKS from an HTTP(S) URL
# Only "jwks" is required
- name: "jwks"
value: "https://example.com/.well-known/jwks.json"
- name: "requestTimeout"
value: "30s"
- name: "minRefreshInterval"
value: "10m"
# Example 3: include the actual JWKS
- name: "jwks"
value: |
{
"keys": [
{
"kty": "RSA",
"use": "sig",
"kid": "…",
"n": "…",
"e": "…",
"issuer": "https://example.com"
}
]
}
# Example 3b: include the JWKS base64-encoded
- name: "jwks"
value: |
eyJrZXlzIjpbeyJ…
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `jwks` | Y | Path to the JWKS document | Local file: `"fixtures/crypto/jwks/jwks.json"`<br>HTTP(S) URL: `"https://example.com/.well-known/jwks.json"`<br>Embedded JWKS: `{"keys": […]}` (can be base64-encoded)
| `requestTimeout` | N | Timeout for network requests when fetching the JWKS document from an HTTP(S) URL, as a Go duration. Default: `"30s"` | `"5s"`
| `minRefreshInterval` | N | Minimum interval to wait before subsequent refreshes of the JWKS document from an HTTP(S) source, as a Go duration. Default: `"10m"` | `"1h"`
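If you embed the JWKS base64-encoded (example 3b above), one way to produce the value, assuming a standard `base64` utility:
```bash
# Encode the JWKS document on a single line for the component metadata
# (GNU coreutils shown; on macOS use `base64 -i jwks.json`)
base64 -w0 jwks.json
```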
## Related links
[Cryptography building block]({{< ref cryptography >}})

View File

@ -0,0 +1,39 @@
---
type: docs
title: "Kubernetes Secrets"
linkTitle: "Kubernetes Secrets"
description: Detailed information on the Kubernetes secret cryptography component
---
## Component format
The purpose of this component is to load keys from Kubernetes secrets: when a key with a given name is requested, Dapr loads the Kubernetes secret with that name.
{{% alert title="Note" color="primary" %}}
This component uses the cryptographic engine in Dapr to perform operations. Although keys are never exposed to your application, Dapr has access to the raw key material.
{{% /alert %}}
A Dapr `crypto.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: crypto.dapr.kubernetes.secrets
version: v1
metadata: []
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
For the Kubernetes secrets cryptography component, there are no metadata fields.
## Related links
[Cryptography building block]({{< ref cryptography >}})

View File

@ -0,0 +1,61 @@
---
type: docs
title: "Local storage"
linkTitle: "Local storage"
description: Detailed information on the local storage cryptography component
---
## Component format
The purpose of this component is to load keys from a local directory.
The component accepts the path of a folder as input and loads keys from there. Each key is in its own file; when you request a key with a given name, Dapr loads the file with that name.
Supported file formats:
- PEM with public and private keys (supports: PKCS#1, PKCS#8, PKIX)
- JSON Web Key (JWK) containing a public, private, or symmetric key
- Raw key data for symmetric keys
{{% alert title="Note" color="primary" %}}
This component uses the cryptographic engine in Dapr to perform operations. Although keys are never exposed to your application, Dapr has access to the raw key material.
{{% /alert %}}
A Dapr `crypto.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mycrypto
spec:
type: crypto.dapr.localstorage
version: v1
metadata:
- name: path
value: /path/to/folder/
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `path` | Y | Folder containing the keys to be loaded. When loading a key, its name is used as the file name in this folder. | `/path/to/folder` |
**Example**
Let's say you've set `path=/mnt/keys`, which contains the following files:
- `/mnt/keys/mykey1.pem`
- `/mnt/keys/mykey2`
When using the component, you can reference the keys as `mykey1.pem` and `mykey2`.
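A key in a supported PEM format can be generated with OpenSSL and dropped into the folder; a sketch (the file name becomes the key name):
```bash
# Generate a 2048-bit RSA private key as a PKCS#8 PEM file in the keys folder
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out /mnt/keys/mykey1.pem
```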
## Related links
[Cryptography building block]({{< ref cryptography >}})

View File

@ -116,41 +116,6 @@ If using TinyGo, compile as shown below and set the spec metadata field named
tinygo build -o router.wasm -scheduler=none --no-debug -target=wasi router.go
```
### Generating Wasm
This component allows you to rewrite a request URI with custom logic compiled to Wasm using the waPC protocol. The `rewrite` function receives the request URI and returns an updated URI as necessary.
To compile your Wasm, you must compile source using a waPC guest SDK such as
[TinyGo](https://github.com/wapc/wapc-guest-tinygo).
Here's an example in TinyGo:
```go
package main
import "github.com/wapc/wapc-guest-tinygo"
func main() {
wapc.RegisterFunctions(wapc.Functions{"rewrite": rewrite})
}
// rewrite returns a new URI if necessary.
func rewrite(requestURI []byte) ([]byte, error) {
if string(requestURI) == "/v1.0/hi" {
return []byte("/v1.0/hello"), nil
}
return requestURI, nil
}
```
If using TinyGo, compile as shown below and set the spec metadata field named
"url" to the location of the output (for example, `file://example.wasm`):
```bash
tinygo build -o example.wasm -scheduler=none --no-debug -target=wasi example.go
```
## Related links

View File

@ -65,7 +65,7 @@ spec:
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048`
| consumeRetryInterval | N | The interval between retries when attempting to consume topics. Treats numbers without suffix as milliseconds. Defaults to 100ms. | `200ms` |
| consumeRetryEnabled | N | Disable consume retry by setting `"false"` | `"true"`, `"false"` |
| version | N | Kafka cluster version. Defaults to 2.0.0.0 | `0.10.2.0` |
| version | N | Kafka cluster version. Defaults to 2.0.0. Note that this must be set to `1.0.0` if you are using Azure EventHubs with Kafka. | `0.10.2.0` |
| caCert | N | Certificate authority certificate, required for using TLS. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | N | Client certificate, required for `authType` `mtls`. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | N | Client key, required for `authType` `mtls` Can be `secretKeyRef` to use a secret reference | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
@ -78,6 +78,9 @@ spec:
The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the tls information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component.
#### Note
The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka.
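For example, the component metadata would include this fragment when targeting Azure EventHubs:
```yaml
  - name: version
    value: "1.0.0"  # required when using Azure EventHubs with Kafka
```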
### Authentication
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the `authRequired` field has

View File

@ -28,6 +28,8 @@ spec:
value: "public"
- name: token
value: "eyJrZXlJZCI6InB1bHNhci1wajU0cXd3ZHB6NGIiLCJhbGciOiJIUzI1NiJ9.eyJzd"
- name: consumerID
value: "topic1"
- name: namespace
value: "default"
- name: persistent
@ -66,6 +68,7 @@ spec:
| enableTLS | N | Enable TLS. Default: `"false"` | `"true"`, `"false"` |
| token | N | Enable Authentication. | [How to create pulsar token](https://pulsar.apache.org/docs/en/security-jwt/#generate-tokens)|
| tenant | N | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and spread across clusters. Default: `"public"` | `"public"` |
| consumerID | N | Used to set the subscription name or consumer ID. | `"topic1"`
| namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: `"default"` | `"default"`
| persistent | N | Pulsar supports two kinds of topics: [persistent](https://pulsar.apache.org/docs/en/concepts-architecture-overview#persistent-storage) and [non-persistent](https://pulsar.apache.org/docs/en/concepts-messaging/#non-persistent-topics). With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
| disableBatching | N | Disable batching. When batching is enabled, the default batch delay is 10 ms and the default batch size is 1000 messages. Setting `disableBatching: true` makes the producer send messages individually. Default: `"false"` | `"true"`, `"false"`|
@ -74,6 +77,9 @@ spec:
| batchingMaxSize | N | batchingMaxSize sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxMessages (see above) has been reached or the batch interval has elapsed. Default: `"128KB"` | `"131072"`|
| <topic-name>.jsonschema | N | Enforces JSON schema validation for the configured topic. |
| <topic-name>.avroschema | N | Enforces Avro schema validation for the configured topic. |
| publicKey | N | A public key to be used for publisher and consumer encryption. Value can be one of two options: file path for a local PEM cert, or the cert data string value |
| privateKey | N | A private key to be used for consumer encryption. Value can be one of two options: file path for a local PEM cert, or the cert data string value |
| keys | N | A comma-delimited string containing names of [Pulsar session keys](https://pulsar.apache.org/docs/3.0.x/security-encryption/#how-it-works-in-pulsar). Used in conjunction with `publicKey` for publisher encryption |
### Enabling message delivery retries
@ -112,6 +118,90 @@ curl -X POST http://localhost:3500/v1.0/publish/myPulsar/myTopic?metadata.delive
}'
```
### End-to-end encryption
Dapr supports setting public and private key pairs to enable Pulsar's [end-to-end encryption feature](https://pulsar.apache.org/docs/3.0.x/security-encryption/).
#### Enabling publisher encryption from file certs
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: ./public.key
- name: keys
value: myapp.key
```
#### Enabling consumer encryption from file certs
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: ./public.key
- name: privateKey
value: ./private.key
```
#### Enabling publisher encryption from value
> Note: It is recommended to [reference the public key from a secret]({{< ref component-secrets.md >}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1KDAM4L8RtJ+nLaXBrBh\nzVpvTemsKVZoAct8A+ShepOHT9lgHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDd\nruXSflvSdmYeFAw3Ypphc1A5oM53wSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/\na3golb36GYFrY0MLFTv7wZ87pmMIPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eK\njpwcg35gccvR6o/UhbKAuc60V1J9Wof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0Qi\nCdpIrXvYtANq0Id6gP8zJvUEdPIgNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ\n3QIDAQAB\n-----END PUBLIC KEY-----\n"
- name: keys
value: myapp.key
```
#### Enabling consumer encryption from value
> Note: It is recommended to [reference the public and private keys from a secret]({{< ref component-secrets.md >}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.pulsar
version: v1
metadata:
- name: host
value: "localhost:6650"
- name: publicKey
value: "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1KDAM4L8RtJ+nLaXBrBh\nzVpvTemsKVZoAct8A+ShepOHT9lgHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDd\nruXSflvSdmYeFAw3Ypphc1A5oM53wSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/\na3golb36GYFrY0MLFTv7wZ87pmMIPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eK\njpwcg35gccvR6o/UhbKAuc60V1J9Wof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0Qi\nCdpIrXvYtANq0Id6gP8zJvUEdPIgNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ\n3QIDAQAB\n-----END PUBLIC KEY-----\n"
- name: privateKey
value: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA1KDAM4L8RtJ+nLaXBrBhzVpvTemsKVZoAct8A+ShepOHT9lg\nHOCGLFGWNla6K6j+b3AV/P/fAAhwj82vwTDdruXSflvSdmYeFAw3Ypphc1A5oM53\nwSRWhg63potBNWqdDzj8ApYgqjpmjYSQdL5/a3golb36GYFrY0MLFTv7wZ87pmMI\nPsOgGIcPbCHker2fRZ34WXYLb1hkeUpwx4eKjpwcg35gccvR6o/UhbKAuc60V1J9\nWof2sNgtlRaQej45wnpjWYzZrIyk5qUbn0QiCdpIrXvYtANq0Id6gP8zJvUEdPIg\nNuYxEmVCl9jI+8eGI6peD0qIt8U80hf9axhJ3QIDAQABAoIBAQCKuHnM4ac/eXM7\nQPDVX1vfgyHc3hgBPCtNCHnXfGFRvFBqavKGxIElBvGOcBS0CWQ+Rg1Ca5kMx3TQ\njSweSYhH5A7pe3Sa5FK5V6MGxJvRhMSkQi/lJZUBjzaIBJA9jln7pXzdHx8ekE16\nBMPONr6g2dr4nuI9o67xKrtfViwRDGaG6eh7jIMlEqMMc6WqyhvI67rlVDSTHFKX\njlMcozJ3IT8BtTzKg2Tpy7ReVuJEpehum8yn1ZVdAnotBDJxI07DC1cbOP4M2fHM\ngfgPYWmchauZuTeTFu4hrlY5jg0/WLs6by8r/81+vX3QTNvejX9UdTHMSIfQdX82\nAfkCKUVhAoGBAOvGv+YXeTlPRcYC642x5iOyLQm+BiSX4jKtnyJiTU2s/qvvKkIu\nxAOk3OtniT9NaUAHEZE9tI71dDN6IgTLQlAcPCzkVh6Sc5eG0MObqOO7WOMCWBkI\nlaAKKBbd6cGDJkwGCJKnx0pxC9f8R4dw3fmXWgWAr8ENiekMuvjSfjZ5AoGBAObd\ns2L5uiUPTtpyh8WZ7rEvrun3djBhzi+d7rgxEGdditeiLQGKyZbDPMSMBuus/5wH\nwfi0xUq50RtYDbzQQdC3T/C20oHmZbjWK5mDaLRVzWS89YG/NT2Q8eZLBstKqxkx\ngoT77zoUDfRy+CWs1xvXzgxagD5Yg8/OrCuXOqWFAoGAPIw3r6ELknoXEvihASxU\nS4pwInZYIYGXpygLG8teyrnIVOMAWSqlT8JAsXtPNaBtjPHDwyazfZrvEmEk51JD\nX0tA8M5ah1NYt+r5JaKNxp3P/8wUT6lyszyoeubWJsnFRfSusuq/NRC+1+KDg/aq\nKnSBu7QGbm9JoT2RrmBv5RECgYBRn8Lj1I1muvHTNDkiuRj2VniOSirkUkA2/6y+\nPMKi+SS0tqcY63v4rNCYYTW1L7Yz8V44U5mJoQb4lvpMbolGhPljjxAAU3hVkItb\nvGVRlSCIZHKczADD4rJUDOS7DYxO3P1bjUN4kkyYx+lKUMDBHFzCa2D6Kgt4dobS\n5qYajQKBgQC7u7MFPkkEMqNqNGu5erytQkBq1v1Ipmf9rCi3iIj4XJLopxMgw0fx\n6jwcwNInl72KzoUBLnGQ9PKGVeBcgEgdI+a+tq+1TJo6Ta+hZSx+4AYiKY18eRKG\neNuER9NOcSVJ7Eqkcw4viCGyYDm2vgNV9HJ0VlAo3RDh8x5spEN+mg==\n-----END RSA PRIVATE KEY-----\n"
```
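The key pair referenced in these examples can be generated with standard tooling; for example, a sketch using OpenSSL to create an RSA pair:
```bash
# Generate an RSA key pair for Pulsar end-to-end encryption
openssl genrsa -out private.key 2048
openssl rsa -in private.key -pubout -out public.key
```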
## Create a Pulsar instance
{{< tabs "Self-Hosted" "Kubernetes">}}

View File

@ -371,7 +371,7 @@ To set a priority on a message, add the publish metadata key `maxPriority` to th
{{% codetab %}}
```bash
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders?metadata.maxPriority=3 -H "Content-Type: application/json" -d '{"orderId": "100"}'
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders?metadata.priority=3 -H "Content-Type: application/json" -d '{"orderId": "100"}'
```
{{% /codetab %}}
@ -385,7 +385,7 @@ with DaprClient() as client:
topic_name=TOPIC_NAME,
data=json.dumps(orderId),
data_content_type='application/json',
metadata= { 'maxPriority': '3' })
metadata= { 'priority': '3' })
```
{{% /codetab %}}
@ -393,7 +393,7 @@ with DaprClient() as client:
{{% codetab %}}
```javascript
await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId, { 'maxPriority': '3' });
await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId, { 'priority': '3' });
```
{{% /codetab %}}
@ -401,7 +401,7 @@ await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId, { 'maxPriority': '
{{% codetab %}}
```go
client.PublishEvent(ctx, PUBSUB_NAME, TOPIC_NAME, []byte(strconv.Itoa(orderId)), map[string]string{"maxPriority": "3"})
client.PublishEvent(ctx, PUBSUB_NAME, TOPIC_NAME, []byte(strconv.Itoa(orderId)), map[string]string{"priority": "3"})
```
{{% /codetab %}}

View File

@ -70,6 +70,10 @@ In order to setup Cosmos DB as a state store, you need the following properties
- **Database**: The name of the database
- **Collection**: The name of the collection (or container)
### TTLs and cleanups
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to override the default TTL on the Cosmos DB container, indicating when the data should be considered "expired". Note that this value _only_ takes effect if the container's `DefaultTimeToLive` field has a non-NULL value. See the [Cosmos DB documentation](https://docs.microsoft.com/azure/cosmos-db/nosql/time-to-live) for more information.
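For example, a request that stores a value expiring after 120 seconds, assuming a state store component named `statestore` (a sketch):
```bash
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order_1", "value": "250", "metadata": { "ttlInSeconds": "120" } }]'
```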
## Best Practices for Production Use
Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the [Cosmos DB documentation](https://docs.microsoft.com/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#recommended-solution-3))

View File

@ -1,7 +1,7 @@
---
type: docs
title: "MySQL"
linkTitle: "MySQL"
title: "MySQL & MariaDB"
linkTitle: "MySQL & MariaDB"
description: Detailed information on the MySQL state store component
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-mysql/"
@ -9,6 +9,8 @@ aliases:
## Component format
The MySQL state store component allows connecting to both MySQL and MariaDB databases. In this document, "MySQL" refers to both databases.
To set up the MySQL state store, create a component of type `state.mysql`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
@ -56,6 +58,7 @@ If you wish to use MySQL as an actor store, append the following to the yaml.
| `timeoutInSeconds` | N | Timeout for all database operations. Defaults to `20` | `30` |
| `pemPath` | N | Full path to the PEM file to use for [enforced SSL Connection](#enforced-ssl-connection) required if pemContents is not provided. Cannot be used in K8s environment | `"/path/to/file.pem"`, `"C:\path\to\file.pem"` |
| `pemContents` | N | Contents of PEM file to use for [enforced SSL Connection](#enforced-ssl-connection) required if pemPath is not provided. Can be used in K8s environment | `"pem value"` |
| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (that is 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
## Setup MySQL
@ -132,6 +135,17 @@ Replace the `<CONNECTION STRING>` value with your connection string. The connect
If your server requires SSL, your connection string must end with `&tls=custom`; for example, `"<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom"`. You must replace `<PEM PATH>` with the full path to the PEM file. The connection to MySQL requires a minimum TLS version of 1.2.
### TTLs and cleanups
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate when the data should be considered "expired".
Because MySQL doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
The interval at which the deletion of expired records happens is set with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value, for example `300` (300 seconds, or 5 minutes).
- If you do not plan to use TTLs with Dapr and the MySQL state store, consider setting `cleanupIntervalInSeconds` to a value <= 0 (for example, `0` or `-1`) to disable the periodic cleanup and reduce the load on the database, as shown in the sketch below.
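A minimal sketch of the relevant metadata fragment, to be appended to the `state.mysql` component's `metadata` section:
```yaml
  - name: cleanupIntervalInSeconds
    value: "-1"  # disable the periodic cleanup of expired rows when TTLs are not used
```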
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components

View File

@ -92,7 +92,7 @@ Either the default "postgres" database can be used, or create a new database for
### TTLs and cleanups
This state store supports [Time-To-Live (TTL)](https://docs.dapr.io/developing-applications/building-blocks/state-management/state-store-ttl/) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
Because PostgreSQL doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.

View File

@ -11,12 +11,10 @@ The following table lists the environment variables used by the Dapr runtime, CL
| Environment Variable | Used By | Description |
| -------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| APP_ID | Your application | The id for your application, used for service discovery |
| APP_PORT | Your application | The port your application is listening on |
| APP_PORT | Dapr sidecar | The port your application is listening on |
| APP_API_TOKEN | Your application | The token used by the application to authenticate requests from Dapr API. Read [authenticate requests from Dapr using token authentication]({{< ref app-api-token >}}) for more information. |
| DAPR_HTTP_PORT | Your application | The HTTP port that the Dapr sidecar is listening on. Your application should use this variable to connect to Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted or injected by the `dapr-sidecar-injector` into all the containers in the pod. |
| DAPR_GRPC_PORT | Your application | The gRPC port that the Dapr sidecar is listening on. Your application should use this variable to connect to Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted or injected by the `dapr-sidecar-injector` into all the containers in the pod. |
| DAPR_METRICS_PORT | Your application | The HTTP [Prometheus]({{< ref prometheus >}}) port through which Dapr exposes its metrics information. With this variable, your application can also send its application-specific metrics to the same endpoint, keeping Dapr metrics and application metrics together. See [metrics-port]({{< ref arguments-annotations-overview>}}) for more information |
| DAPR_PROFILE_PORT | Your application | The [profiling port]({{< ref profiling-debugging >}}) through which Dapr lets you enable profiling and track possible CPU/memory/resource spikes in your application's behavior. Enabled by `--enable-profiling` command in Dapr CLI for self-hosted or `dapr.io/enable-profiling` annotation in Dapr annotated pod. |
| DAPR_API_TOKEN | Dapr sidecar | The token used for Dapr API authentication for requests from the application. [Enable API token authentication in Dapr]({{< ref api-token >}}). |
| NAMESPACE | Dapr sidecar | Used to specify a component's [namespace in self-hosted mode]({{< ref component-scopes >}}). |
| DAPR_DEFAULT_IMAGE_REGISTRY | Dapr CLI | In self-hosted mode, it is used to specify the default container registry to pull images from. When its value is set to `GHCR` or `ghcr`, it pulls the required images from Github container registry. To default to Docker hub, unset this environment variable. |
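For example, an application can discover the sidecar's HTTP port from the environment rather than hardcoding it; a minimal Go sketch using the documented `/v1.0/healthz` endpoint:
```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// DAPR_HTTP_PORT is injected by `dapr run` or the sidecar injector
	daprPort := os.Getenv("DAPR_HTTP_PORT")

	// Probe the sidecar's health endpoint instead of hardcoding the port
	resp, err := http.Get(fmt.Sprintf("http://localhost:%s/v1.0/healthz", daprPort))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("Dapr sidecar health:", resp.Status)
}
```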

View File

@ -62,7 +62,7 @@
features:
input: true
output: true
- component: MySQL
- component: MySQL & MariaDB
link: mysql
state: Alpha
version: v1

View File

@ -0,0 +1,5 @@
- component: Azure Key Vault
link: azure-key-vault
state: Alpha
version: v1
since: "1.11"

View File

@ -0,0 +1,15 @@
- component: JSON Web Key Sets (JWKS)
link: json-web-key-sets
state: Alpha
version: v1
since: "1.11"
- component: Kubernetes secrets
link: kubernetes-secrets
state: Alpha
version: v1
since: "1.11"
- component: Local storage
link: local-storage
state: Alpha
version: v1
since: "1.11"

View File

@ -0,0 +1,28 @@
{{- $groups := dict
" Generic" $.Site.Data.components.cryptography.generic
"Microsoft Azure" $.Site.Data.components.cryptography.azure
}}
{{ range $group, $components := $groups }}
<h3>{{ $group }}</h3>
<table width="100%">
<tr>
<th>Component</th>
<th>Status</th>
<th>Component version</th>
<th>Since runtime version</th>
</tr>
{{ range sort $components "component" }}
<tr>
<td><a href="/reference/components-reference/supported-cryptography/{{ .link }}/">{{ .component }}</a>
</td>
<td>{{ .state }}</td>
<td>{{ .version }}</td>
<td>{{ .since }}</td>
</tr>
{{ end }}
</table>
{{ end }}
{{ partial "components/componenttoc.html" . }}

View File

@ -25,6 +25,8 @@
</div>
</div>
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=4848fb3b-3edb-4329-90a9-a9d79afff054" />
<script> (function (ss, ex) { window.ldfdr = window.ldfdr || function () { (ldfdr._q = ldfdr._q || []).push([].slice.call(arguments)); }; (function (d, s) { fs = d.getElementsByTagName(s)[0]; function ce(src) { var cs = d.createElement(s); cs.src = src; cs.async = 1; fs.parentNode.insertBefore(cs, fs); }; ce('https://sc.lfeeder.com/lftracker_v1_' + ss + (ex ? '_' + ex : '') + '.js'); })(document, 'script'); })('JMvZ8gblPjda2pOd'); </script>
<script async src="https://tag.clearbitscripts.com/v1/pk_3f4df076549ad932eda451778a42b09b/tags.js" referrerpolicy="strict-origin-when-cross-origin"></script>
</footer>
{{ define "footer-links-block" }}
<ul class="list-inline mb-0">

View File

@ -1 +1 @@
{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.5{{ else if .Get "cli" }}1.10.0{{ else }}1.10.5{{ end -}}
{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.7{{ else if .Get "cli" }}1.10.0{{ else }}1.10.7{{ end -}}

Binary files not shown: 18 image files changed in this diff (6 added, 12 modified), with sizes ranging from 8.7 KiB to 686 KiB.

Some files were not shown because too many files have changed in this diff.