Merge remote-tracking branch 'origin/v1.15' into upmerge-09-03

Signed-off-by: Marc Duiker <marcduiker@users.noreply.github.com>
Marc Duiker 2025-09-03 10:04:20 +00:00
commit 87fef049ee
29 changed files with 618 additions and 89 deletions

View File

@ -93,6 +93,22 @@ updatedAt | timestamp | Timestamp of the actor registered/updated.
}
```
## Disabling the Placement service
The Placement service can be disabled with the following setting:
```
global.actors.enabled=false
```
With this setting, the Placement service is not deployed in Kubernetes mode. This not only disables actor deployment but also disables workflows, since workflows use actors. The setting only applies in Kubernetes mode; to exclude the Placement service in self-hosted mode, initialize Dapr with `--slim`.
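For example, when deploying Dapr with its Helm chart, the setting could be passed like this (a sketch; the release name and namespace are assumptions):
```bash
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --set global.actors.enabled=false
```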
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page](https://docs.dapr.io/operations/hosting/kubernetes/).
## Related links
[Learn more about the Placement API.]({{% ref placement_api %}})

View File

@ -114,6 +114,8 @@ Here's an example of using a console app with top-level statements in .NET 6+:
Here's an example of using a console app with top-level statements in .NET 6+:
```csharp
using System.Text;
using System.Threading.Tasks;

View File

@ -123,6 +123,8 @@ The following example demonstrates how to configure an input binding using ASP.N
The following example demonstrates how to configure an input binding using ASP.NET Core controllers.
```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
@ -152,6 +154,15 @@ app.MapPost("checkout", ([FromBody] int orderId) =>
});
```
The following example demonstrates how to configure the same input binding using a minimal API approach:
```csharp
app.MapPost("checkout", ([FromBody] int orderId) =>
{
    Console.WriteLine($"Received Message: {orderId}");
    return $"CID{orderId}";
});
```
{{% /tab %}}
{{% tab "Java" %}}

View File

@ -59,7 +59,7 @@ Want to put the Dapr conversation API to the test? Walk through the following qu
| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Conversation quickstart]({{% ref conversation-quickstart %}}) | Learn how to interact with Large Language Models (LLMs) using the conversation API. |
### Start using the conversation API directly in your app

View File

@ -111,7 +111,7 @@ Dapr apps can subscribe to raw messages from pub/sub topics, even if they weren
### Programmatically subscribe to raw events
When subscribing programmatically, add the additional metadata entry for `rawPayload` to allow the subscriber to receive a message that is not wrapped by a CloudEvent. For .NET, this metadata entry is called `isRawPayload`.
When using raw payloads the message is always base64 encoded with content type `application/octet-stream`.
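As an illustration of the raw-payload flow, a message that bypasses CloudEvent wrapping can be published through the Dapr HTTP API by setting the same metadata on the publish call (a sketch assuming a pub/sub component named `mypubsub`, a topic named `orders`, and a sidecar listening on port 3500):
```bash
curl -X POST "http://localhost:3500/v1.0/publish/mypubsub/orders?metadata.rawPayload=true" \
  -H "Content-Type: application/json" \
  -d '{"orderId": 42}'
```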

View File

@ -9,38 +9,41 @@ aliases:
weight: 10000
---
Most Azure components for Dapr support authenticating with Microsoft Entra ID. Thanks to this:
- Administrators can leverage all the benefits of fine-tuned permissions with Azure Role-Based Access Control (RBAC).
- Applications running on Azure services such as Azure Container Apps, Azure Kubernetes Service, Azure VMs, or any other Azure platform services can leverage [Managed Identities (MI)](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) and [Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview). These offer the ability to authenticate your applications without having to manage sensitive credentials.
## About authentication with Microsoft Entra ID
Microsoft Entra ID is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services. It's built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Service Bus, Azure Key Vault, Azure Cosmos DB, Azure Database for PostgreSQL, Azure SQL, and more.
## Options to authenticate
> In Azure terminology, an application is also called a "Service Principal".
Applications can authenticate with Microsoft Entra ID and obtain an access token to make requests to Azure services through several methods:
- [Workload identity federation]({{< ref howto-wif.md >}}) - The recommended way to configure your Microsoft Entra ID tenant to trust an external identity provider. This includes service accounts from Kubernetes or AKS clusters. [Learn more about workload identity federation](https://learn.microsoft.com/entra/workload-id/workload-identities-overview).
- [System and user assigned managed identities]({{< ref howto-mi.md >}}) - Less granular than workload identity federation, but retains some of the benefits. [Learn more about system and user assigned managed identities](https://learn.microsoft.com/azure/aks/use-managed-identity).
- [Client ID and secret]({{< ref howto-aad.md >}}) - Not recommended as it requires you to maintain and associate credentials at the application level.
- Pod Identities - [Deprecated approach for authenticating applications running on Kubernetes pods](https://learn.microsoft.com/azure/aks/use-azure-ad-pod-identity) at a pod level. This should no longer be used.
If you are just getting started, it is recommended to use workload identity federation.
## Managed identities and workload identity federation
When your application is running on a supported Azure service (such as Azure VMs, Azure Container Apps, Azure Web Apps, etc), an identity for your application can be assigned at the infrastructure level.
This is done through [system or user assigned managed identities]({{< ref howto-mi.md >}}), or [workload identity federation]({{< ref howto-wif.md >}}).
Once using managed identities, your code doesn't have to deal with credentials, which:
- Removes the challenge of managing credentials safely
- Allows greater separation of concerns between development and operations teams
- Reduces the number of people with access to credentials
- Simplifies operational aspects, especially when multiple environments are used
While some Dapr Azure components offer alternative authentication methods, such as systems based on "shared keys" or "access tokens", you should authenticate your Dapr components using Microsoft Entra ID whenever possible. This offers many benefits, including:
- [Role-Based Access Control](#role-based-access-control)
- [Auditing](#auditing)
- [(Optional) Authentication using certificates](#optional-authentication-using-certificates)
It's recommended that applications running on Azure Kubernetes Service leverage [workload identity federation](https://learn.microsoft.com/entra/workload-id/workload-identity-federation) to automatically provide an identity to individual pods.
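As a sketch of what this looks like in practice, a component that authenticates with a managed identity typically omits all secrets and, at most, names a user-assigned identity via `azureClientId` (the component type and all values below are illustrative placeholders):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.azure.servicebus.topics
  version: v1
  metadata:
  - name: namespaceName # Using the namespace name (instead of a connection string) triggers Microsoft Entra ID authentication
    value: "mynamespace.servicebus.windows.net"
  - name: azureClientId # Only needed to select a specific user-assigned managed identity
    value: "00000000-0000-0000-0000-000000000000"
```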
### Role-Based Access Control

View File

@ -0,0 +1,111 @@
---
type: docs
title: "How to: Use workload identity federation"
linkTitle: "How to: Use workload identity federation"
weight: 20000
description: "Learn how to configure Dapr to use workload identity federation on Azure."
---
This guide will help you configure your Kubernetes cluster to run Dapr with Azure workload identity federation.
## What is it?
[Workload identity federation](https://learn.microsoft.com/entra/workload-id/workload-identities-overview)
is a way for your applications to authenticate to Azure without having to store or manage credentials as part of
your releases.
By using workload identity federation, any Dapr components running on Kubernetes and AKS that target Azure can authenticate transparently
with no extra configuration.
## Guide
We'll show how to configure an Azure Key Vault resource against your AKS cluster. You can adapt this guide for different
Dapr Azure components by substituting component definitions as necessary.
For this How To, we'll use this [Dapr AKS secrets sample app](https://github.com/dapr/samples/tree/master/dapr-aks-workload-identity-federation).
### Prerequisites
- AKS cluster with workload identity enabled
- Microsoft Entra ID tenant
### 1 - Enable workload identity federation
Follow [the Azure documentation for enabling workload identity federation on your AKS cluster](https://learn.microsoft.com/azure/aks/workload-identity-deploy-cluster#deploy-your-application).
The guide walks through configuring your Microsoft Entra ID tenant to trust an identity that originates from your AKS cluster issuer.
It also guides you in setting up a [Kubernetes service account](https://kubernetes.io/docs/concepts/security/service-accounts/) which
is associated with an Azure managed identity you create.
Once completed, return here to continue with step 2.
### 2 - Add a secret to Azure Key Vault
In the Azure Key Vault you created, add a secret called `dapr` with the value `Hello Dapr!`.
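If you prefer the Azure CLI over the portal, the same secret can be created like this (assuming your vault is named `your-key-vault`):
```bash
az keyvault secret set \
  --vault-name your-key-vault \
  --name dapr \
  --value "Hello Dapr!"
```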
### 3 - Configure the Azure Key Vault Dapr component
By this point, you should have a Kubernetes service account with a name similar to `workload-identity-sa0a1b2c`.
Apply the following to your Kubernetes cluster, remembering to update `your-key-vault` with the name of your key vault:
```yaml
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: demo-secret-store # Be sure not to change this, as our app will be looking for it.
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: your-key-vault # Replace
```
You'll notice that we have not provided any details specific to authentication in the component definition. This is intentional, as Dapr is able to leverage the Kubernetes service account to transparently authenticate to Azure.
### 4 - Deploy the test application
Go to the [workload identity federation sample application](https://github.com/dapr/samples/tree/master/dapr-aks-workload-identity-federation) and prepare a build of the image.
Make sure the image is pushed up to a registry that your AKS cluster has visibility and permission to pull from.
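For example, with an Azure Container Registry attached to the cluster, the build and push might look like this (the registry name is an assumption):
```bash
az acr login --name myregistry
docker build -t myregistry.azurecr.io/dapraksworkloadidentityfederation:latest .
docker push myregistry.azurecr.io/dapraksworkloadidentityfederation:latest
```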
Next, create a deployment for our sample AKS secrets app container along with a Dapr sidecar.
Remember to update `dapr-wif-k8s-service-account` with your service account name and `dapraksworkloadidentityfederation` with an image your cluster can resolve:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-dapr-wif-secrets
  labels:
    app: aks-dapr-wif-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-dapr-wif-secrets
  template:
    metadata:
      labels:
        app: aks-dapr-wif-secrets
        azure.workload.identity/use: "true" # Important
      annotations:
        dapr.io/enabled: "true" # Enable Dapr
        dapr.io/app-id: "aks-dapr-wif-secrets"
    spec:
      serviceAccountName: dapr-wif-k8s-service-account # Remember to replace
      containers:
      - name: workload-id-demo
        image: dapraksworkloadidentityfederation # Remember to replace
        imagePullPolicy: Always
```
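Apply the deployment and tail the application container's logs (the manifest file name is an assumption):
```bash
kubectl apply -f deployment.yaml
kubectl logs -l app=aks-dapr-wif-secrets -c workload-id-demo --follow
```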
Once the application is up and running, it should output the following:
```
Fetched Secret: Hello Dapr!
```

View File

@ -81,6 +81,8 @@ Content-Length: 12
client.saveState("MyStateStore", "MyKey", "My Message").block();
```
In this example, `My Message` is saved. It is not quoted because Dapr's API internally parses the JSON request object before saving it.
{{% /tab %}}
{{< /tabpane >}}
@ -100,9 +102,7 @@ serving it.
await client.PublishEventAsync("MyPubSubName", "TopicName", "My Message");
```
The event is published and the content is serialized to `byte[]` and sent to the Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). CloudEvent defines `data` as a string, and the Dapr SDK also provides a built-in deserializer for the `CloudEvent` object.
```csharp
public async Task<IActionResult> HandleMessage(string message)

View File

@ -7,7 +7,6 @@ description: "Define secret scopes by augmenting the existing configuration reso
description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions."
---
In addition to [scoping which applications can access a given component]({{% ref "component-scopes.md"%}}), you can also scope a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets.
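For example, the following Configuration resource denies access to all secrets in a store named `vault` except the two explicitly allowed ones (store and secret names are placeholders):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: deny
        allowedSecrets: ["secret1", "secret2"]
```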
For more information about configuring a Configuration resource:

View File

@ -85,6 +85,14 @@ kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-
Persistent Volume Claims are not deleted automatically with an [uninstall]({{< ref dapr-uninstall.md >}}). This is a deliberate safety measure to prevent accidental data loss.
{{% /alert %}}
{{% alert title="Note" color="primary" %}}
For storage providers that do NOT support dynamic volume expansion: If Dapr has ever been installed on the cluster before, the Scheduler's Persistent Volume Claims must be manually deleted in order for new ones with increased storage size to be created.
```bash
kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2
```
Persistent Volume Claims are not deleted automatically with an [uninstall]({{< ref dapr-uninstall.md >}}). This is a deliberate safety measure to prevent accidental data loss.
{{% /alert %}}
#### Increase existing Scheduler Storage Size
{{% alert title="Warning" color="warning" %}}

View File

@ -0,0 +1,205 @@
---
type: docs
title: "How-To: Set up Dash0 for distributed tracing"
linkTitle: "Dash0"
weight: 5000
description: "Set up Dash0 for distributed tracing"
---
Dapr captures metrics, traces, and logs that can be sent directly to Dash0 through the OpenTelemetry Collector. Dash0 is an OpenTelemetry-native observability platform that provides comprehensive monitoring capabilities for distributed applications.
## Configure Dapr tracing with the OpenTelemetry Collector and Dash0
By using the OpenTelemetry Collector with the OTLP exporter to send data to Dash0, you can configure Dapr to create traces for each application in your Kubernetes cluster and collect them in Dash0 for analysis and monitoring.
## Prerequisites
* A running Kubernetes cluster with `kubectl` installed
* Helm v3+
* [Dapr installed in the cluster](https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-deploy/)
* A Dash0 account ([Get started with a 14-day free trial](https://www.dash0.com/pricing))
* Your Dash0 **Auth Token** and **OTLP/gRPC endpoint** (find both under **Settings → Auth Tokens** and **Settings → Endpoints**)
## Configure the OpenTelemetry Collector
1) Create a namespace for the Collector
```bash
kubectl create namespace opentelemetry
```
2) Create a Secret with your Dash0 **Auth Token** and **Endpoint**
```bash
kubectl create secret generic dash0-secrets \
--from-literal=dash0-authorization-token="<your_auth_token>" \
--from-literal=dash0-endpoint="<your_otlp_grpc_endpoint>" \
--namespace opentelemetry
```
3) Add the OpenTelemetry Helm repo (once)
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```
4) Create `values.yaml` for the Collector
This config:
* Reads token + endpoint from the Secret via env vars
* Enables OTLP receivers (gRPC + HTTP)
* Sends **traces, metrics, and logs** to Dash0 via OTLP/gRPC with Bearer auth
```yaml
mode: deployment
fullnameOverride: otel-collector
replicaCount: 1
image:
  repository: otel/opentelemetry-collector-k8s
extraEnvs:
  - name: DASH0_AUTHORIZATION_TOKEN
    valueFrom:
      secretKeyRef:
        name: dash0-secrets
        key: dash0-authorization-token
  - name: DASH0_ENDPOINT
    valueFrom:
      secretKeyRef:
        name: dash0-secrets
        key: dash0-endpoint
config:
  receivers:
    otlp:
      protocols:
        grpc: {}
        http: {}
  processors:
    batch: {}
  exporters:
    otlp/dash0:
      auth:
        authenticator: bearertokenauth/dash0
      endpoint: ${env:DASH0_ENDPOINT}
  extensions:
    bearertokenauth/dash0:
      scheme: Bearer
      token: ${env:DASH0_AUTHORIZATION_TOKEN}
    health_check: {}
  service:
    extensions:
      - bearertokenauth/dash0
      - health_check
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlp/dash0]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlp/dash0]
      logs:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlp/dash0]
```
5) Install/upgrade the Collector with Helm
```bash
helm upgrade --install otel-collector open-telemetry/opentelemetry-collector \
--namespace opentelemetry \
-f values.yaml
```
## Configure Dapr to send telemetry to the Collector
1) Create a configuration
Create `dapr-config.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    otel:
      endpointAddress: "otel-collector.opentelemetry.svc.cluster.local:4317"
      isSecure: false
      protocol: grpc
```
Apply it:
```bash
kubectl apply -f dapr-config.yaml
```
2) Annotate your application(s)
In each Deployment/Pod you want traced by Dapr, add:
```yaml
metadata:
  annotations:
    dapr.io/config: "tracing"
```
## Verify the setup
1. Check that the OpenTelemetry Collector is running:
```bash
kubectl get pods -n opentelemetry
```
2. Check the collector logs to ensure it's receiving and forwarding telemetry:
```bash
kubectl logs -n opentelemetry deployment/otel-collector
```
3. Deploy a sample application with Dapr tracing enabled and generate some traffic to verify traces are being sent to Dash0. You can use the [Dapr Kubernetes quickstart tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) for testing.
## Viewing traces
Once your setup is complete and telemetry data is flowing, you can view traces in Dash0:
1. Navigate to your Dash0 account
2. Go to the **Traces** section
3. You should see distributed traces from your Dapr applications
4. Use filters to narrow down traces by service name, operation, or time range
<img src="/images/dash0-dapr-trace-overview.png" width=1200 alt="Dash0 Trace Overview">
<img src="/images/dash0-dapr-trace.png" width=1200 alt="Dash0 Trace Details">
## Cleanup
```bash
helm -n opentelemetry uninstall otel-collector
kubectl -n opentelemetry delete secret dash0-secrets
kubectl delete ns opentelemetry
```
## Related Links
* [Dapr Kubernetes quickstart tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
* [Dapr observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)
* [Dash0 documentation](https://www.dash0.com/docs)
* [OpenTelemetry Collector documentation](https://opentelemetry.io/docs/collector/)

View File

@ -0,0 +1,139 @@
---
type: docs
title: "Using Dynatrace OpenTelemetry Collector to collect traces to send to Dynatrace"
linkTitle: "Using the Dynatrace OpenTelemetry Collector"
weight: 1000
description: "How to push trace events to Dynatrace, using the Dynatrace OpenTelemetry Collector."
---
Dapr integrates with the [Dynatrace Collector](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/collector) using the OpenTelemetry protocol (OTLP). This guide walks through an example using Dapr to push traces to Dynatrace, using the Dynatrace version of the OpenTelemetry Collector.
{{% alert title="Note" color="primary" %}}
This guide refers to the Dynatrace OpenTelemetry Collector, which uses the same Helm chart as the open-source collector but overridden with the Dynatrace-maintained image for better support and Dynatrace-specific features.
{{% /alert %}}
## Prerequisites
- [Install Dapr on Kubernetes]({{< ref kubernetes >}})
- Access to a Dynatrace tenant and an API token with `openTelemetryTrace.ingest`, `metrics.ingest`, and `logs.ingest` scopes
- Helm
## Set up Dynatrace OpenTelemetry Collector to push to your Dynatrace instance
To push traces to your Dynatrace instance, install the Dynatrace OpenTelemetry Collector on your Kubernetes cluster.
1. Create a Kubernetes secret with your Dynatrace credentials:
```sh
kubectl create secret generic dynatrace-otelcol-dt-api-credentials \
--from-literal=DT_ENDPOINT=https://YOUR_TENANT.live.dynatrace.com/api/v2/otlp \
--from-literal=DT_API_TOKEN=dt0s01.YOUR_TOKEN_HERE
```
Replace `YOUR_TENANT` with your Dynatrace tenant ID and `YOUR_TOKEN_HERE` with your Dynatrace API token.
1. Use the Dynatrace OpenTelemetry Collector distribution for better defaults and support than the open source version. Download and inspect the [`collector-helm-values.yaml`](https://github.com/Dynatrace/dynatrace-otel-collector/blob/main/config_examples/collector-helm-values.yaml) file. This is based on the [k8s enrichment demo](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/collector/use-cases/kubernetes/k8s-enrich#demo-configuration) and includes Kubernetes metadata enrichment for proper pod/namespace/cluster context.
1. Deploy the Dynatrace Collector with Helm.
```sh
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm upgrade -i dynatrace-collector open-telemetry/opentelemetry-collector -f collector-helm-values.yaml
```
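Before moving on, it can be worth confirming the collector pod is running (the label below reflects the upstream chart's defaults and is an assumption):
```bash
kubectl get pods -l app.kubernetes.io/name=opentelemetry-collector
```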
## Set up Dapr to send traces to the Dynatrace Collector
Create a Dapr configuration file to enable tracing and send traces to the OpenTelemetry Collector via [OTLP](https://opentelemetry.io/docs/specs/otel/protocol/).
1. Update the following file, saved as `collector-config-otel.yaml`, to ensure the `endpointAddress` points to your Dynatrace OpenTelemetry Collector service in your Kubernetes cluster. If deployed in the `default` namespace, it's typically `dynatrace-collector.default.svc.cluster.local`.
**Important:** Ensure the `endpointAddress` does NOT include the `http://` prefix to avoid URL encoding issues:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"
    otel:
      endpointAddress: "dynatrace-collector.default.svc.cluster.local:4318" # Update with your collector's service address
```
1. Apply the configuration with:
```sh
kubectl apply -f collector-config-otel.yaml
```
## Deploy your app with tracing
Apply the `tracing` configuration by adding a `dapr.io/config` annotation to the Dapr applications that you want to include in distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"
```
{{% alert title="Note" color="primary" %}}
If you are using one of the Dapr tutorials, such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator), you will need to update the `appconfig` configuration to `tracing`.
{{% /alert %}}
You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
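To generate some traces for the next step, you can exercise any Dapr API through the sidecar, for example by port-forwarding the Dapr HTTP port of one of your pods and invoking a method on it (the app ID and method name are placeholders):
```bash
kubectl port-forward deploy/MyApp 3500:3500 &
curl -s http://localhost:3500/v1.0/invoke/MyApp/method/hello
```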
## View traces
Deploy and run some applications. After a few minutes, you should see traces appearing in your Dynatrace tenant:
1. Navigate to **Search > Distributed tracing** in your Dynatrace UI.
2. Filter by service names to see your Dapr applications and their associated tracing spans.
<img src="/images/open-telemetry-collector-dynatrace-traces.png" width=1200 alt="Dynatrace showing tracing data.">
{{% alert title="Note" color="primary" %}}
Only operations going through the Dapr API exposed by the Dapr sidecar (for example, service invocation or event publishing) are displayed in Dynatrace distributed traces.
{{% /alert %}}
{{% alert title="Disable OneAgent daprd monitoring" color="warning" %}}
If you are running Dynatrace OneAgent in your cluster, you should exclude the `daprd` sidecar container from OneAgent monitoring to prevent interference with this configuration. Excluding it prevents any automatic injection attempts that could break functionality or result in confusing traces.
Add this annotation to your application deployments or globally in your DynaKube configuration file:
```yaml
metadata:
  annotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "MyApp"
    dapr.io/app-port: "8080"
    dapr.io/config: "tracing"
    container.inject.dynatrace.com/daprd: "false" # Exclude the Dapr sidecar from being auto-monitored by OneAgent
```
{{% /alert %}}
## Related links
- Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability/README.md)
- Learn how to set [tracing configuration options]({{< ref "configuration-overview.md#tracing" >}})
- [Dynatrace OpenTelemetry documentation](https://docs.dynatrace.com/docs/ingest-from/opentelemetry)
- [Enrich OTLP telemetry data with Kubernetes metadata](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/collector/use-cases/kubernetes/k8s-enrich)

View File

@ -77,4 +77,5 @@ Learn how to set up tracing with one of the following tools:
- [New Relic]({{% ref newrelic.md %}})
- [Jaeger]({{% ref open-telemetry-collector-jaeger.md %}})
- [Zipkin]({{% ref zipkin.md %}})
- [Datadog]({{% ref datadog.md %}})
- [Dash0]({{% ref dash0.md %}})

View File

@ -45,12 +45,16 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
| July 31st 2025 | 1.15.9</br> | 1.15.0 | Java 1.14.2, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.9 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.9) |
| July 18th 2025 | 1.15.8</br> | 1.15.0 | Java 1.14.2, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.8 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.8) |
| July 16th 2025 | 1.15.7</br> | 1.15.0 | Java 1.14.1, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.7 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.7) |
| June 20th 2025 | 1.15.6</br> | 1.15.0 | Java 1.14.1, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.6 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.6) |
| May 5th 2025 | 1.15.5</br> | 1.15.0 | Java 1.14.1, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.5 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.5) |
| April 4th 2025 | 1.15.4</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.4) |
| March 5th 2025 | 1.15.3</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.3) |
| March 3rd 2025 | 1.15.2</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.2) |
| February 28th 2025 | 1.15.1</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.1) |
| February 27th 2025 | 1.15.0</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported | [v1.15.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.0) |
| September 16th 2024 | 1.14.4</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
| September 13th 2024 | 1.14.3</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
| September 6th 2024 | 1.14.2</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |

View File

@ -46,6 +46,7 @@ dapr init [flags]
| `--container-runtime` | | `docker` | Used to pass in a different container runtime other than Docker. Supported container runtimes are: `docker`, `podman` |
| `--dev` | | | Creates Redis and Zipkin deployments when run in Kubernetes. |
| `--scheduler-volume` | | | Self-hosted only. Optionally, you can specify a volume for the scheduler service data directory. By default, without this flag, scheduler data is not persisted and not resilient to restarts. |
| `--scheduler-override-broadcast-host-port` | | localhost:50006 (6060 for Windows) | Self-hosted only. Specify the scheduler broadcast host and port, for example: 192.168.42.42:50006. |
### Examples
@ -70,7 +71,7 @@ Dapr can also run [Slim self-hosted mode]({{% ref self-hosted-no-docker.md %}}),
dapr init -s
```
> To switch to the Dapr GitHub container registry as the default registry, set the `DAPR_DEFAULT_IMAGE_REGISTRY` environment variable value to `GHCR`. To switch back to Docker Hub as the default registry, unset this environment variable.
**Specify a runtime version**
@ -148,7 +149,7 @@ dapr init --network mynet
Verify all containers are running in the specified network.
```bash
docker ps
```
Uninstall Dapr from that Docker network.
@ -157,6 +158,18 @@ Uninstall Dapr from that Docker network.
dapr uninstall --all --network mynet
```
**Specify scheduler broadcast host and port**
You can specify the scheduler broadcast host and port, for example: `192.168.42.42:50006`.
This is necessary when you need to connect to the scheduler using a different host and port, as the scheduler only allows connections matching this host and port.
By default, the scheduler uses `localhost:50006` (`localhost:6060` on Windows).
```bash
dapr init --scheduler-override-broadcast-host-port 192.168.42.42:50006
```
{{% /tab %}}
{{% tab "Kubernetes" %}}
@ -192,11 +205,11 @@ dapr init -k --set global.tag=1.0.0 --set dapr_operator.logLevel=error
You can also specify a private registry to pull container images from. As of now, `dapr init -k` does not use specific images for sentry, operator, placement, scheduler, and sidecar; it relies only on the Dapr runtime container image `dapr` for all of them.
Scenario 1: Dapr image hosted directly under the root folder in a private registry:
```bash
dapr init -k --image-registry docker.io/username
```
Scenario 2: Dapr image hosted under a new/different directory in a private registry:
```bash
dapr init -k --image-registry docker.io/username/<directory-name>
```

View File

@ -33,6 +33,8 @@ spec:
  #   value: <integer>
  # - name: publicAccessLevel
  #   value: <publicAccessLevel>
  # - name: disableEntityManagement
  #   value: <bool>
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{% ref component-secrets.md %}}).
@ -45,10 +47,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `accountName` | Y | Input/Output | The name of the Azure Storage account | `"myexampleaccount"` |
| `accountKey` | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | `"access-key"` |
| `containerName` | Y | Output | The name of the Blob Storage container to write to | `myexamplecontainer` |
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10000"` |
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). Defaults to `false` | `true`, `false` |
| `getBlobRetryCount` | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to `10` | `1`, `2` |
| `publicAccessLevel` | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to `none` | `blob`, `container`, `none` |
| `disableEntityManagement` | N | Output | Configuration to disable entity management. When set to `true`, the binding skips the attempt to create the specified storage container. This is useful when operating with minimal Azure AD permissions. Defaults to `false` | `true`, `false` |
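For instance, a component that writes to a pre-existing container without attempting to create it could look like the following sketch, which omits `accountKey` in favor of Microsoft Entra ID authentication (account and container names are placeholders):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: blob-output
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: "myexampleaccount"
  - name: containerName
    value: "myexamplecontainer"
  - name: disableEntityManagement
    value: "true"
```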
### Microsoft Entra ID authentication

View File

@ -79,6 +79,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## GCP Credentials
Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
Also, see how to [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc).
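For local development, Application Default Credentials can be provisioned with the gcloud CLI:
```bash
gcloud auth application-default login
```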
## Binding support
This component supports **output binding** with the following operations:

View File

@ -26,6 +26,8 @@ spec:
value: "items"
- name: region
value: "us-west-2"
- name: endpoint
value: "sqs.us-west-2.amazonaws.com"
- name: accessKey
value: "*****************"
- name: secretKey
@ -45,11 +47,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| `queueName` | Y | Input/Output | The SQS queue name | `"myqueue"` |
| `region` | Y | Input/Output | The specific AWS region | `"us-east-1"` |
| `endpoint` | N | Output | The specific AWS endpoint | `"sqs.us-east-1.amazonaws.com"` |
| `accessKey` | Y | Input/Output | The AWS Access Key to access this resource | `"key"` |
| `secretKey` | Y | Input/Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| `sessionToken` | N | Input/Output | The AWS session token to use | `"sessionToken"` |
| `direction` | N | Input/Output | The direction of the binding | `"input"`, `"output"`, `"input, output"` |
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.

View File

@ -297,12 +297,7 @@ In Kubernetes, you store the client secret or the certificate into the Kubernete
```bash
kubectl apply -f azurekeyvault.yaml
```
1. Create and assign a managed identity at the pod-level via [Microsoft Entra ID workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview).
1. After creating a workload identity, give it `read` permissions:
- [On your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabpane=azure-cli#assign-the-access-policy)
@ -321,7 +316,7 @@ In Kubernetes, you store the client secret or the certificate into the Kubernete
#### Using Azure managed identity directly vs. via Microsoft Entra ID workload identity
When using **managed identity directly**, you can have multiple identities associated with an app, requiring `azureClientId` to specify which identity should be used.
However, when using **managed identity via Microsoft Entra ID workload identity**, `azureClientId` is not necessary and has no effect. The Azure identity to be used is inferred from the service account tied to an Azure identity via the Azure federated identity.

View File

@ -96,6 +96,10 @@ For example, if installing using the example above, the Cassandra DNS would be:
## Apache Ignite
[Apache Ignite](https://ignite.apache.org/)'s integration with Cassandra as a caching layer is not supported by this component.
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- Read [this guide]({{% ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" %}}) for instructions on configuring state store components

View File

@ -33,3 +33,8 @@
  state: Alpha
  version: v1
  since: "1.16"
- component: Local echo
  link: local-echo
  state: Stable
  version: v1
  since: "1.15"

View File

@ -8,4 +8,4 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false

View File

@ -8,7 +8,7 @@
    transactions: false
    etag: true
    ttl: false
    workflow: false
- component: Azure Cosmos DB
  link: setup-azure-cosmosdb
  state: Stable
@ -19,7 +19,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false
- component: Microsoft SQL Server
  link: setup-sqlserver
  state: Stable
@ -30,7 +30,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false
- component: Azure Table Storage
  link: setup-azure-tablestorage
  state: Stable
@ -41,4 +41,4 @@
    transactions: false
    etag: true
    ttl: false
    workflow: false

View File

@ -8,7 +8,7 @@
    transactions: false
    etag: true
    ttl: false
    workflow: false
- component: Apache Cassandra
  link: setup-cassandra
  state: Stable
@ -19,7 +19,7 @@
    transactions: false
    etag: false
    ttl: true
    workflow: false
- component: CockroachDB
  link: setup-cockroachdb
  state: Stable
@ -30,7 +30,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false
- component: Couchbase
  link: setup-couchbase
  state: Alpha
@ -41,7 +41,7 @@
    transactions: false
    etag: true
    ttl: false
    workflow: false
- component: etcd
  link: setup-etcd
  state: Beta
@ -52,7 +52,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false
- component: Hashicorp Consul
  link: setup-consul
  state: Alpha
@ -63,7 +63,7 @@
    transactions: false
    etag: false
    ttl: false
    workflow: false
- component: Hazelcast
  link: setup-hazelcast
  state: Alpha
@ -74,7 +74,7 @@
    transactions: false
    etag: false
    ttl: false
    workflow: false
- component: In-memory
  link: setup-inmemory
  state: Stable
@ -85,7 +85,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: true
- component: JetStream KV
  link: setup-jetstream-kv
  state: Alpha
@ -96,7 +96,7 @@
    transactions: false
    etag: false
    ttl: false
    workflow: false
- component: Memcached
  link: setup-memcached
  state: Stable
@ -107,7 +107,7 @@
    transactions: false
    etag: false
    ttl: true
    workflow: false
- component: MongoDB
  link: setup-mongodb
  state: Stable
@ -118,7 +118,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: true
- component: MySQL & MariaDB
  link: setup-mysql
  state: Stable
@ -129,7 +129,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: true
- component: Oracle Database
  link: setup-oracledatabase
  state: Beta
@ -140,7 +140,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false
- component: PostgreSQL v1
  link: setup-postgresql-v1
  state: Stable
@ -151,7 +151,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: true
- component: PostgreSQL v2
  link: setup-postgresql-v2
  state: Stable
@ -162,7 +162,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: true
- component: Redis
  link: setup-redis
  state: Stable
@ -173,7 +173,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: true
- component: RethinkDB
  link: setup-rethinkdb
  state: Beta
@ -184,7 +184,7 @@
    transactions: false
    etag: false
    ttl: false
    workflow: false
- component: SQLite
  link: setup-sqlite
  state: Stable
@ -195,7 +195,7 @@
    transactions: true
    etag: true
    ttl: true
    workflow: false
- component: Zookeeper
  link: setup-zookeeper
  state: Alpha
@ -206,4 +206,4 @@
    transactions: false
    etag: true
    ttl: false
    workflow: false

View File

@ -9,6 +9,7 @@
    etag: true
    ttl: true
    query: false
    workflow: false
- component: Coherence
  link: setup-coherence
  state: Alpha
@ -30,4 +31,4 @@
    transactions: false
    etag: true
    ttl: true
    workflow: false

View File

@ -18,7 +18,7 @@
<th>ETag</th>
<th>TTL</th>
<th>Actors</th>
<th>Workflow</th>
<th>Status</th>
<th>Component version</th>
<th>Since runtime version</th>
@ -30,44 +30,45 @@
</td>
<td align="center">
{{ if .features.crud }}
<span role="img" aria-label="CRUD: Supported"></span>
<span role="img" aria-label="CRUD: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="CRUD: Not supported" aria-label="CRUD: Not supported" />
<img src="/images/emptybox.png" alt="CRUD: Not supported" aria-label="CRUD: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.transactions }}
<span role="img" aria-label="Transactions: Supported"></span>
<span role="img" aria-label="Transactions: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="Transactions: Not supported" aria-label="Transactions: Not supported" />
<img src="/images/emptybox.png" alt="Transactions: Not supported"
aria-label="Transactions: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.etag }}
<span role="img" aria-label="ETag: Supported"></span>
<span role="img" aria-label="ETag: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="ETag: Not supported" aria-label="ETag: Not supported" />
<img src="/images/emptybox.png" alt="ETag: Not supported" aria-label="ETag: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.ttl }}
<span role="img" aria-label="TTL: Supported"></span>
<span role="img" aria-label="TTL: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="TTL: Not supported" aria-label="TTL: Not supported" />
<img src="/images/emptybox.png" alt="TTL: Not supported" aria-label="TTL: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if (and .features.transactions .features.etag) }}
<span role="img" aria-label="Actors: Supported"></span>
<span role="img" aria-label="Actors: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="Actors: Not supported" aria-label="Actors: Not supported" />
<img src="/images/emptybox.png" alt="Actors: Not supported" aria-label="Actors: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.workflow }}
<span role="img" aria-label="Workflow: Supported">✅</span>
{{else}}
<img src="/images/emptybox.png" alt="Workflow: Not supported" aria-label="Workflow: Not supported" />
{{ end }}
</td>
<td>{{ .state }}</td>

Binary file not shown (new image, 670 KiB).

Binary file not shown (new image, 518 KiB).

Binary file not shown (new image, 109 KiB).