mirror of https://github.com/dapr/docs.git
Merge branch 'v1.11' into ts-azure-app-config-subscribe-interval
This commit is contained in:
commit
4c922efc19
|
@ -7,42 +7,68 @@ description: >
|
|||
Observe applications through tracing, metrics, logs and health
|
||||
---
|
||||
|
||||
When building an application, understanding how the system is behaving is an important part of operating it - this includes having the ability to observe the internal calls of an application, gauging its performance and becoming aware of problems as soon as they occur. This is challenging for any system, but even more so for a distributed system comprised of multiple microservices where a flow, made of several calls, may start in one microservice but continue in another. Observability is critical in production environments, but also useful during development to understand bottlenecks, improve performance and perform basic debugging across the span of microservices.
|
||||
When building an application, understanding the system behavior is an important, yet challenging part of operating it, such as:
|
||||
- Observing the internal calls of an application
|
||||
- Gauging its performance
|
||||
- Becoming aware of problems as soon as they occur
|
||||
|
||||
While some data points about an application can be gathered from the underlying infrastructure (for example memory consumption, CPU usage), other meaningful information must be collected from an "application-aware" layer–one that can show how an important series of calls is executed across microservices. This usually means a developer must add some code to instrument an application for this purpose. Often, instrumentation code is simply meant to send collected data such as traces and metrics to observability tools or services that can help store, visualize and analyze all this information.
|
||||
This can be particularly challenging for a distributed system comprised of multiple microservices, where a flow made of several calls may start in one microservice and continue in another.
|
||||
|
||||
Having to maintain this code, which is not part of the core logic of the application, is a burden on the developer, sometimes requiring understanding the observability tools' APIs, using additional SDKs etc. This instrumentation may also add to the portability challenges of an application, which may require different instrumentation depending on where the application is deployed. For example, different cloud providers offer different observability tools and an on-premises deployment might require a self-hosted solution.
|
||||
Observability into your application is critical in production environments, and can be useful during development to:
|
||||
- Understand bottlenecks
|
||||
- Improve performance
|
||||
- Perform basic debugging across the span of microservices
|
||||
|
||||
While some data points about an application can be gathered from the underlying infrastructure (memory consumption, CPU usage), other meaningful information must be collected from an "application-aware" layer – one that can show how an important series of calls is executed across microservices. Typically, you'd add some code to instrument an application, which simply sends collected data (such as traces and metrics) to observability tools or services that can help store, visualize, and analyze all this information.
|
||||
|
||||
Maintaining this instrumentation code, which is not part of the core logic of the application, requires understanding the observability tools' APIs, using additional SDKs, etc. This instrumentation may also present portability challenges for your application, requiring different instrumentation depending on where the application is deployed. For example:
|
||||
- Different cloud providers offer different observability tools
|
||||
- An on-premises deployment might require a self-hosted solution
|
||||
|
||||
## Observability for your application with Dapr
|
||||
|
||||
When building an application which leverages Dapr API building blocks to perform service-to-service calls and pub/sub messaging, Dapr offers an advantage with respect to [distributed tracing]({{<ref tracing>}}). Because this inter-service communication flows through the Dapr runtime (or "sidecar"), Dapr is in a unique position to offload the burden of application-level instrumentation.
|
||||
When you leverage Dapr API building blocks to perform service-to-service calls and pub/sub messaging, Dapr offers an advantage with respect to [distributed tracing]({{< ref develop-tracing >}}). Since this inter-service communication flows through the Dapr runtime (or "sidecar"), Dapr is in a unique position to offload the burden of application-level instrumentation.
|
||||
|
||||
### Distributed tracing
|
||||
|
||||
Dapr can be [configured to emit tracing data]({{<ref setup-tracing.md>}}), and because Dapr does so using the widely adopted protocols of [Open Telemetry (OTEL)](https://opentelemetry.io/) and [Zipkin](https://zipkin.io), it can be easily integrated with multiple observability tools.
|
||||
Dapr can be [configured to emit tracing data]({{< ref setup-tracing.md >}}) using the widely adopted protocols of [Open Telemetry (OTEL)](https://opentelemetry.io/) and [Zipkin](https://zipkin.io). This makes it easy to integrate Dapr with multiple observability tools.
|
||||
|
||||
<img src="/images/observability-tracing.png" width=1000 alt="Distributed tracing with Dapr">
|
||||
|
||||
### Automatic tracing context generation
|
||||
|
||||
Dapr uses the [W3C tracing]({{<ref w3c-tracing-overview>}}) specification for tracing context, included as part of Open Telemetry (OTEL), to generate and propagate the context header for the application or to propagate user-provided context headers. This means that you get tracing by default with Dapr.
|
||||
Dapr uses the [W3C tracing]({{< ref w3c-tracing-overview >}}) specification for tracing context, included as part of Open Telemetry (OTEL), to generate and propagate the context header for the application or to propagate user-provided context headers. This means that you get tracing by default with Dapr.
|
||||
|
||||
## Observability for the Dapr sidecar and control plane
|
||||
|
||||
You also want to be able to observe Dapr itself, by collecting metrics on performance, throughput and latency and logs emitted by the Dapr sidecar, as well as the Dapr control plane services. Dapr sidecars have a health endpoint that can be probed to indicate their health status.
|
||||
You can also observe Dapr itself, by:
|
||||
- Generating logs emitted by the Dapr sidecar and the Dapr control plane services
|
||||
- Collecting metrics on performance, throughput, and latency
|
||||
- Using health endpoints probes to indicate the Dapr sidecar health status
|
||||
|
||||
<img src="/images/observability-sidecar.png" width=1000 alt="Dapr sidecar metrics, logs and health checks">
|
||||
|
||||
### Logging
|
||||
|
||||
Dapr generates [logs]({{<ref "logs.md">}}) to provide visibility into sidecar operation and to help users identify issues and perform debugging. Log events contain warning, error, info, and debug messages produced by Dapr system services. Dapr can also be configured to send logs to collectors such as [Fluentd]({{< ref fluentd.md >}}), [Azure Monitor]({{< ref azure-monitor.md >}}), and other observability tools, so that logs can be searched and analyzed to provide insights.
|
||||
Dapr generates [logs]({{< ref logs.md >}}) to:
|
||||
- Provide visibility into sidecar operation
|
||||
- Help users identify issues and perform debugging
|
||||
|
||||
Log events contain warning, error, info, and debug messages produced by Dapr system services. You can also configure Dapr to send logs to collectors, such as Open Telemetry Collector, [Fluentd]({{< ref fluentd.md >}}), [New Relic]({{< ref "operations/monitoring/logging/newrelic.md" >}}), [Azure Monitor]({{< ref azure-monitor.md >}}), and other observability tools, so that logs can be searched and analyzed to provide insights.
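As a rough illustration of what downstream log processing looks like once logs reach a collector, the sketch below filters JSON log lines by severity. The field names (`time`, `level`, `msg`) and the sample messages are assumptions for illustration, not the exact Dapr log schema.

```python
import json

# Illustrative only: the field names ("time", "level", "msg") are assumed,
# not taken from the actual Dapr log schema.
def filter_by_level(raw_lines, min_level):
    """Return the messages of all entries at or above min_level."""
    levels = {"debug": 0, "info": 1, "warning": 2, "error": 3}
    out = []
    for line in raw_lines:
        entry = json.loads(line)
        if levels[entry["level"]] >= levels[min_level]:
            out.append(entry["msg"])
    return out

logs = [
    '{"time": "2023-01-01T00:00:00Z", "level": "info", "msg": "dapr initialized"}',
    '{"time": "2023-01-01T00:00:01Z", "level": "error", "msg": "component init failed"}',
]
print(filter_by_level(logs, "warning"))  # → ['component init failed']
```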
|
||||
|
||||
### Metrics
|
||||
|
||||
Metrics are the series of measured values and counts that are collected and stored over time. [Dapr metrics]({{<ref "metrics">}}) provide monitoring capabilities to understand the behavior of the Dapr sidecar and control plane. For example, the metrics between a Dapr sidecar and the user application show call latency, traffic failures, error rates of requests, etc. Dapr [control plane metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) show sidecar injection failures and the health of control plane services, including CPU usage, number of actor placements made, etc.
|
||||
Metrics are a series of measured values and counts collected and stored over time. [Dapr metrics]({{< ref metrics >}}) provide monitoring capabilities to understand the behavior of the Dapr sidecar and control plane. For example, the metrics between a Dapr sidecar and the user application show call latency, traffic failures, error rates of requests, etc.
|
||||
|
||||
Dapr [control plane metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) show sidecar injection failures and the health of control plane services, including CPU usage, number of actor placements made, etc.
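To illustrate how such metrics are consumed, the sketch below parses one line of the Prometheus text exposition format that metrics scrapers typically ingest. The metric name shown is hypothetical, and the label parsing is deliberately naive (it assumes no commas inside label values).

```python
def parse_metric_line(line):
    """Parse 'name{label="v",...} value' or 'name value' into
    (name, labels_dict, float_value). Naive: no commas in label values."""
    name_part, _, value = line.rpartition(" ")
    if "{" in name_part:
        name, labels_raw = name_part.split("{", 1)
        labels = dict(kv.split("=", 1) for kv in labels_raw.rstrip("}").split(","))
        labels = {k: v.strip('"') for k, v in labels.items()}
    else:
        name, labels = name_part, {}
    return name, labels, float(value)
```

For example, a counter line with labels parses into its name, its label set, and its numeric value, ready to be aggregated or graphed.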
|
||||
|
||||
### Health checks
|
||||
|
||||
The Dapr sidecar exposes an HTTP endpoint for [health checks]({{<ref sidecar-health.md>}}). With this API, user code or hosting environments can probe the Dapr sidecar to determine its status and identify issues with sidecar readiness.
|
||||
The Dapr sidecar exposes an HTTP endpoint for [health checks]({{< ref sidecar-health.md >}}). With this API, user code or hosting environments can probe the Dapr sidecar to determine its status and identify issues with sidecar readiness.
|
||||
|
||||
Conversely, Dapr can be configured to probe for the [health of your application]({{<ref app-health.md >}}), and react to changes in the app's health, including stopping pub/sub subscriptions and short-circuiting service invocation calls.
|
||||
Conversely, Dapr can be configured to probe for the [health of your application]({{< ref app-health.md >}}), and react to changes in the app's health, including stopping pub/sub subscriptions and short-circuiting service invocation calls.
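The first of these directions, user code probing the sidecar, can be sketched as follows. The default sidecar HTTP port (3500) and the `/v1.0/healthz` path are assumptions based on Dapr defaults; only the response status code matters.

```python
from urllib.error import URLError
from urllib.request import urlopen

DAPR_HTTP_PORT = 3500  # assumed default sidecar HTTP port

def health_url(port=DAPR_HTTP_PORT):
    # Assumed sidecar health endpoint path.
    return f"http://localhost:{port}/v1.0/healthz"

def is_healthy(status_code):
    # Health is reported via the status code alone; any 2xx means healthy.
    return 200 <= status_code < 300

def probe(port=DAPR_HTTP_PORT):
    """Probe the sidecar; treat connection errors as unhealthy."""
    try:
        with urlopen(health_url(port), timeout=1) as resp:
            return is_healthy(resp.status)
    except URLError:
        return False
```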
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Learn more about observability in developing with Dapr]({{< ref develop-tracing >}})
|
||||
- [Learn more about observability in operating with Dapr]({{< ref tracing >}})
|
|
@ -8,4 +8,5 @@ description: "Dapr capabilities that solve common development challenges for dis
|
|||
|
||||
Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
|
||||
|
||||
<img src="/images/buildingblocks-overview.png" alt="Diagram showing the different Dapr API building blocks" width=1000>
|
||||
|
||||
|
|
|
@ -5,3 +5,10 @@ linkTitle: "Actors"
|
|||
weight: 50
|
||||
description: Encapsulate code and data in reusable actor objects as a common microservices design pattern
|
||||
---
|
||||
|
||||
{{% alert title="More about Dapr Actors" color="primary" %}}
|
||||
Learn more about how to use Dapr Actors:
|
||||
- Try the [Actors quickstart]({{< ref actors-quickstart.md >}}).
|
||||
- Explore actors via any of the [Dapr SDKs]({{< ref sdks >}}).
|
||||
- Review the [Actors API reference documentation]({{< ref actors_api.md >}}).
|
||||
{{% /alert %}}
|
||||
|
|
|
@ -5,3 +5,12 @@ linkTitle: "Bindings"
|
|||
weight: 40
|
||||
description: Interface with or be triggered from external systems
|
||||
---
|
||||
|
||||
|
||||
{{% alert title="More about Dapr Bindings" color="primary" %}}
|
||||
Learn more about how to use Dapr Bindings:
|
||||
- Try the [Bindings quickstart]({{< ref bindings-quickstart.md >}}).
|
||||
- Explore input and output bindings via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
|
||||
- Review the [Bindings API reference documentation]({{< ref bindings_api.md >}}).
|
||||
- Browse the supported [input and output bindings component specs]({{< ref supported-bindings >}}).
|
||||
{{% /alert %}}
|
|
@ -5,3 +5,11 @@ linkTitle: "Configuration"
|
|||
weight: 80
|
||||
description: Manage and be notified of application configuration changes
|
||||
---
|
||||
|
||||
{{% alert title="More about Dapr Configuration" color="primary" %}}
|
||||
Learn more about how to use Dapr Configuration:
|
||||
- Try the [Configuration quickstart]({{< ref configuration-quickstart.md >}}).
|
||||
- Explore configuration via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
|
||||
- Review the [Configuration API reference documentation]({{< ref configuration_api.md >}}).
|
||||
- Browse the supported [configuration component specs]({{< ref supported-configuration-stores >}}).
|
||||
{{% /alert %}}
|
|
@ -4,4 +4,11 @@ title: "Cryptography"
|
|||
linkTitle: "Cryptography"
|
||||
weight: 110
|
||||
description: "Perform cryptographic operations without exposing keys to your application"
|
||||
---
|
||||
|
||||
{{% alert title="More about Dapr Cryptography" color="primary" %}}
|
||||
Learn more about how to use Dapr Cryptography:
|
||||
- Try the [Cryptography quickstart]({{< ref cryptography-quickstart.md >}}).
|
||||
- Explore cryptography via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
|
||||
- Browse the supported [cryptography component specs]({{< ref supported-cryptography >}}).
|
||||
{{% /alert %}}
|
|
@ -5,3 +5,10 @@ linkTitle: "Distributed lock"
|
|||
weight: 90
|
||||
description: Distributed locks provide mutually exclusive access to shared resources from an application.
|
||||
---
|
||||
|
||||
{{% alert title="More about Dapr Distributed Lock" color="primary" %}}
|
||||
Learn more about how to use Dapr Distributed Lock:
|
||||
- Explore distributed locks via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
|
||||
- Review the [Distributed Lock API reference documentation]({{< ref distributed_lock_api.md >}}).
|
||||
- Browse the supported [distributed locks component specs]({{< ref supported-locks >}}).
|
||||
{{% /alert %}}
|
|
@ -6,4 +6,10 @@ weight: 60
|
|||
description: See and measure the message calls to components and between networked services
|
||||
---
|
||||
|
||||
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability-concept >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).
|
||||
{{% alert title="More about Dapr Observability" color="primary" %}}
|
||||
Learn more about how to use Dapr Observability:
|
||||
- Explore observability via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
|
||||
- Review the [Observability API reference documentation]({{< ref health_api.md >}}).
|
||||
- Read the [general overview of the observability concept]({{< ref observability-concept >}}) in Dapr.
|
||||
- Learn the [operations perspective and guidance on monitoring]({{< ref monitoring >}}).
|
||||
{{% /alert %}}
|
||||
|
|
|
@ -2,17 +2,22 @@
|
|||
type: docs
|
||||
title: "App health checks"
|
||||
linkTitle: "App health checks"
|
||||
weight: 300
|
||||
weight: 100
|
||||
description: Reacting to apps' health status changes
|
||||
---
|
||||
|
||||
App health checks is a feature that allows probing for the health of your application and reacting to status changes.
|
||||
The app health checks feature allows probing for the health of your application and reacting to status changes.
|
||||
|
||||
Applications can become unresponsive for a variety of reasons: for example, they could be too busy to accept new work, could have crashed, or be in a deadlock state. Sometimes the condition can be transitory, for example if the app is just busy (and will eventually be able to resume accepting new work), or if the application is being restarted for whatever reason and is in its initialization phase.
|
||||
Applications can become unresponsive for a variety of reasons. For example, your application:
|
||||
- Could be too busy to accept new work;
|
||||
- Could have crashed; or
|
||||
- Could be in a deadlock state.
|
||||
|
||||
When app health checks are enabled, the Dapr *runtime* (sidecar) periodically polls your application via HTTP or gRPC calls.
|
||||
Sometimes the condition can be transitory, for example:
|
||||
- If the app is just busy and will resume accepting new work eventually
|
||||
- If the application is being restarted for whatever reason and is in its initialization phase
|
||||
|
||||
When it detects a failure in the app's health, Dapr stops accepting new work on behalf of the application by:
|
||||
App health checks are disabled by default. Once you enable app health checks, the Dapr runtime (sidecar) periodically polls your application via HTTP or gRPC calls. When it detects a failure in the app's health, Dapr stops accepting new work on behalf of the application by:
|
||||
|
||||
- Unsubscribing from all pub/sub subscriptions
|
||||
- Stopping all input bindings
|
||||
|
@ -20,15 +25,14 @@ When it detects a failure in the app's health, Dapr stops accepting new work on
|
|||
|
||||
These changes are meant to be temporary, and Dapr resumes normal operations once it detects that the application is responsive again.
|
||||
|
||||
App health checks are disabled by default.
|
||||
|
||||
<img src="/images/observability-app-health.webp" width="800" alt="Diagram showing the app health feature. Running Dapr with app health enabled causes Dapr to periodically probe the app for its health.">
|
||||
|
||||
### App health checks vs platform-level health checks
|
||||
## App health checks vs platform-level health checks
|
||||
|
||||
App health checks in Dapr are meant to be complementary to, and not replace, any platform-level health checks, like [liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) when running on Kubernetes.
|
||||
|
||||
Platform-level health checks (or liveness probes) generally ensure that the application is running, and cause the platform to restart the application in case of failures.
|
||||
|
||||
Unlike platform-level health checks, Dapr's app health checks focus on pausing work to an application that is currently unable to accept it, but is expected to be able to resume accepting work *eventually*. Goals include:
|
||||
|
||||
- Not bringing more load to an application that is already overloaded.
|
||||
|
@ -36,7 +40,9 @@ Unlike platform-level health checks, Dapr's app health checks focus on pausing w
|
|||
|
||||
In this regard, Dapr's app health checks are "softer", waiting for an application to be able to process work, rather than terminating the running process in a "hard" way.
|
||||
|
||||
> Note that for Kubernetes, a failing App Health check won't remove a pod from service discovery: this remains the responsibility of the Kubernetes liveness probe, _not_ Dapr.
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
For Kubernetes, a failing app health check won't remove a pod from service discovery: this remains the responsibility of the Kubernetes liveness probe, _not_ Dapr.
|
||||
{{% /alert %}}
|
||||
|
||||
## Configuring app health checks
|
||||
|
||||
|
@ -52,34 +58,46 @@ The full list of options are listed in this table:
|
|||
| CLI flags | Kubernetes deployment annotation | Description | Default value |
|
||||
| ----------------------------- | ----------------------------------- | ----------- | ------------- |
|
||||
| `--enable-app-health-check` | `dapr.io/enable-app-health-check` | Boolean that enables the health checks | Disabled |
|
||||
| `--app-health-check-path` | `dapr.io/app-health-check-path` | Path that Dapr invokes for health probes when the app channel is HTTP (this value is ignored if the app channel is using gRPC) | `/healthz` |
|
||||
| `--app-health-probe-interval` | `dapr.io/app-health-probe-interval` | Number of *seconds* between each health probe | `5` |
|
||||
| `--app-health-probe-timeout` | `dapr.io/app-health-probe-timeout` | Timeout in *milliseconds* for health probe requests | `500` |
|
||||
| `--app-health-threshold` | `dapr.io/app-health-threshold` | Max number of consecutive failures before the app is considered unhealthy | `3` |
|
||||
| [`--app-health-check-path`]({{< ref "app-health.md#health-check-paths" >}}) | `dapr.io/app-health-check-path` | Path that Dapr invokes for health probes when the app channel is HTTP (this value is ignored if the app channel is using gRPC) | `/healthz` |
|
||||
| [`--app-health-probe-interval`]({{< ref "app-health.md#intervals-timeouts-and-thresholds" >}}) | `dapr.io/app-health-probe-interval` | Number of *seconds* between each health probe | `5` |
|
||||
| [`--app-health-probe-timeout`]({{< ref "app-health.md#intervals-timeouts-and-thresholds" >}}) | `dapr.io/app-health-probe-timeout` | Timeout in *milliseconds* for health probe requests | `500` |
|
||||
| [`--app-health-threshold`]({{< ref "app-health.md#intervals-timeouts-and-thresholds" >}}) | `dapr.io/app-health-threshold` | Max number of consecutive failures before the app is considered unhealthy | `3` |
|
||||
|
||||
> See the [full Dapr arguments and annotations reference]({{<ref arguments-annotations-overview>}}) for all options and how to enable them.
|
||||
> See the [full Dapr arguments and annotations reference]({{< ref arguments-annotations-overview >}}) for all options and how to enable them.
|
||||
|
||||
Additionally, app health checks are impacted by the protocol used for the app channel, which is configured with the `--app-protocol` flag (self-hosted) or the `dapr.io/app-protocol` annotation (Kubernetes); supported values are `http` (default), `grpc`, `https`, `grpcs`, and `h2c` (HTTP/2 Cleartext).
|
||||
Additionally, app health checks are impacted by the protocol used for the app channel, which is configured with the following flag or annotation:
|
||||
|
||||
| CLI flag | Kubernetes deployment annotation | Description | Default value |
|
||||
| ----------------------------- | ----------------------------------- | ----------- | ------------- |
|
||||
| [`--app-protocol`]({{< ref "app-health.md#health-check-paths" >}}) | `dapr.io/app-protocol` | Protocol used for the app channel. Supported values are `http`, `grpc`, `https`, `grpcs`, and `h2c` (HTTP/2 Cleartext). | `http` |
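How the protocol setting changes the probe Dapr makes can be sketched as follows, under the assumption (stated in the table above) that the configured health-check path only applies to HTTP-like channels:

```python
HTTP_LIKE = {"http", "https", "h2c"}
GRPC_LIKE = {"grpc", "grpcs"}

def probe_target(app_protocol, health_check_path="/healthz"):
    """Return (channel_kind, target) for a given app protocol."""
    if app_protocol in HTTP_LIKE:
        # HTTP probes call the configured health-check path.
        return ("http", health_check_path)
    if app_protocol in GRPC_LIKE:
        # gRPC probes call a fixed method; the configured path is ignored.
        return ("grpc", "/dapr.proto.runtime.v1.AppCallbackHealthCheck/HealthCheck")
    raise ValueError(f"unsupported app protocol: {app_protocol}")
```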
|
||||
|
||||
### Health check paths
|
||||
|
||||
#### HTTP
|
||||
When using HTTP (including `http`, `https`, and `h2c`) for `app-protocol`, Dapr performs health probes by making an HTTP call to the path specified in `app-health-check-path`, which is `/health` by default.
|
||||
|
||||
For your app to be considered healthy, the response must have an HTTP status code in the 200-299 range. Any other status code is considered a failure. Dapr is only concerned with the status code of the response, and ignores any response header or body.
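A minimal HTTP health endpoint satisfying this contract might look like the sketch below; the port and handler shape are illustrative, and any web framework works.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_probe_success(status_code):
    # Dapr treats any 2xx response as healthy; headers and body are ignored.
    return 200 <= status_code < 300

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":  # must match app-health-check-path
            self.send_response(204)  # any 2xx works; no body needed
        else:
            self.send_response(404)
        self.end_headers()

# To serve: HTTPServer(("127.0.0.1", 8000), HealthHandler).serve_forever()
```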
|
||||
|
||||
#### gRPC
|
||||
When using gRPC for the app channel (`app-protocol` set to `grpc` or `grpcs`), Dapr invokes the method `/dapr.proto.runtime.v1.AppCallbackHealthCheck/HealthCheck` in your application. Most likely, you will use a Dapr SDK to implement the handler for this method.
|
||||
|
||||
While responding to a health probe request, your app *may* decide to perform additional internal health checks to determine if it's ready to process work from the Dapr runtime. However, this is not required; it's a choice that depends on your application's needs.
|
||||
|
||||
### Intervals, timeouts, and thresholds
|
||||
|
||||
When app health checks are enabled, by default Dapr probes your application every 5 seconds. You can configure the interval, in seconds, with `app-health-probe-interval`. These probes happen regularly, regardless of whether your application is healthy or not.
|
||||
#### Intervals
|
||||
By default, when app health checks are enabled, Dapr probes your application every 5 seconds. You can configure the interval, in seconds, with `app-health-probe-interval`. These probes happen regularly, regardless of whether your application is healthy or not.
|
||||
|
||||
#### Timeouts
|
||||
When the Dapr runtime (sidecar) is initially started, Dapr waits for a successful health probe before considering the app healthy. This means that pub/sub subscriptions, input bindings, and service invocation requests won't be enabled for your application until this first health check is complete and successful.
|
||||
|
||||
Health probe requests are considered successful if the application sends a successful response (as explained above) within the timeout configured in `app-health-probe-timeout`. The default value is 500, corresponding to 500 milliseconds (i.e. half a second).
|
||||
Health probe requests are considered successful if the application sends a successful response (as explained above) within the timeout configured in `app-health-probe-timeout`. The default value is 500, corresponding to 500 milliseconds (half a second).
|
||||
|
||||
#### Thresholds
|
||||
Before Dapr considers an app to have entered an unhealthy state, it will wait for `app-health-threshold` consecutive failures, whose default value is 3. This default value means that your application must fail health probes 3 times *in a row* to be considered unhealthy.
|
||||
|
||||
If you set the threshold to 1, any failure causes Dapr to assume your app is unhealthy and will stop delivering work to it.
|
||||
|
||||
A threshold greater than 1 can help exclude transient failures due to external circumstances. The right value for your application depends on your requirements.
|
||||
|
||||
Thresholds only apply to failures. A single successful response is enough for Dapr to consider your app to be healthy and resume normal operations.
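The threshold behavior described above can be sketched as a simple consecutive-failure counter:

```python
class AppHealth:
    """Sketch of the threshold logic: an app is marked unhealthy only after
    `threshold` consecutive probe failures, and a single successful probe
    restores it immediately."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.healthy = True

    def record(self, probe_ok):
        if probe_ok:
            # One success resets the counter and restores health.
            self.failures = 0
            self.healthy = True
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.healthy = False
        return self.healthy
```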
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Tracing"
|
||||
linkTitle: "Tracing"
|
||||
weight: 300
|
||||
description: Learn more about tracing scenarios and how to use tracing for visibility in your application
|
||||
---
|
|
@ -0,0 +1,113 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Distributed tracing"
|
||||
linkTitle: "Distributed tracing"
|
||||
weight: 300
|
||||
description: "Use tracing to get visibility into your application"
|
||||
---
|
||||
|
||||
Dapr uses the Open Telemetry (OTEL) and Zipkin protocols for distributed traces. OTEL is the industry standard and is the recommended trace protocol to use.
|
||||
|
||||
Most observability tools support OTEL, including:
|
||||
- [Google Cloud Operations](https://cloud.google.com/products/operations)
|
||||
- [New Relic](https://newrelic.com)
|
||||
- [Azure Monitor](https://azure.microsoft.com/services/monitor/)
|
||||
- [Datadog](https://www.datadoghq.com)
|
||||
- Instana
|
||||
- [Jaeger](https://www.jaegertracing.io/)
|
||||
- [SignalFX](https://www.signalfx.com/)
|
||||
|
||||
## Scenarios
|
||||
|
||||
Tracing is used with the service invocation and pub/sub APIs. You can flow trace context between services that use these APIs. There are two scenarios for how tracing is used:
|
||||
|
||||
1. Dapr generates the trace context and you propagate the trace context to another service.
|
||||
1. You generate the trace context and Dapr propagates the trace context to a service.
|
||||
|
||||
### Scenario 1: Dapr generates trace context headers
|
||||
|
||||
#### Propagating sequential service calls
|
||||
|
||||
Dapr takes care of creating the trace headers. However, when there are more than two services, you're responsible for propagating the trace headers between them. Let's go through the scenarios with examples:
|
||||
|
||||
##### Single service invocation call
|
||||
|
||||
For example, `service A -> service B`.
|
||||
|
||||
Dapr generates the trace headers in `service A`, which are then propagated from `service A` to `service B`. No further propagation is needed.
|
||||
|
||||
##### Multiple sequential service invocation calls
|
||||
|
||||
For example, `service A -> service B -> propagate trace headers to -> service C` and so on to further Dapr-enabled services.
|
||||
|
||||
Dapr generates the trace headers at the beginning of the request in `service A`, which are then propagated to `service B`. You are now responsible for taking the headers and propagating them to `service C`, since this is specific to your application.
|
||||
|
||||
In other words, if the app is calling to Dapr and wants to trace with an existing trace header (span), it must always propagate to Dapr (from `service B` to `service C`, in this example). Dapr always propagates trace spans to an application.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
No helper methods are exposed in Dapr SDKs to propagate and retrieve trace context. You need to use HTTP/gRPC clients to propagate and retrieve trace headers through HTTP headers and gRPC metadata.
|
||||
{{% /alert %}}
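Since no SDK helper exists, propagation amounts to copying the W3C trace headers from the incoming request onto the next outbound call (from `service B` to `service C`). A minimal, framework-agnostic sketch using plain dicts for headers:

```python
TRACE_HEADERS = ("traceparent", "tracestate")

def extract_trace_context(incoming_headers):
    """Copy the W3C trace headers out of an incoming request's headers so
    they can be attached to the next outbound call. Header names are
    matched case-insensitively, as HTTP requires."""
    lowered = {k.lower(): v for k, v in incoming_headers.items()}
    return {h: lowered[h] for h in TRACE_HEADERS if h in lowered}
```

The returned dict is then merged into the headers of the outbound HTTP request or gRPC metadata.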
|
||||
|
||||
##### Request is from external endpoint
|
||||
|
||||
For example, `from a gateway service to a Dapr-enabled service A`.
|
||||
|
||||
An external gateway ingress calls Dapr, which generates the trace headers and calls `service A`. `Service A` then calls `service B` and further Dapr-enabled services.
|
||||
|
||||
You must propagate the headers from `service A` to `service B`. For example: `Ingress -> service A -> propagate trace headers -> service B`. This is similar to the [multiple sequential service invocation calls]({{< ref "tracing-overview.md#multiple-sequential-service-invocation-calls" >}}) case above.
|
||||
|
||||
##### Pub/sub messages
|
||||
|
||||
Dapr generates the trace headers in the published message topic. These trace headers are propagated to any services listening on that topic.
|
||||
|
||||
#### Propagating multiple different service calls
|
||||
|
||||
In the following scenarios, Dapr does some of the work for you, with you then creating or propagating trace headers.
|
||||
|
||||
##### Multiple service calls to different services from single service
|
||||
|
||||
When you are calling multiple services from a single service, you need to propagate the trace headers. For example:
|
||||
|
||||
```
|
||||
service A -> service B
|
||||
[ .. some code logic ..]
|
||||
service A -> service C
|
||||
[ .. some code logic ..]
|
||||
service A -> service D
|
||||
[ .. some code logic ..]
|
||||
```
|
||||
|
||||
In this case:
|
||||
1. When `service A` first calls `service B`, Dapr generates the trace headers in `service A`.
|
||||
1. The trace headers in `service A` are propagated to `service B`.
|
||||
1. These trace headers are returned in the response from `service B` as part of response headers.
|
||||
1. You then need to propagate the returned trace context to the next services, like `service C` and `service D`, as Dapr does not know you want to reuse the same header.
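Reusing the returned trace context across the subsequent calls can be sketched as follows. Here `call` stands in for whatever HTTP/gRPC client you use, and the exact shape of its response headers is an assumption:

```python
def call_services_with_shared_trace(call, services, initial_headers=None):
    """Call each service in turn, carrying forward the traceparent returned
    by each response so that all calls share one trace.
    `call(service, headers)` is a stand-in for your HTTP/gRPC client and
    must return the response headers as a dict."""
    headers = dict(initial_headers or {})
    for service in services:
        response_headers = call(service, headers)
        # Reuse the trace context from the response on the next call;
        # Dapr does not do this for you.
        if "traceparent" in response_headers:
            headers["traceparent"] = response_headers["traceparent"]
    return headers
```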
|
||||
|
||||
### Scenario 2: You generate your own trace context headers from non-Daprized applications
|
||||
|
||||
Generating your own trace context headers is less common and typically not required when calling Dapr.
|
||||
|
||||
However, there are scenarios where you could specifically choose to add W3C trace headers into a service call. For example, you have an existing application that does not use Dapr. In this case, Dapr still propagates the trace context headers for you.
|
||||
|
||||
If you decide to generate trace headers yourself, there are three ways this can be done:
|
||||
|
||||
1. Standard OpenTelemetry SDK
|
||||
|
||||
You can use the industry standard [OpenTelemetry SDKs](https://opentelemetry.io/docs/instrumentation/) to generate trace headers and pass these trace headers to a Dapr-enabled service. _This is the preferred method_.
|
||||
|
||||
1. Vendor SDK
|
||||
|
||||
You can use a vendor SDK that provides a way to generate W3C trace headers and pass them to a Dapr-enabled service.
|
||||
|
||||
1. W3C trace context
|
||||
|
||||
You can handcraft a trace context following [W3C trace context specifications](https://www.w3.org/TR/trace-context/) and pass them to a Dapr-enabled service.
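For the third option, a handcrafted `traceparent` value follows the `version-traceid-parentid-flags` layout defined by the W3C specification. A minimal sketch:

```python
import secrets

def make_traceparent(sampled=True):
    """Build a W3C traceparent header value: version "00", a 16-byte
    trace-id, an 8-byte parent-id, and trace-flags (01 = sampled)."""
    trace_id = secrets.token_hex(16)   # 32 hex chars; must not be all zeros
    parent_id = secrets.token_hex(8)   # 16 hex chars; must not be all zeros
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{parent_id}-{flags}"
```

The resulting value is passed in the `traceparent` HTTP header (or gRPC metadata) on the call into Dapr.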
Read [the trace context overview]({{< ref w3c-tracing-overview >}}) for more background and examples on W3C trace context and headers.

## Related Links

- [Observability concepts]({{< ref observability-concept.md >}})
- [W3C Trace Context for distributed tracing]({{< ref w3c-tracing-overview >}})
- [W3C Trace Context specification](https://www.w3.org/TR/trace-context/)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)
@@ -0,0 +1,91 @@
---
type: docs
title: "W3C trace context"
linkTitle: "W3C trace context"
weight: 2000
description: Background and scenarios for using W3C tracing with Dapr
---

Dapr uses the [Open Telemetry protocol](https://opentelemetry.io/), which in turn uses the [W3C trace context](https://www.w3.org/TR/trace-context/) for distributed tracing for both service invocation and pub/sub messaging. Dapr generates and propagates the trace context information, which can be sent to observability tools for visualization and querying.
## Background

Distributed tracing is a methodology implemented by tracing tools to follow, analyze, and debug a transaction across multiple software components.

Typically, a distributed trace traverses more than one service, which requires it to be uniquely identifiable. **Trace context propagation** passes along this unique identification.

In the past, trace context propagation was implemented individually by each tracing vendor. In multi-vendor environments, this causes interoperability problems, such as:

- Traces collected by different tracing vendors can't be correlated, as there is no shared unique identifier.
- Traces crossing boundaries between different tracing vendors can't be propagated, as there is no uniformly agreed-upon set of identification that is forwarded.
- Vendor-specific metadata might be dropped by intermediaries.
- Cloud platform vendors, intermediaries, and service providers cannot guarantee support for trace context propagation, as there is no standard to follow.

Previously, most applications were monitored by a single tracing vendor and stayed within the boundaries of a single platform provider, so these problems didn't have a significant impact.

Today, an increasing number of applications are distributed and leverage multiple middleware services and cloud platforms. This transformation of modern applications requires a distributed tracing context propagation standard.

The [W3C trace context specification](https://www.w3.org/TR/trace-context/) defines a universally agreed-upon format for the exchange of trace context propagation data (referred to as trace context). Trace context solves the above problems by providing:

- A unique identifier for individual traces and requests, allowing trace data from multiple providers to be linked together.
- An agreed-upon mechanism to forward vendor-specific trace data, avoiding broken traces when multiple tracing tools participate in a single transaction.
- An industry standard that intermediaries, platforms, and hardware providers can support.

This unified approach to propagating trace data improves visibility into the behavior of distributed applications, facilitating problem and performance analysis.
## W3C trace context and headers format

### W3C trace context

Dapr uses the standard W3C trace context headers.

- For HTTP requests, Dapr uses the `traceparent` header.
- For gRPC requests, Dapr uses the `grpc-trace-bin` header.

When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it passes the trace ID along the call chain.

### W3C trace headers

These are the specific trace context headers that are generated and propagated by Dapr for HTTP and gRPC.
{{< tabs "HTTP" "gRPC" >}}

<!-- HTTP -->
{{% codetab %}}

Copy these headers when propagating a trace context header from an HTTP response to an HTTP request:

**Traceparent header**

The traceparent header represents the incoming request in a tracing system in a common format, understood by all vendors:

```
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
```

[Learn more about the traceparent fields](https://www.w3.org/TR/trace-context/#traceparent-header).

**Tracestate header**

The tracestate header includes the parent in a potentially vendor-specific format:

```
tracestate: congo=t61rcWkgMzE
```

[Learn more about the tracestate fields](https://www.w3.org/TR/trace-context/#tracestate-header).

{{% /codetab %}}

<!-- gRPC -->
{{% codetab %}}

In gRPC API calls, trace context is passed through the `grpc-trace-bin` header.

{{% /codetab %}}

{{< /tabs >}}
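For illustration, the four traceparent fields shown in the HTTP example can be split apart as follows (a sketch, not a full validator):

```python
def parse_traceparent(value: str) -> dict:
    """Split a W3C traceparent value into its four dash-separated fields."""
    version, trace_id, parent_id, trace_flags = value.split("-")
    # trace-id is 16 bytes and parent-id 8 bytes, both hex-encoded
    assert len(trace_id) == 32 and len(parent_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "trace_flags": trace_flags,
    }

fields = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
```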
## Related Links

- [Learn more about distributed tracing in Dapr]({{< ref tracing-overview.md >}})
- [W3C Trace Context specification](https://www.w3.org/TR/trace-context/)
@@ -11,7 +11,7 @@ Dapr provides a way to determine its health using an [HTTP `/healthz` endpoint](
- Probed for its health
- Determined for readiness and liveness

In this guide, you learn how the Dapr `/healthz` endpoint integrates with health probes from the application hosting platform (for example, Kubernetes).

When deploying Dapr to a hosting platform like Kubernetes, the Dapr health endpoint is automatically configured for you.
@@ -23,20 +23,10 @@ Dapr actors also have a health API endpoint where Dapr probes the application fo
Kubernetes uses *readiness* and *liveness* probes to determine the health of the container.

### Liveness

The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock (a running application that is unable to make progress). Restarting a container in such a state can help to make the application more available despite having bugs.

#### How to configure a liveness probe in Kubernetes

In the pod configuration file, the liveness probe is added in the containers spec section as shown below:
@@ -53,7 +43,14 @@ In the above example, the `periodSeconds` field specifies that the kubelet shoul
Any HTTP status code between 200 and 399 indicates success; any other status code indicates failure.
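That success rule can be expressed directly as a small predicate:

```python
def probe_succeeded(status_code: int) -> bool:
    # Kubernetes treats any HTTP status in the range 200-399 as a
    # successful probe; everything else counts as a failure.
    return 200 <= status_code <= 399
```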
### Readiness

The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this readiness signal is to control which pods are used as backends for Kubernetes services. When a pod is not ready, it is removed from Kubernetes service load balancers.

{{% alert title="Note" color="primary" %}}
The Dapr sidecar will be in a ready state once the application is accessible on its configured port. The application cannot access the Dapr components during application start up/initialization.
{{% /alert %}}

#### How to configure a readiness probe in Kubernetes

Readiness probes are configured similarly to liveness probes. The only difference is that you use the `readinessProbe` field instead of the `livenessProbe` field:
@@ -66,7 +63,13 @@ Readiness probes are configured similarly to liveness probes. The only differenc
  periodSeconds: 3
```

### Sidecar Injector

When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr `healthz` endpoint. This is done by the "Sidecar Injector" system service. The integration with the kubelet is shown in the diagram below.

<img src="/images/security-mTLS-dapr-system-services.png" width="800" alt="Diagram of Dapr services interacting" />

#### How the Dapr sidecar health endpoint is configured with Kubernetes

As mentioned above, this configuration is done automatically by the Sidecar Injector service. This section describes the specific values that are set on the liveness and readiness probes.
@@ -91,7 +94,7 @@ Dapr has its HTTP health endpoint `/v1.0/healthz` on port 3500. This can be used
  failureThreshold: 3
```

## Related links

- [Endpoint health API]({{< ref health_api.md >}})
- [Actor health API]({{< ref "actors_api.md#health-check" >}})
@ -1,118 +0,0 @@
|
|||
---
type: docs
title: "Distributed tracing"
linkTitle: "Distributed tracing"
weight: 100
description: "Use tracing to get visibility into your application"
---

Dapr uses the Open Telemetry (OTEL) and Zipkin protocols for distributed traces. OTEL is the industry standard and is the recommended trace protocol to use.

Most observability tools support OTEL, including [Google Cloud Operations](https://cloud.google.com/products/operations), [New Relic](https://newrelic.com), [Azure Monitor](https://azure.microsoft.com/services/monitor/), [Datadog](https://www.datadoghq.com), Instana, [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
## Scenarios

Tracing is used with the service invocation and pub/sub APIs. You can flow trace context between services that use these APIs.

There are two scenarios for how tracing is used:

1. Dapr generates the trace context and you propagate the trace context to another service.
2. You generate the trace context and Dapr propagates the trace context to a service.
### Propagating sequential service calls

Dapr takes care of creating the trace headers. However, when there are more than two services, you're responsible for propagating the trace headers between them. Let's go through the scenarios with examples:

1. Single service invocation call (`service A -> service B`)

   Dapr generates the trace headers in service A, which are then propagated from service A to service B. No further propagation is needed.

2. Multiple sequential service invocation calls (`service A -> service B -> service C`)

   Dapr generates the trace headers at the beginning of the request in service A, which are then propagated to service B. You are now responsible for taking the headers and propagating them to service C, since this is specific to your application.

   `service A -> service B -> propagate trace headers to -> service C` and so on to further Dapr-enabled services.

   In other words, if the app is calling to Dapr and wants to trace with an existing span (trace header), it must always propagate to Dapr (from service B to service C in this case). Dapr always propagates trace spans to an application.

   {{% alert title="Note" color="primary" %}}
   There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use HTTP/gRPC clients to propagate and retrieve trace headers through HTTP headers and gRPC metadata.
   {{% /alert %}}

3. Request is from an external endpoint (for example, `from a gateway service to a Dapr-enabled service A`)

   An external gateway ingress calls Dapr, which generates the trace headers and calls service A. Service A then calls service B and further Dapr-enabled services. You must propagate the headers from service A to service B: `Ingress -> service A -> propagate trace headers -> service B`. This is similar to case 2 above.

4. Pub/sub messages

   Dapr generates the trace headers in the published message topic. These trace headers are propagated to any services listening on that topic.
### Propagating multiple different service calls

In the following scenarios, Dapr does some of the work for you and you need to either create or propagate trace headers.

1. Multiple service calls to different services from a single service

   When you are calling multiple services from a single service (see example below), you need to propagate the trace headers:

   ```
   service A -> service B
   [ .. some code logic ..]
   service A -> service C
   [ .. some code logic ..]
   service A -> service D
   [ .. some code logic ..]
   ```

   In this case, when service A first calls service B, Dapr generates the trace headers in service A, which are then propagated to service B. These trace headers are returned in the response from service B as part of the response headers. You then need to propagate the returned trace context to the next services, service C and service D, as Dapr does not know you want to reuse the same header.
### Generating your own trace context headers from non-Daprized applications

Generating your own trace context headers is more unusual and typically not required when calling Dapr. However, there are scenarios where you could specifically choose to add W3C trace headers into a service call; for example, you have an existing application that does not use Dapr. In this case, Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done:

1. You can use the industry standard [OpenTelemetry SDKs](https://opentelemetry.io/docs/instrumentation/) to generate trace headers and pass these trace headers to a Dapr-enabled service. This is the preferred method.

2. You can use a vendor SDK that provides a way to generate W3C trace headers and pass them to a Dapr-enabled service.

3. You can handcraft a trace context following the [W3C trace context specification](https://www.w3.org/TR/trace-context/) and pass it to a Dapr-enabled service.
## W3C trace context

Dapr uses the standard W3C trace context headers.

- For HTTP requests, Dapr uses the `traceparent` header.
- For gRPC requests, Dapr uses the `grpc-trace-bin` header.

When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it passes the trace ID along the call chain.

Read the [trace context overview]({{< ref w3c-tracing-overview >}}) for more background on W3C trace context.

## W3C trace headers

These are the specific trace context headers that are generated and propagated by Dapr for HTTP and gRPC.
### Trace context HTTP headers format

When propagating a trace context header from an HTTP response to an HTTP request, you copy these headers.

#### Traceparent header

The traceparent header represents the incoming request in a tracing system in a common format, understood by all vendors. Here's an example of a traceparent header:

`traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01`

The traceparent fields are detailed [in the specification](https://www.w3.org/TR/trace-context/#traceparent-header).

#### Tracestate header

The tracestate header includes the parent in a potentially vendor-specific format:

`tracestate: congo=t61rcWkgMzE`

The tracestate fields are detailed [in the specification](https://www.w3.org/TR/trace-context/#tracestate-header).

### Trace context gRPC headers format

In gRPC API calls, trace context is passed through the `grpc-trace-bin` header.
## Related Links

- [Observability concepts]({{< ref observability-concept.md >}})
- [W3C Trace Context for distributed tracing]({{< ref w3c-tracing-overview >}})
- [W3C Trace Context specification](https://www.w3.org/TR/trace-context/)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)
@ -1,33 +0,0 @@
|
|||
---
type: docs
title: "Trace context"
linkTitle: "Trace context"
weight: 4000
description: Background and scenarios for using W3C tracing with Dapr
---

Dapr uses the [Open Telemetry protocol](https://opentelemetry.io/), which in turn uses the [W3C trace context](https://www.w3.org/TR/trace-context/) for distributed tracing for both service invocation and pub/sub messaging. Dapr generates and propagates the trace context information, which can be sent to observability tools for visualization and querying.
## Background

Distributed tracing is a methodology implemented by tracing tools to follow, analyze, and debug a transaction across multiple software components. Typically, a distributed trace traverses more than one service, which requires it to be uniquely identifiable. Trace context propagation passes along this unique identification.

In the past, trace context propagation was typically implemented individually by each tracing vendor. In multi-vendor environments, this causes interoperability problems, such as:

- Traces that are collected by different tracing vendors cannot be correlated, as there is no shared unique identifier.
- Traces that cross boundaries between different tracing vendors cannot be propagated, as there is no uniformly agreed-upon set of identification that is forwarded.
- Vendor-specific metadata might be dropped by intermediaries.
- Cloud platform vendors, intermediaries, and service providers cannot guarantee support for trace context propagation, as there is no standard to follow.

In the past, these problems did not have a significant impact, as most applications were monitored by a single tracing vendor and stayed within the boundaries of a single platform provider. Today, an increasing number of applications are distributed and leverage multiple middleware services and cloud platforms.

This transformation of modern applications called for a distributed tracing context propagation standard. The [W3C trace context specification](https://www.w3.org/TR/trace-context/) defines a universally agreed-upon format for the exchange of trace context propagation data, referred to as trace context. Trace context solves the problems described above by:

* Providing a unique identifier for individual traces and requests, allowing trace data from multiple providers to be linked together.
* Providing an agreed-upon mechanism to forward vendor-specific trace data and avoid broken traces when multiple tracing tools participate in a single transaction.
* Providing an industry standard that intermediaries, platforms, and hardware providers can support.

A unified approach for propagating trace data improves visibility into the behavior of distributed applications, facilitating problem and performance analysis.

## Related Links

- [W3C Trace Context specification](https://www.w3.org/TR/trace-context/)
@ -5,3 +5,11 @@ linkTitle: "Publish & subscribe"
|
|||
weight: 30
description: Secure, scalable messaging between services
---

{{% alert title="More about Dapr Pub/sub" color="primary" %}}
Learn more about how to use Dapr Pub/sub:
- Try the [Pub/sub quickstart]({{< ref pubsub-quickstart.md >}}).
- Explore pub/sub via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
- Review the [Pub/sub API reference documentation]({{< ref pubsub_api.md >}}).
- Browse the supported [pub/sub component specs]({{< ref supported-pubsub >}}).
{{% /alert %}}
@ -100,16 +100,29 @@ Dapr solves multi-tenancy at-scale with [namespaces for consumer groups]({{< ref
|
|||
### At-least-once guarantee

Dapr guarantees at-least-once semantics for message delivery. When an application publishes a message to a topic using the pub/sub API, Dapr ensures the message is delivered *at least once* to every subscriber.

Even if the message fails to deliver, or your application crashes, Dapr attempts to redeliver the message until successful delivery.

All Dapr pub/sub components support the at-least-once guarantee.
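The redelivery behavior can be sketched as a loop that keeps attempting delivery until the subscriber handles the message successfully. This is a simplified model for illustration, not Dapr's actual implementation:

```python
def deliver_at_least_once(message, handler, max_attempts=5):
    """Redeliver `message` until `handler` succeeds (simplified model)."""
    for attempt in range(1, max_attempts + 1):
        try:
            handler(message)
            return attempt  # delivered; the subscriber may have seen it more than once
        except Exception:
            continue  # subscriber failed: redeliver
    raise RuntimeError("delivery failed after max attempts")

# A subscriber that fails twice before succeeding still receives the message
calls = {"n": 0}
def flaky_subscriber(msg):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")

attempts_used = deliver_at_least_once("order-created", flaky_subscriber)
```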
### Consumer groups and competing consumers pattern

Dapr handles the burden of dealing with consumer groups and the competing consumers pattern. In the competing consumers pattern, multiple application instances using a single consumer group compete for the message. Dapr enforces the competing consumers pattern when replicas use the same `app-id` without explicit consumer group overrides.

When multiple instances of the same application (with the same `app-id`) subscribe to a topic, Dapr delivers each message to *only one instance of **that** application*. This concept is illustrated in the diagram below.

<img src="/images/pubsub-overview-pattern-competing-consumers.png" width=1000>
<br></br>

Similarly, if two different applications (with different `app-id`s) subscribe to the same topic, Dapr delivers each message to *only one instance of **each** application*.
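The delivery rule can be modeled with a tiny dispatcher: every subscribing application receives each message once, routed to a single one of its instances. This is an illustrative simulation with made-up app and instance names, not Dapr internals:

```python
import itertools

def build_dispatcher(instances_by_app):
    """Each subscribing app gets every message once, on a single instance."""
    cycles = {app: itertools.cycle(insts) for app, insts in instances_by_app.items()}

    def dispatch(message):
        # one delivery per app-id, round-robin across that app's instances
        return {app: next(cycle) for app, cycle in cycles.items()}

    return dispatch

dispatch = build_dispatcher({
    "checkout": ["instance-1", "instance-2"],  # two competing replicas
    "email": ["instance-1"],                   # single replica
})
first = dispatch("order-1")
second = dispatch("order-2")
# Each message went to exactly one checkout instance and one email instance.
```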
Not all Dapr pub/sub components support the competing consumers pattern. Currently, the following (non-exhaustive) pub/sub components support this:

- [Apache Kafka]({{< ref setup-apache-kafka >}})
- [Azure Service Bus Queues]({{< ref setup-azure-servicebus-queues >}})
- [RabbitMQ]({{< ref setup-rabbitmq >}})
- [Redis Streams]({{< ref setup-redis-pubsub >}})

### Scoping topics for added security
@ -187,7 +187,11 @@ The `/checkout` endpoint matches the `route` defined in the subscriptions and th
|
|||
### Programmatic subscriptions

The dynamic programmatic approach returns the `routes` JSON structure within the code, unlike the declarative approach's `route` YAML structure.

> **Note:** Programmatic subscriptions are only read once during application start-up. You cannot _dynamically_ add new programmatic subscriptions, only add new ones at compile time.
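As a minimal sketch, a programmatic subscription handler returns a list of subscription objects with a `routes` structure; the component name, topic, and route below are illustrative:

```python
def subscriptions():
    """Subscription list served once at application start-up (illustrative)."""
    return [{
        "pubsubname": "pubsub",              # illustrative component name
        "topic": "orders",                   # illustrative topic
        "routes": {"default": "/checkout"},  # endpoint that handles the messages
    }]

subs = subscriptions()
```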
In the example below, you define the values found in the [declarative YAML subscription](#declarative-subscriptions) above within the application code.
{{< tabs ".NET" Java Python JavaScript Go>}}
@ -219,7 +223,7 @@ Both of the handlers defined above also need to be mapped to configure the `dapr
|
|||
app.UseEndpoints(endpoints =>
{
    endpoints.MapSubscribeHandler();
});
```

{{% /codetab %}}
@ -5,3 +5,11 @@ linkTitle: "Secrets management"
|
|||
weight: 70
description: Securely access secrets from your application
---

{{% alert title="More about Dapr Secrets" color="primary" %}}
Learn more about how to use Dapr Secrets:
- Try the [Secrets quickstart]({{< ref secrets-quickstart.md >}}).
- Explore secrets via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
- Review the [Secrets API reference documentation]({{< ref secrets_api.md >}}).
- Browse the supported [secrets component specs]({{< ref supported-secret-stores >}}).
{{% /alert %}}
@ -5,3 +5,10 @@ linkTitle: "Service invocation"
|
|||
weight: 10
description: Perform direct, secure, service-to-service method calls
---

{{% alert title="More about Dapr Service Invocation" color="primary" %}}
Learn more about how to use Dapr Service Invocation:
- Try the [Service Invocation quickstart]({{< ref serviceinvocation-quickstart.md >}}).
- Explore service invocation via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
- Review the [Service Invocation API reference documentation]({{< ref service_invocation_api.md >}}).
{{% /alert %}}
@ -5,3 +5,11 @@ linkTitle: "State management"
|
|||
weight: 20
description: Create long running stateful services
---

{{% alert title="More about Dapr State Management" color="primary" %}}
Learn more about how to use Dapr State Management:
- Try the [State Management quickstart]({{< ref statemanagement-quickstart.md >}}).
- Explore state management via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
- Review the [State Management API reference documentation]({{< ref state_api.md >}}).
- Browse the supported [state management component specs]({{< ref supported-state-stores >}}).
{{% /alert %}}
@ -77,7 +77,7 @@ using Dapr.Client;
|
|||
await client.SaveStateAsync(storeName, stateKeyName, state, metadata: new Dictionary<string, string>() {
    {
        "ttlInSeconds", "120"
    }
});
```
@ -4,4 +4,11 @@ title: "Workflow"
|
|||
linkTitle: "Workflow"
weight: 100
description: "Orchestrate logic across various microservices"
---

{{% alert title="More about Dapr Workflow" color="primary" %}}
Learn more about how to use Dapr Workflow:
- Try the [Workflow quickstart]({{< ref workflow-quickstart.md >}}).
- Explore workflow via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
- Review the [Workflow API reference documentation]({{< ref workflow_api.md >}}).
{{% /alert %}}
@ -3,5 +3,5 @@ type: docs
|
|||
title: "Authenticate to Azure"
linkTitle: "Authenticate to Azure"
weight: 1600
description: "Learn about authenticating Azure components using Azure Active Directory or Managed Identities"
---
@ -9,59 +9,74 @@ aliases:
|
|||
weight: 10000
|
||||
---
|
||||
|
||||
Certain Azure components for Dapr offer support for the *common Azure authentication layer*, which enables applications to access data stored in Azure resources by authenticating with Azure Active Directory (Azure AD). Thanks to this:
|
||||
- Administrators can leverage all the benefits of fine-tuned permissions with Role-Based Access Control (RBAC).
|
||||
- Applications running on Azure services such as Azure Container Apps, Azure Kubernetes Service, Azure VMs, or any other Azure platform services can leverage [Managed Service Identities (MSI)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
|
||||
Most Azure components for Dapr support authenticating with Azure AD (Azure Active Directory). Thanks to this:
|
||||
|
||||
- Administrators can leverage all the benefits of fine-tuned permissions with Azure Role-Based Access Control (RBAC).
|
||||
- Applications running on Azure services such as Azure Container Apps, Azure Kubernetes Service, Azure VMs, or any other Azure platform services can leverage [Managed Identities (MI)](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) and [Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview). These offer the ability to authenticate your applications without having to manage sensitive credentials.
|
||||
|
||||
## About authentication with Azure AD
|
||||
|
||||
Azure AD is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services.
|
||||
|
||||
Azure AD is built on top of open standards such OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Key Vault, Cosmos DB, etc.
|
||||
Azure AD is built on top of open standards such OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Service Bus, Azure Key Vault, Azure Cosmos DB, Azure Database for Postgres, Azure SQL, etc.
|
||||
|
||||
> In Azure terminology, an application is also called a "Service Principal".
|
||||
|
||||
Some Azure components offer alternative authentication methods, such as systems based on "master keys" or "shared keys". Although both master keys and shared keys are valid and supported by Dapr, you should authenticate your Dapr components using Azure AD. Using Azure AD offers benefits like the following.
|
||||
Some Azure components offer alternative authentication methods, such as systems based on "shared keys" or "access tokens". Although these are valid and supported by Dapr, you should authenticate your Dapr components using Azure AD whenever possible to take advantage of many benefits, including:
|
||||
|
||||
### Managed Service Identities
|
||||
- [Managed Identities and Workload Identity](#managed-identities-and-workload-identity)
|
||||
- [Role-Based Access Control](#role-based-access-control)
|
||||
- [Auditing](#auditing)
|
||||
- [(Optional) Authentication using certificates](#optional-authentication-using-certificates)
|
||||
|
||||
With Managed Service Identities (MSI), your application can authenticate with Azure AD and obtain an access token to make requests to Azure services. When your application is running on a supported Azure service, an identity for your application can be assigned at the infrastructure level.
|
||||
### Managed Identities and Workload Identity
|
||||
|
||||
With Managed Identities (MI), your application can authenticate with Azure AD and obtain an access token to make requests to Azure services. When your application is running on a supported Azure service (such as Azure VMs, Azure Container Apps, Azure Web Apps, etc.), an identity for your application can be assigned at the infrastructure level.
|
||||
|
||||
When using MI, your code doesn't have to deal with credentials, which:
|
||||
|
||||
Once using MSI, your code doesn't have to deal with credentials, which:
|
||||
- Removes the challenge of managing credentials safely
|
||||
- Allows greater separation of concerns between development and operations teams
|
||||
- Reduces the number of people with access to credentials
|
||||
- Simplifies operational aspects, especially when multiple environments are used
|
||||
|
||||
### Role-based Access Control
|
||||
Applications running on Azure Kubernetes Service can similarly leverage [Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) to automatically provide an identity to individual pods.
|
||||
|
||||
When using Role-Based Access Control (RBAC) with supported services, permissions given to an application can be fine-tuned. For example, you can restrict access to a subset of data or make it read-only.
|
||||
### Role-Based Access Control
|
||||
|
||||
When using Azure Role-Based Access Control (RBAC) with supported services, permissions given to an application can be fine-tuned. For example, you can restrict access to a subset of data or make the access read-only.
|
||||
|
||||
### Auditing
|
||||
|
||||
Using Azure AD provides an improved auditing experience for access.
|
||||
Using Azure AD provides an improved auditing experience for access. Tenant administrators can consult audit logs to track authentication requests.
|
||||
|
||||
### (Optional) Authenticate using certificates
|
||||
### (Optional) Authentication using certificates
|
||||
|
||||
While Azure AD allows you to use MSI or RBAC, you still have the option to authenticate using certificates.
|
||||
While Azure AD allows you to use MI, you still have the option to authenticate using certificates.
|
||||
|
||||
## Support for other Azure environments
|
||||
|
||||
By default, Dapr components are configured to interact with Azure resources in the "public cloud". If your application is deployed to another cloud, such as Azure China, Azure Government, or Azure Germany, you can enable that for supported components by setting the `azureEnvironment` metadata property to one of the supported values:
|
||||
By default, Dapr components are configured to interact with Azure resources in the "public cloud". If your application is deployed to another cloud, such as Azure China or Azure Government ("sovereign clouds"), you can enable that for supported components by setting the `azureEnvironment` metadata property to one of the supported values:
|
||||
|
||||
- Azure public cloud (default): `"AZUREPUBLICCLOUD"`
|
||||
- Azure China: `"AZURECHINACLOUD"`
|
||||
- Azure Government: `"AZUREUSGOVERNMENTCLOUD"`
|
||||
- Azure Germany: `"AZUREGERMANCLOUD"`
|
||||
- Azure public cloud (default): `"AzurePublicCloud"`
|
||||
- Azure China: `"AzureChinaCloud"`
|
||||
- Azure Government: `"AzureUSGovernmentCloud"`
|
||||
|
||||
> Support for sovereign clouds is experimental.
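For example, a component targeting the Azure China cloud could set the property as follows. This is a sketch; the component name and type are illustrative, and the only relevant line is the `azureEnvironment` entry:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.blobstorage
  version: v1
  metadata:
    # Target the Azure China sovereign cloud instead of the default public cloud
    - name: azureEnvironment
      value: "AzureChinaCloud"
```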
|
||||
|
||||
## Credentials metadata fields
|
||||
|
||||
To authenticate with Azure AD, you will need to add the following credentials as values in the metadata for your [Dapr component]({{< ref "#example-usage-in-a-dapr-component" >}}).
|
||||
To authenticate with Azure AD, you will need to add the following credentials as values in the metadata for your [Dapr component](#example-usage-in-a-dapr-component).
|
||||
|
||||
### Metadata options
|
||||
|
||||
Depending on how you've passed credentials to your Dapr services, you have multiple metadata options.
|
||||
Depending on how you've passed credentials to your Dapr services, you have multiple metadata options.
|
||||
|
||||
- [Using client credentials](#authenticating-using-client-credentials)
|
||||
- [Using a certificate](#authenticating-using-a-certificate)
|
||||
- [Using Managed Identities (MI)](#authenticating-with-managed-identities-mi)
|
||||
- [Using Workload Identity on AKS](#authenticating-with-workload-identity-on-aks)
|
||||
- [Using Azure CLI credentials (development-only)](#authenticating-using-azure-cli-credentials-development-only)
|
||||
|
||||
#### Authenticating using client credentials
|
||||
|
||||
|
@ -73,7 +88,7 @@ Depending on how you've passed credentials to your Dapr services, you have multi
|
|||
|
||||
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
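For example, a sketch of a component that reads the client secret from a Kubernetes secret instead of embedding it in plain text (the component type and the secret/key names are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
    - name: azureTenantId
      value: "[your_tenant_id]"
    - name: azureClientId
      value: "[your_client_id]"
    # Reference a Kubernetes secret rather than storing the value inline
    - name: azureClientSecret
      secretKeyRef:
        name: azure-secrets
        key: azure-client-secret
auth:
  secretStore: kubernetes
```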
|
||||
|
||||
#### Authenticating using a PFX certificate
|
||||
#### Authenticating using a certificate
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------|--------|--------|--------|
|
||||
|
@ -85,27 +100,30 @@ When running on Kubernetes, you can also use references to Kubernetes secrets fo
|
|||
|
||||
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
|
||||
|
||||
#### Authenticating with Managed Service Identities (MSI)
|
||||
#### Authenticating with Managed Identities (MI)
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|-----------------|----------|----------------------------|------------------------------------------|
|
||||
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
|
||||
|
||||
Using MSI, you're not required to specify any value, although you may pass `azureClientId` if needed.
|
||||
Using Managed Identities, the `azureClientId` field is generally recommended. The field is optional when using a system-assigned identity, but may be required when using user-assigned identities.
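For example, when using a user-assigned identity, the component metadata could include the following (the client ID value is a placeholder):

```yaml
metadata:
  # Client ID of the user-assigned managed identity to authenticate as
  - name: azureClientId
    value: "c7dd251f-811f-4ba2-a905-acd4d3f8f08b"
```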
|
||||
|
||||
### Aliases
|
||||
#### Authenticating with Workload Identity on AKS
|
||||
|
||||
For backwards-compatibility reasons, the following values in the metadata are supported as aliases. Their use is discouraged.
|
||||
When running on Azure Kubernetes Service (AKS), you can authenticate components using Workload Identity. Refer to the Azure AKS documentation on [enabling Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) for your Kubernetes resources.
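At a high level, once the cluster and a federated identity credential are configured, Workload Identity is enabled per pod through a service account annotation and a pod label, roughly as follows. This is a sketch based on the AKS documentation; all names and the client ID are illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  annotations:
    # Client ID of the user-assigned identity federated with this service account
    azure.workload.identity/client-id: "c7dd251f-811f-4ba2-a905-acd4d3f8f08b"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    # Opt the pod into Workload Identity token injection
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: myapp-sa
  containers:
    - name: myapp
      image: myapp:latest
```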
|
||||
|
||||
| Metadata key | Aliases (supported but deprecated) |
|
||||
|----------------------------|------------------------------------|
|
||||
| `azureTenantId` | `spnTenantId`, `tenantId` |
|
||||
| `azureClientId` | `spnClientId`, `clientId` |
|
||||
| `azureClientSecret` | `spnClientSecret`, `clientSecret` |
|
||||
| `azureCertificate` | `spnCertificate` |
|
||||
| `azureCertificateFile` | `spnCertificateFile` |
|
||||
| `azureCertificatePassword` | `spnCertificatePassword` |
|
||||
#### Authenticating using Azure CLI credentials (development-only)
|
||||
|
||||
> **Important:** This authentication method is recommended for **development only**.
|
||||
|
||||
This authentication method can be useful while developing on a local machine. You will need:
|
||||
|
||||
- The [Azure CLI installed](https://learn.microsoft.com/cli/azure/install-azure-cli)
|
||||
- Successful authentication using the `az login` command
|
||||
|
||||
When Dapr is running on a host where credentials for the Azure CLI are available, components can use them to authenticate automatically if no other authentication method is configured.
|
||||
|
||||
Using this authentication method does not require setting any metadata option.
|
||||
|
||||
### Example usage in a Dapr component
|
||||
|
||||
|
|
|
@ -62,6 +62,7 @@ Save the output values returned; you'll need them for Dapr to authenticate with
|
|||
```
|
||||
|
||||
When adding the returned values to your Dapr component's metadata:
|
||||
|
||||
- `appId` is the value for `azureClientId`
|
||||
- `password` is the value for `azureClientSecret` (this was randomly-generated)
|
||||
- `tenant` is the value for `azureTenantId`
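Put together, the mapping above could appear in a component's metadata roughly as follows (a sketch; the bracketed values are placeholders for the output of the command above):

```yaml
metadata:
  # tenant from the command output
  - name: azureTenantId
    value: "[tenant]"
  # appId from the command output
  - name: azureClientId
    value: "[appId]"
  # password from the command output
  - name: azureClientSecret
    value: "[password]"
```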
|
||||
|
@ -93,11 +94,12 @@ Save the output values returned; you'll need them for Dapr to authenticate with
|
|||
```
|
||||
|
||||
When adding the returned values to your Dapr component's metadata:
|
||||
|
||||
- `appId` is the value for `azureClientId`
|
||||
- `tenant` is the value for `azureTenantId`
|
||||
- `fileWithCertAndPrivateKey` indicates the location of the self-signed PFX certificate and private key. Use the contents of that file as `azureCertificate` (or write it to a file on the server and use `azureCertificateFile`)
|
||||
|
||||
> **Note:** While the generated file has the `.pem` extension, it contains a certificate and private key encoded as _PFX (PKCS#12)_.
|
||||
> **Note:** While the generated file has the `.pem` extension, it contains a certificate and private key encoded as PFX (PKCS#12).
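A sketch of the corresponding component metadata, assuming the certificate file has been copied to a path readable by the Dapr sidecar (the path and bracketed values are placeholders):

```yaml
metadata:
  - name: azureTenantId
    value: "[tenant]"
  - name: azureClientId
    value: "[appId]"
  # Path on the host to the PFX-encoded certificate and private key
  - name: azureCertificateFile
    value: "/path/to/cert.pem"
```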
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
@ -122,26 +124,13 @@ Expected output:
|
|||
Service Principal ID: 1d0ccf05-5427-4b5e-8eb4-005ac5f9f163
|
||||
```
|
||||
|
||||
The returned value above is the **Service Principal ID**, which is different from the Azure AD application ID (client ID).
|
||||
|
||||
**The Service Principal ID** is:
|
||||
- Defined within an Azure tenant
|
||||
- Used to grant access to Azure resources to an application
|
||||
|
||||
The returned value above is the **Service Principal ID**, which is different from the Azure AD application ID (client ID). The Service Principal ID is defined within an Azure tenant and used to grant access to Azure resources to an application.
|
||||
You'll use the Service Principal ID to grant permissions to an application to access Azure resources.
|
||||
|
||||
Meanwhile, **the client ID** is used by your application to authenticate. You'll use the client ID in Dapr manifests to configure authentication with Azure services.
|
||||
|
||||
Keep in mind that the Service Principal that was just created does not have access to any Azure resource by default. Access will need to be granted to each resource as needed, as documented in the docs for the components.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
This step is different from the [official Azure documentation](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli). The short-hand commands included in the official documentation create a Service Principal that has broad `read-write` access to all Azure resources in your subscription, which:
|
||||
|
||||
- Grants your Service Principal more access than you likely desire.
|
||||
- Applies _only_ to the Azure management plane (Azure Resource Manager, or ARM), which is irrelevant for Dapr components, as they are designed to interact with the data plane of various services.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
## Next steps
|
||||
|
||||
{{< button text="Use MSI >>" page="howto-msi.md" >}}
|
||||
{{< button text="Use Managed Identities >>" page="howto-mi.md" >}}
|
||||
|
|
|
@ -1,14 +1,16 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How to: Use Managed Service Identities"
|
||||
linkTitle: "How to: Use MSI"
|
||||
title: "How to: Use Managed Identities"
|
||||
linkTitle: "How to: Use MI"
|
||||
weight: 40000
|
||||
description: "Learn how to use Managed Service Identities"
|
||||
aliases:
|
||||
- "/developing-applications/integrations/azure/azure-authentication/howto-msi/"
|
||||
description: "Learn how to use Managed Identities"
|
||||
---
|
||||
|
||||
Using MSI, authentication happens automatically by virtue of your application running on top of an Azure service that has an assigned identity.
|
||||
Using Managed Identities (MI), authentication happens automatically by virtue of your application running on top of an Azure service that has an assigned identity.
|
||||
|
||||
For example, let's say you enable a managed service identity for an Azure VM, Azure Container App, or an Azure Kubernetes Service cluster. When you do, an Azure AD application is created for you and automatically assigned to the service. Your Dapr services can then leverage that identity to authenticate with Azure AD, transparently and without you having to specify any credential.
|
||||
For example, let's say you enable a managed identity for an Azure VM, Azure Container App, or an Azure Kubernetes Service cluster. When you do, an Azure AD application is created for you and automatically assigned to the service. Your Dapr services can then leverage that identity to authenticate with Azure AD, transparently and without you having to specify any credentials.
|
||||
|
||||
To get started with managed identities, you need to assign an identity to a new or existing Azure resource. The instructions depend on the service in use. Check the following official documentation for the most appropriate instructions:
|
||||
|
||||
|
@ -19,8 +21,9 @@ To get started with managed identities, you need to assign an identity to a new
|
|||
- [Azure Virtual Machines Scale Sets (VMSS)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss)
|
||||
- [Azure Container Instance (ACI)](https://docs.microsoft.com/azure/container-instances/container-instances-managed-identity)
|
||||
|
||||
Dapr supports both system-assigned and user-assigned identities.
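For example, a user-assigned identity can be created with the Azure CLI roughly as follows (the resource group and identity names are placeholders):

```bash
# Create a user-assigned managed identity that can later be
# assigned to one or more Azure resources
az identity create \
  --resource-group "myResourceGroup" \
  --name "myDaprIdentity"
```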
|
||||
|
||||
After assigning a managed identity to your Azure resource, you will have credentials such as:
|
||||
After assigning an identity to your Azure resource, you will have credentials such as:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -31,7 +34,7 @@ After assigning a managed identity to your Azure resource, you will have credent
|
|||
}
|
||||
```
|
||||
|
||||
From the returned values, take note of **`principalId`**, which is the Service Principal ID that was created. You'll use that to grant access to Azure resources to your Service Principal.
|
||||
From the returned values, take note of **`principalId`**, which is the Service Principal ID that was created. You'll use that to grant access to Azure resources to your identity.
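For example, granting a data-plane role to the identity could look like the following. The role and scope shown are illustrative; use the role required by the specific Dapr component you are configuring:

```bash
# Grant the identity (by its Service Principal ID) a data-plane role
# on a specific resource; replace the bracketed values
az role assignment create \
  --assignee "[principalId]" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/[subscription]/resourceGroups/[resource group]/providers/Microsoft.Storage/storageAccounts/[storage account]"
```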
|
||||
|
||||
## Next steps
|
||||
|
|
@ -14,4 +14,10 @@ The recommended approach for installing Dapr on AKS is to use the AKS Dapr exten
|
|||
If you install Dapr through the AKS extension, best practice is to continue using the extension for future management of Dapr _instead of the Dapr CLI_. Combining the two tools can cause conflicts and result in undesired behavior.
|
||||
{{% /alert %}}
|
||||
|
||||
Prerequisites for using the Dapr extension for AKS:
|
||||
- [An Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
|
||||
- [The latest version of the Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli)
|
||||
- [An existing AKS cluster](https://learn.microsoft.com/azure/aks/tutorial-kubernetes-deploy-cluster)
|
||||
- [The Azure Kubernetes Service RBAC Admin role](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-rbac-admin)
|
||||
|
||||
{{< button text="Learn more about the Dapr extension for AKS" link="https://learn.microsoft.com/azure/aks/dapr" >}}
|
||||
|
|
|
@ -83,7 +83,7 @@ apps:
|
|||
appProtocol: http
|
||||
appPort: 8080
|
||||
appHealthCheckPath: "/healthz"
|
||||
command: ["python3" "app.py"]
|
||||
command: ["python3", "app.py"]
|
||||
appLogDestination: file # (optional), can be file, console or fileAndConsole. default is fileAndConsole.
|
||||
daprdLogDestination: file # (optional), can be file, console or fileAndConsole. default is file.
|
||||
- appID: backend # optional
|
||||
|
|
|
@ -11,34 +11,25 @@ The Dapr SDKs are the easiest way for you to get Dapr into your application. Cho
|
|||
|
||||
## SDK packages
|
||||
|
||||
- **Client SDK**: The Dapr client allows you to invoke Dapr building block APIs and perform actions such as:
|
||||
- [Invoke]({{< ref service-invocation >}}) methods on other services
|
||||
- Store and get [state]({{< ref state-management >}})
|
||||
- [Publish and subscribe]({{< ref pubsub >}}) to message topics
|
||||
- Interact with external resources through input and output [bindings]({{< ref bindings >}})
|
||||
- Get [secrets]({{< ref secrets >}}) from secret stores
|
||||
- Interact with [virtual actors]({{< ref actors >}})
|
||||
- **Server extensions**: The Dapr service extensions allow you to create services that can:
|
||||
- Be [invoked]({{< ref service-invocation >}}) by other services
|
||||
- [Subscribe]({{< ref pubsub >}}) to topics
|
||||
- **Actor SDK**: The Dapr Actor SDK allows you to build virtual actors with:
|
||||
- Methods that can be [invoked]({{< ref "howto-actors.md#actor-method-invocation" >}}) by other services
|
||||
- [State]({{< ref "howto-actors.md#actor-state-management" >}}) that can be stored and retrieved
|
||||
- [Timers]({{< ref "howto-actors.md#actor-timers" >}}) with callbacks
|
||||
- Persistent [reminders]({{< ref "howto-actors.md#actor-reminders" >}})
|
||||
Select your [preferred language below]({{< ref "#sdk-languages" >}}) to learn more about client, server, actor, and workflow packages.
|
||||
|
||||
- **Client**: The Dapr client allows you to invoke Dapr building block APIs and perform each building block's actions
|
||||
- **Server extensions**: The Dapr service extensions allow you to create services that can be invoked by other services and subscribe to topics
|
||||
- **Actor**: The Dapr Actor SDK allows you to build virtual actors with methods, state, timers, and persistent reminders
|
||||
- **Workflow**: Dapr Workflow makes it easy for you to write long-running business logic and integrations in a reliable way
|
||||
|
||||
## SDK languages
|
||||
|
||||
| Language | Status | Client SDK | Server extensions | Actor SDK |
|
||||
|----------|:------|:----------:|:-----------:|:---------:|
|
||||
| [.NET]({{< ref dotnet >}}) | Stable | ✔ | [ASP.NET Core](https://github.com/dapr/dotnet-sdk/tree/master/examples/AspNetCore) | ✔ |
|
||||
| [Python]({{< ref python >}}) | Stable | ✔ | [gRPC]({{< ref python-grpc.md >}}) <br />[FastAPI]({{< ref python-fastapi.md >}})<br />[Flask]({{< ref python-flask.md >}})| ✔ |
|
||||
| [Java]({{< ref java >}}) | Stable | ✔ | Spring Boot | ✔ |
|
||||
| [Go]({{< ref go >}}) | Stable | ✔ | ✔ | ✔ |
|
||||
| [PHP]({{< ref php >}}) | Stable | ✔ | ✔ | ✔ |
|
||||
| [Javascript]({{< ref js >}}) | Stable| ✔ | | ✔ |
|
||||
| [C++](https://github.com/dapr/cpp-sdk) | In development | ✔ | |
|
||||
| [Rust](https://github.com/dapr/rust-sdk) | In development | ✔ | | |
|
||||
| Language | Status | Client | Server extensions | Actor | Workflow |
|
||||
|----------|:------|:----------:|:-----------:|:---------:|:---------:|
|
||||
| [.NET]({{< ref dotnet >}}) | Stable | ✔ | [ASP.NET Core](https://github.com/dapr/dotnet-sdk/tree/master/examples/AspNetCore) | ✔ | ✔ |
|
||||
| [Python]({{< ref python >}}) | Stable | ✔ | [gRPC]({{< ref python-grpc.md >}}) <br />[FastAPI]({{< ref python-fastapi.md >}})<br />[Flask]({{< ref python-flask.md >}})| ✔ | ✔ |
|
||||
| [Java]({{< ref java >}}) | Stable | ✔ | Spring Boot | ✔ | |
|
||||
| [Go]({{< ref go >}}) | Stable | ✔ | ✔ | ✔ | |
|
||||
| [PHP]({{< ref php >}}) | Stable | ✔ | ✔ | ✔ | |
|
||||
| [Javascript]({{< ref js >}}) | Stable| ✔ | | ✔ | |
|
||||
| [C++](https://github.com/dapr/cpp-sdk) | In development | ✔ | | |
|
||||
| [Rust](https://github.com/dapr/rust-sdk) | In development | ✔ | | | |
|
||||
|
||||
## Further reading
|
||||
|
||||
|
|
|
@ -66,6 +66,7 @@ From version 1.0.0 onwards, upgrading Dapr using Helm is no longer a disruptive
|
|||
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/configuration.yaml
|
||||
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/subscription.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/resiliency.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/httpendpoints.yaml
|
||||
```
|
||||
|
||||
```bash
|
||||
|
|
|
@ -8,7 +8,7 @@ description: "Configure resiliency policies for timeouts, retries, and circuit b
|
|||
|
||||
Define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the `targets` section in the resiliency spec.
|
||||
|
||||
> Note: Dapr offers default retries for specific APIs. [See here]({{< ref "#override-default-retries" >}}) to learn how you can overwrite default retry logic with user defined retry policies.
|
||||
> Note: Dapr offers default retries for specific APIs. [See here]({{< ref "#overriding-default-retries" >}}) to learn how you can overwrite default retry logic with user defined retry policies.
|
||||
|
||||
## Timeouts
|
||||
|
||||
|
@ -299,4 +299,4 @@ The table below is a break down of which policies are applied when attempting to
|
|||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
||||
|
|
|
@ -45,11 +45,12 @@ The table below shows the versions of Dapr releases that have been tested togeth
|
|||
|
||||
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|
||||
|--------------------|:--------:|:--------|---------|---------|---------|
|
||||
| July 20th 2023 | 1.11.2</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported (current) |
|
||||
| June 22nd 2023 | 1.11.1</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported (current) |
|
||||
| June 12th 2023 | 1.11.0</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported (current) |
|
||||
| May 15th 2023 | 1.10.7</br> | 1.10.0 | Java 1.8.0 </br>Go 1.7.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Supported |
|
||||
| May 12th 2023 | 1.10.6</br> | 1.10.0 | Java 1.8.0 </br>Go 1.7.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Supported |
|
||||
| April 13 2023 |1.10.5</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Supported (current) |
|
||||
| April 13 2023 |1.10.5</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Supported |
|
||||
| March 16 2023 | 1.10.4</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
| March 14 2023 | 1.10.3</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
| February 24 2023 | 1.10.2</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
|
@ -118,7 +119,7 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
|
|||
| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
|
||||
| 1.9.0 | N/A | 1.9.6 |
|
||||
| 1.10.0 | N/A | 1.10.8 |
|
||||
| 1.11.0 | N/A | 1.11.1 |
|
||||
| 1.11.0 | N/A | 1.11.2 |
|
||||
|
||||
|
||||
## Upgrade on Hosting platforms
|
||||
|
|
|
@ -54,7 +54,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| `queueName` | Y | Input/Output | The name of the Azure Storage queue | `"myqueue"` |
|
||||
| `pollingInterval` | N | Output | Set the interval to poll Azure Storage Queues for new messages, as a Go duration value. Default: `"10s"` | `"30s"` |
|
||||
| `ttlInSeconds` | N | Output | Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See [also](#specifying-a-ttl-per-message) | `"60"` |
|
||||
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Storage Queues. (In case of saving a file with binary content). Defaults to `false` | `true`, `false` |
|
||||
| `decodeBase64` | N | Input | Configuration to decode base64 content received from the Storage Queue into a string. Defaults to `false` | `true`, `false` |
|
||||
| `encodeBase64` | N | Output | If enabled base64 encodes the data payload before uploading to Azure storage queues. Default `false`. | `true`, `false` |
|
||||
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10001"` or `"https://accountName.queue.example.com"` |
|
||||
| `visibilityTimeout` | N | Input | Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds. | "100s" |
|
||||
|
|
|
@ -38,7 +38,7 @@ The Azure Key Vault cryptography component supports authentication with Azure AD
|
|||
|
||||
1. Read the [Authenticating to Azure]({{< ref "authenticating-azure.md" >}}) document.
|
||||
1. Create an [Azure AD application]({{< ref "howto-aad.md" >}}) (also called a Service Principal).
|
||||
1. Alternatively, create a [managed identity]({{< ref "howto-msi.md" >}}) for your application platform.
|
||||
1. Alternatively, create a [managed identity]({{< ref "howto-mi.md" >}}) for your application platform.
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
|
@ -48,5 +48,6 @@ The Azure Key Vault cryptography component supports authentication with Azure AD
|
|||
| Auth metadata | Y | See [Authenticating to Azure]({{< ref "authenticating-azure.md" >}}) for more information | |
|
||||
|
||||
## Related links
|
||||
|
||||
- [Cryptography building block]({{< ref cryptography >}})
|
||||
- [Authenticating to Azure]({{< ref azure-authentication >}})
|
|
@ -9,8 +9,18 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To set up AWS SNS/SQS pub/sub, create a component of type `pubsub.aws.snssqs`. See the [pub/sub broker component file]({{< ref setup-pubsub.md >}}) to learn how ConsumerID is automatically generated. Read the [How-to: Publish and Subscribe guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pub/sub configuration.
|
||||
To set up AWS SNS/SQS pub/sub, create a component of type `pubsub.aws.snssqs`.
|
||||
|
||||
By default, the AWS SNS/SQS component:
|
||||
- Generates the SNS topics
|
||||
- Provisions the SQS queues
|
||||
- Configures a subscription of the queues to the topics
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
If you only have a publisher and no subscriber, only the SNS topics are created.
|
||||
|
||||
However, if you also have a subscriber, the SNS topics, the SQS queues, and the dynamic or static subscriptions of the queues to the topics are all generated.
|
||||
{{% /alert %}}
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -72,7 +82,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec
|
|||
| accessKey | Y | ID of the AWS account/role with appropriate permissions to SNS and SQS (see below) | `"AKIAIOSFODNN7EXAMPLE"`
|
||||
| secretKey | Y | Secret for the AWS user/role. If using an `AssumeRole` access, you will also need to provide a `sessionToken` |`"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"`
|
||||
| region | Y | The AWS region where the SNS/SQS assets are located or be created in. See [this page](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?p=ugi&l=na) for valid regions. Ensure that SNS and SQS are available in that region | `"us-east-1"`
|
||||
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | `"channel1"`
|
||||
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. See the [pub/sub broker component file]({{< ref setup-pubsub.md >}}) to learn how ConsumerID is automatically generated. | `"channel1"`
|
||||
| endpoint | N | AWS endpoint for the component to use. Only used for local development with, for example, [localstack](https://github.com/localstack/localstack). The `endpoint` is unnecessary when running against production AWS | `"http://localhost:4566"`
|
||||
| sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials | `"TOKEN"`
|
||||
| messageReceiveLimit | N | Number of times a message can be received, after its processing fails, before the message is removed from the queue. If `sqsDeadLettersQueueName` is specified, `messageReceiveLimit` is instead the number of failed receives before the message is moved to the SQS dead-letters queue. Default: `10` | `10`
|
||||
|
|
|
@ -1,13 +1,13 @@
|
|||
---
|
||||
type: docs
|
||||
title: "In Memory"
|
||||
linkTitle: "In Memory"
|
||||
title: "In-memory"
|
||||
linkTitle: "In-memory"
|
||||
description: "Detailed documentation on the In Memory pubsub component"
|
||||
aliases:
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-inmemory/"
|
||||
---
|
||||
|
||||
The In Memory pub/sub component is useful for development purposes and works inside of a single machine boundary.
|
||||
The in-memory pub/sub component operates within a single Dapr sidecar. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.
|
||||
|
||||
## Component format
|
||||
|
||||
|
@ -25,6 +25,7 @@ spec:
|
|||
> Note: in-memory does not require any specific metadata for the component to work; however, `spec.metadata` is a required field.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
- [Pub/Sub building block]({{< ref pubsub >}})
|
||||
|
|
|
@@ -31,16 +31,14 @@
    value: "/path/to/tls.key"
  - name: token # Optional. Used for token based authentication.
    value: "my-token"
  - name: consumerID
    value: "channel1"
  - name: name
    value: "my-conn-name"
  - name: streamName
    value: "my-stream"
  - name: durableName
    value: "my-durable"
    value: "my-durable-subscription"
  - name: queueGroupName
    value: "my-queue"
    value: "my-queue-group"
  - name: startSequence
    value: 1
  - name: startTime # In Unix format
@@ -83,7 +81,6 @@
| tls_client_cert | N | NATS TLS Client Authentication Certificate | `"/path/to/tls.crt"` |
| tls_client_key | N | NATS TLS Client Authentication Key | `"/path/to/tls.key"` |
| token | N | [NATS token based authentication] | `"my-token"` |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | `"channel1"`
| name | N | NATS connection name | `"my-conn-name"` |
| streamName | N | Name of the JetStream Stream to bind to | `"my-stream"` |
| durableName | N | [Durable name] | `"my-durable"` |
@@ -146,6 +143,31 @@ It is essential to create a NATS JetStream for a specific subject. For example,
nats -s localhost:4222 stream add myStream --subjects mySubject
```

## Example: Competing consumers pattern

Let's say you'd like each message to be processed by only one application or pod with the same app-id. Typically, the `consumerID` metadata spec helps you define competing consumers.

Since `consumerID` is not supported in NATS JetStream, you need to specify `durableName` and `queueGroupName` to achieve the competing consumers pattern. For example:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.jetstream
  version: v1
  metadata:
  - name: name
    value: "my-conn-name"
  - name: streamName
    value: "my-stream"
  - name: durableName
    value: "my-durable-subscription"
  - name: queueGroupName
    value: "my-queue-group"
```

## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
@@ -1,20 +1,16 @@
---
type: docs
title: "In Memory"
linkTitle: "In Memory"
description: "Detailed documentation on the In Memory state component"
title: "In-memory"
linkTitle: "In-memory"
description: "Detailed documentation on the in-memory state component"
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-inmemory/"
---

The In Memory state store component is useful for development purposes and works inside of a single machine boundary.
{{% alert title="Warning" color="warning" %}}
This component **shouldn't be used for production**. It is developer only and will never be stable. If you come across a scenario and want to use it in production, you can submit an issue and discuss it with the community.
{{% /alert %}}
The in-memory state store component maintains state in the Dapr sidecar's memory. This is primarily meant for development purposes. State is not replicated across multiple sidecars and is lost when the Dapr sidecar is restarted.
## Component format

To setup in-memory state store, create a component of type `state.in-memory`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.

```yaml
@@ -31,6 +27,7 @@ spec:
> Note: While in-memory does not require any specific metadata for the component to work, `spec.metadata` is a required field.

## Related links

- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Learn [how to create and configure state store components]({{< ref howto-get-save-state.md >}})
- Read more about the [state management building block]({{< ref state-management >}})
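The `state.in-memory` component referenced in the setup guide above takes no required metadata; a minimal manifest sketch (the component name is arbitrary):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore       # arbitrary component name
spec:
  type: state.in-memory
  version: v1
  metadata: []           # no metadata is needed, but spec.metadata must be present
```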
@@ -20,8 +20,8 @@ scopes:
- <REPLACE-WITH-SCOPED-APPIDS>
spec:
  policies: # Required
    timeouts: # Replace with any unique name
      timeoutName: <REPLACE-WITH-TIME-VALUE>
    timeouts:
      timeoutName: <REPLACE-WITH-TIME-VALUE> # Replace with any unique name
    retries:
      retryName: # Replace with any unique name
        policy: <REPLACE-WITH-VALUE>
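As a concrete illustration of the template above, a resiliency spec with the placeholders filled in might look like the following sketch (the policy names and values here are hypothetical examples, not defaults):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency     # hypothetical resource name
spec:
  policies:
    timeouts:
      general: 5s        # hypothetical unique timeout name and value
    retries:
      pubsubRetry:       # hypothetical unique retry name
        policy: constant
        duration: 5s
        maxRetries: 10
```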
@@ -62,4 +62,4 @@ targets: # Required

## Related links
[Learn more about resiliency policies and targets]({{< ref resiliency-overview.md >}})
[Learn more about resiliency policies and targets]({{< ref resiliency-overview.md >}})
@@ -61,4 +61,4 @@
  since: "1.0"
  features:
    input: true
    output: true
    output: true
@@ -1,6 +1,6 @@
- component: In-memory
  link: setup-inmemory
  state: Beta
  state: Stable
  version: v1
  since: "1.7"
  features:
@@ -77,9 +77,9 @@
    query: false
- component: In-memory
  link: setup-inmemory
  state: Developer-only
  state: Stable
  version: v1
  since: "1.8"
  since: "1.9"
  features:
    crud: true
    transactions: true
@@ -1,28 +1,53 @@
{{ if .Path }}
{{ $pathFormatted := replace .Path "\\" "/" }}
{{ $gh_repo := ($.Param "github_repo") }}
{{ $gh_subdir := ($.Param "github_subdir") }}
{{ $gh_project_repo := ($.Param "github_project_repo") }}
{{ $gh_branch := (default "master" ($.Param "github_branch")) }}
{{ if $gh_repo }}
<div class="td-page-meta ml-2 pb-1 pt-2 mb-0">
{{ $gh_repo_path := printf "%s/content/%s" $gh_branch $pathFormatted }}
{{ if and ($gh_subdir) (.Site.Language.Lang) }}
{{ $gh_repo_path = printf "%s/%s/content/%s/%s" $gh_branch $gh_subdir ($.Site.Language.Lang) $pathFormatted }}
{{ else if .Site.Language.Lang }}
{{ $gh_repo_path = printf "%s/content/%s/%s" $gh_branch ($.Site.Language.Lang) $pathFormatted }}
{{ else if $gh_subdir }}
{{ $gh_repo_path = printf "%s/%s/content/%s" $gh_branch $gh_subdir $pathFormatted }}
{{ end }}
{{ $editURL := printf "%s/edit/%s" $gh_repo $gh_repo_path }}
{{ $createURL := printf "%s/edit/%s" $gh_repo $gh_repo_path }}
{{ $issuesURL := printf "%s/issues/new/choose" $gh_repo}}
{{ $newPageStub := resources.Get "stubs/new-page-template.md" }}
{{ $newPageQS := querify "value" $newPageStub.Content "filename" "change-me.md" | safeURL }}
{{ $newPageURL := printf "%s/new/%s?%s" $gh_repo $gh_repo_path $newPageQS }}
{{ if .File }}
{{ $pathFormatted := replace .File.Path "\\" "/" -}}
{{ $gh_repo := ($.Param "github_repo") -}}
{{ $gh_url := ($.Param "github_url") -}}
{{ $gh_subdir := ($.Param "github_subdir") -}}
{{ $gh_project_repo := ($.Param "github_project_repo") -}}
{{ $gh_branch := (default "main" ($.Param "github_branch")) -}}
<div class="td-page-meta ms-2 pb-1 pt-2 mb-0">
{{ if $gh_url -}}
{{ warnf "Warning: use of `github_url` is deprecated. For details see https://www.docsy.dev/docs/adding-content/repository-links/#github_url-optional" -}}
<a href="{{ $gh_url }}" target="_blank"><i class="fa-solid fa-pen-to-square fa-fw"></i> {{ T "post_edit_this" }}</a>
{{ else if $gh_repo -}}
{{ $gh_repo_path := printf "%s/content/%s" $gh_branch $pathFormatted -}}
{{ if and ($gh_subdir) (.Site.Language.Lang) -}}
{{ $gh_repo_path = printf "%s/%s/content/%s/%s" $gh_branch $gh_subdir ($.Site.Language.Lang) $pathFormatted -}}
{{ else if .Site.Language.Lang -}}
{{ $gh_repo_path = printf "%s/content/%s/%s" $gh_branch ($.Site.Language.Lang) $pathFormatted -}}
{{ else if $gh_subdir -}}
{{ $gh_repo_path = printf "%s/%s/content/%s" $gh_branch $gh_subdir $pathFormatted -}}
{{ end -}}

<a href="{{ $editURL }}" target="_blank" rel="nofollow noopener noreferrer"><i class="fa fa-edit fa-fw"></i> {{ T "post_edit_this" }}</a>
<a href="{{ $issuesURL }}" target="_blank" rel="nofollow noopener noreferrer"><i class="fab fa-github fa-fw"></i> {{ T "post_create_issue" }}</a>
{{/* Adjust $gh_repo_path based on path_base_for_github_subdir */ -}}
{{ $ghs_base := $.Param "path_base_for_github_subdir" -}}
{{ $ghs_rename := "" -}}
{{ if reflect.IsMap $ghs_base -}}
{{ $ghs_rename = $ghs_base.to -}}
{{ $ghs_base = $ghs_base.from -}}
{{ end -}}
{{ with $ghs_base -}}
{{ $gh_repo_path = replaceRE . $ghs_rename $gh_repo_path -}}
{{ end -}}

{{ $viewURL := printf "%s/tree/%s" $gh_repo $gh_repo_path -}}
{{ $editURL := printf "%s/edit/%s" $gh_repo $gh_repo_path -}}
{{ $issuesURL := printf "%s/issues/new/choose" $gh_repo -}}
{{ $newPageStub := resources.Get "stubs/new-page-template.md" -}}
{{ $newPageQS := querify "value" $newPageStub.Content "filename" "change-me.md" | safeURL -}}
{{ $newPageURL := printf "%s/new/%s?%s" $gh_repo $gh_repo_path $newPageQS -}}

<a href="{{ $editURL }}" target="_blank" rel="nofollow noopener noreferrer"><i class="fa fa-edit fa-fw"></i> {{ T "post_edit_this" }}</a>
<a href="{{ $issuesURL }}" target="_blank" rel="nofollow noopener noreferrer"><i class="fab fa-github fa-fw"></i> {{ T "post_create_issue" }}</a>

{{ with $gh_project_repo -}}
{{ $project_issueURL := printf "%s/issues/new/choose" . -}}
<a href="{{ $project_issueURL }}" class="td-page-meta--project-issue" target="_blank" rel="noopener"><i class="fab fa-github fa-fw"></i> {{ T "post_create_project_issue" }}</a>
{{ end -}}

{{ end -}}
{{ with .CurrentSection.AlternativeOutputFormats.Get "print" -}}
<a id="print" href="{{ .Permalink | safeURL }}"><i class="fa-solid fa-print fa-fw"></i> {{ T "print_entire_section" }}</a>
{{ end }}
</div>
{{ end }}
{{ end }}
{{ end -}}
@@ -1 +1 @@
{{- if .Get "short" }}1.11{{ else if .Get "long" }}1.11.1{{ else if .Get "cli" }}1.11.0{{ else }}1.11.1{{ end -}}
{{- if .Get "short" }}1.11{{ else if .Get "long" }}1.11.2{{ else if .Get "cli" }}1.11.0{{ else }}1.11.2{{ end -}}
@@ -1 +1 @@
Subproject commit edb09a08b7a2ca63983f5237b307c40cae86d3bb
Subproject commit 2449bcd6691eb49825e0e8e9dff50bd50fd41c2e
@@ -1 +1 @@
Subproject commit 1e3b6eb859be175e12808c0ff345f40398f209d6
Subproject commit 7686ab039bcc30f375f922960020d403dd2d3867
@@ -1 +1 @@
Subproject commit 5051a9d5d92003924322a8ddbdf230fb8a872dd7
Subproject commit 64e834b0a06f5b218efc941b8caf3683968b7208