Update references

This commit is contained in:
Aaron Crawfis 2020-10-16 17:11:21 -07:00
parent 0539bc07fc
commit 7d98423f83
36 changed files with 108 additions and 117 deletions

View File

@ -6,7 +6,7 @@ weight: 200
description: "Modular best practices accessible over standard HTTP or gRPC APIs"
---
A [building block](/docs/concepts/architecture/building-blocks) is an HTTP or gRPC API that can be called from your code and uses one or more Dapr components.
A [building block]({{< ref building-blocks >}}) is an HTTP or gRPC API that can be called from your code and uses one or more Dapr components.
Building blocks address common challenges in building resilient microservices applications and codify best practices and patterns. Dapr consists of a set of building blocks, with extensibility to add new building blocks.
@ -24,6 +24,6 @@ The following are the building blocks provided by Dapr:
| [**State Management**]({{< ref state-management >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state API with pluggable state stores for persistence.
| [**Publish and Subscribe**]({{< ref pubsub >}}) | `/v1.0/publish` `/v1.0/subscribe` | Pub/sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Resource Bindings**]({{< ref bindings >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premises service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{< ref actors >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern, which provides a single-threaded programming model and where actors are garbage collected when not in use. See the [Actor overview](./actors#understanding-actors)
| [**Actors**]({{< ref actors >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern, which provides a single-threaded programming model and where actors are garbage collected when not in use. See the [Actor overview]({{< ref actors >}})
| [**Observability**]({{< ref observability >}}) | `N/A` | Dapr system components and the runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
| [**Secrets**]({{< ref secrets >}}) | `/v1.0/secrets` | Dapr offers a secrets building block API and integrates with secret stores such as Azure Key Vault and Kubernetes to store the secrets. Service code can call the secrets API to retrieve secrets out of the Dapr supported secret stores.

View File

@ -23,10 +23,10 @@ Dapr uses a modular design where functionality is delivered as a component. Each
* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters)
### Service invocation and service discovery components
Service discovery components are used with the [Service Invocation](./service-invocation/README.md) building block to integrate with the hosting environment to provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, and self-hosted mode uses mDNS.
Service discovery components are used with the [Service Invocation]({{< ref service-invocation >}}) building block to integrate with the hosting environment to provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, and self-hosted mode uses mDNS.
### Service invocation and middleware components
Dapr allows custom [**middleware**](./middleware/README.md) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [Service Invocation](./service-invocation/README.md) building block.
Dapr allows custom [**middleware**]({{< ref middleware-concept.md >}}) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [Service Invocation]({{< ref service-invocation >}}) building block.
### Secret store components
In Dapr, a [**secret**](./secrets/README.md) is any piece of private information that you want to guard from unwanted access. Secret stores, used to store secrets, are Dapr components and can be used by any of the building blocks.
In Dapr, a [**secret**]({{< ref secrets >}}) is any piece of private information that you want to guard from unwanted access. Secret stores, used to store secrets, are Dapr components and can be used by any of the building blocks.

View File

@ -12,11 +12,11 @@ Dapr allows custom processing pipelines to be defined by chaining a series of mi
## Customize processing pipeline
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default, the pipeline consists of [tracing middleware](../observability/traces.md) and CORS middleware. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default, the pipeline consists of [tracing middleware]({{< ref tracing.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
> **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test/verify if your custom pipeline is in place.
The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware](../../howto/authorization-with-oauth/README.md) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.
The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware]({{< ref oauth.md >}}) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.
```yaml
apiVersion: dapr.io/v1alpha1
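# The rest of this pipeline definition is a sketch; handler names are illustrative
kind: Configuration
metadata:
  name: pipeline
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2    # OAuth 2.0 authorization middleware
    - name: uppercase
      type: middleware.http.uppercase # pre-registered uppercase middleware
```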
@ -63,4 +63,4 @@ Your middleware component can be contributed to the https://github.com/dapr/comp
Then submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go to register your middleware using the **Register** method.
## Next steps
* [How-To: Configure API authorization with OAuth](../../howto/authorization-with-oauth/README.md)
* [How-To: Configure API authorization with OAuth]({{< ref oauth.md >}})

View File

@ -11,18 +11,16 @@ Observability is a term from control theory. Observability means you can answer
The observability capabilities enable users to monitor the Dapr system services, their interaction with user applications and understand how these monitored services behave. The observability capabilities are divided into the following areas:
* **[Distributed tracing](./traces.md)**: is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.
- **[Distributed tracing]({{< ref tracing.md >}})**: is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.
You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, node, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.
Dapr uses [W3C distributed tracing]({{< ref w3c-tracing >}}).
It is generally recommended to run Dapr in production with tracing.
* **[Metrics](./metrics.md)**: are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps. For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc. Dapr system services metrics show sidecar injection failures, the health of the system services (including CPU usage), the number of actor placements made, etc.
* **[Logs](./logs.md)**: are records of events that occur and can be used to determine failures or another status. Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
* **[Health](./health.md)**: Dapr provides a way for a hosting platform to determine its health using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness and action taken accordingly.
- **[Metrics]({{< ref metrics.md >}})**: are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps. For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc. Dapr system services metrics show sidecar injection failures, the health of the system services (including CPU usage), the number of actor placements made, etc.
- **[Logs]({{< ref logs.md >}})**: are records of events that occur and can be used to determine failures or another status. Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
- **[Health]({{< ref health_api.md>}})**: Dapr provides a way for a hosting platform to determine its health using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness and action taken accordingly.
## OpenTelemetry
Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for tracing, metrics and logs. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.

View File

@ -31,13 +31,13 @@ Each of these building blocks is independent, meaning that you can use one, some
| Building Block | Description |
|----------------|-------------|
| **[Service Invocation](/docs/building-blocks/service-invocation)** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment.
| **[State Management](/docs/building-blocks/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others.
| **[Publish and Subscribe Messaging](/docs/building-blocks/pubsub)** | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee.
| **[Resource Bindings](/docs/building-blocks/bindings)** | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| **[Actors](/docs/building-blocks/actors)** | A pattern for stateful and stateless objects that makes concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
| **[Observability](/docs/building-blocks/observability)** | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and observe inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
| **[Secrets](/docs/building-blocks/secrets)** | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
| **[Service Invocation]({{< ref service-invocation >}})** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment.
| **[State Management]({{< ref state-management >}})** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others.
| **[Publish and Subscribe Messaging]({{< ref pubsub >}})** | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee.
| **[Resource Bindings]({{< ref bindings >}})** | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| **[Actors]({{< ref actors >}})** | A pattern for stateful and stateless objects that makes concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
| **[Observability]({{< ref observability >}})** | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and observe inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
| **[Secrets]({{< ref secrets >}})** | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
## Sidecar architecture
@ -72,7 +72,7 @@ To make using Dapr more natural for different languages, it also includes langua
- **[Rust SDK](https://github.com/dapr/rust-sdk)**
- **[.NET SDK](https://github.com/dapr/dotnet-sdk)**
> Note: Dapr is language agnostic and provides a [RESTful HTTP API](../reference/api/README.md) in addition to the protobuf clients.
> Note: Dapr is language agnostic and provides a [RESTful HTTP API]({{< ref api >}}) in addition to the protobuf clients.
### Developer frameworks
Dapr can be used from any developer framework. Here are some that have been integrated with Dapr.
@ -85,7 +85,7 @@ Dapr can be used from any developer framework. Here are some that have been int
Dapr integrates easily with Python [Flask](https://pypi.org/project/Flask/) and node [Express](http://expressjs.com/), which you can find in the [getting started samples](https://github.com/dapr/docs/tree/master/getting-started).
#### Actors
The Dapr SDKs support [virtual actors](../concepts/actors), which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications.
The Dapr SDKs support [virtual actors]({{< ref actors >}}), which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications.
#### Azure Functions
Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information read
@ -98,23 +98,23 @@ To enable developers to easily build workflow applications that use Daprs cap
## Designed for Operations
Dapr is designed for operations. The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars.
The [monitoring dashboard](../reference/dashboard/README.md) provides deeper visibility into the Dapr system services and sidecars, and the [observability capabilities](../concepts/observability) of Dapr provide insights into your application, such as tracing and metrics.
The [monitoring tools]({{< ref monitoring >}}) provide deeper visibility into the Dapr system services and sidecars, and the [observability capabilities]({{< ref observability >}}) of Dapr provide insights into your application, such as tracing and metrics.
## Run anywhere
### Running Dapr on a local developer machine in self hosted mode
Dapr can be configured to run on your local developer machine in [self hosted mode](../concepts/hosting/). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
Dapr can be configured to run on your local developer machine in [self-hosted mode]({{< ref self-hosted >}}). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples](../getting-started).
You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}).
<img src="/images/overview_standalone.png" width=800>
### Running Dapr in Kubernetes mode
Dapr can be configured to run on any [Kubernetes cluster](../concepts/hosting/). In Kubernetes, the `dapr-sidecar-injector` and `dapr-operator` services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster.
Dapr can be configured to run on any [Kubernetes cluster]({{< ref kubernetes >}}). In Kubernetes, the `dapr-sidecar-injector` and `dapr-operator` services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster.
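For illustration, a minimal sketch of a deployment that opts in to sidecar injection via annotations (the app ID, port, and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"   # dapr-sidecar-injector injects a Dapr sidecar into this pod
        dapr.io/app-id: "myapp"   # hypothetical Dapr app ID
        dapr.io/app-port: "3000"  # hypothetical port your app listens on
    spec:
      containers:
      - name: myapp
        image: example/myapp:latest  # hypothetical image
        ports:
        - containerPort: 3000
```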
The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview](../concepts/security/README.md#dapr-to-dapr-communication).
The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}}).
<img src="/images/overview_kubernetes.png" width=800>

View File

@ -42,7 +42,7 @@ Dapr also supports strong identities when deployed on Kubernetes, relying on a p
By default, a workload cert is valid for 24 hours and the clock skew is set to 15 minutes.
Mutual TLS can be turned off/on by editing the default configuration that is deployed with Dapr via the `spec.mtls.enabled` field.
This can be done for both Kubernetes and self-hosted modes. Details for how to do this can be found [here](../../howto/configure-mtls).
This can be done for both Kubernetes and self-hosted modes. Details for how to do this can be found [here]({{< ref mtls.md >}}).
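As a sketch, the mTLS section of that configuration looks like the following, using the defaults described above:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: true            # set to false to turn mutual TLS off
    workloadCertTTL: "24h"   # default workload certificate validity
    allowedClockSkew: "15m"  # default clock skew
```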
### mTLS self hosted
The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service and stored in a file.
@ -73,11 +73,11 @@ The diagram below shows secure communication between the Dapr sidecar and the Da
## Component namespace scopes and secrets
Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace. See the [components scope topic](../../howto/components-scopes) for more details.
Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace. See the [components scope topic]({{< ref component-scopes.md >}}) for more details.
Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secret topic](../secrets/README.md) for more details.
Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secret topic]({{< ref secrets >}}) for more details.
In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application-level scoping, see [here](../../howto/components-scopes#application-access-to-components-with-scopes)
In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application-level scoping, see [here]({{< ref "component-scopes.md#application-access-to-components-with-scopes" >}})
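For example, a component can be limited to specific applications with a top-level `scopes` list; a minimal sketch (component name, namespace, and app IDs are hypothetical):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production  # only sidecars in this namespace can access the component
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis:6379    # placeholder Redis address
scopes:
- app1                   # only these app IDs may use the component
- app2
```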
## Network security

View File

@ -29,17 +29,13 @@ High quality documentation is a core tenet of the Dapr project. Some contributi
- Ensure the doc references the spec for examples of using the API.
- Ensure the spec is consistent with concept in terms of names, parameters and terminology. Update both the concept and the spec as needed.
- Avoid just repeating the spec. The idea is to give the reader more information and background on the capability so that they can try this out. Hence provide more information and implementation details where possible.
- Provide a link to the spec in the [Reference](/reference) section.
- Where possible, reference a practical [How-To](/howto) doc.
- Provide a link to the spec in the [Reference]({{<ref reference >}}) section.
- Where possible, reference a practical How-To doc.
## Contributing to `How-Tos`
See [this template](./howto-template.md) for `How To` articles.
- `How To` articles are meant to provide step-by-step practical guidance to readers who wish to enable a feature, integrate a technology or use Dapr in a specific scenario.
- Location - `How To` articles should all be under the [howto](../howto) directory in a relevant sub-directory - make sure to see if the article you are contributing should be included in an existing sub-directory.
- Sub-directory naming - the directory name should be descriptive and, if referring to a specific component or concept, should begin with the relevant name. Example *pubsub-namespaces*.
- When adding a new article make sure to add a link in the main [How To README.md](../howto/README.md) as well as other articles or samples that may be relevant.
- Do not assume the reader is using a specific environment unless the article itself is specific to an environment. This includes OS (Windows/Linux/MacOS), deployment target (Kubernetes, IoT etc.) or programming language. If instructions vary between operating systems, provide guidance for all.
- `How To` articles should include the following sub-sections:
  - **Prerequisites**

View File

@ -6,6 +6,6 @@ weight: 10
description: "Dapr capabilities that solve common development challenges for distributed applications"
---
Get a high-level [overview of Dapr building blocks](/docs/concepts/building-blocks/) in the **Concepts** section.
Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
<img src="/images/buildingblocks-overview.png" alt="Diagram showing the different Dapr building blocks" width=1000>

View File

@ -18,7 +18,7 @@ POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/meth
You can provide any data for the actor method in the request body, and the response is in the response body, which is the data from the actor method call.
Refer to the [api spec](../../reference/api/actors_api.md#invoke-actor-method) for more details.
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
## Actor state management
@ -80,7 +80,7 @@ You can remove the actor timer by calling
```
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
Refer to the [api spec](../../reference/api/actors_api.md#invoke-timer) for more details.
Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
@ -134,4 +134,4 @@ You can remove the actor reminder by calling
```
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Refer to the [api spec](../../reference/api/actors_api.md#invoke-reminder) for more details.
Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.

View File

@ -16,7 +16,7 @@ Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&
An output binding represents a resource that Dapr uses to invoke and send messages to.
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref bindings >}}).
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
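A sketch of what binding.yaml might contain for Kafka (broker address and topic name are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myevent  # the binding name used in the /v1.0/bindings/myevent URL
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: localhost:9092  # placeholder Kafka broker address
  - name: publishTopic
    value: topic1          # placeholder topic for outgoing events
  - name: authRequired
    value: "false"
```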
@ -56,11 +56,11 @@ As seen above, you invoked the `/binding` endpoint with the name of the binding
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
You'll also notice that there's an `operation` field that tells the binding what you need it to do.
You can check [here](../../reference/specs/bindings) which operations are supported for every output binding.
You can check [here]({{< ref bindings >}}) which operations are supported for every output binding.
## References
* Binding [API](https://github.com/dapr/docs/blob/master/reference/api/bindings_api.md)
* Binding [Components](https://github.com/dapr/docs/tree/master/concepts/bindings)
* Binding [Detailed specifications](https://github.com/dapr/docs/tree/master/reference/specs/bindings)
- [Binding API]({{< ref bindings_api.md >}})
- [Binding components]({{< ref bindings >}})
- [Binding detailed specifications]({{< ref supported-bindings >}})

View File

@ -12,11 +12,9 @@ This is ideal for event-driven processing, data pipelines or just generally reac
Dapr bindings allow you to:
* Receive events without including specific SDKs or libraries
* Replace bindings without changing your code
* Focus on business logic and not the event resource implementation
For more info on bindings, read [this](../../concepts/bindings/README.md) link.
- Receive events without including specific SDKs or libraries
- Replace bindings without changing your code
- Focus on business logic and not the event resource implementation
For a complete sample showing bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
@ -24,7 +22,7 @@ For a complete sample showing bindings, visit this [link](https://github.com/dap
An input binding represents an event resource that Dapr uses to read events from and push to your application.
For the purpose of this HowTo, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../reference/specs/bindings/README.md).
For the purpose of this HowTo, we'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref supported-bindings >}}).
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
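As a sketch, an input binding component for Kafka might look like this (broker, topic, and consumer group are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myevent  # Dapr delivers incoming events to your app at POST /myevent
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: localhost:9092  # placeholder Kafka broker address
  - name: topics
    value: topic1          # placeholder topic to read events from
  - name: consumerGroup
    value: group1          # placeholder consumer group
```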

View File

@ -6,4 +6,4 @@ weight: 60
description: See and measure the message calls across components and networked services
---
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept](/docs/concepts/observability/) in Dapr and for [operations guidance on monitoring](/docs/operations/monitoring/).
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).

View File

@ -86,17 +86,17 @@ spec:
## Log collectors
If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON formatted logs. This [how-to](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) shows how to configure Fluentd in your cluster.
If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON formatted logs. This [how-to]({{< ref fluentd.md >}}) shows how to configure Fluentd in your cluster.
If you are using the Azure Kubernetes Service, you can use the default OMS Agent to collect logs with Azure Monitor without needing to install Fluentd.
## Search engines
If you use [Fluentd](https://www.fluentd.org/), we recommend using Elasticsearch and Kibana. This [how-to](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) shows how to set up Elasticsearch and Kibana in your Kubernetes cluster.
If you use [Fluentd](https://www.fluentd.org/), we recommend using Elasticsearch and Kibana. This [how-to]({{< ref fluentd.md >}}) shows how to set up Elasticsearch and Kibana in your Kubernetes cluster.
If you are using the Azure Kubernetes Service, you can use [Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview) without installing any additional monitoring tools. Also read [How to enable Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-onboard)
## References
- [How-to: Set up Fluentd, Elasticsearch, and Kibana](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md)
- [How-to: Set up Azure Monitor in Azure Kubernetes Service](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
- [How-to: Set up Fluentd, Elasticsearch, and Kibana]({{< ref fluentd.md >}})
- [How-to: Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}})

View File

@ -37,6 +37,6 @@ Each Dapr system process emits Go runtime/process metrics by default and have th
## References
* [Howto: Run Prometheus locally](../../howto/setup-monitoring-tools/observe-metrics-with-prometheus-locally.md)
* [Howto: Set up Prometheus and Grafana for metrics](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md)
* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
* [Howto: Run Prometheus locally]({{< ref prometheus.md >}})
* [Howto: Set up Prometheus and Grafana for metrics]({{< ref grafana.md >}})
* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr]({{< ref azure-monitor.md >}})

View File

@ -7,13 +7,13 @@ description: Dapr sidecar health checks.
---
Dapr provides a way to determine its health using an HTTP /healthz endpoint.
With this endpoint, the Dapr process, or sidecar, can be probed for its health to determine its readiness and liveness. See the [health API](../../reference/api/health_api.md).
With this endpoint, the Dapr process, or sidecar, can be probed for its health to determine its readiness and liveness. See the [health API]({{< ref health_api.md >}}).
The Dapr `/healthz` endpoint can be used by health probes from the application hosting platform. This topic describes how Dapr integrates with probes from different hosting platforms.
When deploying Dapr to a hosting platform (for example, Kubernetes), the Dapr health endpoint is automatically configured for you; there is nothing you need to configure.
Note: Dapr actors also have a health API endpoint where Dapr probes the application for a response to a signal from Dapr that the actor application is healthy and running. See [actor health API](../../reference/api/actors_api.md#health-check)
Note: Dapr actors also have a health API endpoint where Dapr probes the application for a response to a signal from Dapr that the actor application is healthy and running. See [actor health API]({{< ref "actors_api.md#health-check" >}})
## Health endpoint: Integration with Kubernetes
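As a sketch, probes pointed at the sidecar's health endpoint look like the following (delay and period values are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /v1.0/healthz
    port: 3500  # default Dapr HTTP port
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /v1.0/healthz
    port: 3500
  initialDelaySeconds: 5
  periodSeconds: 10
```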
@ -84,6 +84,6 @@ readinessProbe:
For more information refer to:
- [Endpoint health API](../../reference/api/health_api.md)
- [Actor health API](../../reference/api/actors_api.md#health-check)
- [Endpoint health API]({{< ref health_api.md >}})
- [Actor health API]({{< ref "actors_api.md#health-check" >}})
- [Kubernetes probe configuration parameters](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)

View File

@ -1,7 +1,7 @@
---
type: docs
title: "How-To: Use W3C trace context with Dapr"
linkTitle: "Overview"
linkTitle: "How-To: Use W3C trace context"
weight: 20000
description: Using W3C tracing standard with Dapr
type: docs
@ -10,7 +10,7 @@ type: docs
# How to use trace context
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C distributed tracing]({{< ref w3c-tracing >}}) article to understand whether you need to propagate or create a trace context.
To view traces, read the [how to diagnose with tracing](../diagnose-with-tracing) article.
To view traces, read the [how to diagnose with tracing]({{< ref tracing.md >}}) article.
## How to retrieve trace context from a response
`Note: There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use http/gRPC clients to propagate and retrieve trace headers through http headers and gRPC metadata.`
@ -215,7 +215,7 @@ In Kubernetes, you can apply the configuration as below:
```
kubectl apply -f appconfig.yaml
```
You then set the following tracing annotation in your deployment YAML. You can add the following annotation in the sample [grpc app](../create-grpc-app) deployment yaml.
You then set the following tracing annotation in your deployment YAML. You can add the following annotation in the sample [grpc app]({{< ref grpc.md >}}) deployment yaml.
```yaml
dapr.io/config: "appconfig"
```
@ -223,13 +223,13 @@ dapr.io/config: "appconfig"
### Invoking Dapr with trace context
As mentioned in `Scenarios` section in [W3C Trace Context for distributed tracing](../../concepts/observability/W3C-traces.md) that Dapr covers generating trace context and you do not need to explicitly create trace context.
Dapr covers generating trace context and you do not need to explicitly create trace context.
However, if you choose to pass the trace context explicitly, then Dapr uses the passed trace context and propagates it across the HTTP/gRPC calls.
Using the [grpc app](../create-grpc-app) in the example and putting this all together, the following steps show you how to create a Dapr client and call the InvokeService method passing the trace context:
Using the [grpc app]({{< ref grpc.md >}}) in the example and putting this all together, the following steps show you how to create a Dapr client and call the InvokeService method passing the trace context:
For the rest of the code snippet and details, refer to the [grpc app](../create-grpc-app).
For the rest of the code snippet and details, refer to the [grpc app]({{< ref grpc >}}).
### 1. Import the package
@ -289,10 +289,10 @@ You can now correlate the calls in your app and across services with Dapr using
## Related Links
* [Observability concepts](../../concepts/observability/traces.md)
* [W3C Trace Context for distributed tracing](../../concepts/observability/W3C-traces.md)
* [How to set up Application Insights for distributed tracing](../../howto/diagnose-with-tracing/azure-monitor.md)
* [How to set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md)
* [Observability concepts]({{< ref observability-concept.md >}})
* [W3C Trace Context for distributed tracing]({{< ref w3c-tracing >}})
* [How to set up Application Insights for distributed tracing]({{< ref azure-monitor.md >}})
* [How to set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
* [W3C trace context specification](https://www.w3.org/TR/trace-context/)
* [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability)

View File

@ -69,7 +69,7 @@ In these scenarios Dapr does some of the work for you and you need to either cre
In this case, when service A first calls service B, Dapr generates the trace headers in service A, and these trace headers are then propagated to service B. These trace headers are returned in the response from service B as part of response headers. However you need to propagate the returned trace context to the next services, service C and Service D, as Dapr does not know you want to reuse the same header.
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context](../../howto/use-w3c-tracecontext/README.md) article.
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context]({{< ref w3c-tracing >}}) article.
2. You have chosen to generate your own trace context headers.
This is much more unusual. There may be occasions where you specifically choose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done:
@ -105,8 +105,7 @@ The tracestate fields are detailed [here](https://www.w3.org/TR/trace-context/#t
In the gRPC API calls, trace context is passed through `grpc-trace-bin` header.
## Related Links
* [How To set up Application Insights for distributed tracing](../../howto/diagnose-with-tracing/azure-monitor.md)
* [How To set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md)
* [How to use Trace Context](../../howto/use-w3c-tracecontext)
* [How To set up Application Insights for distributed tracing]({{< ref azure-monitor.md >}})
* [How To set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
* [W3C trace context specification](https://www.w3.org/TR/trace-context/)
* [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability)

View File

@ -23,7 +23,7 @@ See [Setup secret stores](https://github.com/dapr/docs/tree/master/howto/setup-s
Instead of including credentials directly within a Dapr component file, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.
For more information read [Referencing Secret Stores in Components](./component-secrets.md)
For more information read [Referencing Secret Stores in Components]({{< ref component-secrets.md >}})
## Using secrets in your application

View File

@ -7,11 +7,11 @@ description: "Use scoping to limit the secrets that can be read from secret stor
type: docs
---
Follow [these instructions](../setup-secret-store) to configure a secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
Follow [these instructions]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting the existing configuration CRD with restrictive permissions.
Follow [these instructions](../../concepts/configuration/README.md) to define a configuration CRD.
Follow [these instructions]({{< ref configuration-concept.md >}}) to define a configuration CRD.
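As a sketch, such a configuration has the following shape (the store name is a placeholder); the scenarios below fill in the permission fields:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
    - storeName: kubernetes  # placeholder secret store name
      defaultAccess: allow   # or deny
```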
## Scenario 1: Deny access to all secrets for a secret store
@ -31,7 +31,7 @@ spec:
```yaml
defaultAccess: deny
```
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions](../configure-k8s/README.md), and add the following annotation to the application pod.
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod.
```yaml
dapr.io/config: appconfig
```
@ -56,7 +56,7 @@ spec:
```yaml
allowedSecrets: ["secret1", "secret2"]
```
This example defines configuration for the secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar.
This example defines configuration for the secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Scenario 3: Deny access to certain sensitive secrets in a secret store
@ -75,7 +75,7 @@ spec:
```yaml
deniedSecrets: ["secret1", "secret2"]
```
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar.
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Permission priority

View File

@ -15,7 +15,7 @@ This frees developers from difficult state coordination, conflict resolution and
A state store component represents a resource that Dapr uses to communicate with a database.
For the purpose of this guide, we'll use a Redis state store.
See a list of supported state stores [here](../setup-state-store/supported-state-stores.md).
See a list of supported state stores [here]({{< ref supported-state-stores >}}).
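A minimal sketch of a Redis state store component (host and password values are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379  # placeholder Redis address
  - name: redisPassword
    value: ""              # placeholder; prefer a secret reference in production
```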
### Using the Dapr CLI
@ -24,7 +24,7 @@ To change the state store being used, replace the YAML under `/components` with
### Kubernetes
See the instructions [here](../setup-state-store) on how to set up different state stores on Kubernetes.
See the instructions [here]({{< ref setup-state-store >}}) on how to set up different state stores on Kubernetes.
## Strong and Eventual consistency

View File

@ -6,7 +6,7 @@ weight: 1000
description: "Use Azure Cosmos DB as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
> **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).

View File

@ -6,7 +6,7 @@ weight: 2000
description: "Use Redis as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
>**NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.

View File

@ -6,7 +6,7 @@ weight: 3000
description: "Use SQL server as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
## 1. Connect to SQL Server

View File

@ -80,14 +80,14 @@ Optionally, you may also create a new entry for a sidecar tool that can be reuse
Now, create or edit the run configuration for the application to be debugged. It can be found in the menu next to the `main()` function.
![Edit run configuration menu](../../images/intellij_debug_menu.png)
![Edit run configuration menu](/images/intellij_debug_menu.png)
Now, add the program arguments and environment variables. These need to match the ports defined in the entry in 'External Tool' above.
* Command line arguments for this example: `-p 3000`
* Environment variables for this example: `DAPR_HTTP_PORT=3005;DAPR_GRPC_PORT=52000`
![Edit run configuration](../../images/intellij_edit_run_configuration.png)
![Edit run configuration](/images/intellij_edit_run_configuration.png)
## Start debugging
@ -95,11 +95,11 @@ Once the one-time config above is done, there are two steps required to debug a
1. Start `dapr` via `Tools` -> `External Tool` in IntelliJ.
![Run dapr as 'External Tool'](../../images/intellij_start_dapr.png)
![Run dapr as 'External Tool'](/images/intellij_start_dapr.png)
2. Start your application in debug mode.
![Start application in debug mode](../../images/intellij_debug_app.png)
![Start application in debug mode](/images/intellij_debug_app.png)
## Wrapping up

View File

@ -5,11 +5,11 @@ linkTitle: "Autoscale"
weight: 2000
---
Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components](../../concepts/publish-subscribe-messaging), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge), the autoscaling of Dapr applications is managed by the hosting layer.
Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge), the autoscaling of Dapr applications is managed by the hosting layer.
For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda) so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on the back pressure using KEDA.
This how-to walks through the configuration of a scalable Dapr application along with back pressure on a Kafka topic; however, you can apply this approach to any [pub/sub components](../../concepts/publish-subscribe-messaging) offered by Dapr.
This how-to walks through the configuration of a scalable Dapr application along with back pressure on a Kafka topic; however, you can apply this approach to any [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
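As a sketch, a KEDA `ScaledObject` that scales a Dapr-enabled deployment based on Kafka consumer lag might look like this (the API version, names, and threshold are assumptions to adapt to your KEDA version):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaler
spec:
  scaleTargetRef:
    name: myapp  # hypothetical deployment to scale
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: localhost:9092  # placeholder broker address
      consumerGroup: group1             # placeholder consumer group
      topic: topic1                     # placeholder topic
      lagThreshold: "5"                 # scale out when consumer lag exceeds this
```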
## Install KEDA

View File

@ -231,5 +231,5 @@ You can use Dapr with any language supported by Protobuf, and not just with the
Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool you can generate the Dapr clients for other languages like Ruby, C++, Rust and others.
## Related Topics
* [Service invocation concepts](../../concepts/service-invocation/README.md)
* [Service invocation API specification](../../reference/api/service_invocation_api.md)
- [Service invocation building block]({{< ref service-invocation >}})
- [Service invocation API specification]({{< ref service_invocation_api.md >}})

View File

@ -121,7 +121,7 @@ That's it! Now go to the [Configuration](#configuration) section
## Configuration
Dapr can use Redis as a `statestore` component for state persistence (`state.redis`) or as a `pubsub` component (`pubsub.redis`). The following yaml files demonstrate how to define each component using either a secretKey reference (which is preferred) or a plain text password. **Note:** In a production-grade application, follow [secret management](../../concepts/secrets/README.md) instructions to securely manage your secrets.
Dapr can use Redis as a `statestore` component for state persistence (`state.redis`) or as a `pubsub` component (`pubsub.redis`). The following yaml files demonstrate how to define each component using either a secretKey reference (which is preferred) or a plain text password. **Note:** In a production-grade application, follow [secret management]({{< ref secrets >}}) instructions to securely manage your secrets.
### Configuring Redis for state persistence using a secret key reference (preferred)
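A sketch of that approach (the secret store, secret name, and key are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis:6379        # placeholder Redis address
  - name: redisPassword
    secretKeyRef:
      name: redis-secret     # placeholder secret name
      key: redis-password    # placeholder key within the secret
auth:
  secretStore: kubernetes    # placeholder secret store that holds the secret
```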

View File

@ -14,11 +14,11 @@ Dapr is a portable, event-driven runtime that makes it easy for enterprise devel
* **Components** encapsulate the implementation for a building block API. Example implementations for the state building block may include Redis, Azure Storage, Azure Cosmos DB, and AWS DynamoDB. Many of the components are pluggable so that one implementation can be swapped out for another.
To learn more, see [Dapr Concepts](/docs/concepts).
To learn more, see [Dapr Concepts]({{< ref concepts >}}).
## Set up the development environment
Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally and on Kubernetes](/docs/concepts/getting-started/install-dapr).
Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally and on Kubernetes]({{< ref install-dapr.md >}}).
## Next steps

View File

@ -126,13 +126,13 @@ Dapr installs the following pods:
You can install Dapr on any Kubernetes cluster. Here are some helpful links:
- [Setup Minikube Cluster](./cluster/setup-minikube.md)
- [Setup Azure Kubernetes Service Cluster](./cluster/setup-aks.md)
- [Setup Minikube Cluster]({{< ref setup-minikube.md >}})
- [Setup Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}})
- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
> **Note:** Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes, but most users should not need to.
> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](../howto/windows-k8s/)
> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref kubernetes-hybrid-clusters >}})
### Using the Dapr CLI
@ -221,7 +221,7 @@ dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s
#### Sidecar annotations
To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this](../howto/configure-k8s/README.md) how to guide.
To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this]({{< ref kubernetes >}}) guide.
#### Uninstall Dapr on Kubernetes
@ -234,4 +234,4 @@ helm uninstall dapr -n dapr-system
> **Note:** See [here](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr helm charts.
### Installing Redis on Kubernetes
To install Redis as a state store or as a pub/sub message bus into your Kubernetes cluster, see [Configure Redis for state management or pub/sub](../howto/configure-redis/readme.md).
To install Redis as a state store or as a pub/sub message bus into your Kubernetes cluster, see [Configure Redis for state management or pub/sub]({{< ref setup-redis-pubsub.md >}}).

View File

@ -5,7 +5,7 @@ linkTitle: "GCP Secret Manager"
description: Detailed information on the GCP Secret Manager secret store component
---
This document shows how to enable the GCP Secret Manager secret store using the [Dapr Secrets Component](../../concepts/secrets/README.md) for self-hosted and Kubernetes mode.
This document shows how to enable the GCP Secret Manager secret store using the [Dapr Secrets Component]({{< ref secrets >}}) for self-hosted and Kubernetes mode.
## Setup GCP Secret Manager instance

View File

@ -13,7 +13,7 @@ Dapr can use any Redis instance - containerized, running on your local dev machi
{{< tabs "Self-Hosted" "Kubernetes" "Azure" "AWS" "GCP" >}}
{{% codetab %}}
[Content for Tab1]
A Redis instance is automatically created as a Docker container when you run `dapr init`.
{{% /codetab %}}
{{% codetab %}}

View File

@ -69,7 +69,7 @@ spec:
```yaml
defaultAccess: deny
```
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions](../configure-k8s/README.md), and add the following annotation to the application pod.
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview >}}), and add the following annotation to the application pod.
```yaml
dapr.io/config: appconfig
```
@ -94,7 +94,7 @@ spec:
```yaml
allowedSecrets: ["secret1", "secret2"]
```
This example defines configuration for the secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar.
This example defines configuration for the secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.
### Scenario 3: Deny access to certain sensitive secrets in a secret store
@ -113,4 +113,4 @@ spec:
```yaml
deniedSecrets: ["secret1", "secret2"]
```
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar.
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.

View File

@ -10,7 +10,7 @@ This article provides guidance on running Dapr in self-hosted mode without Docke
## Prerequisites
- [Dapr CLI](../../getting-started/environment-setup.md#installing-dapr-cli)
- [Dapr CLI]({{< ref "install-dapr.md#installing-dapr-cli" >}})
## Initialize Dapr without containers
@ -29,7 +29,7 @@ See [this sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) f
## Enabling state management or pub/sub
See configuring Redis in self-hosted mode [without Docker](../../howto/configure-redis/README.md#Self-Hosted-Mode-without-Containers) to enable a local state store or pub/sub broker for messaging.
See configuring Redis in self-hosted mode [without Docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.
## Enabling actors

View File

@ -215,7 +215,7 @@ That's it! There's no need to include any SDKs or instrument your application code.
Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
![Application map](../../images/azure-monitor.png)
![Application map](/images/azure-monitor.png)
> **NOTE**: Only operations going through the Dapr API exposed by the Dapr sidecar (e.g. service invocation or event publishing) will be displayed in the Application Map topology. Direct service invocations (not going through the Dapr API) will not be shown.
@ -240,4 +240,4 @@ set `samplingRate : "0"` in the configuration. The valid range of samplingRate i
## References
* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md)
* [How-To: Use W3C Trace Context for distributed tracing]({{< ref w3c-tracing-howto >}})

View File

@ -130,7 +130,7 @@ You can find `grafana-actor-dashboard.json`, `grafana-sidecar-dashboard.json` an
## References
* [Set up Prometheus and Grafana](./setup-prometheus-grafana.md)
* [Set up Prometheus and Grafana]({{< ref grafana.md >}})
* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus)
* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator)

View File

@ -158,7 +158,7 @@ If `concurrency` is not set, it is sent out sequentially (the example below shows
This endpoint lets you invoke a Dapr output binding.
Dapr bindings support various operations, such as `create`.
See the [different specs](../specs/bindings) on each binding to see the list of supported operations.
See the [different specs]({{< ref supported-bindings >}}) on each binding to see the list of supported operations.
### HTTP Request