Migrate to new Dapr docs site (#880)

* Initial configuration of hugo and docsy

* ci: add Azure Static Web Apps workflow file
on-behalf-of: @Azure opensource@microsoft.com

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Testing file

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Updated overview structure

* Updated git ignore for package lock

* Removed generated scss

* Update package lock

* Add README

* Update install process

* Move How-To Guides into Docs

* Add front matter

* Add Overview

* Adjust Table of Contents sizing

* Add back overview

* Update azure-static-web-apps-agreeable-mushroom-00aa4761e.yml

* Rename Guide to HowTo

* Allow unsafe HTML for custom images

* Move images into static repo

* Darken TOC

* Add setup docsy step

* Add Algolia information

* Update docs with new hierarchy

* Add Algolia information

* Merge master

* Updating directory structure

* Integrating deploying folder into operations

* Moved building blocks

* Conceptual docs updates (#845)

* Update title

* Update to conceptual docs

* Operations section (#846)

* Added FAQ on perf benchmark (#839)

* Added FAQ on perf benchmark

* Typos

* Update FAQ.md

Co-authored-by: Yaron Schneider <yaronsc@microsoft.com>

* Move images

* Move static docs

* Update toc template

* Update operations section

* Update Actions Workflow

Co-authored-by: Mark Fussell <mfussell@microsoft.com>
Co-authored-by: Yaron Schneider <yaronsc@microsoft.com>

* Developing applications section [hugo-docs] (#847)

* Developing applications section

* Changing capital letters in getting started section

* Remove redundant getting started title

* Updating theme and configuration (#852)

* Enable robots, Update versions, add links

* Update left column minimum width

* Add Dapr docs image

* Update spacing for list pages

* Add home image

* Moving component setup HowTos to operations section

* Removing moved files from OLD

* Update Operations (#853)

* Enable robots, Update versions, add links

* Update left column minimum width

* Add Dapr docs image

* Update spacing for list pages

* Add home image

* Update hosting references

* Remove extra width

* Add favicons

* Update configuration section (#854)

* Enable robots, Update versions, add links

* Update left column minimum width

* Add Dapr docs image

* Update spacing for list pages

* Add home image

* Update hosting references

* Remove extra width

* Add favicons

* Update configuration section

* Fix configuration reference

* Adding headers for secrets and state components

* Complete operations section (#856)

* Enable robots, Update versions, add links

* Update left column minimum width

* Add Dapr docs image

* Update spacing for list pages

* Add home image

* Update hosting references

* Remove extra width

* Add favicons

* Update configuration section

* Fix configuration reference

* Complete operations section

* Integrating final readme files from OLD

* Final update (#858)

* Enable robots, Update versions, add links

* Update left column minimum width

* Add Dapr docs image

* Update spacing for list pages

* Add home image

* Update hosting references

* Remove extra width

* Add favicons

* Update configuration section

* Fix configuration reference

* Complete operations section

* Moved docs to top level folder

* Disable taxonomies

* Remove duplicate ToC

* Added information on disabling categories and tags

* Removed manual ToC

* Add CLI title (#859)

* Update build command (#860)

* Add CLI title

* Update build command

* Add env variables

* Update version

* Remove extra v

* Add Google Analytics and Algolia

* Move Algolia search code

* Fix Algolia info

* Update Algolia search

* Update theme

* Enable debugging for Algolia

* Enable debugging for Algolia

* Delete tabs

* Temporarily disable Algolia

* Add shortcodes for tabs

* Update Dapr CLI instructions to use shortcodes

* Override max_width for code blocks

* Add authoring guide

* Update readme with authoring guide link

* Fixed spacing

* Update images

* Update service invocation

* Update building block descriptions

* Update wording

* Update state management building blocks

* Move example

* Updated state docs

* Pub/Sub Updates

* Pub/sub updates

* CLI reference

* API reference

* Contributing

* Binding specs

* Service invocation update

* Update reference page

* Small actors fixes

* Enable search

* Fix ref

* Fix small typo

* Correct letter

* Fix caps

* Fix caps

* Rename logs

* Fixing links in concepts section

* Renaming cli/overview.md to avoid conflict with general Dapr overview file name

* Update Operations

* Component setups

* Fixing links in Getting started section

* Update references

* Fix references

* Fix link

* Partial fixes to developing applications

* Adding code of conduct link to main README

* Update app insights tabs

* Bold headers

* Update getting started guide

* Update titles

* Update branch name

* ci: add Azure Static Web Apps workflow file
on-behalf-of: @Azure opensource@microsoft.com

* Update azure-static-web-apps-green-hill-0d7377310.yml

* Delete hugo-docs info

* Update publish cli command

* Update pubsub HowTo

* Updating contributing section

* Update README.md

Co-authored-by: Azure Static Web Apps <opensource@microsoft.com>
Co-authored-by: Ori Zohar <orzohar@microsoft.com>
Co-authored-by: Mark Fussell <mfussell@microsoft.com>
Co-authored-by: Yaron Schneider <yaronsc@microsoft.com>
Aaron Crawfis, 2020-10-26 11:37:44 -07:00, committed by GitHub
parent 615e7968fb
commit a234e10fa3
370 changed files with 10328 additions and 6728 deletions

azure-static-web-apps-green-hill-0d7377310.yml Normal file
@@ -0,0 +1,51 @@
name: Azure Static Web Apps CI/CD
on:
  push:
    branches:
      - website
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - website
jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
      - name: Setup Docsy
        run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        env:
          HUGO_ENV: production
          HUGO_VERSION: "0.74.3"
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "daprdocs" # App source code path
          api_location: "api" # Api source code path - optional
          app_artifact_location: 'public' # Built app content directory - optional
          app_build_command: "hugo"
          ###### End of Repository/Build Configurations ######
  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
          action: "close"

.gitignore vendored
@@ -1,2 +1,5 @@
# Visual Studio 2015/2017/2019 cache/options directory
.vs/
node_modules/
daprdocs/public
daprdocs/resources/_gen

.gitmodules vendored Normal file
@@ -0,0 +1,3 @@
[submodule "daprdocs/themes/docsy"]
	path = daprdocs/themes/docsy
	url = https://github.com/google/docsy.git

CONTRIBUTING.md Normal file
@@ -0,0 +1,3 @@
# Contributing to Dapr docs
Please see [this docs section](https://docs.dapr.io/contributing/) for general guidance on contributions to the Dapr project as well as specific guidelines on contributions to the docs repo.

README.md
@@ -1,49 +1,63 @@
# 📖 Dapr documentation
# Dapr documentation
Welcome to the Dapr documentation repository. You can learn more about Dapr from the links below.
If you are looking to explore the Dapr documentation, please go to the documentation website:
[**https://docs.dapr.io**](https://docs.dapr.io)
## Document versions
This repo contains the markdown files which generate the above website. See below for guidance on running with a local environment to contribute to the docs.
Dapr is currently under community development in a preview phase, and the master branch could include breaking changes. Therefore, please ensure that you refer to the right version of the documents for your Dapr runtime version.
## Contribution guidelines
| Version | Repo |
|:-------:|:----:|
| v0.11.0 | [Docs](https://github.com/dapr/docs/tree/v0.11.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.11.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.11.0) - [CLI](https://github.com/dapr/cli/tree/release-0.11)
| v0.10.0 | [Docs](https://github.com/dapr/docs/tree/v0.10.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.10.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.10.0) - [CLI](https://github.com/dapr/cli/tree/release-0.10)
| v0.9.0 | [Docs](https://github.com/dapr/docs/tree/v0.9.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.9.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.9.0) - [CLI](https://github.com/dapr/cli/tree/release-0.9)
| v0.8.0 | [Docs](https://github.com/dapr/docs/tree/v0.8.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.8.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.8.0) - [CLI](https://github.com/dapr/cli/tree/release-0.8)
| v0.7.0 | [Docs](https://github.com/dapr/docs/tree/v0.7.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.7.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.7.0) - [CLI](https://github.com/dapr/cli/tree/release-0.7)
| v0.6.0 | [Docs](https://github.com/dapr/docs/tree/v0.6.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.6.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.6.0) - [CLI](https://github.com/dapr/cli/tree/release-0.6)
| v0.5.0 | [Docs](https://github.com/dapr/docs/tree/v0.5.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.5.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.5.0) - [CLI](https://github.com/dapr/cli/tree/release-0.5)
| v0.4.0 | [Docs](https://github.com/dapr/docs/tree/v0.4.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.4.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.4.0) - [CLI](https://github.com/dapr/cli/tree/release-0.4)
| v0.3.0 | [Docs](https://github.com/dapr/docs/tree/v0.3.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.3.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.3.0) - [CLI](https://github.com/dapr/cli/tree/release-0.3)
| v0.2.0 | [Docs](https://github.com/dapr/docs/tree/v0.2.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.2.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.2.0) - [CLI](https://github.com/dapr/cli/tree/release-0.2)
| v0.1.0 | [Docs](https://github.com/dapr/docs/tree/v0.1.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.1.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.1.0) - [CLI](https://github.com/dapr/cli/tree/release-0.1)
Before making your first contribution, make sure to review the [contributing section](http://docs.dapr.io/contributing/) in the docs.
## Contents
## Overview
| Topic | Description |
|-------|-------------|
|**[Overview](./overview)** | An overview of Dapr and how it enables you to build event driven, distributed applications
|**[Getting Started](./getting-started)** | Set up your development environment
|**[Concepts](./concepts)** | Dapr concepts explained
|**[How-Tos](./howto)** | Guides explaining how to accomplish specific tasks
|**[Best Practices](./best-practices)** | Guides explaining best practices when using Dapr
|**[Reference](./reference)** | API and bindings reference documentation
|**[FAQ](FAQ.md)** | Frequently asked questions
The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme, hosted on an [Azure Static Web App](https://docs.microsoft.com/en-us/azure/static-web-apps/overview).
## Further documentation
The [daprdocs](./daprdocs) directory contains the Hugo project, markdown files, and theme configurations.
| Area | Description |
|------|-------------|
| **[Command Line Interface (CLI)](https://github.com/dapr/cli)** | The Dapr CLI allows you to set up Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, and launches and manages Dapr instances.
| **[Dapr Runtime](https://github.com/dapr/dapr)** | Dapr runtime code and overview documentation.
| **[Components Contribution](https://github.com/dapr/components-contrib)** | Open, community driven reusable components for building distributed applications.
| **SDKs** | - [Go SDK](https://github.com/dapr/go-sdk)<br>- [Java SDK](https://github.com/dapr/java-sdk)<br>- [Javascript SDK](https://github.com/dapr/js-sdk)<br>- [Python SDK](https://github.com/dapr/python-sdk)<br>- [.NET SDK](https://github.com/dapr/dotnet-sdk)<br>- [C++ SDK](https://github.com/dapr/cpp-sdk)<br>- [RUST SDK](https://github.com/dapr/rust-sdk)<br><br>**Note:** Dapr is language agnostic and provides a [RESTful HTTP API](./reference/api/README.md) in addition to the protobuf clients.
| **Frameworks** | - [Workflows](https://github.com/dapr/workflows)<br>- [Azure Functions extension](https://github.com/dapr/azure-functions-extension)<br>
| **[Dapr Presentations](./presentations)** | Previous Dapr presentations and information on how to give your own Dapr presentation.
## Pre-requisites
- [Hugo extended version](https://gohugo.io/getting-started/installing)
- [Node.js](https://nodejs.org/en/)
## Environment setup
1. Ensure pre-requisites are installed
2. Clone this repository
```sh
git clone https://github.com/dapr/docs.git
```
3. Change to the daprdocs directory:
```sh
cd daprdocs
```
4. Add Docsy submodule:
```sh
git submodule add https://github.com/google/docsy.git themes/docsy
```
5. Update submodules:
```sh
git submodule update --init --recursive
```
6. Install npm packages:
```sh
npm install
```
## Run local server
1. Make sure you're still in the `daprdocs` directory
2. Run
```sh
hugo server --disableFastRender
```
3. Navigate to `http://localhost:1313/docs`
## Update docs
1. Create new branch
1. Commit and push changes to content
1. Submit pull request to `master`
1. A staging site is automatically created and linked to the PR for review and testing (see the sketch of this flow below)
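A minimal sketch of that flow (the branch name and commit message are illustrative):
```sh
# Create a working branch for your change
git checkout -b update-state-docs

# Commit and push the content changes
git add .
git commit -m "Update state management docs"
git push origin update-state-docs

# Then open a pull request against master from the pushed branch on GitHub
```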
## Code of Conduct
Please refer to our [Dapr Community Code of Conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md)
Please refer to our [Dapr community code of conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md).

@@ -1,3 +0,0 @@
# documentation
Content for this file to be added

@@ -1,11 +0,0 @@
# Debugging and Troubleshooting
This section describes different tools, techniques and common problems to help users debug and diagnose issues with Dapr.
1. [Logs](logs.md)
2. [Tracing and diagnostics](tracing.md)
3. [Profiling and debugging](profiling_debugging.md)
4. [Common issues](common_issues.md)
Please open a new Bug or Feature Request item on our [issues section](https://github.com/dapr/dapr/issues) if you've encountered a problem running Dapr.
If a security vulnerability has been found, contact the [Dapr team](mailto:daprct@microsoft.com).

@@ -1,73 +0,0 @@
# Dapr concepts
The goal of these topics is to provide an understanding of the key concepts used in the Dapr documentation.
## Contents
- [Building blocks](#building-blocks)
- [Components](#components)
- [Configuration](#configuration)
- [Secrets](#secrets)
- [Hosting environments](#hosting-environments)
## Building blocks
A [building block](./architecture/building_blocks.md) is an HTTP or gRPC API that can be called from user code and uses one or more Dapr components. Dapr consists of a set of building blocks, with extensibility to add new building blocks.
The diagram below shows how building blocks expose a public API that is called from your code, using components to implement the building blocks' capability.
<img src="../images/concepts-building-blocks.png" width=250>
The following are the building blocks provided by Dapr:
<img src="../images/building_blocks.png" width=800>
| Building Block | Endpoint | Description |
|----------------|----------|-------------|
| [**Service-to-Service Invocation**](./service-invocation/README.md) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of HTTP or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**State Management**](./state-management/README.md) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state API with pluggable state stores for persistence.
| [**Publish and Subscribe**](./publish-subscribe-messaging/README.md) | `/v1.0/publish` `/v1.0/subscribe` | Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Resource Bindings**](./bindings/README.md)| `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**](./actors/README.md) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern, which provides a single-threaded programming model and garbage-collects actors when not in use. See the [Actor overview](./actors#understanding-actors).
| [**Observability**](./observability/README.md) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
| [**Secrets**](./secrets/README.md) | `/v1.0/secrets` | Dapr offers a secrets building block API and integrates with secret stores such as Azure Key Vault and Kubernetes to store the secrets. Service code can call the secrets API to retrieve secrets out of the Dapr supported secret stores.
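Because each building block is exposed over HTTP (or gRPC) by the Dapr sidecar, any language with an HTTP client can call it. As a minimal sketch, assuming a sidecar listening on the default port 3500 and a state store component named `statestore`:

```sh
# Save a key/value pair through the state building block
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order-1", "value": { "status": "pending" } }]'

# Read the value back
curl http://localhost:3500/v1.0/state/statestore/order-1
```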
## Components
Dapr uses a modular design where functionality is delivered as a component. Each component has an interface definition. All of the components are pluggable so that you can swap out one component with the same interface for another. The [components contrib repo](https://github.com/dapr/components-contrib) is where you can contribute implementations for the component interfaces and extend Dapr's capabilities.
A building block can use any combination of components. For example, the [actors](./actors) building block and the state management building block both use state components. As another example, the pub/sub building block uses [pub/sub](./publish-subscribe-messaging/README.md) components.
You can get a list of the components available in the current hosting environment using the `dapr components` CLI command.
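For example, on a Kubernetes cluster (the `-k` flag targets Kubernetes mode):

```sh
# List the Dapr components deployed to the cluster
dapr components -k
```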
The following are the component types provided by Dapr:
* [Service discovery](https://github.com/dapr/components-contrib/tree/master/nameresolution)
* [State](https://github.com/dapr/components-contrib/tree/master/state)
* [Pub/sub](https://github.com/dapr/components-contrib/tree/master/pubsub)
* [Bindings](https://github.com/dapr/components-contrib/tree/master/bindings)
* [Middleware](https://github.com/dapr/components-contrib/tree/master/middleware)
* [Secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores)
* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters)
### Service invocation and service discovery components
Service discovery components are used with the [Service Invocation](./service-invocation/README.md) building block to integrate with the hosting environment and provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, while self hosted mode uses mDNS.
### Service invocation and middleware components
Dapr allows custom [**middleware**](./middleware/README.md) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [Service Invocation](./service-invocation/README.md) building block.
### Secret store components
In Dapr, a [**secret**](./secrets/README.md) is any piece of private information that you want to guard against unwanted access. Secret stores, used to store secrets, are Dapr components and can be used by any of the building blocks.
## Configuration
Dapr [Configuration](./configuration/README.md) defines a policy that affects how any Dapr sidecar instance behaves, such as using [distributed tracing](./observability/traces.md) or a [middleware component](./middleware/README.md). Configuration can be applied to Dapr sidecar instances dynamically.
You can get a list of the configurations available in the current hosting environment using the `dapr configurations` CLI command.
## Hosting environments
Dapr can run on multiple hosting platforms. The supported hosting platforms are:
* [**Self hosted**](./hosting/README.md#running-dapr-on-a-local-developer-machine-in-standalone-mode). Dapr runs on a single machine either as a process or in a container. Used for local development or for running on a single machine.
* [**Kubernetes**](./hosting/README.md#running-dapr-in-kubernetes-mode). Dapr runs on any Kubernetes cluster either from a cloud provider or on-premises.

@@ -1,3 +0,0 @@
# Dapr architecture
- [Building Blocks](./building_blocks.md)

@@ -1,15 +0,0 @@
# Building blocks
Dapr consists of a set of building blocks that can be called from any programming language through Dapr HTTP or gRPC APIs. These building blocks address common challenges in building resilient microservices applications, and they capture and share best practices and patterns that empower distributed application developers.
![Dapr building blocks](../../images/overview.png)
## Anatomy of a building block
Both the Dapr spec and the Dapr runtime are designed to be extensible to include new building blocks. A building block comprises the following artifacts:
* Dapr spec API definition. A newly proposed building block shall have its API design incorporated into the Dapr spec.
* Components. A building block may reuse existing [Dapr components](../README.md#components), or introduce new components.
* Test suites. A new building block implementation should come with associated unit tests and end-to-end scenario tests.
* Documents and samples.

@@ -1,99 +0,0 @@
# Bindings
Using bindings, you can trigger your app with events coming in from external systems, or invoke external systems. This building block provides several benefits for you and your code:
* Remove the complexities of connecting to, and polling from, messaging systems such as queues and message buses
* Focus on business logic and not implementation details of how to interact with a system
* Keep your code free from SDKs or libraries
* Handle retries and failure recovery
* Switch between bindings at run time
* Build portable applications where environment-specific bindings are set up and no code changes are required
As a specific example, bindings allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, or worrying about polling from Twilio (or using WebSockets, etc.).
Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings).
## Supported bindings and specs
Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.
### Generic
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [APNs](../../reference/specs/bindings/apns.md) | | ✅ | Experimental |
| [Cron (Scheduler)](../../reference/specs/bindings/cron.md) | ✅ | ✅ | Experimental |
| [HTTP](../../reference/specs/bindings/http.md) | | ✅ | Experimental |
| [InfluxDB](../../reference/specs/bindings/influxdb.md) | | ✅ | Experimental |
| [Kafka](../../reference/specs/bindings/kafka.md) | ✅ | ✅ | Experimental |
| [Kubernetes Events](../../reference/specs/bindings/kubernetes.md) | ✅ | | Experimental |
| [MQTT](../../reference/specs/bindings/mqtt.md) | ✅ | ✅ | Experimental |
| [PostgreSQL](../../reference/specs/bindings/postgres.md) | | ✅ | Experimental |
| [RabbitMQ](../../reference/specs/bindings/rabbitmq.md) | ✅ | ✅ | Experimental |
| [Redis](../../reference/specs/bindings/redis.md) | | ✅ | Experimental |
| [Twilio](../../reference/specs/bindings/twilio.md) | | ✅ | Experimental |
| [Twitter](../../reference/specs/bindings/twitter.md) | ✅ | ✅ | Experimental |
| [SendGrid](../../reference/specs/bindings/sendgrid.md) | | ✅ | Experimental |
### Amazon Web Service (AWS)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [AWS DynamoDB](../../reference/specs/bindings/dynamodb.md) | | ✅ | Experimental |
| [AWS S3](../../reference/specs/bindings/s3.md) | | ✅ | Experimental |
| [AWS SNS](../../reference/specs/bindings/sns.md) | | ✅ | Experimental |
| [AWS SQS](../../reference/specs/bindings/sqs.md) | ✅ | ✅ | Experimental |
| [AWS Kinesis](../../reference/specs/bindings/kinesis.md) | ✅ | ✅ | Experimental |
### Google Cloud Platform (GCP)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [GCP Cloud Pub/Sub](../../reference/specs/bindings/gcppubsub.md) | ✅ | ✅ | Experimental |
| [GCP Storage Bucket](../../reference/specs/bindings/gcpbucket.md) | | ✅ | Experimental |
### Microsoft Azure
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Azure Blob Storage](../../reference/specs/bindings/blobstorage.md) | | ✅ | Experimental |
| [Azure EventHubs](../../reference/specs/bindings/eventhubs.md) | ✅ | ✅ | Experimental |
| [Azure CosmosDB](../../reference/specs/bindings/cosmosdb.md) | | ✅ | Experimental |
| [Azure Service Bus Queues](../../reference/specs/bindings/servicebusqueues.md) | ✅ | ✅ | Experimental |
| [Azure SignalR](../../reference/specs/bindings/signalr.md) | | ✅ | Experimental |
| [Azure Storage Queues](../../reference/specs/bindings/storagequeues.md) | ✅ | ✅ | Experimental |
| [Azure Event Grid](../../reference/specs/bindings/eventgrid.md) | ✅ | ✅ | Experimental |
## Input bindings
Input bindings are used to trigger your application when an event from an external resource has occurred.
An optional payload and metadata might be sent with the request.
In order to receive events from an input binding:
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events
> On startup Dapr sends an `OPTIONS` request to the application for all defined input bindings, and expects a status code other than `NOT FOUND (404)` if the application wants to subscribe to the binding.
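As a minimal sketch of step 1, the component below declares a RabbitMQ input binding. The component name (`myqueue`), connection values, and metadata fields are illustrative and vary per binding type; Dapr delivers incoming events to your application on an endpoint matching the component name (here, `POST /myqueue`).

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myqueue
spec:
  type: bindings.rabbitmq
  metadata:
    - name: queueName      # queue to consume from (illustrative)
      value: "orders"
    - name: host           # AMQP connection string (illustrative)
      value: "amqp://localhost:5672"
    - name: durable
      value: "true"
```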
Read the [Create an event-driven app using input bindings](../../howto/trigger-app-with-input-binding) section to get started with input bindings.
## Output bindings
Output bindings allow users to invoke external resources.
An optional payload and metadata can be sent with the invocation request.
In order to invoke an output binding:
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload, as sketched below
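A minimal sketch of step 2 over HTTP, assuming the sidecar's default port 3500 and an output binding component named `myqueue` (supported `operation` values depend on the binding type):

```sh
curl -X POST http://localhost:3500/v1.0/bindings/myqueue \
  -H "Content-Type: application/json" \
  -d '{ "data": { "orderId": "1234" }, "operation": "create" }'
```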
Read the [Send events to external systems using Output Bindings](../../howto/send-events-with-output-bindings) section to get started with output bindings.
## Related Topics
* [Implementing a new binding](https://github.com/dapr/docs/tree/master/reference/specs/bindings)
* [Trigger a service from different resources with input bindings](../../howto/trigger-app-with-input-binding)
* [Invoke different resources using output bindings](../../howto/send-events-with-output-bindings)

@@ -1,263 +0,0 @@
# Configurations
Dapr configurations are settings that enable you to change the behavior of individual Dapr application sidecars or globally on the system services in the Dapr control plane.
An example of a per Dapr application sidecar setting is configuring trace settings. An example of a Dapr control plane setting is mutual TLS which is a global setting on the Sentry system service.
- [Setting self hosted sidecar configuration](#setting-self-hosted-sidecar-configuration)
- [Setting Kubernetes sidecar configuration](#setting-kubernetes-sidecar-configuration)
- [Sidecar configuration settings](#sidecar-configuration-settings)
- [Setting Kubernetes control plane configuration](#kubernetes-control-plane-configuration)
- [Control plane configuration settings](#control-plane-configuration-settings)
## Setting self hosted sidecar configuration
In self hosted mode the Dapr configuration is a configuration file, for example `config.yaml`. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration, e.g. `$HOME/.dapr/config.yaml` on Linux/MacOS and `%USERPROFILE%\.dapr\config.yaml` on Windows.
A Dapr sidecar can also apply a specific configuration by passing the `--config` flag with the file path to the `dapr run` CLI command.
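For example, a minimal sketch (the app id, port, and launch command are illustrative):

```sh
dapr run --app-id myapp --app-port 3000 --config ./config.yaml -- node app.js
```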
## Setting Kubernetes sidecar configuration
In Kubernetes mode the Dapr configuration is a Configuration CRD that is applied to the cluster. For example:
```cli
kubectl apply -f myappconfig.yaml
```
You can use the Dapr CLI to list the Configuration CRDs:
```cli
dapr configurations -k
```
A Dapr sidecar can apply a specific configuration by using a ```dapr.io/config``` annotation. For example:
```yml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"
  dapr.io/config: "myappconfig"
```
Note: There are more [Kubernetes annotations](../../howto/configure-k8s/README.md) available to configure the Dapr sidecar on activation by the sidecar injector system service.
## Sidecar configuration settings
The following configuration settings can be applied to Dapr application sidecars:
- [Tracing](#tracing)
- [Middleware](#middleware)
- [Scoping secrets for secret stores](#scoping-secrets-for-secret-stores)
- [Access control allow lists for service invocation](#access-control-allow-lists-for-service-invocation)
- [Example application sidecar configuration](#example-application-sidecar-configuration)
### Tracing
Tracing configuration turns on tracing for an application.
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
  samplingRate: "1"
```
The following table lists the properties for tracing:
Property | Type | Description
---- | ------- | -----------
samplingRate | string | Set sampling rate for tracing to be enabled or disabled.
`samplingRate` is used to enable or disable tracing. To disable tracing, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive; the sampling rate determines whether a trace span is sampled based on this value. `samplingRate: "1"` samples all traces. By default, the sampling rate is 0.0001, i.e. 1 in 10,000 traces.
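For example, a minimal sketch that disables tracing entirely:

```yml
tracing:
  samplingRate: "0"
```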
See [Observability distributed tracing](../observability/traces.md) for more information.
### Middleware
Middleware configuration sets named HTTP pipeline middleware handlers.
The `httpPipeline` section under the `Configuration` spec contains the following properties:
```yml
httpPipeline:
  handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase
```
The following table lists the properties for HTTP handlers:
Property | Type | Description
---- | ------- | -----------
name | string | name of the middleware component
type | string | type of middleware component
See [Middleware pipelines](../middleware/README.md) for more information.
### Scoping secrets for secret stores
In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components](../../howto/components-scopes)), a named secret store component itself can be scoped to one or more secrets for an application. By defining an `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to accessing only specific secrets.
The `secrets` section under the `Configuration` spec contains the following properties:
```yml
secrets:
  scopes:
    - storeName: kubernetes
      defaultAccess: allow
      allowedSecrets: ["redis-password"]
    - storeName: localstore
      defaultAccess: allow
      deniedSecrets: ["redis-password"]
```
The following table lists the properties for secret scopes:
Property | Type | Description
---- | ------- | -----------
storeName | string | name of the secret store component. storeName must be unique within the list
defaultAccess | string | access modifier. Accepted values "allow" (default) or "deny"
allowedSecrets | list | list of secret keys that can be accessed
deniedSecrets | list | list of secret keys that cannot be accessed
When an `allowedSecrets` list is present with at least one element, only those secrets defined in the list can be accessed by the application.
See the [Scoping secrets](../../howto/secrets-scopes/README.md) HowTo for examples on how to scope secrets to an application.
### Access Control allow lists for service invocation
Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application.
An access control policy is specified in configuration and applied to the Dapr sidecar for the *called* application. Example access policies are shown below, and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications; if no access control policy is specified, the default behavior is to allow all calling applications to access the called app.
## Concepts
**TrustDomain** - A "trust domain" is a logical group to manage trust relationships. Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert.
**App Identity** - Dapr generates a [SPIFFE](https://spiffe.io/) id for all applications which is attached in the TLS cert. The SPIFFE id is of the format: **spiffe://\<trustdomain>/ns/\<namespace\>/\<appid\>**. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched.
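As a purely illustrative example, a calling app with app ID `app1`, running in the `default` namespace under the `public` trust domain, would carry an identity of the form:

```
spiffe://public/ns/default/app1
```

The policy spec below shows how these extracted values are matched.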
```
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: deny        --> Global default action in case no other policy is matched
    trustDomain: "public"      --> The called application is assigned a trust domain, used to generate the identity of this app in the TLS certificate.
    policies:
      - appId: app1            --> AppId of the calling app to allow/deny service invocation from
        defaultAction: deny    --> App level default action in case the app is found but no specific operation is matched
        trustDomain: 'public'  --> Trust domain of the calling app is matched against the specified value here.
        namespace: "default"   --> Namespace of the calling app is matched against the specified value here.
        operations:
          - name: /op1         --> Operation name on the called app
            httpVerb: ['POST', 'GET'] --> Specific HTTP verbs, unused for gRPC invocation
            action: deny       --> Allow/deny access
          - name: /op2/*       --> Operation name with a postfix
            httpVerb: ["*"]    --> Wildcards can be used to match any HTTP verb
            action: allow
      - appId: app2
        defaultAction: allow
        trustDomain: "public"
        namespace: "default"
        operations:
          - name: /op3
            httpVerb: ['POST', 'PUT']
            action: deny
```
The following tables list the different properties for access control, policies and operations:
Access Control
Property | Type | Description
---- | ------- | -----------
defaultAction | string | Global default action when no other policy is matched
trustDomain | string | Trust domain assigned to the application. Default is "public".
policies | string | Policies to determine what operations the calling app can do on the called app
Policies
Property | Type | Description
---- | ------- | -----------
appId | string | AppId of the calling app to allow/deny service invocation from
namespace | string | Namespace value that needs to be matched with the namespace of the calling app
trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public"
defaultAction | string | App level default action in case the app is found but no specific operation is matched
operations | string | operations that are allowed from the calling app
Operations
Property | Type | Description
---- | ------- | -----------
name | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used under a path to match
httpVerb | list | List of specific HTTP verbs that can be used by the calling app. Wildcard "\*" can be used to match any HTTP verb. Unused for gRPC invocation
action | string | Access modifier. Accepted values "allow" (default) or "deny"
See the [Allow lists for service invocation](../../howto/allowlists-serviceinvocation/README.md) HowTo for examples on how to set allow lists.
### Example application sidecar configuration
The following yaml shows an example configuration file that can be applied to an application's Dapr sidecar.
```yml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
  httpPipeline:
    handlers:
      - name: oauth2
        type: middleware.http.oauth2
  secrets:
    scopes:
      - storeName: localstore
        defaultAccess: allow
        deniedSecrets: ["redis-password"]
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
      - appId: app1
        defaultAction: deny
        trustDomain: 'public'
        namespace: "default"
        operations:
          - name: /op1
            httpVerb: ['POST', 'GET']
            action: deny
          - name: /op2/*
            httpVerb: ["*"]
            action: allow
```
## Setting Kubernetes control plane configuration
There is a single configuration file called `default` installed with the Dapr control plane system services that applies global settings. This is set up when Dapr is deployed to Kubernetes.
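To inspect it, a minimal sketch (this assumes the Dapr system services are installed in the `dapr-system` namespace):

```sh
kubectl get configurations default -n dapr-system -o yaml
```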
## Control plane configuration settings
A Dapr control plane configuration can configure the following settings:
Property | Type | Description
---- | ------- | -----------
enabled | bool | Set mTLS to be enabled or disabled
allowedClockSkew | string | The extra time to give for certificate expiry based on possible clock skew on a machine. Default is 15 minutes.
workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours
See the [Mutual TLS](../../howto/configure-mtls/README.md) HowTo and [security concepts](../security/README.md) for more information.
### Example control plane configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: default
  namespace: default
spec:
  mtls:
    enabled: true
    allowedClockSkew: 15m
    workloadCertTTL: 24h
```
## References
* [Distributed tracing](../observability/traces.md)
* [Middleware pipelines](../middleware/README.md)
* [Security](../security/README.md)
* [How-To: Configuring the Dapr sidecar on Kubernetes](../../howto/configure-k8s/README.md)

@@ -1,41 +0,0 @@
# Hosting environments
Dapr can run on multiple hosting platforms.
## Contents
- [Running Dapr on a local developer machine in self hosted mode](#running-dapr-on-a-local-developer-machine-in-self-hosted-mode)
- [Running Dapr in Kubernetes mode](#running-dapr-in-kubernetes-mode)
## Running Dapr on a local developer machine in self hosted mode
Dapr can be configured to run on your local developer machine in [self hosted mode](../../getting-started). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
In self hosted mode, Redis is running locally in a container and is configured to serve as both the default component for state store and for pub/sub. A Zipkin container is also configured for diagnostics and tracing. After running `dapr init`, see the `$HOME/.dapr/components` directory (Mac/Linux) or `%USERPROFILE%\.dapr\components` on Windows.
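For example, on Mac/Linux you can list the generated component files (exact file names may vary by Dapr version):

```sh
# Inspect the default components created by `dapr init`
ls $HOME/.dapr/components
# Typically contains component YAML files for the default Redis
# state store and pub/sub components
```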
The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview](../actors).
<img src="../../images/overview_standalone.png" width=800>
You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine.
## Running Dapr in Kubernetes mode
Dapr can be configured to run on any [Kubernetes cluster](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes). In Kubernetes the `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster. Additionally, the `dapr-sidecar-injector` also injects the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` into **all** the containers in the pod to enable user defined applications to easily communicate with Dapr without hardcoding Dapr port values.
The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview](../security/README.md#dapr-to-dapr-communication)
<img src="../../images/overview_kubernetes.png" width=800>
Deploying and running a Dapr enabled application in your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. To give your service an `id` and `port` known to Dapr, turn on tracing through configuration, and launch the Dapr sidecar container, annotate your Kubernetes deployment like this:
```yml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"
  dapr.io/config: "tracing"
```
You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample.
Read [Kubernetes how to topics](https://github.com/dapr/docs/tree/master/howto#kubernetes-configuration) for more information about setting up Kubernetes and Dapr.

@@ -1,39 +0,0 @@
# Observability
Observability is a term from control theory. Observability means you can answer questions about what's happening on the inside of a system by observing the outside of the system, without having to ship new code to answer new questions. Observability is critical in production environments and services to debug, operate and monitor Dapr system services, components and user applications.
The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas:
* **[Metrics](./metrics.md)**: are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps. For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc. Dapr system service metrics show sidecar injection failures and the health of the system services, including CPU usage, number of actor placements made, etc.
* **[Logs](./logs.md)**: are records of events that occur and can be used to determine failures or other status. Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
* **[Distributed tracing](./traces.md)**: is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.
You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, nodes, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.
Dapr uses the [W3C tracing context for distributed tracing](./W3C-traces.md).
It is generally recommended to run Dapr in production with tracing.
* **[Health](./health.md)**: Dapr provides a way for a hosting platform to determine its health using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness so that action can be taken accordingly, as sketched below.
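As a minimal sketch, the sidecar's health endpoint can be probed directly (the port is illustrative; a healthy sidecar responds with a 2xx status):

```sh
curl -i http://localhost:3500/v1.0/healthz
```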
## Open Telemetry
Dapr integrates with OpenTelemetry for metrics, logs and tracing. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.
## Monitoring tools
The observability tools listed below are ones that have been tested to work with Dapr.
### Metrics
* [How-To: Set up Prometheus and Grafana](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md)
* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
### Logs
* [How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md)
* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
### Distributed Tracing
* [How-To: Set up Zipkin](../../howto/diagnose-with-tracing/zipkin.md)
* [How-To: Set up Application Insights with Open Telemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md)

@@ -1,42 +0,0 @@
# Contributing to Dapr documentation
High quality documentation is a core tenet of the Dapr project. Some contribution guidelines are below.
## Style and tone
- Use sentence-casing for headers.
- When referring to product names and technologies use capitalization (e.g. Kubernetes, Helm, Visual Studio Code, Azure Key Vault and of course Dapr).
- Check the spelling and grammar in your articles.
- Use a casual and friendly voice, as if you're talking to another person one-on-one.
- Use simple sentences. Easy-to-read sentences mean the reader can quickly use the guidance you share.
- Use “you” rather than “users” or “developers” to convey friendliness.
- Avoid the word “will”; write in present tense and not future where possible. E.g. Avoid “Next we will open the file and build”. Just say “Now open the file and build”
- Avoid the word “we”. We is generally not meaningful. We who?
- Avoid the word “please”. This is not a letter asking for any help, this is technical documentation.
- Assume a new developer audience. Some obvious steps can seem hard. E.g. Now set an environment variable DAPR to a value X. It is better to give the reader the explicit command to do this, rather than having them figure this out.
- Where possible give the reader links to next document/article for next steps or related topics (this can be relevant "how-to", samples for reference or concepts).
## Contributing to `Concepts`
- Ensure the reader can understand why they should care about this feature. What problems does it help them solve?
- Ensure the doc references the spec for examples of using the API.
- Ensure the spec is consistent with concept in terms of names, parameters and terminology. Update both the concept and the spec as needed.
- Avoid just repeating the spec. The idea is to give the reader more information and background on the capability so that they can try this out. Hence provide more information and implementation details where possible.
- Provide a link to the spec in the [Reference](/reference) section.
- Where possible reference a practical [How-To](/howto) doc.
## Contributing to `How-Tos`
See [this template](./howto-template.md) for `How To` articles.
- `How To` articles are meant to provide step-by-step practical guidance to readers who wish to enable a feature, integrate a technology or use Dapr in a specific scenario.
- Location - `How To` articles should all be under the [howto](../howto) directory in a relevant sub directory; make sure to check whether the article you are contributing should be included in an existing sub directory.
- Sub directory naming - the directory name should be descriptive and, if referring to a specific component or concept, should begin with the relevant name. Example: *pubsub-namespaces*.
- When adding a new article make sure to add a link in the main [How To README.md](../howto/README.md) as well as other articles or samples that may be relevant.
- Do not assume the reader is using a specific environment unless the article itself is specific to an environment. This includes OS (Windows/Linux/MacOS), deployment target (Kubernetes, IoT etc.) or programming language. If instructions vary between operating systems, provide guidance for all.
- How to articles should include the following sub sections:
- **Pre-requisites**
- **\<Steps\>** times X as needed
- **Cleanup**
- **Related links**
- Include code/sample/config snippets that can be easily copied and pasted.

@@ -1,82 +0,0 @@
# [Title]
>Title should be descriptive of what this article helps achieve. Imagine it continues a sentence that starts with ***How to...*** so should start with a word such as "Setup", "Configure", "Implement" etc.
>
>Does not need to include the word *Dapr* in it (as it is in the context of the Dapr docs repo)
>
>If it is specific to an environment (e.g. Kubernetes), should call out the environment.
>
>Capital letters only for first word and product/technology names.
>
>Example:
># Set up Zipkin for distributed tracing in Kubernetes
[Intro paragraph]
> Intro paragraph should be a short description of what this article covers to set expectations of the reader. Include links if they can provide context and clarity to the reader.
>
> Example:
>
> This article will provide guidance on how to enable Dapr distributed tracing capabilities on Kubernetes using [Zipkin](https://zipkin.io/) as a tracing broker.
## Pre-requisites
>List the required setup this article assumes with links on how to achieve each prerequisite.
>
>Example:
>
> - [Setup Dapr on a Kubernetes cluster](https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md#installing-dapr-on-a-kubernetes-cluster)
> - [Install Helm](https://helm.sh/docs/intro/install/)
> - [Install Python](https://www.python.org/downloads/) version >= 3.7
## [Step header] - (multiple)
>
>Name each step section in a clear descriptive way which allows readers to understand what this section covers. Example: **Create a configuration file**
>
>If using terminal commands, make sure to allow easy copy/paste by having each terminal command in a separate line with the markdown (indented as needed when appearing in bullets or numbered lists). If Windows/Linux/MacOS instructions differ, make sure to include instructions for each.
>
>Example (note the indentation of the commands):
>
>- Clone the Dapr samples repository:
> ```bash
> git clone https://github.com/dapr/samples.git
> ```
>- Go to the hello world sample:
> ```
> cd 1.hello-world
> ```
>
>Add sections as needed for multiple steps.
>
## Cleanup
>
> If possible, provide steps that undo the steps above. These should bring the user environment back to the pre-requisites stage. If using terminal commands, make sure to allow easy copy/paste by having each terminal command in a separate line with the markdown (indented as needed when appearing in bullets or numbered lists). If Windows/Linux/MacOS instructions differ, make sure to include instructions for each.
>
>Example:
>
>1. Delete the deployments from the cluster
> ```
> kubectl delete -f file.yaml
> ```
>2. Delete the Helm chart from the cluster
> ```
> helm del --purge dapr-kafka
> ```
>
## Related links
>
> Reference other documentation that may be relevant to a user interested in this How To. Include any of the following:
>
>- Other **How To** articles in related topics or alternative technology integrations.
>- **Concept** articles that are relevant.
>- **Reference** and **API** documentation that can be helpful
>- **Samples** that provide code reference relevant to this guidance.
>- Any other documentation link that may be a logical next step for a reader interested in this guidance (for example, if this is a how to on publishing to a pub/sub topic, a logical next step would be a how to which describes consuming from a topic).
>

@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---

Binary file not shown (new image, 5.2 KiB).

@@ -0,0 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="206px" height="206px" viewBox="0 0 206 206" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<!-- Generator: Sketch 51.3 (57544) - http://www.bohemiancoding.com/sketch -->
<title>white on dark</title>
<desc>Created with Sketch.</desc>
<defs></defs>
<g id="white-on-dark" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<path d="M63.08125,128 L51.55,128 L51.55,124.378906 C50.448432,125.761726 49.3351619,126.769528 48.2101562,127.402344 C46.2413964,128.503912 44.0031375,129.054688 41.4953125,129.054688 C37.4406047,129.054688 33.8312658,127.66017 30.6671875,124.871094 C26.8937311,121.542952 25.0070312,117.136746 25.0070312,111.652344 C25.0070312,106.074191 26.9406057,101.62111 30.8078125,98.2929688 C33.8781404,95.644518 37.4054488,94.3203125 41.3898437,94.3203125 C43.7101679,94.3203125 45.8898336,94.8124951 47.9289062,95.796875 C49.1007871,96.3593778 50.3078063,97.2851498 51.55,98.5742188 L51.55,75.1953125 L63.08125,75.1953125 L63.08125,128 Z M51.9015625,111.6875 C51.9015625,109.62499 51.1750073,107.873054 49.721875,106.431641 C48.2687427,104.990227 46.5109478,104.269531 44.4484375,104.269531 C42.151551,104.269531 40.2648511,105.13671 38.7882812,106.871094 C37.5929628,108.277351 36.9953125,109.882803 36.9953125,111.6875 C36.9953125,113.492197 37.5929628,115.097649 38.7882812,116.503906 C40.2414135,118.23829 42.1281134,119.105469 44.4484375,119.105469 C46.5343854,119.105469 48.2980397,118.390632 49.7394531,116.960938 C51.1808666,115.531243 51.9015625,113.773448 51.9015625,111.6875 Z M106.329687,128 L94.7984375,128 L94.7984375,124.378906 C93.6968695,125.761726 92.5835994,126.769528 91.4585937,127.402344 C89.4898339,128.503912 87.251575,129.054688 84.74375,129.054688 C80.6890422,129.054688 77.0797033,127.66017 73.915625,124.871094 C70.1421686,121.542952 68.2554687,117.136746 68.2554687,111.652344 C68.2554687,106.074191 70.1890432,101.62111 74.05625,98.2929688 C77.1265779,95.644518 80.6538863,94.3203125 84.6382812,94.3203125 C86.9586054,94.3203125 89.1382711,94.8124951 91.1773437,95.796875 C92.3492246,96.3593778 93.5562438,97.2851498 94.7984375,98.5742188 L94.7984375,95.375 L106.329687,95.375 L106.329687,128 Z M95.15,111.6875 C95.15,109.62499 94.4234448,107.873054 92.9703125,106.431641 C91.5171802,104.990227 89.7593853,104.269531 87.696875,104.269531 C85.3999885,104.269531 83.5132886,105.13671 82.0367187,106.871094 C80.8414003,108.277351 80.24375,109.882803 80.24375,111.6875 C80.24375,113.492197 80.8414003,115.097649 82.0367187,116.503906 C83.489851,118.23829 85.3765509,119.105469 87.696875,119.105469 C89.7828229,119.105469 91.5464772,118.390632 92.9878906,116.960938 C94.4293041,115.531243 95.15,113.773448 95.15,111.6875 Z M150.878906,111.722656 C150.878906,117.300809 148.945332,121.75389 145.078125,125.082031 C142.007797,127.730482 138.480489,129.054688 134.496094,129.054688 C132.17577,129.054688 129.996104,128.562505 127.957031,127.578125 C126.78515,127.015622 125.578131,126.08985 124.335937,124.800781 L124.335937,144.3125 L112.804687,144.3125 L112.804687,95.375 L124.335937,95.375 L124.335937,98.9960938 C125.367193,97.636712 126.480463,96.6289095 127.675781,95.9726562 C129.644541,94.8710882 131.8828,94.3203125 134.390625,94.3203125 C138.445333,94.3203125 142.054672,95.7148298 145.21875,98.5039062 C148.992206,101.832048 150.878906,106.238254 150.878906,111.722656 Z M138.890625,111.6875 C138.890625,109.835928 138.304693,108.230476 137.132812,106.871094 C135.656243,105.13671 133.757824,104.269531 131.4375,104.269531 C129.351552,104.269531 127.587898,104.984368 126.146484,106.414062 C124.705071,107.843757 123.984375,109.601552 123.984375,111.6875 C123.984375,113.75001 124.71093,115.501946 126.164062,116.943359 C127.617195,118.384773 129.37499,119.105469 131.4375,119.105469 C133.757824,119.105469 135.644524,118.23829 137.097656,116.503906 C138.292975,115.097649 138.890625,113.492197 138.890625,111.6875 Z 
M180.521875,106.027344 C178.904679,105.253902 177.264071,104.867188 175.6,104.867188 C171.803106,104.867188 169.342193,106.414047 168.217187,109.507812 C167.79531,110.632818 167.584375,112.144522 167.584375,114.042969 L167.584375,128 L156.053125,128 L156.053125,95.375 L167.584375,95.375 L167.584375,100.71875 C168.803131,98.820303 170.115618,97.449223 171.521875,96.6054688 C173.420322,95.4804631 175.670299,94.9179688 178.271875,94.9179688 C178.881253,94.9179688 179.631246,94.9531246 180.521875,95.0234375 L180.521875,106.027344 Z" id="dapr" fill="#FFFFFF"></path>
<polygon id="tie" fill="#FFFFFF" fill-rule="nonzero" points="112.713867 128.237305 124.324219 128.237305 125.324219 155.49707 118.519043 160.265625 111.713867 155.49707"></polygon>
<rect id="Rectangle-4" fill="#FFFFFF" fill-rule="nonzero" x="86.6816586" y="46" width="44.0478543" height="31" rx="2"></rect>
<rect id="Rectangle-4" fill="#000000" fill-rule="nonzero" opacity="0.0799999982" x="86.6816586" y="46" width="16.2935291" height="31"></rect>
<rect id="Rectangle-3" fill="#FFFFFF" fill-rule="nonzero" x="72.7718099" y="75" width="71.2879747" height="7.44032012" rx="3.72016"></rect>
<rect id="Rectangle-4" fill="#000000" fill-rule="nonzero" opacity="0.0799999982" x="72.7718099" y="75" width="22.0566132" height="9.15731707"></rect>
</g>
</svg>



@ -0,0 +1,57 @@
// Code formatting.
.td-content {
// Highlighted code.
.highlight {
@extend .card;
margin: 2rem 0;
padding: 0;
max-width: 80%;
pre {
margin: 0;
padding: 1rem;
}
}
// Inline code
p code, li > code, table code {
color: inherit;
padding: 0.2em 0.4em;
margin: 0;
font-size: 85%;
word-break: normal;
background-color: rgba($black, 0.05);
border-radius: $border-radius;
br {
display: none;
}
}
// Code blocks
pre {
word-wrap: normal;
background-color: $gray-100;
padding: $spacer;
> code {
background-color: inherit !important;
padding: 0;
margin: 0;
font-size: 100%;
word-break: normal;
white-space: pre;
border: 0;
}
}
pre.mermaid {
background-color: inherit;
font-size: 0;
}
}


@ -0,0 +1,85 @@
//
// Style Markdown content
//
.td-content {
order: 1;
p, li, td {
font-weight: $font-weight-body-text;
}
> h1 {
font-weight: $font-weight-bold;
margin-bottom: .5rem;
}
> h2 {
font-weight: $font-weight-bold;
margin-bottom: 1rem;
}
> h2:not(:first-child) {
margin-top: 3rem;
}
> h2 + h3 {
margin-top: 1rem;
}
> h3, > h4, > h5, > h6 {
margin-bottom: 1rem;
margin-top: 2rem;
font-weight: $font-weight-bold;
}
img {
@extend .img-fluid;
}
> table {
@extend .table-striped;
@extend .table-responsive;
@extend .table;
}
> blockquote {
padding: 0 0 0 1rem;
margin-bottom: $spacer;
color: $gray-600;
border-left: 6px solid $secondary;
}
> ul li, > ol li {
margin-bottom: .25rem;
}
strong {
font-weight: $font-weight-bold;
}
> pre, > .highlight, > .lead, > h1, > h2, > ul, > ol, > p, > blockquote, > dl dd, .footnotes, > .alert {
@extend .td-max-width-on-larger-screens;
}
.alert:not(:first-child) {
margin-top: 2 * $spacer;
margin-bottom: 2 * $spacer;
}
.lead {
margin-bottom: 1.5rem;
font-weight: $font-weight-bold;
}
}
.td-title {
margin-top: 1rem;
margin-bottom: .5rem;
@include media-breakpoint-up(sm) {
font-size: 3rem;
}
}


@ -0,0 +1,102 @@
//
// Main navbar
//
.td-navbar-cover {
background: $primary;
@include media-breakpoint-up(md) {
background: transparent !important;
.nav-link {
text-shadow: 1px 1px 2px $dark;
}
}
&.navbar-bg-onscroll .nav-link {
text-shadow: none;
}
}
.navbar-bg-onscroll {
background: $primary !important;
opacity: inherit;
}
.td-navbar {
background: $primary;
min-height: 4rem;
margin: 0;
z-index: 32;
@include media-breakpoint-up(md) {
position: fixed;
top: 0;
width: 100%;
}
.navbar-brand {
text-transform: none !important;
text-align: center;
.nav-link {
display: inline-block;
margin-right: -30px;
}
svg {
display: inline-block;
margin-right: 5px;
margin-left: 5px;
margin-top: 0px;
margin-bottom: 0px;
height: 40px;
width: 40px;
}
}
.nav-link {
text-transform: none !important;
font-weight: $font-weight-bold;
}
.td-search-input {
border: none;
@include placeholder {
color: $navbar-dark-color;
}
}
.dropdown {
min-width: 100px;
}
@include media-breakpoint-down(md) {
padding-right: .5rem;
padding-left: .75rem;
.td-navbar-nav-scroll {
max-width: 100%;
height: 2.5rem;
margin-top: .25rem;
overflow: hidden;
font-size: .875rem;
.nav-link {
padding-right: .25rem;
padding-left: 0;
}
.navbar-nav {
padding-bottom: 2rem;
overflow-x: auto;
white-space: nowrap;
-webkit-overflow-scrolling: touch;
}
}
}
}


@ -0,0 +1,58 @@
//
// Right side toc
//
.td-toc {
border-left: 1px solid $border-color;
@supports (position: sticky) {
position: sticky;
top: 4rem;
height: calc(100vh - 10rem);
overflow-y: auto;
}
order: 2;
padding-top: 2.75rem;
padding-bottom: 1.5rem;
vertical-align: top;
a {
display: block;
font-weight: $font-weight-medium;
padding-bottom: .25rem;
}
li {
list-style: none;
display: block;
font-size: 1.1rem;
}
li li {
margin-left: 1.5rem;
font-size: 1.1rem;
}
.td-page-meta {
a {
font-weight: $font-weight-medium;
}
}
#TableOfContents {
// Hugo's ToC is a mouthful, this can be used to style the top level h2 entries.
> ul > li > ul > li > a {}
a {
color: rgb(68, 68, 68);
&:hover {
color: $blue;
text-decoration: none;
}
}
}
ul {
padding-left: 0;
}
}


@ -0,0 +1,147 @@
//
// Left side navigation
//
.td-sidebar-nav {
padding-right: 0.5rem;
margin-right: -15px;
margin-left: -15px;
@include media-breakpoint-up(md) {
@supports (position: sticky) {
max-height: calc(100vh - 10rem);
overflow-y: auto;
}
}
@include media-breakpoint-up(md) {
display: block !important;
}
&__section {
li {
list-style: none;
}
ul {
padding: 0;
margin: 0;
}
@include media-breakpoint-up(md) {
& > ul {
padding-left: .5rem;
}
}
padding-left: 0;
}
&__section-title {
display: block;
font-weight: $font-weight-medium;
.active {
font-weight: $font-weight-bold;
}
a {
color: $gray-900;
}
}
.td-sidebar-link {
display: block;
padding-bottom: 0.375rem;
&__page {
color: $gray-700;
font-weight: $font-weight-light;
}
}
a {
&:hover {
color: $blue;
text-decoration: none;
}
&.active {
font-weight: $font-weight-bold;
}
}
.dropdown {
a {
color: $gray-700;
}
.nav-link {
padding: 0 0 1rem;
}
}
& > .td-sidebar-nav__section {
padding-top: .5rem;
padding-left: 0rem;
}
}
.td-sidebar {
@include media-breakpoint-up(md) {
padding-top: 4rem;
background-color: $td-sidebar-bg-color;
padding-right: 1rem;
border-right: 1px solid $td-sidebar-border-color;
min-width: 18rem;
}
padding-bottom: 1rem;
&__toggle {
line-height: 1;
color: $gray-900;
margin: 1rem;
}
&__search {
padding: 1rem 15px;
margin-right: -15px;
margin-left: -15px;
}
&__inner {
order: 0;
@include media-breakpoint-up(md) {
@supports (position: sticky) {
position: sticky;
top: 4rem;
z-index: 10;
height: calc(100vh - 6rem);
}
}
@include media-breakpoint-up(xl) {
flex: 0 1 320px;
}
.td-search-box {
width: 100%;
}
}
#content-desktop {display: block;}
#content-mobile {display: none;}
@include media-breakpoint-down(md) {
#content-desktop {display: none;}
#content-mobile {display: block;}
}
}


@ -0,0 +1,12 @@
$primary:#0D2192;
$secondary: #1F329A;
.navbar-brand {
text-align: left;
svg {
display: inline-block;
margin: 0 10px;
height: 60px;
}
}

daprdocs/config.toml

@ -0,0 +1,119 @@
# Site Configuration
baseURL = "https://docs.dapr.io/"
title = "Dapr Docs"
theme = "docsy"
enableRobotsTXT = true
enableGitInfo = true
# Language Configuration
languageCode = "en-us"
contentDir = "content/en"
defaultContentLanguage = "en"
# Disable categories & tags
disableKinds = ["taxonomy", "term"]
# Google Analytics
[services.googleAnalytics]
id = "UA-149338238-3"
# Markdown Engine - Allow inline html
[markup]
[markup.goldmark]
[markup.goldmark.renderer]
unsafe = true
# Top Nav Bar
[[menu.main]]
name = "Home"
weight = 40
url = "https://dapr.io"
[[menu.main]]
name = "About"
weight = 50
url = "https://dapr.io/#about"
[[menu.main]]
name = "Download"
weight = 60
url = "https://dapr.io/#download"
[[menu.main]]
name = "Blog"
weight = 70
url = "https://blog.dapr.io/blog"
[[menu.main]]
name = "Community"
weight = 80
url = "https://dapr.io/#community"
[params]
copyright = "Dapr"
#privacy_policy = "https://policies.google.com/privacy"
# Algolia
algolia_docsearch = true
offlineSearch = false
# GitHub Information
github_repo = "https://github.com/dapr/docs"
github_project_repo = "https://github.com/dapr/dapr"
github_subdir = "daprdocs"
github_branch = "website"
# Versioning
version_menu = "Releases"
version = "v0.11"
archived_version = false
[[params.versions]]
version = "v0.11"
url = "#"
[[params.versions]]
version = "v0.10"
url = "https://github.com/dapr/docs/tree/v0.10.0"
[[params.versions]]
version = "v0.9"
url = "https://github.com/dapr/docs/tree/v0.9.0"
[[params.versions]]
version = "v0.8"
url = "https://github.com/dapr/docs/tree/v0.8.0"
# UI Customization
[params.ui]
sidebar_menu_compact = true
navbar_logo = true
sidebar_search_disable = true
# Links
## End user relevant links. These will show up on left side of footer and in the community page if you have one.
[[params.links.user]]
name ="Twitter"
url = "https://twitter.com/daprdev"
icon = "fab fa-twitter"
desc = "Follow us on Twitter to get the latest updates!"
[[params.links.user]]
name = "YouTube"
url = "https://www.youtube.com/channel/UCtpSQ9BLB_3EXdWAUQYwnRA"
icon = "fab fa-youtube"
desc = "Community call recordings and other cool demos"
[[params.links.user]]
name = "Blog"
url = "https://blog.dapr.io/posts"
icon = "fas fa-blog"
desc = "Community call recordings and other cool demos"
## Developer relevant links. These will show up on right side of footer and in the community page if you have one.
[[params.links.developer]]
name = "GitHub"
url = "https://github.com/dapr/"
icon = "fab fa-github"
desc = "Development takes place here!"
[[params.links.developer]]
name = "Gitter"
url = "https://gitter.im/Dapr/community"
icon = "fab fa-gitter"
desc = "Conversations happen here!"
[[params.links.developer]]
name = "Zoom"
url = "https://aka.ms/dapr-community-call"
icon = "fas fa-video"
desc = "Meetings happen here!"


@ -0,0 +1,7 @@
---
type: docs
---
# <img src="/images/home-title.png" alt="Dapr Docs" width=400>
Welcome to the Dapr documentation site!


@ -0,0 +1,7 @@
---
type: docs
title: "Dapr Concepts"
linkTitle: "Concepts"
weight: 10
description: "Learn about Dapr including its main features and capabilities"
---


@ -0,0 +1,29 @@
---
type: docs
title: "Building blocks"
linkTitle: "Building blocks"
weight: 200
description: "Modular best practices accessible over standard HTTP or gRPC APIs"
---
A [building block]({{< ref building-blocks >}}) is an HTTP or gRPC API that can be called from your code and uses one or more Dapr components.
Building blocks address common challenges in building resilient microservice applications and codify best practices and patterns. Dapr consists of a set of building blocks, with extensibility to add new building blocks.
The diagram below shows how building blocks expose a public API that is called from your code, using components to implement the building blocks' capability.
<img src="/images/concepts-building-blocks.png" width=250>
The following are the building blocks provided by Dapr:
<img src="/images/building_blocks.png" width=1000>
| Building Block | Endpoint | Description |
|----------------|----------|-------------|
| [**Service-to-service invocation**]({{<ref "service-invocation-overview.md">}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of HTTP or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**State management**]({{<ref "state-management-overview.md">}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state API with pluggable state stores for persistence.
| [**Publish and subscribe**]({{<ref "pubsub-overview.md">}}) | `/v1.0/publish` `/v1.0/subscribe` | Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Resource bindings**]({{<ref "bindings-overview.md">}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{<ref "actors-overview.md">}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern, which provides a single-threaded programming model and in which actors are garbage collected when not in use.
| [**Observability**]({{<ref "observability-concept.md">}}) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
| [**Secrets**]({{<ref "secrets-overview.md">}}) | `/v1.0/secrets` | Dapr offers a secrets building block API and integrates with secret stores such as Azure Key Vault and Kubernetes to store the secrets. Service code can call the secrets API to retrieve secrets from the Dapr-supported secret stores.
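Since each building block is exposed as an HTTP endpoint on the Dapr sidecar, you can exercise them with plain `curl`. A minimal sketch against a locally running sidecar on the default port 3500 (the component names `statestore` and `pubsub` are assumptions from a typical quickstart setup, not part of the table above):

```bash
# Save a key/value pair through the state management building block.
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "name", "value": "Bruce Wayne" }]'

# Read the value back.
curl http://localhost:3500/v1.0/state/statestore/name

# Publish an event to a topic through the pub/sub building block.
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus \
  -H "Content-Type: application/json" \
  -d '{ "status": "completed" }'
```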


@ -0,0 +1,32 @@
---
type: docs
title: "Components"
linkTitle: "Components"
weight: 300
description: "Modular functionality used by building blocks and applications"
---
Dapr uses a modular design where functionality is delivered as a component. Each component has an interface definition. All of the components are pluggable so that you can swap out one component with the same interface for another. The [components contrib repo](https://github.com/dapr/components-contrib) is where you can contribute implementations for the component interfaces and extend Dapr's capabilities.
A building block can use any combination of components. For example the [actors]({{<ref "actors-overview.md">}}) building block and the [state management]({{<ref "state-management-overview.md">}}) building block both use [state components](https://github.com/dapr/components-contrib/tree/master/state). As another example, the [Pub/Sub]({{<ref "pubsub-overview.md">}}) building block uses [Pub/Sub components](https://github.com/dapr/components-contrib/tree/master/pubsub).
You can list the components available in the current hosting environment using the `dapr components` CLI command, as shown in the sketch after the list below.
The following are the component types provided by Dapr:
* [Bindings](https://github.com/dapr/components-contrib/tree/master/bindings)
* [Pub/sub](https://github.com/dapr/components-contrib/tree/master/pubsub)
* [Middleware](https://github.com/dapr/components-contrib/tree/master/middleware)
* [Service discovery name resolution](https://github.com/dapr/components-contrib/tree/master/nameresolution)
* [Secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores)
* [State](https://github.com/dapr/components-contrib/tree/master/state)
* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters)
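For example, in a Kubernetes-hosted environment (a sketch; the flag spelling and output columns may vary by CLI version):

```bash
# List the Dapr components deployed to the current cluster.
dapr components --kubernetes
```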
### Service invocation and service discovery components
Service discovery components are used with the [service invocation]({{<ref "service-invocation-overview.md">}}) building block to integrate with the hosting environment and provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, and self-hosted mode uses mDNS.
### Service invocation and middleware components
Dapr allows custom [middleware]({{<ref "middleware-concept.md">}}) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [service invocation]({{<ref "service-invocation-overview.md">}}) building block.
### Secret store components
In Dapr, a [secret]({{<ref "secrets-overview.md">}}) is any piece of private information that you want to guard from unwanted access. Secret stores, used to store secrets, are Dapr components and can be used by any of the building blocks.
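As a sketch, service code can retrieve a secret with a single HTTP call to the sidecar, assuming a secret store component named `kubernetes` and a secret named `my-secret` (both names are illustrative):

```bash
# Fetch a named secret from the configured secret store via the Dapr secrets API.
curl http://localhost:3500/v1.0/secrets/kubernetes/my-secret
```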


@ -0,0 +1,12 @@
---
type: docs
title: "Configuration"
linkTitle: "Configuration"
weight: 400
description: "Change the behavior of Dapr sidecars or globally on Dapr system services"
---
Dapr configurations are settings that enable you to change the behavior of individual Dapr application sidecars or globally on the system services in the Dapr control plane.
An example of a per-application sidecar setting is configuring trace settings. An example of a Dapr control plane setting is mutual TLS, which is a global setting on the Sentry system service.
Read [this page]({{<ref "configuration-overview.md">}}) for a list of all configuration options.
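As a concrete sketch, a per-application configuration that enables tracing might look like the following (the resource name and sampling rate are illustrative):

```bash
# Apply a Dapr Configuration resource; sidecars referencing it pick up the tracing settings.
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"  # sample all traces; use a lower rate in production
EOF
```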


@ -1,65 +1,66 @@
---
type: docs
title: "Frequently asked questions and answers"
linkTitle: "FAQs"
weight: 1000
description: "Common questions asked about Dapr"
---
## Networking and service meshes
### Understanding how Dapr works with service meshes
Dapr is a distributed application runtime. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric versus service meshes being infrastructure-centric.
Dapr can be used alongside any service mesh such as Istio and Linkerd. A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. A service mesh doesn't introduce new functionality to an application.
That is where Dapr comes in. Dapr is a language-agnostic programming model built on HTTP and gRPC that provides distributed system building blocks via open APIs for asynchronous pub/sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app's runtime. Both service meshes and Dapr run as sidecar services to your application, one providing network features and the other distributed application capabilities.
Watch this [video](https://www.youtube.com/watch?v=xxU68ewRmz8&feature=youtu.be&t=140) on how Dapr and service meshes work together.
### Understanding how Dapr interoperates with the service mesh interface (SMI)
SMI is an abstraction layer that provides a common API surface across different service mesh technology. Dapr can leverage any service mesh technology including SMI.
### Differences between Dapr, Istio and Linkerd
Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on layer 7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on it.
Istio is not a programming model and does not focus on application level features such as state management, pub-sub, bindings etc. That is where Dapr comes in.
## Performance Benchmarks
The Dapr project is focused on performance because Dapr runs as a sidecar to your application. This [performance benchmark video](https://youtu.be/4kV3SHs1j2k?t=783) discusses and demos the work that has been done so far. The performance benchmark data is planned to be published on a regular basis. You can also run the perf tests in your own environment to get perf numbers.
## Actors
### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors?
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments.
Also, Dapr is about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview](https://github.com/dapr/docs/blob/master/overview/README.md).
### Differences between Dapr and an actor framework
Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. Because Dapr is programming-language agnostic with an HTTP/gRPC API, actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.
Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors/<actorType>/<actorId>/…`, for example `http://localhost:3500/v1.0/actors/myactor/50/method/getData` to call the `getData` method on the newly created `myactor` with id `50`.
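For example, that call can be made with `curl` against a locally running sidecar (a sketch using the actor type, id and method from the URL above):

```bash
# Invoke the getData method on the myactor actor with id 50 through the Dapr actors API.
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/method/getData
```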
The Dapr runtime SDKs have language-specific actor frameworks. The .NET SDK, for example, has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently the .NET, Java and Python SDKs have actor frameworks.
## Developer language SDKs and frameworks
### Does Dapr have any SDKs if I want to work with a particular programming language or framework?
To make using Dapr more natural for different languages, it includes language-specific SDKs for Go, Java, JavaScript, .NET, Python, Rust and C++.
These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the HTTP/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
### What frameworks does Dapr integrate with?
Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.
Dapr is integrated with the following frameworks:
- Logic Apps with Dapr [Workflows](https://github.com/dapr/workflows)
- Functions with Dapr [Azure Functions Extension](https://github.com/dapr/azure-functions-extension)
- Spring Boot Web apps in Java SDK
- ASP.NET Core in .NET SDK
- [Azure API Management](https://cloudblogs.microsoft.com/opensource/2020/09/22/announcing-dapr-integration-azure-api-management-service-apim/)


@ -1,16 +1,22 @@
# Middleware pipeline
---
type: docs
title: "Middleware pipelines"
linkTitle: "Middleware"
weight: 400
description: "Custom processing pipelines of chained middleware components"
---
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. A request goes through all defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram.
![Middleware](../../images/middleware.png)
<img src="/images/middleware.png" width=400>
## Customize processing pipeline
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware](../observability/traces.md) and CORS middleware. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware]({{< ref tracing.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
> **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test and verify that your custom pipeline is in place.
The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware](../../howto/authorization-with-oauth/README.md) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.
The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware]({{< ref oauth.md >}}) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.
```yaml
apiVersion: dapr.io/v1alpha1
@ -51,8 +57,10 @@ func GetHandler(metadata Metadata) fasthttp.RequestHandler {
}
```
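The diff above truncates the configuration, so here is a minimal sketch of what such a pipeline definition can look like, under the assumption that the OAuth middleware is registered as the `middleware.http.oauth2` component type:

```bash
# Apply a Configuration whose httpPipeline chains the OAuth 2.0 and uppercase middleware.
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase
EOF
```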
## Submitting middleware components
Your middleware component can be contributed to the https://github.com/dapr/components-contrib repository, under the */middleware* folder. Then submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go to register your middleware using the **Register** method.
## Adding new middleware components
Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).
## References
* [How-To: Configure API authorization with OAuth](../../howto/authorization-with-oauth/README.md)
Then submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **Load()** method in [registry.go](https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go) to register your middleware using the **Register** method.
## Next steps
* [How-To: Configure API authorization with OAuth]({{< ref oauth.md >}})


@ -0,0 +1,63 @@
---
type: docs
title: "Observability"
linkTitle: "Observability"
weight: 500
description: >
How to monitor applications through tracing, metrics, logs and health
---
Observability is a term from control theory. Observability means you can answer questions about what's happening on the inside of a system by observing the outside of the system, without having to ship new code to answer new questions. Observability is critical in production environments and services to debug, operate and monitor Dapr system services, components and user applications.
The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas:
## Distributed tracing
[Distributed tracing]({{<ref "tracing.md">}}) is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.
You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, nodes, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.
Dapr uses the [W3C tracing context for distributed tracing]({{<ref w3c-tracing>}}).
It is generally recommended to run Dapr in production with tracing.
### OpenTelemetry
Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for tracing, metrics and logs. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.
#### Next steps
- [How-To: Set up Zipkin]({{< ref zipkin.md >}})
- [How-To: Set up Application Insights with Open Telemetry Collector]({{< ref open-telemetry-collector.md >}})
## Metrics
[Metrics]({{<ref "metrics.md">}}) are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps.
For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc.
Dapr system services metrics show sidecar injection failures and the health of the system services, including CPU usage, number of actor placements made, etc.
#### Next steps
- [How-To: Set up Prometheus and Grafana]({{< ref prometheus.md >}})
- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
## Logs
[Logs]({{<ref "logs.md">}}) are records of events that occur and can be used to determine failures or other status conditions.
Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
#### Next steps
- [How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes]({{< ref fluentd.md >}})
- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
## Health
Dapr provides a way for a hosting platform to determine Dapr's [health]({{<ref "sidecar-health.md">}}) using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness so that action can be taken accordingly.
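A quick sketch of probing that endpoint on a locally running sidecar (port 3500 is the default HTTP port; a healthy sidecar returns an empty 204 response):

```bash
# Probe the Dapr sidecar health endpoint; -i prints the HTTP status line.
curl -i http://localhost:3500/v1.0/healthz
```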
#### Next steps
- [Health API]({{< ref health_api.md >}})


@ -1,22 +1,17 @@
# Dapr overview
---
type: docs
title: "Overview"
linkTitle: "Overview"
weight: 100
description: >
Introduction to the Distributed Application Runtime
---
Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, and that embraces the diversity of languages and developer frameworks.
## Contents:
- [Any language, any framework, anywhere](#any-language-any-framework-anywhere)
- [Microservice building blocks for cloud and edge](#microservice-building-blocks-for-cloud-and-edge)
- [Sidecar architecture](#sidecar-architecture)
- [Developer language SDKs and frameworks](#developer-language-sdks-and-frameworks)
- [Designed for operations](#designed-for-operations)
- [Run anywhere](#Run-anywhere)
- [Running Dapr on a local developer machine in self hosted mode](#running-dapr-on-a-local-developer-machine-in-self-hosted-mode)
- [Running Dapr in Kubernetes mode](#running-dapr-in-kubernetes-mode)
## Any language, any framework, anywhere
<img src="../images/overview.png" width=800>
<img src="/images/overview.png" width=1000>
Today we are experiencing a wave of cloud adoption. Developers are comfortable with web + database application architectures (for example classic 3-tier designs) but not with microservice application architectures, which are inherently distributed. It's hard to become a distributed systems expert, nor should you have to. Developers want to focus on business logic, while leaning on the platforms to imbue their applications with scale, resiliency, maintainability, elasticity and the other attributes of cloud-native architectures.
@ -28,7 +23,7 @@ Using Dapr you can easily build microservice applications using any language, an
## Microservice building blocks for cloud and edge
<img src="../images/building_blocks.png" width=800>
<img src="/images/building_blocks.png" width=1000>
There are many considerations when architecting microservices applications. Dapr provides best practices for common capabilities when building microservice applications that developers can use in a standard way and deploy to any environment. It does this by providing distributed system building blocks.
@ -36,13 +31,13 @@ Each of these building blocks is independent, meaning that you can use one, some
| Building Block | Description |
|----------------|-------------|
| **[Service Invocation](../concepts/service-invocation)** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment.
| **[State Management](../concepts/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others.
| **[Publish and Subscribe Messaging](../concepts/publish-subscribe-messaging)** | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee.
| **[Resource Bindings](../concepts/bindings)** | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| **[Actors](../concepts/actors)** | A pattern for stateful and stateless objects that make concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime including concurrency, state, life-cycle management for actor activation/deactivation and timers and reminders to wake-up actors.
| **[Observability](../concepts/observability)** | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
| **[Secrets](../concepts/secrets)** | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
| [**Service-to-service invocation**]({{<ref "service-invocation-overview.md">}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment.
| [**State management**]({{<ref "state-management-overview.md">}}) | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others.
| [**Publish and subscribe**]({{<ref "pubsub-overview.md">}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee.
| [**Resource bindings**]({{<ref "bindings-overview.md">}}) | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| [**Actors**]({{<ref "actors-overview.md">}}) | A pattern for stateful and stateless objects that make concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime including concurrency, state, life-cycle management for actor activation/deactivation and timers and reminders to wake-up actors.
| [**Observability**]({{<ref "observability-concept.md">}}) | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
| [**Secrets**]({{<ref "secrets-overview.md">}}) | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
## Sidecar architecture
@ -55,13 +50,13 @@ Dapr can be hosted in multiple environments, including self hosted for local dev
In self hosted mode Dapr runs as a separate side-car process which your service code can call via HTTP or gRPC. In self hosted mode, you can also deploy Dapr onto a set of VMs.
<img src="../images/overview-sidecar.png" width=600>
<img src="/images/overview-sidecar.png" width=1000>
### Kubernetes hosted
In container hosting environments such as Kubernetes, Dapr runs as a side-car container with the application container in the same pod.
<img src="../images/overview-sidecar-kubernetes.png" width=600>
<img src="/images/overview-sidecar-kubernetes.png" width=1000>
## Developer language SDKs and frameworks
@ -74,9 +69,10 @@ To make using Dapr more natural for different languages, it also includes langua
- **[Java SDK](https://github.com/dapr/java-sdk)**
- **[Javascript SDK](https://github.com/dapr/js-sdk)**
- **[Python SDK](https://github.com/dapr/python-sdk)**
- **[RUST SDK](https://github.com/dapr/rust-sdk)**
- **[.NET SDK](https://github.com/dapr/dotnet-sdk)**
> Note: Dapr is language agnostic and provides a [RESTful HTTP API](../reference/api/README.md) in addition to the protobuf clients.
> Note: Dapr is language agnostic and provides a [RESTful HTTP API]({{< ref api >}}) in addition to the protobuf clients.
### Developer frameworks
Dapr can be used from any developer framework. Here are some that have been integrated with Dapr.
@ -86,10 +82,10 @@ Dapr can be used from any developer framework. Here are some that have been int
In the Dapr [Java SDK](https://github.com/dapr/java-sdk) you can find [Spring Boot](https://spring.io/) integration.
Dapr integrates easily with Python [Flask](https://pypi.org/project/Flask/) and node [Express](http://expressjs.com/), which you can find in the [getting started samples](https://github.com/dapr/docs/tree/master/getting-started)
Dapr integrates easily with Python [Flask](https://pypi.org/project/Flask/) and node [Express](http://expressjs.com/). See examples in the [Dapr quickstarts](https://github.com/dapr/quickstarts).
#### Actors
The Dapr SDKs support [virtual actors](../concepts/actors), which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications.
The Dapr SDKs support [virtual actors]({{< ref actors >}}), which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications.
#### Azure Functions
Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information read
@ -100,26 +96,26 @@ To enable developers to easily build workflow applications that use Daprs cap
[cloud-native workflows using Dapr and Logic Apps](https://cloudblogs.microsoft.com/opensource/2020/05/26/announcing-cloud-native-workflows-dapr-logic-apps/) and visit the [Dapr workflow](https://github.com/dapr/workflows) repo to try out the samples.
## Designed for Operations
Dapr is designed for operations. The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars.
Dapr is designed for [operations](/operations/). The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars.
The [monitoring dashboard](../reference/dashboard/README.md) provides deeper visibility into the Dapr system services and side-cars and the [observability capabilities](../concepts/observability) of Dapr provide insights into your application such as tracing and metrics.
The [monitoring tools support](/operations/monitoring/) provides deeper visibility into the Dapr system services and side-cars and the [observability capabilities]({{<ref "observability-concept.md">}}) of Dapr provide insights into your application such as tracing and metrics.
## Run anywhere
### Running Dapr on a local developer machine in self hosted mode
Dapr can be configured to run on your local developer machine in [self hosted mode](../concepts/hosting/). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
Dapr can be configured to run on your local developer machine in [self-hosted mode]({{< ref self-hosted >}}). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples](../getting-started).
You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}).
<img src="../images/overview_standalone.png" width=800>
<img src="/images/overview_standalone.png" width=800>
### Running Dapr in Kubernetes mode
Dapr can be configured to run on any [Kubernetes cluster](../concepts/hosting/). In Kubernetes the `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster.
Dapr can be configured to run on any [Kubernetes cluster]({{< ref kubernetes >}}). In Kubernetes the `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster.
The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview](../concepts/security/README.md#dapr-to-dapr-communication)
The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}})
<img src="../images/overview_kubernetes.png" width=800>
<img src="/images/overview_kubernetes.png" width=800>
Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. Try this out with the [Kubernetes getting started sample](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)
Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. Try this out with the [Kubernetes quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes).


@ -1,18 +1,14 @@
# Security
---
type: docs
title: "Security"
linkTitle: "Security"
weight: 600
description: >
How Dapr is designed with security in mind
---
This article addresses multiple security considerations when using Dapr in a distributed application including:
- [Sidecar-to-App Communication](#sidecar-to-app-communication)
- [Sidecar-to-Sidecar Communication](#sidecar-to-sidecar-communication)
- [Sidecar-to-system-services-communication](#sidecar-to-system-services-communication)
- [Component namespace scopes and secrets](#component-namespace-scopes-and-secrets)
- [Network Security](#network-security)
- [Bindings Security](#bindings-security)
- [State Store Security](#state-store-security)
- [Management Security](#management-security)
- [Threat Model](#threat-model)
- [Security Audit June 2020](#security-audit-june-2020)
Several of the areas above are addressed through encryption of data in transit. One of the security mechanisms that Dapr employs for encrypting data in transit is [mutual authentication TLS](https://en.wikipedia.org/wiki/Mutual_authentication) or mTLS. mTLS offers a few key features for network traffic inside your application:
- Two-way authentication - the client proving its identity to the server, and vice-versa
@ -22,11 +18,11 @@ Mutual TLS is useful in almost all scenarios, but especially so for systems subj
Dapr enables mTLS and all the features described in this document in your application with little to no extra code or complex configuration inside your production systems.
## Sidecar-to-App communication
## Sidecar-to-app communication
The Dapr sidecar runs close to the application through **localhost**, and is recommended to run under the same network boundary as the app. While many cloud-native systems today consider the pod level (on Kubernetes, for example) as a trusted security boundary, Dapr provides users with API-level authentication using tokens. This feature guarantees that even on localhost, only an authenticated caller may call into Dapr.
## Sidecar-to-Sidecar communication
## Sidecar-to-sidecar communication
Dapr includes an "on by default", automatic mutual TLS that provides in-transit encryption for traffic between Dapr sidecars.
To achieve this, Dapr leverages a system service named `Sentry` which acts as a Certificate Authority (CA) and signs workload (app) certificate requests originating from the Dapr sidecar.
@ -46,17 +42,17 @@ Dapr also supports strong identities when deployed on Kubernetes, relying on a p
By default, a workload cert is valid for 24 hours and the clock skew is set to 15 minutes.
Mutual TLS can be turned off/on by editing the default configuration that is deployed with Dapr via the `spec.mtls.enabled` field.
This can be done for both Kubernetes and self hosted modes. Details for how to do this can be found [here](../../howto/configure-mtls).
This can be done for both Kubernetes and self hosted modes. Details for how to do this can be found [here]({{< ref mtls.md >}}).
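As a minimal sketch of what editing that configuration could look like on Kubernetes (assuming the default configuration resource is named `daprsystem`; the namespace is illustrative, and the TTL and clock skew values mirror the defaults described below):
```bash
# Apply a Dapr Configuration resource that explicitly enables mTLS.
# Set spec.mtls.enabled to false to turn it off instead.
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "24h"
    allowedClockSkew: "15m"
EOF
```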
### mTLS self hosted
The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service and stored in a file.
![mTLS self hosted](../../images/security-mTLS-sentry-selfhosted.png)
<img src="/images/security-mTLS-sentry-selfhosted.png" width=1000>
### mTLS in Kubernetes
The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service and stored as a Kubernetes secret.
![mTLS Kubernetes](../../images/security-mTLS-sentry-kubernetes.png)
<img src="/images/security-mTLS-sentry-kubernetes.png" width=1000>
## Sidecar to system services communication
@ -73,15 +69,15 @@ When the Dapr sidecar initializes, it authenticates with the system pods using t
### mTLS to system services in Kubernetes
The diagram below shows secure communication between the Dapr sidecar and the Dapr Sentry (Certificate Authority), Placement (actor placement) and the Kubernetes Operator system services
![mTLS System Services on Kubernetes](../../images/security-mTLS-dapr-system-services.png)
<img src="/images/security-mTLS-dapr-system-services.png" width=1000>
## Component namespace scopes and secrets
Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace. See the [components scope topic](../../howto/components-scopes) for more details.
Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace. See the [components scope documentation]({{<ref "component-scopes.md">}}) for more details.
Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secret topic](../secrets/README.md) for more details.
Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secret store overview]({{<ref "secrets-overview.md">}}) for more details.
In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application-level scoping, see [here](../../howto/components-scopes#application-access-to-components-with-scopes)
In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application-level scoping, see [here]({{<ref "component-scopes.md#application-access-to-components-with-scopes">}}).
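As a minimal sketch of application-level scoping (the component name and app IDs are illustrative; the top-level `scopes` field lists the app IDs allowed to use the component):
```bash
# Write a Redis state store component that only app1 and app2 may load.
cat > ./components/statestore.yaml <<EOF
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
scopes:
- app1
- app2
EOF
```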
## Network security
@ -107,12 +103,14 @@ When deploying on Kubernetes, you can use regular [Kubernetes RBAC]( https://kub
When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.
## Threat Model
## Threat model
Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified, enumerated, and mitigations can be prioritized. The Dapr threat model is below.
![Threat Model](../../images/threat_model.png)
<img src="/images/security-threat-model.png" alt="Dapr threat model" width=1000>
## Security Audit June 2020
## Security audit
### June 2020
In June 2020, Dapr underwent a security audit from Cure53, a CNCF-approved cybersecurity firm.
The test focused on the following:
@ -129,7 +127,7 @@ The test focused on the following:
* DoS attacks
* Penetration testing
The full report can be found [here](./audits/DAP-01-report.pdf).
The full report can be found [here](/docs/Dapr-july-2020-security-audit-report.pdf).
Two issues, one critical and one high, were fixed during the test.
As of July 21st 2020, Dapr has 0 critical issues, 2 high, 2 medium, 1 low, and 1 informational.

View File

@ -0,0 +1,8 @@
---
type: docs
title: "Contributing to Dapr"
linkTitle: "Contributing"
weight: 100
description: >
How to contribute to the Dapr project
---

View File

@ -0,0 +1,214 @@
---
type: docs
title: "Docs contributions"
linkTitle: "Docs"
weight: 2000
description: >
Guidelines for contributing to the Dapr Docs
---
This guide contains information about contributions to the [Dapr docs repository](https://github.com/dapr/docs). Please review the guidelines below before making a contribution to the Dapr docs. This guide assumes you have already reviewed the [general guidance]({{< ref contributing-overview>}}) which applies to any Dapr project contributions.
Dapr docs are published to [docs.dapr.io](https://docs.dapr.io). Therefore, any contribution must ensure docs can be compiled and published correctly.
## Prerequisites
The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme. To verify docs are built correctly before submitting a contribution, you should set up your local environment to build and display the docs locally.
Fork the [docs repository](https://github.com/dapr/docs) to work on any changes.
Follow the instructions in the repository [README.md](https://github.com/dapr/docs/blob/master/README.md#environment-setup) to install Hugo locally and build the docs website.
## Style and tone
These conventions should be followed throughout all Dapr documentation to ensure a consistent experience across all docs.
- **Casing** - Use upper case only at the start of a sentence or for proper nouns including names of technologies (Dapr, Redis, Kubernetes etc.).
- **Headers and titles** - Headers and titles must be descriptive and clear. Use sentence casing, i.e. apply the casing guidance above to headers and titles as well.
- **Use simple sentences** - Easy-to-read sentences mean the reader can quickly use the guidance you share.
- **Avoid the first person** - Use 2nd person "you", "your" instead of "I", "we", "our".
- **Assume a new developer audience** - Some obvious steps can seem hard. For example, instead of writing "now set an environment variable to value X", give the reader the explicit command to do so, rather than having them figure it out.
## Contributing a new docs page
- Make sure the documentation you are writing is in the correct place in the hierarchy.
- Avoid creating new sections where possible, there is a good chance a proper place in the docs hierarchy already exists.
- Make sure to include a complete [Hugo front-matter](#front-matter).
### Contributing a new concept doc
- Ensure the reader can understand why they should care about this feature. What problems does it help them solve?
- Ensure the doc references the spec for examples of using the API.
- Ensure the spec is consistent with concept in terms of names, parameters and terminology. Update both the concept and the spec as needed.
- Avoid just repeating the spec. The idea is to give the reader more information and background on the capability so that they can try this out. Hence provide more information and implementation details where possible.
- Provide a link to the spec in the [Reference]({{<ref reference >}}) section.
- Where possible reference a practical How-To doc.
### Contributing a new How-To guide
- `How To` articles are meant to provide step-by-step practical guidance to readers who wish to enable a feature, integrate a technology, or use Dapr in a specific scenario.
- Sub-directory naming - the directory name should be descriptive and, if referring to a specific component or concept, should begin with the relevant name. Example: *pubsub-namespaces*.
- Do not assume the reader is using a specific environment unless the article itself is specific to an environment. This includes OS (Windows/Linux/MacOS), deployment target (Kubernetes, IoT etc.) or programming language. If instructions vary between operating systems, provide guidance for all.
- Include code/sample/config snippets that can be easily copied and pasted.
- At the end of the article, provide the reader with related links and next steps (this can be other relevant "how-to", samples for reference or related concepts).
## Requirements for docs.dapr.io
Any contribution must not break the website build. The way Hugo builds the website requires following the guidance below.
### Files and folder names
File and folder names should be globally unique.
- `/service-invocation`
- `service-invocation-overview.md`
### Front-matter
[Front-matter](https://www.docsy.dev/docs/adding-content/content/#page-frontmatter) is what takes regular markdown files and upgrades them into Hugo compatible docs for rendering into the nav bars and ToCs.
Every page needs a section at the top of the document like this:
```yaml
---
type: docs
title: "TITLE FOR THE PAGE"
linkTitle: "SHORT TITLE FOR THE NAV BAR"
weight: (number)
description: "1+ SENTENCES DESCRIBING THE ARTICLE"
---
```
#### Example
```yaml
---
type: docs
title: "Service invocation overview"
linkTitle: "Overview"
weight: 10
description: "A quick overview of Dapr service invocation and how to use it to invoke services within your application"
---
```
> Weight determines the order of the pages in the left sidebar, with 0 being the top-most.
Front-matter should be completed with all fields including type, title, linkTitle, weight, and description.
- `title` should be 1 sentence, no period at the end
- `linkTitle` should be 1-3 words, with the exception of a "How-To:" prefix.
- `description` should be 1-2 sentences on what the reader will learn, accomplish, or do in this doc.
As per the [style conventions](#style-and-tone), titles should only capitalize the first word and proper nouns, with the exception of "How-To:"
- "Getting started with Dapr service invocation"
- "How-To: Setup a local Redis instance"
### Referencing other pages
Hugo `ref` and `relref` [shortcodes](https://gohugo.io/content-management/cross-references/) are used to reference other pages and sections. They also cause the build to break if a page is incorrectly renamed or removed, which catches broken links early.
This shortcode, written inline with the rest of the markdown page, will link to the _index.md of the section/folder name:
```md
{{</* ref "folder" */>}}
```
This shortcode will link to a specific page:
```md
{{</* ref "page.md" */>}}
```
> Note that all pages and folders need to have globally unique names in order for the ref shortcode to work properly.
### Images
The markdown spec used by Docsy and Hugo does not give an option to resize images using markdown notation. Instead, raw HTML is used.
Begin by placing images under `/daprdocs/static/images` with the naming convention of `[page-name]-[image-name].[png|jpg|svg]`.
Then link to the image using:
```md
<img src="/images/[image-filename]" width=1000 alt="Description of image">
```
> Don't forget to set the alt attribute to keep the docs readable for our visually impaired users.
#### Example
This HTML will display the `dapr-overview.png` image on the `overview.md` page:
```md
<img src="/images/overview-dapr-overview.png" width=1000 alt="Overview diagram of Dapr and its building blocks">
```
### Tabbed content
Tabs are made possible through [Hugo shortcodes](https://gohugo.io/content-management/shortcodes/).
The overall format is:
```
{{</* tabs [Tab1] [Tab2]>}}
{{% codetab %}}
[Content for Tab1]
{{% /codetab %}}
{{% codetab %}}
[Content for Tab2]
{{% /codetab %}}
{{< /tabs */>}}
```
All content you author will be rendered to Markdown, so you can include images, code blocks, YouTube videos, and more.
#### Example
````
{{</* tabs Windows Linux MacOS>}}
{{% codetab %}}
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
{{% /codetab %}}
{{% codetab %}}
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
```bash
brew install dapr/tap/dapr-cli
```
{{% /codetab %}}
{{< /tabs */>}}
````
This example will render to this:
{{< tabs Windows Linux MacOS>}}
{{% codetab %}}
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
{{% /codetab %}}
{{% codetab %}}
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
```bash
brew install dapr/tap/dapr-cli
```
{{% /codetab %}}
{{< /tabs >}}
### YouTube videos
Hugo can automatically embed YouTube videos using a shortcode:
```
{{</* youtube [VIDEO ID] */>}}
```
#### Example
Given the video https://youtu.be/dQw4w9WgXcQ, the shortcode would be:
```
{{</* youtube dQw4w9WgXcQ */>}}
```
### References
- [Docsy authoring guide](https://www.docsy.dev/docs/adding-content/)

View File

@ -0,0 +1,69 @@
---
type: docs
title: "Contribution overview"
linkTitle: "Overview"
weight: 1000
description: >
General guidance for contributing to any of the Dapr project repositories
---
Thank you for your interest in Dapr!
This document provides the guidelines for how to contribute to the [Dapr project](https://github.com/dapr) through issues and pull-requests. Contributions can also come in additional ways such as engaging with the community in community calls, commenting on issues or pull requests and more.
See the [Dapr community repository](https://github.com/dapr/community) for more information on community engagement and community membership.
> If you are looking to contribute to the Dapr docs, please also see the specific guidelines for [docs contributions]({{< ref contributing-docs >}}).
## Issues
### Issue types
In most Dapr repositories there are usually 4 types of issues:
- Issue/Bug: You've found a bug with the code, and want to report it, or create an issue to track the bug.
- Issue/Discussion: You have something on your mind, which requires input from others in a discussion, before it eventually manifests as a proposal.
- Issue/Proposal: Used for items that propose a new idea or functionality. This allows feedback from others before code is written.
- Issue/Question: Use this issue type if you need help or have a question.
### Before submitting
Before you submit an issue, make sure you've checked the following:
1. Is it the right repository?
- The Dapr project is distributed across multiple repositories. Check the list of [repositories](https://github.com/dapr) if you aren't sure which repo is the correct one.
1. Check for existing issues
- Before you create a new issue, please do a search in [open issues](https://github.com/dapr/dapr/issues) to see if the issue or feature request has already been filed.
- If you find your issue already exists, make relevant comments and add your [reaction](https://github.com/blog/2119-add-reaction-to-pull-requests-issues-and-comments). Use a reaction:
- 👍 up-vote
- 👎 down-vote
1. For bugs
- Check it's not an environment issue. For example, if running on Kubernetes, make sure prerequisites (state stores, bindings, etc.) are in place.
- Collect as much data as possible. This usually comes in the form of logs and/or a stacktrace. If running on Kubernetes or another environment, look at the logs of the Dapr services (runtime, operator, placement service). More details on how to get logs can be found [here](https://github.com/dapr/docs/tree/master/best-practices/troubleshooting/logs.md).
1. For proposals
- Many changes to the Dapr runtime may require changes to the API. In that case, the best place to discuss the potential feature is the main [Dapr repo](https://github.com/dapr/dapr).
- Other examples could include bindings, state stores or entirely new components.
## Pull requests
All contributions come through pull requests. To submit a proposed change, follow this workflow:
1. Make sure there's an issue (bug or proposal) raised, which sets the expectations for the contribution you are about to make.
1. Fork the relevant repo and create a new branch
1. Create your change
- Code changes require tests
1. Update relevant documentation for the change
1. Commit and open a PR
1. Wait for the CI process to finish and make sure all checks are green
1. A maintainer of the project will be assigned, and you can expect a review within a few days
#### Use work-in-progress PRs for early feedback
A good way to communicate before investing too much time is to create a "Work-in-progress" PR and share it with your reviewers. The standard way of doing this is to add a "[WIP]" prefix to your PR's title and assign the **do-not-merge** label. This lets people looking at your PR know that it is not fully baked yet.
### Use of third-party code
- Third-party code must include licenses.
## Code of Conduct
Please see the [Dapr community code of conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md).

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Developing applications with Dapr"
linkTitle: "Developing applications"
description: "Tools, tips, and information on how to build your application with Dapr"
weight: 30
---

View File

@ -0,0 +1,11 @@
---
type: docs
title: "Building blocks"
linkTitle: "Building blocks"
weight: 10
description: "Dapr capabilities that solve common development challenges for distributed applications"
---
Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
<img src="/images/buildingblocks-overview.png" alt="Diagram showing the different Dapr building blocks" width=1000>

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Actors"
linkTitle: "Actors"
weight: 50
description: Encapsulate code and data in reusable actor objects as a common microservices design pattern
---

View File

@ -1,4 +1,10 @@
# Introduction to actors
---
type: docs
title: "Introduction to actors"
linkTitle: "Actors background"
weight: 20
description: Learn more about the actor pattern
---
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes **actors** as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
@ -10,15 +16,8 @@ Dapr includes a runtime that specifically implements the [Virtual Actor pattern]
## Quick links
- [Dapr Actor Features](./actors_features.md)
- [Dapr Actor API Spec](../../reference/api/actors_api.md)
## Contents
- [Actors in Dapr](#actors-in-dapr)
- [Actor Lifetime](#actor-lifetime)
- [Distribution and Failover](#distribution-and-failover)
- [Actor Communication](#actor-communication)
- [Dapr Actor Features]({{< ref actors-overview.md >}})
- [Dapr Actor API Spec]({{< ref actors_api.md >}} )
### When to use actors
@ -34,7 +33,7 @@ The actor design pattern can be a good fit to a number of distributed systems pr
Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.
<img src="../../images/actor_game_example.png" width=400>
<img src="/images/actor_background_game_example.png" width=400>
## Actor lifetime
@ -57,11 +56,11 @@ Actors are distributed across the instances of the actor service, and those inst
### Actor placement service
The Dapr actor runtime manages distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create, and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.
![Placement service registration](../../images/actors_placement_service_registration.png)
<img src="/images/actors_background_placement_service_registration.png" width=600>
When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.
![Actor ID creation and calling](../../images/actors_id_hashing_calling.png)
<img src="/images/actors_background_id_hashing_calling.png" width=600>
This simplifies some choices but also carries some considerations:
@ -80,7 +79,7 @@ POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<met
You can provide any data for the actor method in the request body, and the response is returned in the response body, which contains the data from the actor call.
Refer to [Dapr Actor Features](./actors_features.md) for more details.
Refer to [Dapr Actor Features]({{< ref actors-overview.md >}}) for more details.
### Concurrency
@ -90,7 +89,8 @@ A single actor instance cannot process more than one request at a time. An actor
Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.
!["Actor concurrency"](../../images/actors_communication.png)
<img src="/images/actors_background_communication.png" width=600>
### Turn-based access
@ -100,4 +100,5 @@ The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
![""](../../images/actors_concurrency.png)
<img src="/images/actors_background_concurrency.png" width=600>

View File

@ -1,10 +1,12 @@
# Dapr actors runtime
---
type: docs
title: "Dapr actors overview"
linkTitle: "Overview"
weight: 10
description: Overview of Dapr support for actors
---
The Dapr actors runtime provides following capabilities:
- [Method Invocation](#actor-method-invocation)
- [State Management](#actor-state-management)
- [Timers and Reminders](#actor-timers-and-reminders)
The Dapr actors runtime provides support for [virtual actors]({{< ref actors-background.md >}}) through the following capabilities:
## Actor method invocation
@ -16,7 +18,7 @@ POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/meth
You can provide any data for the actor method in the request body, and the response is returned in the response body, which contains the data from the actor method call.
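For example, a hypothetical call to a `getCount` method on an actor of type `myactor` with ID `50` (the type, ID, and method names are illustrative) could look like:
```bash
# Invoke the actor method through the Dapr sidecar; the request body
# carries the method's input and the response body carries its output.
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/method/getCount \
  -H "Content-Type: application/json" \
  -d '{}'
```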
Refer to the [api spec](../../reference/api/actors_api.md#invoke-actor-method) for more details.
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
## Actor state management
@ -78,7 +80,7 @@ You can remove the actor timer by calling
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
Refer to the [api spec](../../reference/api/actors_api.md#invoke-timer) for more details.
Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
@ -116,7 +118,7 @@ The following request body configures a reminder with a `dueTime` 15 seconds and
}
```
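As a concrete sketch of registering that reminder (the actor type, ID, reminder name, and the 30-second `period` are illustrative; the 15-second `dueTime` matches the description above):
```bash
# Register a reminder named "checkRebels" on actor type "myactor", ID "50":
# it first fires after 15 seconds and then repeats every 30 seconds.
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/reminders/checkRebels \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m15s0ms", "period": "0h0m30s0ms" }'
```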
#### Retrieve Actor Reminder
#### Retrieve actor reminder
You can retrieve the actor reminder by calling
@ -124,7 +126,7 @@ You can retrieve the actor reminder by calling
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### Remove the Actor Reminder
#### Remove the actor reminder
You can remove the actor reminder by calling
@ -132,4 +134,4 @@ You can remove the actor reminder by calling
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Refer to the [api spec](../../reference/api/actors_api.md#invoke-reminder) for more details.
Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Bindings"
linkTitle: "Bindings"
weight: 40
description: Trigger code from and interface with a large array of external resources
---

View File

@ -0,0 +1,55 @@
---
type: docs
title: "Bindings overview"
linkTitle: "Overview"
weight: 100
description: Overview of the Dapr bindings building block
---
## Introduction
Using bindings, you can trigger your app with events coming in from external systems, or interface with external systems. This building block provides several benefits for you and your code:
- Remove the complexities of connecting to, and polling from, messaging systems such as queues and message buses
- Focus on business logic and not implementation details of how to interact with a system
- Keep your code free from SDKs or libraries
- Handle retries and failure recovery
- Switch between bindings at run time
- Build portable applications where environment-specific bindings are set-up and no code changes are required
For a specific example, bindings would allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, or worrying about polling from Twilio (or using websockets, etc.).
Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings).
## Input bindings
Input bindings are used to trigger your application when an event from an external resource has occurred.
An optional payload and metadata may be sent with the request.
In order to receive events from an input binding:
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events
> On startup, Dapr sends an `OPTIONS` request for all defined input bindings to the application, and expects a status code other than `NOT FOUND (404)` if the application wants to subscribe to the binding.
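As a minimal sketch of the first step, the component below uses the cron input binding (assuming the `bindings.cron` component type; the name `checkup` and schedule are illustrative). Because the binding is named `checkup`, Dapr will `POST` each triggered event to the app's `/checkup` endpoint after the `OPTIONS` handshake described above:
```bash
# Write a cron input binding that triggers the app every 10 seconds.
cat > ./components/cron.yaml <<EOF
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: checkup
  namespace: default
spec:
  type: bindings.cron
  metadata:
  - name: schedule
    value: "@every 10s"
EOF
```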
Read the [Create an event-driven app using input bindings]({{< ref howto-triggers.md >}}) page to get started with input bindings.
## Output bindings
Output bindings allow users to invoke external resources.
An optional payload and metadata can be sent with the invocation request.
In order to invoke an output binding:
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload
Read the [Send events to external systems using output bindings]({{< ref howto-bindings.md >}}) page to get started with output bindings.
## Related topics
- [Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
- [Invoke different resources using output bindings]({{< ref howto-bindings.md >}})

View File

@ -1,6 +1,12 @@
# Send events to external systems using Output Bindings
---
type: docs
title: "How-To: Use bindings to interface with external resources"
linkTitle: "How-To: Bindings"
description: "Invoke external systems with Dapr output bindings"
weight: 300
---
Using bindings, its possible to invoke external resources without tying in to special SDK or libraries.
Using bindings, it is possible to invoke external resources without tying in to special SDKs or libraries.
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
@ -10,7 +16,7 @@ Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&
An output binding represents a resource that Dapr uses to invoke and send messages to.
For the purpose of this guide, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref bindings >}}).
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
@ -32,29 +38,29 @@ spec:
value: topic1
```
Here, we create a new binding component with the name of `myevent`.
Here, create a new binding component with the name of `myevent`.
Inside the `metadata` section, we configure Kafka related properties such as the topic to publish the message to and the broker.
Inside the `metadata` section, configure Kafka related properties such as the topic to publish the message to and the broker.
## 2. Send an event
All that's left now is to invoke the bindings endpoint on a running Dapr instance.
We can do so using HTTP:
You can do so using HTTP:
```bash
curl -X POST http://localhost:3500/v1.0/bindings/myevent -H "Content-Type: application/json" -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
```
As seen above, we invoked the `/bindings` endpoint with the name of the binding to invoke, in our case it's `myevent`.
As seen above, you invoked the `/bindings` endpoint with the name of the binding to invoke, in this case it's `myevent`.
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
You'll also notice that there's an `operation` field that tells the binding what we need it to do.
You can check [here](../../reference/specs/bindings) which operations are supported for every output binding.
You'll also notice that there's an `operation` field that tells the binding what you need it to do.
You can check [here]({{< ref bindings >}}) which operations are supported for every output binding.
## References
* Binding [API](https://github.com/dapr/docs/blob/master/reference/api/bindings_api.md)
* Binding [Components](https://github.com/dapr/docs/tree/master/concepts/bindings)
* Binding [Detailed specifications](https://github.com/dapr/docs/tree/master/reference/specs/bindings)
- [Binding API]({{< ref bindings_api.md >}})
- [Binding components]({{< ref bindings >}})
- [Binding detailed specifications]({{< ref supported-bindings >}})

View File

@ -1,4 +1,10 @@
# Create an event-driven app using input bindings
---
type: docs
title: "How-To: Trigger your application with input bindings"
linkTitle: "How-To: Triggers"
description: "Use Dapr input bindings to trigger event driven applications"
weight: 200
---
Using bindings, your code can be triggered with incoming events from different resources, which can be anything: a queue, messaging pipeline, cloud service, filesystem, etc.
@ -10,15 +16,15 @@ Dapr bindings allow you to:
* Replace bindings without changing your code
* Focus on business logic and not the event resource implementation
For more info on bindings, read [this](../../concepts/bindings/README.md) link.
For more info on bindings, read [this overview]({{<ref bindings-overview.md>}}).
For a complete sample showing bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
For a quickstart sample showing bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
## 1. Create a binding
An input binding represents an event resource that Dapr uses to read events from and push to your application.
For the purpose of this HowTo, you'll use a Kafka binding. You can find a list of the different binding specs [here](../../reference/specs/bindings/README.md).
For the purpose of this HowTo, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref supported-bindings >}}).
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)

View File

@ -0,0 +1,9 @@
---
type: docs
title: "Observability"
linkTitle: "Observability"
weight: 60
description: See and measure the message calls across components and networked services
---
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).

View File

@ -1,4 +1,10 @@
# Logs
---
type: docs
title: "Logs"
linkTitle: "Logs"
weight: 3000
description: "Understand Dapr logging"
---
Dapr produces structured logs to stdout, either as plain text or JSON formatted. By default, all Dapr processes (runtime and system services) write logs to the console in plain text. To enable JSON-formatted logs, add the `--log-as-json` command flag when running Dapr processes.
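For example, a sketch of running an app with JSON-formatted sidecar logs (the app ID and command are placeholders):
```bash
# The --log-as-json flag switches the sidecar's log output from
# plain text to JSON, which is easier for log collectors to parse.
dapr run --app-id myapp --log-as-json node app.js
```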
@ -80,17 +86,17 @@ spec:
## Log collectors
If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON formatted logs. This [how-to](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) shows how to configure Fluentd in your cluster.
If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON formatted logs. This [how-to]({{< ref fluentd.md >}}) shows how to configure Fluentd in your cluster.
If you are using the Azure Kubernetes Service, you can use the default OMS Agent to collect logs with Azure Monitor without needing to install Fluentd.
## Search engines
If you use [Fluentd](https://www.fluentd.org/), we recommend using Elasticsearch and Kibana. This [how-to](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) shows how to set up Elasticsearch and Kibana in your Kubernetes cluster.
If you use [Fluentd](https://www.fluentd.org/), we recommend using Elasticsearch and Kibana. This [how-to]({{< ref fluentd.md >}}) shows how to set up Elasticsearch and Kibana in your Kubernetes cluster.
If you are using the Azure Kubernetes Service, you can use [Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview) without installing any additional monitoring tools. Also read [How to enable Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-onboard).
## References
- [How-to: Set up Fluentd, Elasticsearch, and Kibana](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md)
- [How-to: Set up Azure Monitor in Azure Kubernetes Service](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
- [How-to: Set up Fluentd, Elasticsearch, and Kibana]({{< ref fluentd.md >}})
- [How-to: Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}})

View File

@ -1,4 +1,10 @@
# Metrics
---
type: docs
title: "Metrics"
linkTitle: "Metrics"
weight: 4000
description: "Observing Dapr metrics"
---
Dapr exposes a [Prometheus](https://prometheus.io/) metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving and to set up alerts for specific conditions.
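As a quick sketch, you can inspect these metrics locally (assuming the default metrics port of 9090; adjust if your sidecar overrides it):
```bash
# Fetch the Prometheus-format metrics exposed by a running Dapr sidecar.
curl http://localhost:9090/metrics
```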
@ -31,6 +37,6 @@ Each Dapr system process emits Go runtime/process metrics by default and has its own
## References
* [Howto: Run Prometheus locally](../../howto/setup-monitoring-tools/observe-metrics-with-prometheus-locally.md)
* [Howto: Set up Prometheus and Grafana for metrics](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md)
* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
* [Howto: Run Prometheus locally]({{< ref prometheus.md >}})
* [Howto: Set up Prometheus and Grafana for metrics]({{< ref grafana.md >}})
* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr]({{< ref azure-monitor.md >}})

View File

@ -1,13 +1,19 @@
# Health
---
type: docs
title: "Sidecar health"
linkTitle: "Sidecar health"
weight: 5000
description: Dapr sidecar health checks.
---
Dapr provides a way to determine its health using an HTTP `/healthz` endpoint.
With this endpoint, the Dapr process, or sidecar, can be probed for its health and hence determine its readiness and liveness. See the [health API](../../reference/api/health_api.md)
With this endpoint, the Dapr process, or sidecar, can be probed for its health and hence determine its readiness and liveness. See the [health API]({{< ref health_api.md >}}).
The Dapr `/healthz` endpoint can be used by health probes from the application hosting platform. This topic describes how Dapr integrates with probes from different hosting platforms.
As a user, when deploying Dapr to a hosting platform (for example Kubernetes), the Dapr health endpoint is automatically configured for you. There is nothing you need to configure.
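As a quick sketch, you can also probe a locally running sidecar yourself (assuming the default Dapr HTTP port of 3500; a healthy sidecar responds with 204 No Content):
```bash
# -i prints the status line so you can see the 204 response.
curl -i http://localhost:3500/v1.0/healthz
```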
Note: Dapr actors also have a health API endpoint where Dapr probes the application for a response to a signal from Dapr that the actor application is healthy and running. See [actor health API](../../reference/api/actors_api.md#health-check)
Note: Dapr actors also have a health API endpoint where Dapr probes the application for a response to a signal from Dapr that the actor application is healthy and running. See [actor health API]({{< ref "actors_api.md#health-check" >}})
## Health endpoint: Integration with Kubernetes
@ -20,7 +26,7 @@ The kubelet uses readiness probes to know when a container is ready to start acc
When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr healthz endpoint. This is done by the `Sidecar Injector` system service. The integration with the kubelet is shown in the diagram below.
![mTLS System Services on Kubernetes](../../images/security-mTLS-dapr-system-services.png)
<img src="/images/security-mTLS-dapr-system-services.png" width=600>
### How to configure a liveness probe in Kubernetes
@ -78,6 +84,6 @@ readinessProbe:
For more information refer to:
- [Endpoint health API](../../reference/api/health_api.md)
- [Actor health API](../../reference/api/actors_api.md#health-check)
- [Endpoint health API]({{< ref health_api.md >}})
- [Actor health API]({{< ref "actors_api.md#health-check" >}})
- [Kubernetes probe configuration parameters](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)

View File

@ -1,17 +1,14 @@
# Distributed Tracing
---
type: docs
title: "Distributed tracing"
linkTitle: "Distributed tracing"
weight: 1000
description: "Use Dapr tracing to get visibility for distributed application"
---
Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces and metrics collection. OpenTelemetry supports various backends including [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), [SignalFX](https://www.signalfx.com/), [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io) and others.
![Tracing](../../images/tracing.png)
## Contents
- [Distributed Tracing](#distributed-tracing)
- [Contents](#contents)
- [Tracing design](#tracing-design)
- [W3C Correlation ID](#w3c-correlation-id)
- [Configuration](#configuration)
- [References](#references)
<img src="/images/tracing.png" width=600>
## Tracing design
@ -26,7 +23,7 @@ Dapr adds an HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts
Dapr uses the standard W3C Trace Context headers. For HTTP requests, Dapr uses the `traceparent` header. For gRPC requests, Dapr uses the `grpc-trace-bin` header. When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it passes the trace ID along the call chain.
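As an illustrative sketch, passing an existing trace context on an HTTP call through Dapr could look like this (the `traceparent` value is the example from the W3C specification; the app and method names are placeholders):
```bash
# Dapr reads the incoming traceparent header and propagates it
# along the call chain instead of generating a new trace ID.
curl http://localhost:3500/v1.0/invoke/myapp/method/mymethod \
  -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```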
Read [W3C Tracing Context for distributed tracing](./W3C-traces.md) for more background on W3C Trace Context.
Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W3C Trace Context.
## Configuration
@ -68,6 +65,6 @@ spec:
## References
* [How-To: Set up Application Insights distributed tracing with OpenTelemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md)
* [How-To: Set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md)
* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md)
- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})
- [How-To: Set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C distributed tracing]({{< ref w3c-tracing >}})

View File

@ -0,0 +1,8 @@
---
type: docs
title: "W3C trace context"
linkTitle: "W3C trace context"
weight: 1000
description: Background and scenarios for using W3C tracing with Dapr
---

View File

@ -1,20 +1,16 @@
---
type: docs
title: "How-To: Use W3C trace context with Dapr"
linkTitle: "How-To: Use W3C trace context"
weight: 20000
description: Using W3C tracing standard with Dapr
---
# How to use trace context
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C trace context for distributed tracing](../../concepts/observability/W3C-traces.md) article to understand whether you need to propagate or create a trace context.
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C distributed tracing]({{< ref w3c-tracing >}}) article to understand whether you need to propagate or create a trace context.
To view traces, read the [how to diagnose with tracing](../diagnose-with-tracing) article.
## Contents
- [How to retrieve trace context from a response](#how-to-retrieve-trace-context-from-a-response)
- [How to propagate trace context in a request](#how-to-propagate-trace-context-in-a-request)
- [How to create trace context](#how-to-create-trace-context)
- [Go](#create-trace-context-in-go)
- [Java](#create-trace-context-in-java)
- [Python](#create-trace-context-in-python)
- [NodeJS](#create-trace-context-in-nodejs)
- [C++](#create-trace-context-in-c++)
- [C#](#create-trace-context-in-c#)
- [Putting it all together with a Go Sample](#putting-it-all-together-with-a-go-sample)
- [Related Links](#related-links)
To view traces, read the [how to diagnose with tracing]({{< ref tracing.md >}}) article.
## How to retrieve trace context from a response
> Note: There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use HTTP/gRPC clients to propagate and retrieve trace headers through HTTP headers and gRPC metadata.
@ -102,8 +98,8 @@ f.SpanContextToRequest(traceContext, req)
traceContext := span.SpanContext()
traceContextBinary := propagation.Binary(traceContext)
```
You can then pass the trace context through [gRPC metadata]("google.golang.org/grpc/metadata") through `grpc-trace-bin` header.
You can then pass the trace context through [gRPC metadata](https://google.golang.org/grpc/metadata) through `grpc-trace-bin` header.
```go
ctx = metadata.AppendToOutgoingContext(ctx, "grpc-trace-bin", string(traceContextBinary))
@ -219,7 +215,7 @@ In Kubernetes, you can apply the configuration as below :
kubectl apply -f appconfig.yaml
```
You then set the following tracing annotation in your deployment YAML. You can add the following annotation in the sample [grpc app](../create-grpc-app) deployment yaml.
You then set the following tracing annotation in your deployment YAML. You can add the following annotation in the sample [grpc app]({{< ref grpc.md >}}) deployment yaml.
```yaml
dapr.io/config: "appconfig"
@ -227,13 +223,13 @@ dapr.io/config: "appconfig"
### Invoking Dapr with trace context
As mentioned in `Scenarios` section in [W3C Trace Context for distributed tracing](../../concepts/observability/W3C-traces.md) that Dapr covers generating trace context and you do not need to explicitly create trace context.
Dapr covers generating trace context and you do not need to explicitly create trace context.
However, if you choose to pass the trace context explicitly, Dapr will use the passed trace context and propagate it all across the HTTP/gRPC call.
Using the [grpc app](../create-grpc-app) in the example and putting this all together, the following steps show you how to create a Dapr client and call the InvokeService method passing the trace context:
Using the [grpc app]({{< ref grpc.md >}}) in the example and putting this all together, the following steps show you how to create a Dapr client and call the InvokeService method passing the trace context:
For the rest of the code snippet and details, refer to the [grpc app](../create-grpc-app).
For the rest of the code snippet and details, refer to the [grpc app]({{< ref grpc >}}).
### 1. Import the package
@ -293,9 +289,9 @@ You can now correlate the calls in your app and across services with Dapr using
## Related Links
* [Observability concepts](../../concepts/observability/traces.md)
* [W3C Trace Context for distributed tracing](../../concepts/observability/W3C-traces.md)
* [How to set up Application Insights distributed tracing with OpenTelemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md)
* [How to set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md)
* [W3C trace context specification](https://www.w3.org/TR/trace-context/)
* [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability)
- [Observability concepts]({{< ref observability-concept.md >}})
- [W3C Trace Context for distributed tracing]({{< ref w3c-tracing >}})
- [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
- [How to set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C trace context specification](https://www.w3.org/TR/trace-context/)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability)

View File

@ -1,8 +1,11 @@
# W3C trace context for distributed tracing
- [Background](#background)
- [Trace scenarios](#scenarios)
- [W3C trace headers](#w3c-trace-headers)
---
type: docs
title: "W3C trace context overview"
linkTitle: "Overview"
weight: 10000
description: Background and scenarios for using W3C tracing with Dapr
---
## Introduction
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Largely, Dapr does all the heavy lifting of generating and propagating the trace context information, and this can be sent to many different diagnostics tools for visualization and querying. There are only a very few cases where you, as a developer, need to either propagate a trace header or generate one.
@ -28,7 +31,7 @@ This transformation of modern applications called for a distributed tracing cont
A unified approach for propagating trace data improves visibility into the behavior of distributed applications, facilitating problem and performance analysis.
## Scenarios
There are two scenarios where you need to understand how tracing is used;
There are two scenarios where you need to understand how tracing is used:
1. Dapr generates and propagates the trace context between services.
2. Dapr generates the trace context and you need to propagate the trace context to another service **or** you generate the trace context and Dapr propagates the trace context to a service.
@ -66,7 +69,7 @@ In these scenarios Dapr does some of the work for you and you need to either cre
In this case, when service A first calls service B, Dapr generates the trace headers in service A, and these trace headers are then propagated to service B. These trace headers are returned in the response from service B as part of response headers. However, you need to propagate the returned trace context to the next services, service C and service D, as Dapr does not know you want to reuse the same header.
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context](../../howto/use-w3c-tracecontext/README.md) article.
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context]({{< ref w3c-tracing >}}) article.
2. You have chosen to generate your own trace context headers.
This is much more unusual. There may be occasions where you specifically choose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done:
@ -102,8 +105,7 @@ The tracestate fields are detailed [here](https://www.w3.org/TR/trace-context/#t
In the gRPC API calls, trace context is passed through `grpc-trace-bin` header.
## Related Links
* [How To set up Application Insights using OpenTelemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md)
* [How To set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md)
* [How to use Trace Context](../../howto/use-w3c-tracecontext)
* [W3C trace context specification](https://www.w3.org/TR/trace-context/)
* [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability)
- [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
- [How To set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C trace context specification](https://www.w3.org/TR/trace-context/)
- [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability)

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Publish & subscribe messaging"
linkTitle: "Publish & subscribe"
weight: 30
description: Secure, scalable messaging between services
---

View File

@ -0,0 +1,341 @@
---
type: docs
title: "How-To: Publish a message and subscribe to a topic"
linkTitle: "How-To: Publish & subscribe"
weight: 2000
description: "Learn how to send messages to a topic with one service and subscribe to that topic in another service"
---
## Introduction
Pub/Sub is a common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.
Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.
## Step 1: Setup the Pub/Sub component
The first step is to setup the Pub/Sub component:
{{< tabs "Self-Hosted (CLI)" Kubernetes >}}
{{% codetab %}}
Redis Streams is installed by default on a local machine when running `dapr init`.
Verify by opening your components file under `%UserProfile%\.dapr\components\pubsub.yaml` on Windows or `~/.dapr/components/pubsub.yaml` on Linux/MacOS:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
spec:
type: pubsub.redis
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
You can override this file with another Redis instance or another [pubsub component]({{< ref setup-pubsub >}}) by creating a `components` directory containing the file and using the flag `--components-path` with the `dapr run` CLI command.
{{% /codetab %}}
{{% codetab %}}
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired pubsub component]({{< ref setup-pubsub >}}) in the yaml below, save as `pubsub.yaml`, and run `kubectl apply -f pubsub.yaml`.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
{{% /codetab %}}
{{< /tabs >}}
## Step 2: Subscribe to topics
Dapr allows two methods by which you can subscribe to topics:
- **Declaratively**, where subscriptions are defined in an external file.
- **Programmatically**, where subscriptions are defined in user code.
### Declarative subscriptions
You can subscribe to a topic using the following Custom Resource Definition (CRD). Create a file named `subscription.yaml` and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
name: myevent-subscription
spec:
topic: deathStarStatus
route: /dsstatus
pubsubname: pubsub
scopes:
- app1
- app2
```
The example above shows an event subscription to topic `deathStarStatus`, for the pubsub component `pubsub`.
- The `route` field tells Dapr to send all topic messages to the `/dsstatus` endpoint in the app.
- The `scopes` field enables this subscription for apps with IDs `app1` and `app2`.
Set the component with:
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
{{% codetab %}}
Place the CRD in your `./components` directory. When Dapr starts up, it will load subscriptions along with components.
*Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows.*
You can also override the default directory by pointing the Dapr CLI to a components path:
```bash
dapr run --app-id myapp --components-path ./myComponents -- python3 app1.py
```
*Note: If you place the subscription in a custom components path, make sure the Pub/Sub component is present also.*
{{% /codetab %}}
{{% codetab %}}
In Kubernetes, save the CRD to a file and apply it to the cluster:
```bash
kubectl apply -f subscription.yaml
```
{{% /codetab %}}
{{< /tabs >}}
#### Example
{{< tabs Python Node>}}
{{% codetab %}}
Create a file named `app1.py` and paste in the following:
```python
import flask
from flask import request, jsonify
from flask_cors import CORS
import json
import sys
app = flask.Flask(__name__)
CORS(app)
@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
print(request.json, flush=True)
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
app.run()
```
After creating `app1.py`, ensure flask and flask_cors are installed:
```bash
pip install flask
pip install flask_cors
```
Then run:
```bash
dapr run --app-id app1 --app-port 5000 python app1.py
```
{{% /codetab %}}
{{% codetab %}}
After setting up the subscription above, download this javascript (Node > 4.16) into an `app2.js` file:
```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000
app.post('/dsstatus', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```
Run this app with:
```bash
dapr run --app-id app2 --app-port 3000 node app2.js
```
{{% /codetab %}}
{{< /tabs >}}
### Programmatic subscriptions
To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
The Dapr instance will call into your app at startup and expect a JSON response for the topic subscriptions with:
- `pubsubname`: Which pub/sub component Dapr should use
- `topic`: Which topic to subscribe to
- `route`: Which endpoint for Dapr to call on when a message comes to that topic
#### Example
{{< tabs Python Node>}}
{{% codetab %}}
```python
import flask
from flask import request, jsonify
from flask_cors import CORS
import json
import sys
app = flask.Flask(__name__)
CORS(app)
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'deathStarStatus',
                      'route': 'dsstatus'}]
    return jsonify(subscriptions)

@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    print(request.json, flush=True)
    return json.dumps({'success':True}), 200, {'ContentType':'application/json'}

app.run()
```
After creating `app1.py`, ensure flask and flask_cors are installed:
```bash
pip install flask
pip install flask_cors
```
Then run:
```bash
dapr run --app-id app1 --app-port 5000 python app1.py
```
{{% /codetab %}}
{{% codetab %}}
```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000
app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "deathStarStatus",
route: "dsstatus"
}
]);
})
app.post('/dsstatus', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```
Run this app with:
```bash
dapr run --app-id app2 --app-port 3000 node app2.js
```
{{% /codetab %}}
{{< /tabs >}}
The `/dsstatus` endpoint matches the `route` defined in the subscriptions, and this is where Dapr sends all topic messages.
## Step 3: Publish a topic
To publish a message to a topic, invoke the following endpoint on a Dapr instance:
{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
```bash
dapr publish --pubsub pubsub --topic deathStarStatus --data '{"status": "completed"}'
```
{{% /codetab %}}
{{% codetab %}}
Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```
Then publish a message to the `deathStarStatus` topic:
```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/json" -d '{"status": "completed"}'
```
{{% /codetab %}}
{{% codetab %}}
Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```
Then publish a message to the `deathStarStatus` topic:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status": "completed"}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
```
{{% /codetab %}}
{{< /tabs >}}
Dapr automatically wraps the user payload in a CloudEvents v1.0-compliant envelope.
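Because of this wrapping, a subscriber reads the original payload from the envelope's `data` field. A minimal sketch, assuming the Flask setup from Step 2:
```python
import flask
from flask import request, jsonify

app = flask.Flask(__name__)

@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    envelope = request.json           # the CloudEvents envelope built by Dapr
    print(envelope.get('id'), envelope.get('source'), flush=True)
    payload = envelope.get('data')    # the original published payload
    print(payload, flush=True)        # e.g. {'status': 'completed'}
    return jsonify(success=True)

app.run()
```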
## Step 4: ACK-ing a message
In order to tell Dapr that a message was processed successfully, return a `200 OK` response. If Dapr receives any return status code other than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics.
#### Example
{{< tabs Python Node>}}
{{% codetab %}}
```python
@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    print(request.json, flush=True)
    return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
```
{{% /codetab %}}
{{% codetab %}}
```javascript
app.post('/dsstatus', (req, res) => {
res.sendStatus(200);
});
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- [Scope access to your pub/sub topics]({{< ref pubsub-scopes.md >}})
- [Pub/Sub quickstart](https://github.com/dapr/quickstarts/tree/master/pub-sub)
- [Pub/sub components]({{< ref setup-pubsub >}})

View File

@ -1,4 +1,12 @@
# Publish/Subscribe Messaging
---
type: docs
title: "Publish and subscribe overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of the Dapr Pub/Sub building block"
---
## Introduction
The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows your microservices to communicate with each other purely by sending messages. In this system, the **producer** of a message sends it to a **topic**, with no knowledge of what service will receive the message. A message can even be sent if there's no consumer for it.
@ -6,41 +14,37 @@ Similarly, a **consumer** will receive messages from a topic without knowledge o
Dapr provides a publish/subscribe API that provides at-least-once guarantees and integrates with various message broker implementations. These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub).
## Publish/Subscribe API
## Features
The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub_api.md).
### Publish/Subscribe API
## Behavior and guarantees
The API for Publish/Subscribe can be found in the [spec repo]({{< ref pubsub_api.md >}}).
### At-Least-Once guarantee
Dapr guarantees At-Least-Once semantics for message delivery.
That means that when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.
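Because redelivery can occur, subscribers should be idempotent. A hypothetical sketch that deduplicates on the CloudEvent `id` field (the in-memory set is illustrative only; a real service would use durable storage):
```python
import flask
from flask import request, jsonify

app = flask.Flask(__name__)
processed_ids = set()  # illustrative only; not safe across restarts or replicas

@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    event = request.json
    if event['id'] in processed_ids:  # duplicate redelivery: ACK and skip
        return jsonify(success=True)
    processed_ids.add(event['id'])
    # ... process event['data'] exactly once ...
    return jsonify(success=True)      # a 200 tells Dapr the message is handled

app.run()
```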
### Consumer groups and multiple instances
Dapr handles the burden of dealing with concepts like consumer groups and multiple instances inside consumer groups.
### App ID
Dapr has the concept of an `id`. This is specified in Kubernetes using the `dapr.io/app-id` annotation and with the `app-id` flag when using the Dapr CLI. Dapr requires an ID to be assigned to every application.
When multiple instances of the same application ID subscribe to a topic, Dapr will make sure to deliver the message to only one instance. If two different applications with different IDs subscribe to a topic, at least one instance in each application receives a copy of the same message.
## Cloud events
### Cloud events
Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope.
The following fields from the Cloud Events spec are implemented with Dapr:
* `id`
* `source`
* `specversion`
* `type`
* `datacontenttype` (Optional)
- `id`
- `source`
- `specversion`
- `type`
- `datacontenttype` (Optional)
> Starting with Dapr v0.9, Dapr no longer wraps published content into a CloudEvent envelope if the published payload itself is already in CloudEvent format.
The following example shows XML content in a CloudEvent v1.0 envelope serialized as JSON:
```json
{
"specversion" : "1.0",
@ -53,3 +57,12 @@ The following example shows an XML content in CloudEvent v1.0 serialized as JSON
"data" : "<note><to>User1</to><from>user2</from><message>hi</message></note>"
}
```
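As noted above, a payload that is already a CloudEvent is published as-is rather than being wrapped again. A hedged sketch of publishing such a pre-wrapped event (the field values are assumptions; the `application/cloudevents+json` content type tells the sidecar the payload is already an envelope):
```python
import requests

# A hand-built CloudEvents v1.0 envelope; field values are assumptions
event = {
    "specversion": "1.0",
    "type": "com.example.status",
    "source": "app1",
    "id": "id-1234",
    "datacontenttype": "application/json",
    "data": {"status": "completed"}
}

# The explicit content type signals that the payload is already a
# CloudEvent, so the sidecar publishes it without wrapping it again.
requests.post(
    "http://localhost:3500/v1.0/publish/pubsub/deathStarStatus",
    json=event,
    headers={"Content-Type": "application/cloudevents+json"})
```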
### Topic scoping
Limit which topics applications are able to publish or subscribe to in order to restrict access to potentially sensitive data streams. Read [Pub/Sub scoping]({{< ref pubsub-scopes.md >}}) for more information.
## Next steps
- Read the How-To guide on [publishing and subscribing]({{< ref howto-publish-subscribe.md >}})
- Learn about [Pub/Sub scopes]({{< ref pubsub-scopes.md >}})

View File

@ -0,0 +1,158 @@
---
type: docs
title: "Scope Pub/Sub topic access"
linkTitle: "Scope topic access"
weight: 5000
description: "Use scopes to limit Pub/Sub topics to specific applications"
---
## Introduction
[Namespaces or component scopes]({{< ref component-scopes.md >}}) can be used to limit component access to particular applications. Adding application scopes to a component limits its use to applications with the specified IDs.
In addition to this general component scope, the following can be limited for pub/sub components:
- Which topics can be used (published or subscribed to)
- Which applications are allowed to publish to specific topics
- Which applications are allowed to subscribe to specific topics
This is called **pub/sub topic scoping**.
Pub/sub scopes are defined for each pub/sub component. You may have a pub/sub component named `pubsub` that has one set of scopes, and another `pubsub2` with a different set.
To use topic scoping, three metadata properties can be set on a pub/sub component:
- `spec.metadata.publishingScopes`
  - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to publish to that list of topics
  - If nothing is specified in `publishingScopes` (default behavior), all apps can publish to all topics
  - To deny an app the ability to publish to any topic, leave the topics list blank (`app1=;app2=topic2`)
  - For example, `app1=topic1;app2=topic2,topic3;app3=` will allow app1 to publish to topic1 and nothing else, app2 to publish to topic2 and topic3 only, and app3 to publish to nothing.
- `spec.metadata.subscriptionScopes`
  - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to subscribe to that list of topics
  - If nothing is specified in `subscriptionScopes` (default behavior), all apps can subscribe to all topics
  - For example, `app1=topic1;app2=topic2,topic3` will allow app1 to subscribe to topic1 only and app2 to subscribe to topic2 and topic3
- `spec.metadata.allowedTopics`
  - A comma-separated list of allowed topics for all applications.
  - If `allowedTopics` is not set (default behavior), all topics are valid. `subscriptionScopes` and `publishingScopes` still apply if present.
  - `publishingScopes` or `subscriptionScopes` can be used in conjunction with `allowedTopics` to add granular limitations
These metadata properties can be used for all pub/sub components. The following examples use Redis as the pub/sub component.
## Example 1: Scope topic access
Limiting which applications can publish or subscribe to topics is useful when certain topics contain sensitive information and only a subset of your applications are allowed to access them.
It can also be applied to all topics so that there is always a "ground truth" for which applications use which topics as publishers and subscribers.
Here is an example of three applications and three topics:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: publishingScopes
    value: "app1=topic1;app2=topic2,topic3;app3="
  - name: subscriptionScopes
    value: "app2=;app3=topic1"
```
The table below shows which applications are allowed to publish into the topics:
| | topic1 | topic2 | topic3 |
|------|--------|--------|--------|
| app1 | X | | |
| app2 | | X | X |
| app3 | | | |
The table below shows which applications are allowed to subscribe to the topics:
| | topic1 | topic2 | topic3 |
|------|--------|--------|--------|
| app1 | X | X | X |
| app2 | | | |
| app3 | X | | |
> Note: If an application is not listed (e.g. app1 in subscriptionScopes) it is allowed to subscribe to all topics. Because `allowedTopics` is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above.
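With this component in place, each application's sidecar enforces the scopes at publish time. A hedged sketch of what app1 would observe (the exact rejection status code may vary by Dapr version):
```python
import requests

DAPR_PORT = 3500  # assumed sidecar HTTP port

def publish(topic, data):
    # Publish through the local sidecar; Dapr enforces the topic scopes
    url = f"http://localhost:{DAPR_PORT}/v1.0/publish/pubsub/{topic}"
    return requests.post(url, json=data).status_code

# Run from within app1 (publishingScopes grants app1 only topic1):
print(publish("topic1", {"status": "ok"}))  # accepted
print(publish("topic2", {"status": "ok"}))  # rejected by the sidecar
```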
## Example 2: Limit allowed topics
A topic is created if a Dapr application sends a message to it. In some scenarios, this topic creation should be governed. For example:
- A bug in a Dapr application when generating topic names can lead to an unbounded number of topics being created
- Topic names and the total topic count should be streamlined to prevent unbounded growth of topics
In these situations `allowedTopics` can be used.
Here is an example of three allowed topics:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "topic1,topic2,topic3"
```
All applications can use these topics, and only these topics; no others are allowed.
## Example 3: Combine `allowedTopics` and scopes
Sometimes you want to combine both: allow only a fixed set of topics and additionally scope them to certain applications.
Here is an example of three applications and two topics:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "A,B"
  - name: publishingScopes
    value: "app1=A"
  - name: subscriptionScopes
    value: "app1=;app2=A"
```
> Note: The third application is not listed, because if an app is not specified inside the scopes, it is allowed to use all topics.
The table below shows which application is allowed to publish into the topics:
| | A | B | C |
|------|---|---|---|
| app1 | X | | |
| app2 | X | X | |
| app3 | X | X | |
The table below shows which application is allowed to subscribe to the topics:
| | A | B | C |
|------|---|---|---|
| app1 | | | |
| app2 | X | | |
| app3 | X | X | |
## Demo
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VdWBBGcbHQ?start=513" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Secrets building block"
linkTitle: "Secrets"
weight: 70
description: Securely access secrets from your application
---

View File

@ -1,4 +1,10 @@
# Access Application Secrets using the Secrets API
---
type: docs
title: "How To: Retrieve a secret"
linkTitle: "How To: Retrieve a secret"
weight: 2000
description: "Use the secret store building block to securely retrieve a secret"
---
It's common for applications to store sensitive information, such as connection strings, keys, and tokens used to authenticate with databases, services, and external systems, in a dedicated secret store.
@ -16,7 +22,13 @@ The first step involves setting up a secret store, either in the cloud or in the
The second step is to configure the secret store with Dapr.
Follow the instructions [here](../setup-secret-store) to set up the secret store of your choice.
To deploy in Kubernetes, save the file above to `aws_secret_manager.yaml` and then run:
```bash
kubectl apply -f aws_secret_manager.yaml
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&feature=youtu.be&t=1818) for an example of how to use the secrets API. Or watch this [video](https://www.youtube.com/watch?v=8W-iBDNvCUM&feature=youtu.be&t=1765) for an example of how to use component scopes with secret components and the secrets API.
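As a sketch of the retrieval step itself, the application can then ask its sidecar for a secret over HTTP; the store and secret names below are assumptions:
```python
import requests

# Assumed names: a configured store "aws-secret-manager" holding "db-password"
resp = requests.get(
    "http://localhost:3500/v1.0/secrets/aws-secret-manager/db-password")
print(resp.json())  # e.g. {"db-password": "..."}
```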

View File

@ -1,4 +1,10 @@
# Dapr secrets management
---
type: docs
title: "Secrets stores overview"
linkTitle: "Secrets stores overview"
weight: 1000
description: "Overview of Dapr secrets management building block"
---
Almost all non-trivial applications need to _securely_ store secret data like API keys, database passwords, and more. By nature, these secrets should not be checked into the version control system, but they also need to be accessible to code running in production. This is generally a hard problem, but it's critical to get it right. Otherwise, critical production systems can be compromised.
@ -17,7 +23,7 @@ See [Setup secret stores](https://github.com/dapr/docs/tree/master/howto/setup-s
Instead of including credentials directly within a Dapr component file, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.
For more information read [Referencing Secret Stores in Components](./component-secrets.md)
For more information read [Referencing Secret Stores in Components]({{< ref component-secrets.md >}})
## Using secrets in your application
@ -27,15 +33,15 @@ Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=1818) for an ex
For example, the diagram below shows an application requesting the secret called "mysecret" from a secret store called "vault", which is configured as a cloud secret store.
<img src="../../images/secrets_cloud_stores.png" width=800>
<img src="/images/secrets-overview-cloud-stores.png" width=600>
Applications can use the secrets API to access secrets from a Kubernetes secret store. In the example below, the application retrieves the same secret "mysecret" from a Kubernetes secret store.
<img src="../../images/secrets_kubernetes_store.png" width=800>
<img src="/images/secrets-overview-kubernetes-store.png" width=600>
In Azure, Dapr can be configured to use Managed Identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application.
<img src="../../images/secrets_azure_aks_keyvault.png" width=800>
<img src="/images/secrets-overview-azure-aks-keyvault.png" width=600>
Notice that in all of the examples above, the application code did not have to change to get the same secret. Dapr did all the heavy lifting here via the secrets building block API and the secret components.

View File

@ -1,10 +1,17 @@
# Limit the secrets that can be read from secret stores
---
type: docs
title: "How To: Use secret scoping"
linkTitle: "How To: Use secret scoping"
weight: 3000
description: "Use scoping to limit the secrets that can be read from secret stores"
---
Follow [these instructions](../setup-secret-store) to configure secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
Follow [these instructions]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting the existing configuration CRD with restrictive permissions.
Follow [these instructions](../../concepts/configuration/README.md) to define a configuration CRD.
Follow [these instructions]({{< ref configuration-concept.md >}}) to define a configuration CRD.
## Scenario 1 : Deny access to all secrets for a secret store
@ -24,7 +31,7 @@ spec:
defaultAccess: deny
```
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions](../configure-k8s/README.md), and add the following annotation to the application pod.
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod.
```yaml
dapr.io/config: appconfig
@ -49,7 +56,7 @@ spec:
allowedSecrets: ["secret1", "secret2"]
```
This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar.
This example defines configuration for the secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Scenario 3: Deny access to certain sensitive secrets in a secret store
@ -68,7 +75,7 @@ spec:
deniedSecrets: ["secret1", "secret2"]
```
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar.
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Permission priority

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Service invocation"
linkTitle: "Service invocation"
weight: 10
description: Perform direct, secure, service-to-service method calls
---

View File

@ -1,15 +1,20 @@
# Get started with HTTP service invocation
---
type: docs
title: "How-To: Invoke and discover services"
linkTitle: "How-To: Invoke services"
description: "How-to guide on how to use Dapr service invocation in a distributed application"
weight: 2000
---
This article describes how to deploy services, each with a unique application ID, so that other services can discover and call endpoints on them using the service invocation API.
## 1. Choose an ID for your service
## Step 1: Choose an ID for your service
Dapr allows you to assign a global, unique ID for your app.
Dapr allows you to assign a global, unique ID for your app. This ID encapsulates the state for your application, regardless of the number of instances it may have.
This ID encapsulates the state for your application, regardless of the number of instances it may have.
### Setup an ID using the Dapr CLI
{{< tabs "Self-Hosted (CLI)" Kubernetes >}}
{{% codetab %}}
In self hosted mode, set the `--app-id` flag:
```bash
@ -21,8 +26,9 @@ If your app uses an SSL connection, you can tell Dapr to invoke your app over an
```bash
dapr run --app-id cart --app-port 5000 --app-ssl python app.py
```
{{% /codetab %}}
*Note: the Kubernetes annotation can be found [here](../configure-k8s).*
{{% codetab %}}
### Setup an ID using Kubernetes
@ -51,14 +57,16 @@ spec:
dapr.io/app-port: "5000"
...
```
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref kubernetes-annotations.md >}}))*
## 2. Invoke a service
{{% /codetab %}}
Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you can use the `invoke` API on any Dapr instance.
{{< /tabs >}}
The sidecar programming model encourages each applications to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
*Note: The following is a Python example of a cart app. It can be written in any programming language*
## Step 2: Setup a service
The following is a Python example of a cart app. It can be written in any programming language.
```python
from flask import Flask
@ -74,8 +82,16 @@ if __name__ == '__main__':
This Python app exposes an `add()` method via the `/add` endpoint.
### Invoke with curl over HTTP
## Step 3: Invoke the service
Dapr uses a decentralized, sidecar-based architecture. To invoke an application using Dapr, you can use the `invoke` API on any Dapr instance.
The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
{{< tabs curl CLI >}}
{{% codetab %}}
From a terminal or command prompt run:
```bash
curl http://localhost:3500/v1.0/invoke/cart/method/add -X POST
```
@ -95,30 +111,35 @@ curl http://localhost:3500/v1.0/invoke/cart/method/add -X DELETE
```
Dapr puts any payload returned by the called service in the HTTP response's body.
{{% /codetab %}}
{{% codetab %}}
```bash
dapr invoke --app-id cart --method add
```
{{% /codetab %}}
{{< /tabs >}}
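The same invocation can also be made from application code through the local sidecar. A minimal sketch using Python's `requests` package (sidecar HTTP port 3500 assumed):
```python
import requests

# Call the cart app's "add" method through the local Dapr sidecar
resp = requests.post("http://localhost:3500/v1.0/invoke/cart/method/add")
print(resp.status_code, resp.text)  # the body is whatever the cart app returned
```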
### Namespaces
When running on [namespace supported platforms](../../reference/api/service_invocation_api.md#namespace-supported-platforms), you include the namespace of the target app in the app ID:
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID: `myApp.production`
```
myApp.production
```
For example, invoking the example python service with a namespace would be;
For example, invoking the example Python service with a namespace would be:
```bash
curl http://localhost:3500/v1.0/invoke/cart.production/method/add -X POST
```
See the [Cross namespace API spec](../../reference/api/service_invocation_api.md#cross-namespace-invocation) for more information on namespaces.
See the [Cross namespace API spec]({{< ref "service_invocation_api.md#cross-namespace-invocation" >}}) for more information on namespaces.
## 3. View traces and logs
## Step 4: View traces and logs
The example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr outputs metrics, tracing, and logging information, allowing you to visualize a call graph between services, log errors, and optionally log the payload body.
For more information on tracing and logs see the [observability](../../concepts/observability) article.
For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article.
## Related Links
* [Service invocation concepts](../../concepts/service-invocation/README.md)
* [Service invocation API specification](../../reference/api/service_invocation_api.md)
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
* [Service invocation API specification]({{< ref service_invocation_api.md >}})

View File

@ -1,12 +1,15 @@
# Service Invocation
---
type: docs
title: "Service invocation overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of the service invocation building block"
---
## Introduction
Using service invocation, your application can reliably and securely discover and communicate with other applications using the standard protocols of [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/).
- [Overview](#overview)
- [Features](#features)
- [Next steps](#next-steps)
## Overview
In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions:
* How do I discover and invoke methods on different services?
@ -18,47 +21,32 @@ Dapr allows you to overcome these challenges by providing an endpoint that acts
Dapr uses a decentralized, sidecar-based architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
### Invoke logic
The diagram below is an overview of how Dapr's service invocation works.
![Service Invocation Diagram](../../images/service-invocation.png)
<img src="/images/service-invocation-overview.png" width=800 alt="Diagram showing the steps of service invocation">
1. Service A makes an http/gRPC call meant for Service B. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) installed for the given hosting platform.
1. Service A makes an http/gRPC call targeting Service B. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) which is running on the given [hosting platform]({{< ref "hosting" >}}).
3. Dapr forwards the message to Service B's Dapr sidecar
* Note: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars are either HTTP or gRPC
**Note**: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars can be either HTTP or gRPC
4. Service B's Dapr sidecar forwards the request to the specified endpoint (or method) on Service B. Service B then runs its business logic code.
5. Service B sends a response to Service A. The response goes to Service B's sidecar.
6. Dapr forwards the response to Service A's Dapr sidecar.
7. Service A receives the response.
### Example
As an example for the above call sequence, suppose you have the applications as described in the [hello world sample](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a python app invokes a node.js app.
In such a scenario, the python app would be "Service A" above, and the Node.js app would be "Service B".
The diagram below shows sequence 1-7 again on a local machine showing the API call:
![Service Invocation Diagram](../../images/service-invocation-example.png)
1. Suppose the Node.js app has a Dapr app ID of `nodeapp`, as in the sample. The python app invokes the Node.js app's `neworder` method by posting `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar.
2. Dapr discovers the Node.js app's location using multicast DNS component which runs on your local machine.
3. Dapr forwards the request to the Node.js app's sidecar.
4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, which, as described in the sample, is to log the incoming message and then persist the order ID into Redis (not shown in the diagram above).
Steps 5-7 are the same as above.
## Features
Service invocation provides several features to make it easy for you to call methods on remote applications.
- [Namespaces scoping](#namespaces-scoping)
- [Retries](#Retries)
- [Service-to-service security](#service-to-service-security)
- [Service access security](#service-access-security)
- [Observability: Tracing, logging and metrics](#observability)
- [Pluggable service discovery](#pluggable-service-discovery)
### Service invocation API
The API for service invocation can be found in the [spec repo]({{< ref service_invocation_api.md >}}).
### Namespaces scoping
Service invocation supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace.
For example, the following string contains the app ID `nodeapp` in addition to the namespace the app runs in, `production`.
@ -67,10 +55,14 @@ For example, the following string contains the app ID `nodeapp` in addition to t
localhost:3500/v1.0/invoke/nodeapp.production/method/neworder
```
This is especially useful in cross namespace calls in a Kubernetes cluster. Watch this [video](https://youtu.be/LYYV_jouEuA?t=495) for a demo on how to use namespaces with service invocation.
This is especially useful in cross namespace calls in a Kubernetes cluster. Watch this video for a demo on how to use namespaces with service invocation.
<iframe width="560" height="315" src="https://www.youtube.com/embed/LYYV_jouEuA?start=497" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Retries
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
Errors that cause retries are:
* Network errors including endpoint unavailability and refused connections
@ -80,29 +72,48 @@ Per call retries are performed with a backoff interval of 1 second up to a thres
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
### Service-to-service security
All calls between Dapr applications can be made secure with mutual TLS (mTLS) authentication on hosted platforms, including automatic certificate rollover, via the Dapr Sentry service. The diagram below shows this for self hosted applications.
For more information read the [service-to-service security](../security#mtls-self-hosted) article.
For more information read the [service-to-service security]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}) article.
![Self Hosted service to service security](../../images/security-mTLS-sentry-selfhosted.png)
<img src="/images/security-mTLS-sentry-selfhosted.png" width=800>
### Service access security
Applications can control which other applications are allowed to call them and what they are authorized to do via access policies. This enables you to restrict sensitive applications that hold, say, personnel information from being accessed by unauthorized applications; combined with secure service-to-service communication, this provides for soft multi-tenancy deployments.
For more information read the [access control allow lists for service invocation](../configuration#access-control-allow-lists-for-service-invocation) article.
For more information read the [access control allow lists for service invocation]({{< ref invoke-allowlist.md >}}) article.
### Observability
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios.
For more information read the [observability](../concepts/observability) article.
For more information read the [observability]({{< ref observability-concept.md >}}) article.
### Pluggable service discovery
Dapr can run on any [hosting platform](../hosting). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.
Dapr can run on any [hosting platform]({{< ref hosting >}}). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.
## Example
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A" and the Node.js app would be "Service B".
The diagram below shows sequence 1-7 again on a local machine showing the API call:
<img src="/images/service-invocation-overview-example.png" width=800>
1. The Node.js app has a Dapr app ID of `nodeapp`. The Python app invokes the Node.js app's `neworder` method by POSTing `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the Python app's local Dapr sidecar.
2. Dapr discovers the Node.js app's location using the name resolution component (in this case mDNS when self-hosted) which runs on your local machine.
3. Dapr forwards the request to the Node.js app's sidecar using the location it just received.
4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, logging the incoming message and then persisting the order ID into Redis (not shown in the diagram).
5. The Node.js app sends a response to the Python app through the Node.js sidecar.
6. Dapr forwards the response to the Python app's Dapr sidecar.
7. The Python app receives the response.
## Next steps
* Follow these guide on
* [How-to: Get started with HTTP service invocation](../../howto/invoke-and-discover-services)
* [How-to: Get started with Dapr and gRPC](../../howto/create-grpc-app)
* Try out the [hello world sample](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs](https://github.com/dapr/docs#further-documentation)
* Read the [service invocation API specification](../../reference/api/service_invocation_api.md)
* Follow these guides on:
* [How-to: Get started with HTTP service invocation]({{< ref howto-invoke-discover-services.md >}})
* [How-to: Get started with Dapr and gRPC]({{< ref grpc >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs]({{< ref sdks >}})
* Read the [service invocation API specification]({{< ref service_invocation_api.md >}})

View File

@ -0,0 +1,7 @@
---
type: docs
title: "State management"
linkTitle: "State management"
weight: 20
description: Create long running stateful services
---

View File

@ -0,0 +1,168 @@
---
type: docs
title: "How-To: Save and get state"
linkTitle: "How-To: Save & get state"
weight: 200
description: "Use key value pairs to persist a state"
---
## Introduction
State management is one of the most common needs of any application: new or legacy, monolith or microservice.
Dealing with different database libraries, testing them, and handling retries and faults can be time consuming and hard.
Dapr provides state management capabilities that include consistency and concurrency options.
In this guide we'll start off with the basics: using the key/value state API to allow an application to save, get, and delete state.
## Step 1: Setup a state store
A state store component represents a resource that Dapr uses to communicate with a database.
For the purpose of this how-to, we'll use a Redis state store, but any state store from the [supported list]({{< ref supported-state-stores >}}) will work.
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
{{% codetab %}}
When using `dapr init` in standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML in a `components` directory, which for Linux/macOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`.
To change the state store being used, replace the YAML under `/components` with the file of your choice.
{{% /codetab %}}
{{% codetab %}}
See the instructions [here]({{< ref "setup-state-store" >}}) on how to set up different state stores on Kubernetes.
{{% /codetab %}}
{{< /tabs >}}
## Step 2: Save state
The following example shows how to save two key/value pairs in a single call using the state management API.
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}}
{{% codetab %}}
Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```
{{% alert title="Note" color="info" %}}
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated, and you will no longer be able to access previously saved state.
{{% /alert %}}
Then in a separate terminal run:
```bash
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' http://localhost:3500/v1.0/state/statestore
```
{{% /codetab %}}
{{% codetab %}}
Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```
{{% alert title="Note" color="info" %}}
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated, and you will no longer be able to access previously saved state.
{{% /alert %}}
Then in a separate terminal run:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
{{% /codetab %}}
{{% codetab %}}
Make sure to install the Dapr Python SDK with `pip3 install dapr`. Then create a file named `state.py` with:
```python
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateItem
with DaprClient() as d:
    d.save_states(store_name="statestore",
                  states=[
                      StateItem(key="key1", value="value1"),
                      StateItem(key="key2", value="value2")
                  ])
```
Run with `dapr run --app-id myapp python state.py`
{{% alert title="Note" color="info" %}}
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated, and you will no longer be able to access previously saved state.
{{% /alert %}}
{{% /codetab %}}
{{< /tabs >}}
## Step 3: Get state
The following example shows how to get an item by using a key with the state management API:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}}
{{% codetab %}}
With the same dapr instance running from above, run:
```bash
curl http://localhost:3500/v1.0/state/statestore/key1
```
{{% /codetab %}}
{{% codetab %}}
With the same dapr instance running from above, run:
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/state/statestore/key1'
```
{{% /codetab %}}
{{% codetab %}}
Add the following code to `state.py` from above and run again:
```python
data = d.get_state(store_name="statestore",
                   key="key1",
                   state_metadata={"metakey": "metavalue"}).data
print(f"Got value: {data}")
```
{{% /codetab %}}
{{< /tabs >}}
## Step 4: Delete state
The following example shows how to delete an item by using a key with the state management API:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}}
{{% codetab %}}
With the same dapr instance running from above, run:
```bash
curl -X DELETE 'http://localhost:3500/v1.0/state/statestore/key1'
```
Try getting state again and note that no value is returned.
{{% /codetab %}}
{{% codetab %}}
With the same dapr instance running from above, run:
```powershell
Invoke-RestMethod -Method Delete -Uri 'http://localhost:3500/v1.0/state/statestore/key1'
```
Try getting state again and note that no value is returned.
{{% /codetab %}}
{{% codetab %}}
Add the following code to `state.py` from above and run again:
```python
d.delete_state(store_name="statestore",
               key="key1",
               state_metadata={"metakey": "metavalue"})
data = d.get_state(store_name="statestore",
                   key="key1",
                   state_metadata={"metakey": "metavalue"}).data
print(f"Got value after delete: {data}")
```
{{% /codetab %}}
{{< /tabs >}}

View File

@ -1,15 +1,21 @@
# Create a stateful replicated service
---
type: docs
title: "How-To: Build a stateful service"
linkTitle: "How-To: Build a stateful service"
weight: 300
description: "Use state management with a scaled, replicated service"
---
In this HowTo we'll show you how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models.
In this article you'll learn how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models.
This frees developers from difficult state coordination, conflict resolution and failure handling, and allows them instead to consume these capabilities as APIs from Dapr.
## 1. Setup a state store
## Setup a state store
A state store component represents a resource that Dapr uses to communicate with a database.
For the purpose of this guide, we'll use a Redis state store.
See a list of supported state stores [here](../setup-state-store/supported-state-stores.md)
See a list of supported state stores [here]({{< ref supported-state-stores >}})
### Using the Dapr CLI
@ -18,7 +24,7 @@ To change the state store being used, replace the YAML under `/components` with
### Kubernetes
See the instructions [here](../setup-state-store) on how to setup different state stores on Kubernetes.
See the instructions [here]({{<ref setup-state-store-overview>}}) on how to setup different state stores on Kubernetes.
## Strong and Eventual consistency
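As a hedged sketch of what opting in looks like, each item sent to the state API can carry an `options` field selecting the consistency and concurrency model (the ETag value below is an assumption):
```python
import requests

# Save with strong consistency and first-write (ETag-guarded) concurrency.
# Option names follow the Dapr state API; "statestore" and the ETag are assumed.
requests.post(
    "http://localhost:3500/v1.0/state/statestore",
    json=[{
        "key": "key1",
        "value": "value1",
        "etag": "1",
        "options": {
            "consistency": "strong",
            "concurrency": "first-write"
        }
    }])
```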

View File

@ -0,0 +1,9 @@
---
type: docs
title: "Work with backend state stores"
linkTitle: "Backend stores"
weight: 400
description: "Guides for working with specific backend states stores"
---
Explore the **Operations** section to see a list of [supported state stores]({{<ref supported-state-stores.md>}}) and how to set up [state store components]({{<ref setup-state-store-overview.md>}}).

View File

@ -1,53 +1,59 @@
# Query Azure Cosmos DB state store
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state_api.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups.
> **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).
## 1. Connect to Azure Cosmos DB
The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).
> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The follow samples assume you've already connected to the right database and a collection named "states".
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'myapp||')
```
The above query returns all documents with id containing "myapp-", which is the prefix of the state keys.
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE states.id = 'myapp||balance'
```
Then, read the **value** field of the returned document.
To get the state version/ETag, use the command:
```sql
SELECT states._etag FROM states WHERE states.id = 'myapp||balance'
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||')
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food'
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.
---
type: docs
title: "Azure Cosmos DB"
linkTitle: "Azure Cosmos DB"
weight: 1000
description: "Use Azure Cosmos DB as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
> **NOTE:** Azure Cosmos DB is a multi-model database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).
## 1. Connect to Azure Cosmos DB
The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).
> **NOTE:** The following samples use the Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states".
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'myapp||')
```
The above query returns all documents with an id containing "myapp||", which is the prefix of the state keys.
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE states.id = 'myapp||balance'
```
Then, read the **value** field of the returned document.
To get the state version/ETag, use the command:
```sql
SELECT states._etag FROM states WHERE states.id = 'myapp||balance'
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||')
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food'
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.

View File

@ -1,60 +1,66 @@
# Query Redis state store
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state_api.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups.
>**NOTE:** The following examples uses Redis CLI against a Redis store using the default Dapr state store implementation.
## 1. Connect to Redis
You can use the official [redis-cli](https://redis.io/topics/rediscli) or any other Redis compatible tools to connect to the Redis state store to directly query Dapr states. If you are running Redis in a container, the easiest way to use redis-cli is to use a container:
```bash
docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container>
```
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the command:
```bash
KEYS myapp*
```
The above command returns a list of existing keys, for example:
```bash
1) "myapp||balance"
2) "myapp||amount"
```
## 3. Get specific state data
Dapr saves state values as hash values. Each hash value contains a "data" field, which contains the state data and a "version" field, which contains an ever-incrementing version serving as the ETag.
For example, to get the state data by a key "balance" for the application "myapp", use the command:
```bash
HGET myapp||balance data
```
To get the state version/ETag, use the command:
```bash
HGET myapp||balance version
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```bash
KEYS mypets||cat||leroy*
```
And to get a specific actor state such as "food", use the command:
```bash
HGET mypets||cat||leroy||food value
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.
---
type: docs
title: "Redis"
linkTitle: "Redis"
weight: 2000
description: "Use Redis as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{<ref state_api.md>}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
>**NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.
## 1. Connect to Redis
You can use the official [redis-cli](https://redis.io/topics/rediscli) or any other Redis-compatible tool to connect to the Redis state store to directly query Dapr states. If you are running Redis in a container, the easiest way to use redis-cli is to use a container:
```bash
docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container>
```
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the command:
```bash
KEYS myapp*
```
The above command returns a list of existing keys, for example:
```bash
1) "myapp||balance"
2) "myapp||amount"
```
## 3. Get specific state data
Dapr saves state values as hash values. Each hash value contains a "data" field, which contains the state data, and a "version" field, which contains an ever-incrementing version serving as the ETag.
For example, to get the state data by a key "balance" for the application "myapp", use the command:
```bash
HGET myapp||balance data
```
To get the state version/ETag, use the command:
```bash
HGET myapp||balance version
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```bash
KEYS mypets||cat||leroy*
```
And to get a specific actor state such as "food", use the command:
```bash
HGET mypets||cat||leroy||food value
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.

View File

@ -1,59 +1,65 @@
# Query SQL Server state store
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state_api.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups.
## 1. Connect to SQL Server
The easiest way to connect to your SQL Server instance is to use the [Azure Data Studio](https://docs.microsoft.com/sql/azure-data-studio/download-azure-data-studio) (Windows, macOS, Linux) or [SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) (Windows).
> **NOTE:** The following samples use Azure SQL. When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The follow samples assume you've already connected to the right database with a table named "states".
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] LIKE 'myapp-%'
```
The above query returns all rows with id containing "myapp-", which is the prefix of the state keys.
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] = 'myapp-balance'
```
Then, read the **Data** field of the returned row.
To get the state version/ETag, use the command:
```sql
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp-balance'
```
## 4. Get filtered state data
To get all state data where the value "color" in json data equals to "blue", use the query:
```sql
SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue'
```
## 5. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%'
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food'
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.
---
type: docs
title: "SQL server"
linkTitle: "SQL server"
weight: 3000
description: "Use SQL server as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
## 1. Connect to SQL Server
The easiest way to connect to your SQL Server instance is to use the [Azure Data Studio](https://docs.microsoft.com/sql/azure-data-studio/download-azure-data-studio) (Windows, macOS, Linux) or [SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) (Windows).
> **NOTE:** The following samples use Azure SQL. When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The following samples assume you've already connected to the right database with a table named "states".
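If you prefer a terminal to a GUI client, the same queries can also be run with the cross-platform `sqlcmd` utility. A sketch, with placeholder server, database, and credentials:
```bash
sqlcmd -S myserver.database.windows.net -d mydb -U myuser -P '<password>' \
  -Q "SELECT COUNT(*) FROM states"
```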
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] LIKE 'myapp-%'
```
The above query returns all rows whose key begins with "myapp-", the state key prefix for the application.
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] = 'myapp-balance'
```
Then, read the **Data** field of the returned row.
To get the state version/ETag, use the command:
```sql
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp-balance'
```
## 4. Get filtered state data
To get all state data where the value of "color" in the JSON data equals "blue", use the query:
```sql
SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue'
```
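If you filter on the same JSON property frequently, SQL Server can index it through a computed column. A sketch via `sqlcmd` (the computed column and index names are illustrative):
```bash
# Expose the JSON property as a computed column, then index it
sqlcmd -S myserver.database.windows.net -d mydb -U myuser -P '<password>' -Q "
  ALTER TABLE states ADD Color AS JSON_VALUE([Data], '$.color');
  CREATE INDEX IX_states_Color ON states (Color);"
```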
## 5. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%'
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food'
```
> **WARNING:** You should not manually update or delete states in the store. All write and delete operations should be done via the Dapr runtime.

---
type: docs
title: "State management overview"
linkTitle: "Overview"
weight: 100
description: "Overview of the state management building block"
---
Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores]({{< ref supported-state-stores.md >}}), without adding or learning a third-party SDK.
## Overview
When using state management, your application can leverage several features that would otherwise be complicated and error-prone to build yourself, such as:
- Distributed concurrency and data consistency
See below for a diagram of state management's high level architecture.
<img src="/images/state-management-overview.png" width=900>
## Features
- [State management API](#state-management-api)
- [State store behaviors](#state-store-behaviors)
- [Concurrency](#concurrency)
- [Consistency](#consistency)
- [Retry policies](#retry-policies)
- [Bulk operations](#bulk-operations)
- [Querying state store directly](#querying-state-store-directly)
### State management API
Developers can use the state management API to retrieve, save and delete state values by providing keys.
Dapr data stores are components. Dapr ships with [Redis](https://redis.io) out-of-box for local development in self hosted mode. Dapr allows you to plug in other data stores as components such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/).
Visit [State API]({{< ref state_api.md >}}) for more information.
> **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance. This allows multiple Dapr instances to share the same state store.
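As a concrete sketch of the API shape (assuming a locally running sidecar on the default HTTP port 3500 and a state store component named `statestore`):
```bash
# Save a key/value pair
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "balance", "value": 42 }]'

# Retrieve it; the response body is the raw value
curl http://localhost:3500/v1.0/state/statestore/balance
```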
### State store behaviors
Dapr allows developers to attach additional metadata to a state operation request, describing how the request is expected to be handled. For example, you can attach concurrency requirements, consistency requirements, and a retry policy to any state operation request.
Not all stores are created equal. To ensure portability of your application, you should be aware of each store's capabilities.
The following table summarizes the capabilities of existing data store implementations.
| Store | Strong consistent write | Strong consistent read | ETag |
|-------------------|-------------------------|------------------------|------|
| Cosmos DB | Yes | Yes | Yes |
| PostgreSQL | Yes | Yes | Yes |
| Redis | Yes | Yes | Yes |
| Redis (clustered) | Yes | No | Yes |
| SQL Server | Yes | Yes | Yes |
### Concurrency
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. When the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches the ETag in the state store.
Dapr chooses OCC because in many applications data update conflicts are rare, since clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#retry-policies) to compensate for such conflicts when using ETags.
If your application omits ETags in write requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
> **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
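A sketch of that flow against the HTTP API (port and store name as in the earlier example; the actual ETag value comes from the read response):
```bash
# Reads return the ETag as a response header
curl -i http://localhost:3500/v1.0/state/statestore/balance

# The delete succeeds only if the provided ETag still matches
curl -X DELETE http://localhost:3500/v1.0/state/statestore/balance \
  -H 'If-Match: "<etag-from-the-read>"'
```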
### Consistency
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
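Per-request consistency is expressed through the `options` field of a state item; a sketch of a strongly consistent write:
```bash
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "balance", "value": 42, "options": { "consistency": "strong" } }]'
```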
### Retry policies
Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
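Using the option names above, a write with an exponential retry policy might look like the following sketch (the exact JSON shape can vary between runtime versions; treat this as illustrative):
```bash
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "balance", "value": 42, "options": { "retryInterval": "100ms", "retryPattern": "exponential", "retryThreshold": 3 } }]'
```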
### Bulk operations
Dapr supports two types of bulk operations: **bulk** and **multi**. You can group several requests of the same type into a bulk (or batch); Dapr submits the requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
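The save endpoint shown earlier already accepts a batch; each item in the array is submitted to the store as an individual, non-transactional request:
```bash
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "a", "value": 1 }, { "key": "b", "value": 2 }]'
```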
### Querying state store directly
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the [underlying state store]({{< ref query-state-store >}}).
For example, to get all state keys associated with an application ID "myApp" in Redis, use:
```bash
KEYS "myApp*"
```
> **NOTE:** See [How to query Redis store]({{< ref query-redis-store.md >}}) for details on how to query a Redis store.
#### Querying actor state
If the data store supports SQL queries, you can query an actor's state using SQL. For example, to calculate the average temperature across all thermometer actor instances of an application:
```sql
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
```
## Next steps
* Follow the [state store setup guides]({{< ref setup-state-store >}})
* Read the [state management API specification]({{< ref state_api.md >}})
* Read the [actors API specification]({{< ref actors_api.md >}})

---
type: docs
title: "IDE support"
linkTitle: "IDE support"
weight: 30
description: "Support for common Integrated Development Environments (IDEs)"
---

---
type: docs
title: "IntelliJ"
linkTitle: "IntelliJ"
weight: 1000
description: "Configuring IntelliJ community edition for debugging with Dapr"
---
When developing Dapr applications, you typically use the Dapr CLI to start your 'Daprized' service similar to this:
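(The app ID, application port, and launch command below are illustrative.)
```bash
# Start the application with a Dapr sidecar attached
dapr run --app-id myapp --app-port 3000 -- mvn spring-boot:run
```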
Optionally, you may also create a new entry for a sidecar tool that can be reused across projects.
Now, create or edit the run configuration for the application to be debugged. It can be found in the menu next to the `main()` function.
![Edit run configuration menu](/images/intellij_debug_menu.png)
Now, add the program arguments and environment variables. These need to match the ports defined in the entry in 'External Tool' above.
* Command line arguments for this example: `-p 3000`
* Environment variables for this example: `DAPR_HTTP_PORT=3005;DAPR_GRPC_PORT=52000`
![Edit run configuration](/images/intellij_edit_run_configuration.png)
## Start debugging
Once the one-time config above is done, there are two steps required to debug an application:
1. Start `dapr` via `Tools` -> `External Tool` in IntelliJ.
![Run dapr as 'External Tool'](/images/intellij_start_dapr.png)
2. Start your application in debug mode.
![Start application in debug mode](/images/intellij_debug_app.png)
## Wrapping up

---
type: docs
title: "VS Code"
linkTitle: "VS Code"
weight: 2000
description: "Application development and debugging with Visual Studio Code"
---
## Visual Studio Code Dapr extension
It is recommended to use the *preview* of the [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) available in the Visual Studio marketplace for local development and debugging of your Dapr applications.

---
type: docs
title: "VS Code remote containers"
linkTitle: "VS Code remote containers"
weight: 3000
description: "Application development and debugging with Visual Studio Code remote containers"
---
## Using remote containers for your application development

---
type: docs
title: "Integrations"
linkTitle: "Integrations"
weight: 50
description: "Dapr integrations with other technologies"
---

---
type: docs
title: "Autoscaling a Dapr app with KEDA"
linkTitle: "Autoscale"
weight: 2000
---
Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, cloud, or edge), the autoscaling of Dapr applications is managed by the hosting layer.
For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by KEDA, so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.
This how-to walks through the configuration of a scalable Dapr application responding to back pressure on a Kafka topic; however, you can apply this approach to any of the [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
## Install KEDA
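The KEDA project documents installation via Helm; a minimal sketch (the repo URL and namespace follow KEDA's published instructions, so verify them against the KEDA docs for your version):
```bash
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
```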

---
type: docs
title: "Dapr's gRPC Interface"
linkTitle: "gRPC"
weight: 1000
description: "Use the Dapr gRPC API in your application"
---
Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high-performance scenarios and has language integration using the proto clients.
You can find a list of auto-generated clients [here](https://github.com/dapr/docs#sdks).
You can use Dapr with any language supported by Protobuf, and not just with the available SDKs.
Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool you can generate the Dapr clients for other languages like Ruby, C++, Rust and others.
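As a sketch, generating a Go client from Dapr's proto definitions might look like this (it assumes the `protoc-gen-go` and `protoc-gen-go-grpc` plugins are installed; the proto path is illustrative):
```bash
protoc --go_out=. --go-grpc_out=. dapr/proto/runtime/v1/dapr.proto
```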
## Related Topics
- [Service invocation building block]({{< ref service-invocation >}})
- [Service invocation API specification]({{< ref service_invocation_api.md >}})

---
type: docs
title: "Middleware"
linkTitle: "Middleware"
weight: 50
description: "Customize Dapr processing pipelines by adding middleware components"
---

---
type: docs
title: "How-To: Apply OPA policies"
linkTitle: "How-To: Apply OPA policies"
weight: 1000
description: "Use Dapr middleware to apply Open Policy Agent (OPA) policies on incoming requests"
---
The Dapr Open Policy Agent (OPA) [HTTP middleware](https://github.com/dapr/docs/blob/master/concepts/middleware/README.md) allows applying [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.

---
type: docs
title: "SDKs"
linkTitle: "SDKs"
weight: 20
description: "Use your favorite languages with Dapr"
---
### .NET
See the [.NET SDK repository](https://github.com/dapr/dotnet-sdk)
### Java
See the [Java SDK repository](https://github.com/dapr/java-sdk)
### Go
See the [Go SDK repository](https://github.com/dapr/go-sdk)
### Python
See the [Python SDK repository](https://github.com/dapr/python-sdk)
### JavaScript
See the [JavaScript SDK repository](https://github.com/dapr/js-sdk)

---
type: docs
title: "Serialization in Dapr's SDKs"
linkTitle: "Serialization"
weight: 1000
---
An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization.
## Service invocation
```java
DaprClient client = (new DaprClientBuilder()).build();
client.invokeService(Verb.POST, "myappid", "saySomething", "My Message", null).block();
```
In the example above, the app will receive a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON.
```text
POST /saySomething HTTP/1.1
Host: localhost
Content-Type: text/plain
Content-Length: 12
"My Message"
```
## State management
```java
DaprClient client = (new DaprClientBuilder()).build();
client.saveState("MyStateStore", "MyKey", "My Message").block();
```
In this example, `My Message` will be saved. It is not quoted because Dapr's API will internally parse the JSON request object before saving it.
```JSON
[
{
"key": "MyKey",
"value": "My Message"
}
]
```
## PubSub
```java
DaprClient client = (new DaprClientBuilder()).build();
client.publishEvent("TopicName", "My Message").block();
```
The event is published and the content is serialized to `byte[]` and sent to the Dapr sidecar. The subscriber will receive it as a [CloudEvent](https://github.com/cloudevents/spec). CloudEvents defines `data` as a String, and the Dapr SDK provides a built-in deserializer for the `CloudEvent` object.
```java
@PostMapping(path = "/TopicName")
public void handleMessage(@RequestBody(required = false) byte[] body) {
// Dapr's event is compliant to CloudEvent.
CloudEvent event = CloudEvent.deserialize(body);
}
```
## Bindings
In this case, the object is serialized to `byte[]` as well and the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type.
* Output binding:
```java
DaprClient client = (new DaprClientBuilder()).build();
client.invokeBinding("sample", "My Message").block();
```
* Input binding:
```java
@PostMapping(path = "/sample")
public void handleInputBinding(@RequestBody(required = false) byte[] body) {
String message = (new DefaultObjectSerializer()).deserialize(body, String.class);
System.out.println(message);
}
```
It should print:
```
My Message
```
## Actor Method invocation
Object serialization and deserialization for invocation of an Actor's methods are the same as for service method invocation; the only difference is that the application does not need to deserialize the request or serialize the response, since this is all done transparently by the SDK.
For Actor's methods, the SDK only supports methods with zero or one parameter.
* Invoking an Actor's method:
```java
public static void main(String[] args) {
  ActorProxyBuilder builder = new ActorProxyBuilder("DemoActor");
  // Build a proxy for a specific actor instance before invoking its method.
  ActorProxy actor = builder.build(new ActorId("100"));
  String result = actor.invokeActorMethod("say", "My Message", String.class).block();
}
```
* Implementing an Actor's method:
```java
public String say(String something) {
System.out.println(something);
return "OK";
}
```
It should print:
```
My Message
```
## Actor's state management
Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state serializer and handle it transparently to the application.
```java
public String actorMethod(String message) {
// Reads a state from key and deserializes it to String.
String previousMessage = super.getActorStateManager().get("lastmessage", String.class).block();
// Sets the new state for the key after serializing it.
super.getActorStateManager().set("lastmessage", message).block();
return previousMessage;
}
```
## Default serializer
The default serializer for Dapr is a JSON serializer with the following expectations:
1. Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, boolean, null and another JSON object. Every complex property type in the application's serializable objects (DateTime, for example) should be represented as one of JSON's basic types.
2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The examples below show how a string and a JSON object would look in a Redis store.
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message"
"This is a message to be saved and retrieved."
```
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata"
{"value":"My data value."}
```
3. Custom serializers must serialize object to `byte[]`.
4. Custom serializers must deserialize `byte[]` to object.
5. When a user provides a custom serializer, data should be transferred or persisted as `byte[]`. When persisting, also encode it as a Base64 string; this is done natively by most JSON libraries (see the decode check after this list).
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message"
"VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4="
```
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata"
"eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0="
```
6. When serializing an object that is already a `byte[]`, the serializer should just pass it through, since `byte[]` should already be handled internally in the SDK. The same happens when deserializing to `byte[]`.
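You can verify the Base64 round-trip from item 5 directly in a shell:
```bash
echo 'eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0=' | base64 --decode
# Output: {"value":"My data value."}
```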
*As of now, the [Java SDK](https://github.com/dapr/java-sdk/) is the only Dapr SDK that implements this specification. In the near future, other SDKs will also implement the same.*

---
type: docs
title: "Getting started with Dapr"
linkTitle: "Getting started"
weight: 20
description: "Get up and running with Dapr"
---

---
type: docs
title: "How-To: Setup Redis"
linkTitle: "How-To: Setup Redis"
weight: 30
description: "Configure Redis for Dapr state management or Pub/Sub"
---
Dapr can use Redis in two ways:
1. As a state store component (state.redis) for persistence and restoration
2. As a pub/sub component (pubsub.redis) for async-style message delivery
## Create a Redis store
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [configuration](#configure-dapr-components) section.
{{< tabs "Self-Hosted" "Kubernetes (Helm)" "Azure Redis Cache" "AWS Redis" "GCP Memorystore" >}}
{{% codetab %}}
Redis is automatically installed in self-hosted environments by the Dapr CLI as part of the initialization process.
{{% /codetab %}}
{{% codetab %}}
You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis
```
Note that you need Redis version 5 or later, which is what Dapr's pub/sub functionality requires. If you intend to use Redis only as a state store (and not for pub/sub), an earlier version can be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-0 1/1 Running 0 69s
redis-slave-0 1/1 Running 0 69s
redis-slave-1 1/1 Running 0 22s
```
3. Add `redis-master.default.svc.cluster.local:6379` as the `redisHost` in your [redis.yaml](#configure-dapr-components) file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master.default.svc.cluster.local:6379
```
4. Securely reference the Redis password in your [redis.yaml](#configure-dapr-components) file. For example:
```yaml
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
5. (Alternative) It is **not recommended**, but you can hard-code the password instead of using a secretKeyRef. First, retrieve the Redis password, which differs slightly depending on the OS you're using:
- **Windows**: In Powershell run:
```powershell
PS C:\> $base64pwd=kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}"
PS C:\> $redispassword=[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64pwd))
PS C:\> $base64pwd=""
PS C:\> $redispassword
```
- **Linux/MacOS**: Run:
```bash
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
```
Add this password as the `redisPassword` value in your [redis.yaml](#configure-dapr-components) file. For example:
```yaml
metadata:
- name: redisPassword
value: lhDOkwTlp0
```
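To confirm that pods in the cluster can reach Redis, you can run a throwaway client pod (a sketch; the host matches the chart defaults above and the password is the one retrieved in step 5):
```bash
kubectl run redis-client --rm -it --image bitnami/redis -- \
  redis-cli -h redis-master.default.svc.cluster.local -a '<password-from-above>' ping
```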
{{% /codetab %}}
{{% codetab %}}
This method requires having an Azure Subscription.
1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
1. Fill out the necessary information
1. Click "Create" to kick off deployment of your Redis instance.
1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key.
1. You'll need the hostname of your Redis instance, which you can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`.
1. Finally, you'll need to add your key and your host to a `redis.yaml` file that Dapr can apply to your cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configure-dapr-components).
As the connection to Azure is encrypted, make sure to add the following block to the `metadata` section of your `redis.yaml` file.
```yaml
metadata:
- name: enableTLS
value: "true"
```
> **NOTE:** Dapr pub/sub uses [Redis streams](https://redis.io/topics/streams-intro) that was introduced by Redis 5.0, which isn't currently available on Azure Cache for Redis. Consequently, you can use Azure Cache for Redis only for state persistence.
{{% /codetab %}}
{{% codetab %}}
Visit [AWS Redis](https://aws.amazon.com/redis/).
{{% /codetab %}}
{{% codetab %}}
Visit [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/).
{{% /codetab %}}
{{< /tabs >}}
## Configure Dapr components
Dapr can use Redis as a [`statestore` component]({{< ref setup-state-store >}}) for state persistence (`state.redis`) or as a [`pubsub` component]({{< ref setup-pubsub >}}) (`pubsub.redis`). The following YAML files demonstrate how to define each component using either a secret key reference (preferred) or a plain-text password.
### Create component files
#### State store component with secret reference
Create a file called redis-state.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: <HOST e.g. redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
#### Pub/sub component with secret reference
Create a file called redis-pubsub.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
metadata:
- name: redisHost
value: <HOST e.g. redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
#### State store component with hard coded password (not recommended)
For development purposes only, create a file called redis-state.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
```
#### Pub/Sub component with hard coded password (not recommended)
For development purposes only, create a file called redis-pubsub.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
```
### Apply the configuration
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
If you initialized Dapr using `dapr init --slim`, the Dapr CLI did not create a Redis instance or a default configuration file for it. Follow [the instructions above](#create-a-redis-store) to create a Redis store. Create the `redis.yaml` following the configuration [instructions](#configure-dapr-components) in a `components` dir and provide the path to the `dapr run` command with the flag `--components-path`.
{{% /codetab %}}
{{% codetab %}}
Run `kubectl apply -f <FILENAME>` for both state and pubsub files:
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml
```
{{% /codetab %}}
{{< /tabs >}}

---
type: docs
title: "How-To: Install Dapr into your environment"
linkTitle: "How-To: Install Dapr"
weight: 20
description: "Install Dapr in your preferred environment"
---
This guide will get you up and running to evaluate Dapr and develop applications. Visit [this page]({{< ref hosting >}}) for a full list of supported platforms with instructions and best practices on running in production.
## Install the Dapr CLI
Begin by downloading and installing the Dapr CLI. This will be used to initialize your environment on your desired platform.
{{< tabs Linux Windows MacOS Binaries>}}
{{% codetab %}}
This command installs the latest Linux Dapr CLI to `/usr/local/bin`:
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
This command installs the latest Windows Dapr CLI to `%USERPROFILE%\.dapr\` and adds this directory to the user PATH environment variable:
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
Verify by opening Explorer and entering `%USERPROFILE%\.dapr\` into the address bar. You should see folders for bin and components, as well as a config file.
{{% /codetab %}}
{{% codetab %}}
This command installs the latest macOS (darwin) Dapr CLI to `/usr/local/bin`:
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
Or you can install via [Homebrew](https://brew.sh):
```bash
brew install dapr/tap/dapr-cli
```
{{% /codetab %}}
{{% codetab %}}
Each Dapr CLI release includes binaries for various OSes and architectures. These can be manually downloaded and installed.
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location.
- For Linux/MacOS - `/usr/local/bin`
- For Windows, create a directory and add this to your System PATH. For example create a directory called `c:\dapr` and add this directory to your path, by editing your system environment variable.
{{% /codetab %}}
{{< /tabs >}}
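Whichever method you used, you can verify the CLI is on your PATH from a fresh terminal (the reported versions will vary):
```bash
dapr --version
```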
## Install Dapr in self-hosted mode
Running the Dapr runtime in self-hosted mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other supported Dapr environments.
### Prerequisites
- Install [Docker Desktop](https://docs.docker.com/install/)
- Windows users: ensure that `Docker Desktop For Windows` uses Linux containers.
By default Dapr will install with a developer environment using Docker containers to get you started easily. This getting started guide assumes Docker is installed to ensure the best experience. However, Dapr does not depend on Docker to run. Read [this page]({{< ref self-hosted-no-docker.md >}}) for instructions on installing Dapr locally without Docker using slim init.
### Initialize Dapr using the CLI
This step installs the latest Dapr Docker containers and sets up a developer environment to help you get started easily with Dapr.
1. Ensure you are in an elevated terminal:
- **Linux/MacOS:** if you run your Docker commands with sudo, or the install path is `/usr/local/bin` (the default install path), you need to use `sudo`
- **Windows:** make sure that you run the cmd terminal in administrator mode
2. Run `dapr init`
```bash
$ dapr init
⌛ Making the jump to hyperspace...
Downloading binaries and setting up components
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
```
3. Verify installation
From a command prompt run the `docker ps` command and check that the `daprio/dapr`, `openzipkin/zipkin`, and `redis` container images are running:
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67bc611a118c daprio/dapr "./placement" About a minute ago Up About a minute 0.0.0.0:6050->50005/tcp dapr_placement
855f87d10249 openzipkin/zipkin "/busybox/sh run.sh" About a minute ago Up About a minute 9410/tcp, 0.0.0.0:9411->9411/tcp dapr_zipkin
71cccdce0e8f redis "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6379->6379/tcp dapr_redis
```
4. Visit our [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world) or dive into the [Dapr building blocks]({{< ref building-blocks >}})
### (optional) Install a specific runtime version
You can install or upgrade to a specific version of the Dapr runtime using `dapr init --runtime-version`. You can find the list of versions in [Dapr Release](https://github.com/dapr/dapr/releases).
```bash
# Install the v0.11.0 runtime
$ dapr init --runtime-version 0.11.0
# Check the versions of the CLI and runtime
$ dapr --version
cli version: v0.11.0
runtime version: v0.11.2
```
### Uninstall Dapr in self-hosted mode
This command removes the Dapr placement container:
```bash
$ dapr uninstall
```
{{% alert title="Warning" color="warning" %}}
This command won't remove the Redis or Zipkin containers by default, just in case you were using them for other purposes. To remove Redis, Zipkin, Actor Placement container, as well as the default Dapr directory located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\`, run:
```bash
$ dapr uninstall --all
```
{{% /alert %}}
> For Linux/MacOS users, if you run your Docker commands with sudo, or the install path is `/usr/local/bin` (the default install path), you need to use `sudo dapr uninstall` to remove the Dapr binaries and/or containers.
## Install Dapr on a Kubernetes cluster
When setting up Kubernetes you can use either the Dapr CLI or Helm.
The following pods will be installed:
- dapr-operator: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)
- dapr-sidecar-injector: Injects Dapr into annotated deployment pods
- dapr-placement: Used for actors only. Creates mapping tables that map actor instances to pods
- dapr-sentry: Manages mTLS between services and acts as a certificate authority
### Setup cluster
You can install Dapr on any Kubernetes cluster. Here are some helpful links:
- [Setup Minikube Cluster]({{< ref setup-minikube.md >}})
- [Setup Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}})
- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
{{% alert title="Note" color="primary" %}}
Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes, but most users should not need to. For more information see [Deploying to a hybrid Linux/Windows Kubernetes cluster]({{<ref kubernetes-hybrid-clusters>}}).
{{% /alert %}}
### Install with Dapr CLI
You can install Dapr to a Kubernetes cluster using the Dapr CLI.
#### Install Dapr
The `-k` flag initializes Dapr on the Kubernetes cluster in your current context.
```bash
$ dapr init -k
⌛ Making the jump to hyperspace...
Note: To install Dapr using Helm, see here: https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md#using-helm-advanced
✅ Deploying the Dapr control plane to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
```
#### Install to a custom namespace:
The default namespace when initializing Dapr is `dapr-system`. You can override this with the `-n` flag.
```
dapr init -k -n mynamespace
```
#### Install in highly available mode:
You can run Dapr with 3 replicas of each control plane pod with the exception of the Placement pod in the dapr-system namespace for [production scenarios]({{< ref kubernetes-production.md >}}).
```
dapr init -k --enable-ha=true
```
#### Disable mTLS:
Dapr is initialized by default with [mTLS]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}). You can disable it with:
```
dapr init -k --enable-mtls=false
```
#### Uninstall Dapr on Kubernetes
```bash
$ dapr uninstall --kubernetes
```
### Install with Helm (advanced)
You can install Dapr on a Kubernetes cluster using a Helm 3 chart.
{{% alert title="Note" color="primary" %}}
The latest Dapr Helm chart no longer supports Helm v2. Please migrate from Helm v2 to Helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
{{% /alert %}}
#### Install Dapr on Kubernetes
1. Make sure Helm 3 is installed on your machine
2. Add Helm repo
```bash
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
```
3. Create the `dapr-system` namespace on your Kubernetes cluster
```bash
kubectl create namespace dapr-system
```
4. Install the Dapr chart on your cluster in the `dapr-system` namespace.
```bash
helm install dapr dapr/dapr --namespace dapr-system
```
#### Verify installation
Once the chart installation is complete, verify the dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace:
```bash
$ kubectl get pods -n dapr-system -w
NAME READY STATUS RESTARTS AGE
dapr-operator-7bd6cbf5bf-xglsr 1/1 Running 0 40s
dapr-placement-7f8f76778f-6vhl2 1/1 Running 0 40s
dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s
dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s
```
#### Uninstall Dapr on Kubernetes
```bash
helm uninstall dapr -n dapr-system
```
> **Note:** See [this page](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr helm charts.
### Sidecar annotations
To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this]({{<ref "kubernetes-annotations.md">}}) how to guide.
### Configure Redis
Unlike self-hosted Dapr, Redis is not pre-installed out of the box on Kubernetes. To install Redis as a state store or as a pub/sub message bus in your Kubernetes cluster, see [How-To: Setup Redis]({{< ref configure-redis.md >}}).

---
type: docs
title: "Deploying and configuring Dapr in your environment"
linkTitle: "Operations"
weight: 40
description: "Hosting options, best-practices, and other guides and running your application on Dapr"
---

---
type: docs
title: "Managing components in Dapr"
linkTitle: "Components"
weight: 300
description: "How to manage your Dapr components in your application"
---

---
type: docs
title: "How-To: Scope components to one or more applications"
linkTitle: "How-To: Set component scopes"
weight: 100
description: "Limit component access to particular Dapr instances"
---
Dapr components are namespaced (separate from the Kubernetes namespace concept), meaning a Dapr runtime instance can only access components that have been deployed to the same namespace.
## Namespaces
Namespaces can be used to limit component access to particular Dapr instances.
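In self-hosted mode, a Dapr instance's namespace comes from the `NAMESPACE` environment variable. A sketch (the app ID, components path, and launch command are illustrative):
```bash
# Run an app in the "production" namespace; it only sees components deployed to that namespace
NAMESPACE=production dapr run --app-id myapp --components-path ./components -- node app.js
```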
For example, to limit a component to specific applications, list their app IDs under `scopes` in the component definition:
```yaml
scopes:
- app1
- app2
```
## Example
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=1763" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

---
type: docs
title: "How-To: Reference secret stores in components"
linkTitle: "How-To: Reference secrets"
weight: 200
description: "How to securly reference secrets from a component definition"
---
## Overview
Components can reference secrets in the `spec.metadata` section of the component definition.
In order to reference a secret, you need to set the `auth.secretStore` field to specify the name of the secret store that holds the secrets.
When running in Kubernetes, if the `auth.secretStore` is empty, the Kubernetes secret store is used.
### Supported secret stores
See [this link]({{< ref "howto-secrets.md" >}}) for all the secret stores supported by Dapr, along with information on how to configure and use them.
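For instance, with the default Kubernetes secret store, the secret a component references can be created like this (the secret and key names are illustrative):
```bash
kubectl create secret generic eventhubs-secret \
  --from-literal=connectionString='<your-connection-string>'
```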
## Non default namespaces

---
type: docs
title: "Bindings components"
linkTitle: "Bindings"
description: "Guidance on setting up Dapr bindings components"
weight: 4000
---

---
type: docs
title: "Supported external bindings"
linkTitle: "Supported bindings"
weight: 200
description: List of all the supported external bindings that can interface with Dapr
---
Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.
### Generic
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [APNs]({{< ref apns.md >}}) | | ✅ | Experimental |
| [Cron (Scheduler)]({{< ref cron.md >}}) | ✅ | ✅ | Experimental |
| [HTTP]({{< ref http.md >}}) | | ✅ | Experimental |
| [InfluxDB]({{< ref influxdb.md >}}) | | ✅ | Experimental |
| [Kafka]({{< ref kafka.md >}}) | ✅ | ✅ | Experimental |
| [Kubernetes Events]({{< ref "kubernetes-binding.md" >}}) | ✅ | | Experimental |
| [MQTT]({{< ref mqtt.md >}}) | ✅ | ✅ | Experimental |
| [PostgreSQL]({{< ref postgres.md >}}) | | ✅ | Experimental |
| [RabbitMQ]({{< ref rabbitmq.md >}}) | ✅ | ✅ | Experimental |
| [Redis]({{< ref redis.md >}}) | | ✅ | Experimental |
| [Twilio]({{< ref twilio.md >}}) | | ✅ | Experimental |
| [Twitter]({{< ref twitter.md >}}) | ✅ | ✅ | Experimental |
| [SendGrid]({{< ref sendgrid.md >}}) | | ✅ | Experimental |
### Amazon Web Service (AWS)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [AWS DynamoDB]({{< ref dynamodb.md >}}) | | ✅ | Experimental |
| [AWS S3]({{< ref s3.md >}}) | | ✅ | Experimental |
| [AWS SNS]({{< ref sns.md >}}) | | ✅ | Experimental |
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Experimental |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Experimental |
### Google Cloud Platform (GCP)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [GCP Cloud Pub/Sub]({{< ref gcppubsub.md >}}) | ✅ | ✅ | Experimental |
| [GCP Storage Bucket]({{< ref gcpbucket.md >}}) | | ✅ | Experimental |
### Microsoft Azure
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Azure Blob Storage]({{< ref blobstorage.md >}}) | | ✅ | Experimental |
| [Azure EventHubs]({{< ref eventhubs.md >}}) | ✅ | ✅ | Experimental |
| [Azure CosmosDB]({{< ref cosmosdb.md >}}) | | ✅ | Experimental |
| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Experimental |
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Experimental |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Experimental |
| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Experimental |

---
type: docs
title: "Apple Push Notification Service binding spec"
linkTitle: "Apple Push Notification Service"
description: "Detailed documentation on the Apple Push Notification Service binding component"
---
## Configuration
