Merge branch 'v1.0' into harrykimpel-newrelic-howto

This commit is contained in:
Yaron Schneider 2021-02-16 11:31:46 -08:00 committed by GitHub
commit 324e51fe03
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
264 changed files with 9127 additions and 3924 deletions

View File

@ -8,13 +8,13 @@ assignees: ''
---
**What content needs to be created or modified?**
A clear and concise description of what the problem is. Ex. There should be docs on how pub/sub works...
<!--A clear and concise description of what the problem is. Ex. There should be docs on how pub/sub works...-->
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
<!--A clear and concise description of what you want to happen-->
**Where should the new material be placed?**
Please suggest where in the docs structure the new content should be created.
<!--Please suggest where in the docs structure the new content should be created-->
**Additional context**
Add any other context or screenshots about the feature request here.
<!--Add any other context or screenshots about the feature request here-->

View File

@ -8,16 +8,16 @@ assignees: ''
---
**URL of the docs page**
The URL(s) on docs.dapr.io where the typo occurs
<!--The URL(s) on docs.dapr.io where the typo occurs-->
**How is it currently worded?**
Please copy and paste the sentence where the typo occurs.
<!--Please copy and paste the sentence where the typo occurs-->
**How should it be worded?**
Please correct the sentence
<!--Please correct the sentence-->
**Screenshots**
If applicable, add screenshots to help explain your problem.
<!--If applicable, add screenshots to help explain your problem-->
**Additional context**
Add any other context about the problem here.
<!--Add any other context about the problem here-->

View File

@ -8,31 +8,33 @@ assignees: AaronCrawfis
---
**Describe the bug**
A clear and concise description of what the bug is.
<!--A clear and concise description of what the bug is.-->
**To Reproduce**
**Steps to reproduce**
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
**Expected behavior**
A clear and concise description of what you expected to happen.
<!--A clear and concise description of what you expected to happen.-->
**Screenshots**
If applicable, add screenshots to help explain your problem.
<!--If applicable, add screenshots to help explain your problem.-->
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
- OS: <!--[e.g. iOS]-->
- Browser <!--[e.g. chrome, safari]-->
- Version <!--[e.g. 22]-->
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
- Device: <!--[e.g. iPhone6]-->
- OS: <!--[e.g. iOS8.1]-->
- Browser <!--[e.g. stock browser, safari]-->
- Version <!--[e.g. 22]-->
**Additional context**
Add any other context about the problem here.
<!--Add any other context about the problem here-->

View File

@ -8,16 +8,16 @@ assignees: ''
---
**Describe the issue**
A clear and concise description of what the bug is.
<!--A clear and concise description of what the bug is-->
**URL of the docs**
Paste the URL (docs.dapr.io/concepts/......) of the page
<!--Paste the URL (docs.dapr.io/concepts/......) of the page-->
**Expected content**
A clear and concise description of what you expected to happen.
<!--A clear and concise description of what you expected to happen-->
**Screenshots**
If applicable, add screenshots to help explain your problem.
<!--If applicable, add screenshots to help explain your problem-->
**Additional context**
Add any other context about the problem here.
<!--Add any other context about the problem here-->

View File

@ -1,15 +1,20 @@
Thank you for helping make the Dapr documentation better!
If you are a new contributor, please see [this contribution guidance](https://docs.dapr.io/contributing/contributing-docs/), which helps keep the Dapr documentation readable, consistent and useful for Dapr users.
**Please follow this checklist before submitting:**
Please verify that your suggested changes do not break the website [docs.dapr.io](https://docs.dapr.io). See [this overview](https://github.com/dapr/docs/blob/master/README.md#overview) on how to set up a local version of the website and make sure it builds correctly.
- [ ] [Read the contribution guide](https://docs.dapr.io/contributing/contributing-docs/)
- [ ] Commands include options for Linux, macOS, and Windows within codetabs
- [ ] New file and folder names are globally unique
- [ ] Page references use shortcodes instead of markdown or URL links
- [ ] Images use HTML style and have alternative text
- [ ] Places where multiple code/command options are given have codetabs
In addition, please fill out the following to help reviewers understand this pull request:
## Description
_Please explain the changes you've made_
<!--Please explain the changes you've made-->
## Issue reference
_Please reference the issue this PR will close: #[issue number]_
<!--Please reference the issue this PR will close: #[issue number]-->

View File

@ -0,0 +1,52 @@
name: Azure Static Web Apps CI/CD
on:
push:
branches:
- v1.0
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.0
jobs:
build_and_deploy_job:
if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
runs-on: ubuntu-latest
name: Build and Deploy Job
steps:
- uses: actions/checkout@v2
with:
submodules: recursive
- name: Setup Docsy
run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
- name: Build And Deploy
id: builddeploy
uses: Azure/static-web-apps-deploy@v0.0.1-preview
env:
HUGO_ENV: production
HUGO_VERSION: "0.74.3"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_BLACK_WATER_03A7CE11E }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (e.g. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
app_location: "daprdocs" # App source code path
api_location: "api" # Api source code path - optional
app_artifact_location: 'public' # Built app content directory - optional
app_build_command: "hugo"
###### End of Repository/Build Configurations ######
close_pull_request_job:
if: github.event_name == 'pull_request' && github.event.action == 'closed'
runs-on: ubuntu-latest
name: Close Pull Request Job
steps:
- name: Close Pull Request
id: closepullrequest
uses: Azure/static-web-apps-deploy@v0.0.1-preview
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_BLACK_WATER_03A7CE11E }}
skip_deploy_on_missing_secrets: true
action: "close"

View File

@ -3,11 +3,11 @@ name: Azure Static Web Apps CI/CD
on:
push:
branches:
- website
- v0.11
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- website
- v0.11
jobs:
build_and_deploy_job:

View File

@ -0,0 +1,52 @@
name: Azure Static Web Apps CI/CD
on:
push:
branches:
- v1.0-rc3
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.0-rc3
jobs:
build_and_deploy_job:
if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
runs-on: ubuntu-latest
name: Build and Deploy Job
steps:
- uses: actions/checkout@v2
with:
submodules: recursive
- name: Setup Docsy
run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
- name: Build And Deploy
id: builddeploy
uses: Azure/static-web-apps-deploy@v0.0.1-preview
env:
HUGO_ENV: production
HUGO_VERSION: "0.74.3"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_KIND_POND_0F48CBE1E }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (e.g. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
app_location: "daprdocs" # App source code path
api_location: "api" # Api source code path - optional
app_artifact_location: 'public' # Built app content directory - optional
app_build_command: "hugo"
###### End of Repository/Build Configurations ######
close_pull_request_job:
if: github.event_name == 'pull_request' && github.event.action == 'closed'
runs-on: ubuntu-latest
name: Close Pull Request Job
steps:
- name: Close Pull Request
id: closepullrequest
uses: Azure/static-web-apps-deploy@v0.0.1-preview
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_KIND_POND_0F48CBE1E }}
skip_deploy_on_missing_secrets: true
action: "close"

View File

@ -0,0 +1,54 @@
name: Azure Static Web Apps CI/CD
on:
push:
branches:
- v1.0-rc2
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.0-rc2
jobs:
build_and_deploy_job:
if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
runs-on: ubuntu-latest
name: Build and Deploy Job
steps:
- uses: actions/checkout@v2
with:
submodules: recursive
- name: Setup Docsy
run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
- name: Build And Deploy
id: builddeploy
uses: Azure/static-web-apps-deploy@v0.0.1-preview
env:
HUGO_ENV: production
HUGO_VERSION: "0.74.3"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_POLITE_BUSH_0F42B0A1E }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (e.g. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
# For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
app_location: "daprdocs" # App source code path
api_location: "api" # Api source code path - optional
app_artifact_location: 'public' # Built app content directory - optional
app_build_command: "hugo"
###### End of Repository/Build Configurations ######
close_pull_request_job:
if: github.event_name == 'pull_request' && github.event.action == 'closed'
runs-on: ubuntu-latest
name: Close Pull Request Job
steps:
- name: Close Pull Request
id: closepullrequest
uses: Azure/static-web-apps-deploy@v0.0.1-preview
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_POLITE_BUSH_0F42B0A1E }}
skip_deploy_on_missing_secrets: true
action: "close"

.gitignore vendored (1 change)
View File

@ -1,5 +1,6 @@
# Visual Studio 2015/2017/2019 cache/options directory
.vs/
.idea/
node_modules/
daprdocs/public
daprdocs/resources/_gen

.gitmodules vendored (6 changes)
View File

@ -1,3 +1,9 @@
[submodule "daprdocs/themes/docsy"]
path = daprdocs/themes/docsy
url = https://github.com/google/docsy.git
[submodule "sdkdocs/python"]
path = sdkdocs/python
url = https://github.com/dapr/python-sdk.git
[submodule "sdkdocs/php"]
path = sdkdocs/php
url = https://github.com/dapr/php-sdk.git

View File

@ -6,6 +6,19 @@ If you are looking to explore the Dapr documentation, please go to the documenta
This repo contains the markdown files which generate the above website. See below for guidance on running with a local environment to contribute to the docs.
## Branch guidance
The Dapr docs repo handles branching differently than most code repositories. Instead of having a `master` or `main` branch, every branch is labeled to match the major and minor version of a runtime release.
The following branches are currently maintained:
| Branch | Website | Description |
|--------|---------|-------------|
| [v1.0](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here. |
| [v1.1](https://github.com/dapr/docs/tree/v1.1) (pre-release) | https://v1-1.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.1+ go here. |
For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/contributing-docs/#branch-guidance) document.
## Contribution guidelines
Before making your first contribution, make sure to review the [contributing section](http://docs.dapr.io/contributing/) in the docs.
@ -47,12 +60,13 @@ npm install
```sh
hugo server --disableFastRender
```
3. Navigate to `http://localhost:1313/docs`
3. Navigate to `http://localhost:1313/`
## Update docs
1. Fork repo into your account
1. Create new branch
1. Commit and push changes to content
1. Submit pull request to `master`
1. Commit and push changes to your forked branch
1. Submit a pull request from the forked branch to the upstream branch for the version you are targeting
1. A staging site is automatically created and linked to the PR for review and testing
## Code of Conduct

View File

@ -0,0 +1 @@
// Intentionally blank

View File

@ -1,14 +1,53 @@
// Code formatting.
.copy-code-button {
color: #272822;
background-color: #FFF;
border-color: #0D2192;
border: 2px solid;
border-radius: 3px 3px 0px 0px;
/* right-align */
display: block;
margin-left: auto;
margin-right: 0;
margin-bottom: -2px;
padding: 3px 8px;
font-size: 0.8em;
}
.copy-code-button:hover {
cursor: pointer;
background-color: #F2F2F2;
}
.copy-code-button:focus {
/* Avoid an ugly focus outline on click in Chrome,
but darken the button for accessibility.
See https://stackoverflow.com/a/25298082/1481479 */
background-color: #E6E6E6;
outline: 0;
}
.copy-code-button:active {
background-color: #D9D9D9;
}
.highlight pre {
/* Avoid pushing up the copy buttons. */
margin: 0;
}
.td-content {
// Highlighted code.
.highlight {
@extend .card;
margin: 2rem 0;
padding: 0;
margin: 0rem 0;
padding: 0rem;
max-width: 80%;
max-width: 100%;
pre {
margin: 0;
@ -37,7 +76,8 @@
word-wrap: normal;
background-color: $gray-100;
padding: $spacer;
max-width: 100%;
> code {
background-color: inherit !important;

View File

@ -93,9 +93,9 @@
@include media-breakpoint-up(md) {
padding-top: 4rem;
background-color: $td-sidebar-bg-color;
padding-right: 1rem;
padding-right: .5rem;
padding-left: .5rem;
border-right: 1px solid $td-sidebar-border-color;
min-width: 18rem;
}

View File

@ -1,15 +1,14 @@
# Site Configuration
baseURL = "https://docs.dapr.io/"
baseURL = "https://v1-0.docs.dapr.io/"
title = "Dapr Docs"
theme = "docsy"
disableFastRender = true
enableRobotsTXT = true
enableGitInfo = true
# Language Configuration
languageCode = "en-us"
contentDir = "content/en"
defaultContentLanguage = "en"
# Disable categories & tags
disableKinds = ["taxonomy", "term"]
@ -18,6 +17,36 @@ disableKinds = ["taxonomy", "term"]
[services.googleAnalytics]
id = "UA-149338238-3"
# Mounts
[module]
[[module.mounts]]
source = "content/en"
target = "content"
[[module.mounts]]
source = "static"
target = "static"
[[module.mounts]]
source = "layouts"
target = "layouts"
[[module.mounts]]
source = "data"
target = "data"
[[module.mounts]]
source = "assets"
target = "assets"
[[module.mounts]]
source = "archetypes"
target = "archetypes"
[[module.mounts]]
source = "../sdkdocs/python/daprdocs/content/en/python-sdk-docs"
target = "content/developing-applications/sdks/python"
[[module.mounts]]
source = "../sdkdocs/python/daprdocs/content/en/python-sdk-contributing"
target = "content/contributing/"
[[module.mounts]]
source = "../sdkdocs/php/daprdocs/content/en/php-sdk-docs"
target = "content/developing-applications/sdks/php"
# Markdown Engine - Allow inline html
[markup]
[markup.goldmark]
@ -26,25 +55,25 @@ id = "UA-149338238-3"
# Top Nav Bar
[[menu.main]]
name = "Home"
name = "Homepage"
weight = 40
url = "https://dapr.io"
[[menu.main]]
name = "About"
name = "GitHub"
weight = 50
url = "https://dapr.io/#about"
[[menu.main]]
name = "Download"
weight = 60
url = "https://dapr.io/#download"
url = "https://github.com/dapr"
[[menu.main]]
name = "Blog"
weight = 70
weight = 60
url = "https://blog.dapr.io/posts"
[[menu.main]]
name = "Discord"
weight = 70
url = "https://aka.ms/dapr-discord"
[[menu.main]]
name = "Community"
weight = 80
url = "https://dapr.io/#community"
url = "https://github.com/dapr/community/blob/master/README.md"
[params]
copyright = "Dapr"
@ -58,16 +87,19 @@ offlineSearch = false
github_repo = "https://github.com/dapr/docs"
github_project_repo = "https://github.com/dapr/dapr"
github_subdir = "daprdocs"
github_branch = "website"
github_branch = "v1.0"
# Versioning
version_menu = "Releases"
version = "v0.11"
version_menu = "v1.0 (latest)"
version = "v1.0"
archived_version = false
[[params.versions]]
version = "v0.11"
version = "v1.0 (latest)"
url = "#"
[[params.versions]]
version = "v0.11"
url = "https://v0-11.docs.dapr.io"
[[params.versions]]
version = "v0.10"
url = "https://github.com/dapr/docs/tree/v0.10.0"

View File

@ -1,7 +1,121 @@
---
type: docs
no_list: true
---
# <img src="/images/home-title.png" alt="Dapr Docs" width=400>
Welcome to the Dapr documentation site!
Welcome to the Dapr documentation site!
### Sections
<div class="card-deck">
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Concepts</b></h5>
<p class="card-text">Learn about Dapr, including its main features and capabilities.</p>
<a href="{{< ref concepts >}}" class="stretched-link"></a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Getting started</b></h5>
<p class="card-text">How to get up and running with Dapr in your environment in minutes.</p>
<a href="{{< ref getting-started >}}" class="stretched-link"></a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Developing applications</b></h5>
<p class="card-text">Tools, tips, and information on how to build your application with Dapr.</p>
<a href="{{< ref developing-applications >}}" class="stretched-link"></a>
</div>
</div>
</div>
<br />
<div class="card-deck">
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Operations</b></h5>
<p class="card-text">Hosting options, best-practices, and other guides and running your application on Dapr.</p>
<a href="{{< ref operations >}}" class="stretched-link"></a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Reference</b></h5>
<p class="card-text">Detailed documentation on the Dapr API, CLI, bindings and more.</p>
<a href="{{< ref reference >}}" class="stretched-link"></a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Contributing</b></h5>
<p class="card-text">How to contribute to the Dapr project and the various repositories.</p>
<a href="{{< ref contributing >}}" class="stretched-link"></a>
</div>
</div>
</div>
### Tooling
<div class="media">
<a class="pr-1" href="{{< ref ides >}}">
<img class="mr-3" src="/images/homepage/vscode.svg" alt="Visual studio code icon" width=40>
</a>
<div class="media-body">
<h5 class="mt-0"><b>IDE Integrations</b></h5>
<p>Learn how to get up and running with Dapr in your preferred integrated development environment.</p>
</div>
</div>
<div class="media">
<a class="pr-1" href="{{< ref sdks >}}">
<img class="mr-3" src="/images/homepage/code.svg" alt="Code icon" width=40>
</a>
<div class="media-body">
<h5 class="mt-0"><b>Language SDKs</b></h5>
<p>Create Dapr applications in your preferred language using the Dapr SDKs.</p>
<div class="media mt-3">
<a class="pr-3" href="{{< ref sdks >}}">
<img src="/images/homepage/dotnet.png" alt=".NET logo" width=30>
</a>
<div class="media-body">
<h5 class="mt-0"><b>.NET</b></h5>
</div>
</div>
<div class="media mt-3">
<a class="pr-3" href="{{< ref python >}}">
<img src="/images/homepage/python.png" alt="Python logo" width=30>
</a>
<div class="media-body">
<h5 class="mt-0"><b>Python</b></h5>
</div>
</div>
<div class="media mt-3">
<a class="pr-4" href="{{< ref sdks >}}">
<img src="/images/homepage/java.png" alt="Java logo" width=20>
</a>
<div class="media-body">
<h5 class="mt-0"><b>Java</b></h5>
</div>
</div>
<div class="media mt-3">
<a class="pr-4" href="{{< ref sdks >}}">
<img src="/images/homepage/golang.svg" alt="Go logo" width=30>
</a>
<div class="media-body">
<h5 class="mt-0"><b>Go</b></h5>
</div>
</div>
<div class="media mt-3">
<a class="pr-4" href="{{< ref sdks >}}">
<img src="/images/homepage/php.png" alt="PHP logo" width=30>
</a>
<div class="media-body">
<h5 class="mt-0"><b>PHP</b></h5>
</div>
</div>
</div>
</div>
<br />

View File

@ -20,7 +20,6 @@ Dapr uses a modular design where functionality is delivered as a component. Each
* [Service discovery name resolution](https://github.com/dapr/components-contrib/tree/master/nameresolution)
* [Secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores)
* [State](https://github.com/dapr/components-contrib/tree/master/state)
* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters)
### Service invocation and service discovery components
Service discovery components are used with the [service invocation]({{<ref "service-invocation-overview.md">}}) building block to integrate with the hosting environment and provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, while self-hosted mode uses mDNS.
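To make this concrete, here is a minimal sketch of calling another service through the sidecar's invoke endpoint, letting the name resolution component locate the target. The default port 3500 and the `orders` app-id are assumptions for illustration:
```go
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // The sidecar resolves the "orders" app-id via the configured name
    // resolution component (Kubernetes DNS, mDNS, ...) and forwards the call.
    resp, err := http.Get("http://localhost:3500/v1.0/invoke/orders/method/status")
    if err != nil {
        fmt.Println("invoke failed:", err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
```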

View File

@ -50,7 +50,7 @@ The Dapr runtime SDKs have language specific actor frameworks. The .NET SDK for
### Does Dapr have any SDKs if I want to work with a particular programming language or framework?
To make using Dapr more natural for different languages, it includes language specific SDKs for Go, Java, JavaScript, .NET, Python, RUST and C++.
To make using Dapr more natural for different languages, it includes [language specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++.
These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed, language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.

View File

@ -12,7 +12,7 @@ Dapr allows custom processing pipelines to be defined by chaining a series of mi
## Customize processing pipeline
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware]({{< ref tracing.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware]({{< ref tracing-overview.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.
> **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test/verify if your custom pipeline is in place.
@ -33,34 +33,7 @@ spec:
type: middleware.http.uppercase
```
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a **fasthttp.RequestHandler**:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func GetHandler(metadata Metadata) func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
    return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
        return func(ctx *fasthttp.RequestCtx) {
            // inbound logic
            h(ctx) // call the downstream handler
            // outbound logic
        }
    }
}
```
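Putting the interface and handler together, here is a minimal sketch of a complete middleware. The `Metadata` type and the header name are illustrative assumptions for this example, not part of the Dapr contract:
```go
package mymiddleware

import "github.com/valyala/fasthttp"

// Metadata is assumed to carry the component configuration.
type Metadata struct {
    Properties map[string]string
}

// GetHandler returns a wrapper that stamps each response with a header.
func GetHandler(metadata Metadata) func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
    return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
        return func(ctx *fasthttp.RequestCtx) {
            h(ctx) // inbound: nothing to do here; call the downstream handler
            // outbound: annotate the response
            ctx.Response.Header.Set("X-Processed-By", "mymiddleware")
        }
    }
}
```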
## Adding new middleware components
Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).
Then submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **Load()** method in [registry.go](https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go) to register your middleware using the **Register** method.
## Next steps
* [Middleware overview]({{< ref middleware-overview.md >}})
* [How-To: Configure API authorization with OAuth]({{< ref oauth.md >}})

View File

@ -4,60 +4,41 @@ title: "Observability"
linkTitle: "Observability"
weight: 500
description: >
How to monitor applications through tracing, metrics, logs and health
Monitor applications through tracing, metrics, logs and health
---
Observability is a term from control theory. Observability means you can answer questions about what's happening on the inside of a system by observing the outside of the system, without having to ship new code to answer new questions. Observability is critical in production environments and services to debug, operate and monitor Dapr system services, components and user applications.
When building an application, understanding how the system is behaving is an important part of operating it - this includes having the ability to observe the internal calls of an application, gauge its performance and become aware of problems as soon as they occur. This is challenging for any system, but even more so for a distributed system composed of multiple microservices, where a flow made of several calls may start in one microservice but continue in another.
The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas:
While some data points about an application can be gathered from the underlying infrastructure (e.g. memory consumption, CPU usage), other meaningful information must be collected from an "application aware" layer - one that can show how an important series of calls is executed across microservices. This usually means a developer must add some code to instrument an application for this purpose. Often, instrumentation code is simply meant to send collected data such as traces and metrics to an external monitoring tool or service that can help store, visualize and analyze all this information.
## Distributed tracing
Having to maintain this code, which is not part of the core logic of the application, is another burden on the developer, sometimes requiring an understanding of the monitoring tools' APIs, the use of additional SDKs, etc. This instrumentation may also add to the portability challenges of an application, which may require different instrumentation depending on where it is deployed. For example, different cloud providers offer different monitoring solutions and an on-prem deployment might require an on-prem solution.
[Distributed tracing]({{<ref "tracing.md">}}) is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.
## Observability for your application with Dapr
When building an application which leverages Dapr building blocks to perform service-to-service calls and pub/sub messaging, Dapr offers an advantage with respect to [distributed tracing]({{<ref tracing>}}): because this inter-service communication flows through the Dapr sidecar, the sidecar is in a unique position to offload the burden of application-level instrumentation.
You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, node, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.
### Distributed tracing
Dapr can be [configured to emit tracing data]({{<ref setup-tracing.md>}}), and because Dapr does so using widely adopted protocols such as the [Zipkin](https://zipkin.io) protocol, it can be easily integrated with multiple [monitoring backends]({{<ref supported-tracing-backends>}}).
Dapr uses [W3C tracing context for distributed tracing]({{<ref w3c-tracing>}})
<img src="/images/observability-tracing.png" width=1000 alt="Distributed tracing with Dapr">
It is generally recommended to run Dapr in production with tracing.
### OpenTelemetry collector
Dapr can also be configured to work with the [OpenTelemetry Collector]({{<ref open-telemetry-collector>}}) which offers even more compatibility with external monitoring tools.
### Open Telemetry
<img src="/images/observability-opentelemetry-collector.png" width=1000 alt="Distributed tracing via OpenTelemetry collector">
Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for tracing, metrics and logs. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.
### Tracing context
Dapr uses the [W3C tracing]({{<ref w3c-tracing>}}) specification for tracing context, and can generate and propagate the context header itself or propagate user-provided context headers.
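For instance, here is a sketch of passing your own trace context on a Dapr call (it assumes the standard W3C `traceparent` header and a sidecar on the default port 3500):
```go
package main

import (
    "fmt"
    "net/http"
)

func main() {
    req, _ := http.NewRequest("GET", "http://localhost:3500/v1.0/invoke/orders/method/status", nil)
    // Supply a W3C trace context header; Dapr propagates it downstream.
    req.Header.Set("traceparent", "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        fmt.Println("call failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.StatusCode)
}
```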
#### Next steps
## Observability for the Dapr sidecar and system services
As with other parts of your system, you will want to be able to observe Dapr itself and collect metrics and logs emitted by the Dapr sidecar that runs alongside each microservice, as well as the Dapr-related services in your environment, such as the control plane services that are deployed for a Dapr-enabled Kubernetes cluster.
- [How-To: Set up Zipkin]({{< ref zipkin.md >}})
- [How-To: Set up Application Insights with Open Telemetry Collector]({{< ref open-telemetry-collector.md >}})
<img src="/images/observability-sidecar.png" width=1000 alt="Dapr sidecar metrics, logs and health checks">
## Metrics
### Logging
Dapr generates [logs]({{<ref "logs.md">}}) to provide visibility into sidecar operation and to help users identify issues and perform debugging. Log events contain warning, error, info, and debug messages produced by Dapr system services. Dapr can also be configured to send logs to collectors such as [Fluentd]({{< ref fluentd.md >}}) and [Azure Monitor]({{< ref azure-monitor.md >}}) so they can be easily searched and analyzed to provide insights.
[Metrics]({{<ref "metrics.md">}}) are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps.
### Metrics
Metrics are the series of measured values and counts that are collected and stored over time. [Dapr metrics]({{<ref "metrics">}}) provide monitoring capabilities to understand the behavior of the Dapr sidecar and system services. For example, the metrics between a Dapr sidecar and the user application show call latency, traffic failures, error rates of requests etc. Dapr [system services metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) show sidecar injection failures, health of the system services including CPU usage, number of actor placements made etc.
For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests etc.
Dapr system services metrics show side car injection failures, health of the system services including CPU usage, number of actor placement made etc.
#### Next steps
- [How-To: Set up Prometheus and Grafana]({{< ref prometheus.md >}})
- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
## Logs
[Logs]({{<ref "logs.md">}}) are records of events that occur and can be used to determine failures or another status.
Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
#### Next steps
- [How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes]({{< ref fluentd.md >}})
- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
## Health
Dapr provides a way for a hosting platform to determine its [Health]({{<ref "sidecar-health.md">}}) using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness and action taken accordingly.
#### Next steps
- [Health API]({{< ref health_api.md >}})
### Health checks
The Dapr sidecar exposes an HTTP endpoint for [health checks]({{<ref sidecar-health.md>}}). With this API, user code or hosting environments can probe the Dapr sidecar to determine its status and identify issues with sidecar readiness.
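For illustration, a minimal sketch of probing the sidecar from user code, assuming the default HTTP port 3500 and the `/v1.0/healthz` endpoint from the [Health API]({{< ref health_api.md >}}):
```go
package main

import (
    "fmt"
    "net/http"
)

func main() {
    // A healthy sidecar answers 204 No Content on its health endpoint.
    resp, err := http.Get("http://localhost:3500/v1.0/healthz")
    if err != nil {
        fmt.Println("sidecar unreachable:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("sidecar health status:", resp.StatusCode)
}
```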

View File

@ -7,7 +7,9 @@ description: >
Introduction to the Distributed Application Runtime
---
Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
{{< youtube 9o9iDAgYBA8 >}}
## Any language, any framework, anywhere
@ -60,7 +62,7 @@ In container hosting environments such as Kubernetes, Dapr runs as a side-car co
## Developer language SDKs and frameworks
To make using Dapr more natural for different languages, it also includes language specific SDKs for Go, Java, JavaScript, .NET and Python. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed, language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of their choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
To make using Dapr more natural for different languages, it also includes [language specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, PHP and Python. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed, language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
### SDKs
@ -71,6 +73,7 @@ To make using Dapr more natural for different languages, it also includes langua
- **[Python SDK](https://github.com/dapr/python-sdk)**
- **[RUST SDK](https://github.com/dapr/rust-sdk)**
- **[.NET SDK](https://github.com/dapr/dotnet-sdk)**
- **[PHP SDK](https://github.com/dapr/php-sdk)**
> Note: Dapr is language agnostic and provides a [RESTful HTTP API]({{< ref api >}}) in addition to the protobuf clients.
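As a quick illustration of that HTTP API, here is a sketch of saving state with a plain HTTP call and no SDK. The default port 3500 and the state store component name `statestore` are assumptions:
```go
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    // Save a key/value pair through the sidecar's state API.
    body := bytes.NewBufferString(`[{"key": "order-1", "value": {"qty": 3}}]`)
    resp, err := http.Post("http://localhost:3500/v1.0/state/statestore", "application/json", body)
    if err != nil {
        fmt.Println("save failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.StatusCode) // expect 204 No Content on success
}
```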
@ -84,6 +87,8 @@ Dapr can be used from any developer framework. Here are some that have been int
Dapr integrates easily with Python [Flask](https://pypi.org/project/Flask/) and node [Express](http://expressjs.com/). See examples in the [Dapr quickstarts](https://github.com/dapr/quickstarts).
With the Dapr [PHP-SDK](https://github.com/dapr/php-sdk) you can serve your app with Apache, Nginx, or Caddy.
#### Actors
Dapr SDKs support for [virtual actors]({{< ref actors >}}) which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications.
@ -118,4 +123,4 @@ The `dapr-sentry` service is a certificate authority that enables mutual TLS bet
<img src="/images/overview_kubernetes.png" width=800>
Deploying and running a Dapr enabled application into your Kubernetes cluster is a simple as adding a few annotations to the deployment schemes. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. Try this out with the [Kubernetes quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes).
Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. Try this out with the [Kubernetes quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes).

View File

@ -110,6 +110,22 @@ Threat modeling is a process by which potential threats, such as structural vuln
## Security audit
### February 2021
In February 2021, Dapr underwent a second security audit by Cure53, targeting its 1.0 release.
The test focused on the following:
* Dapr runtime code base evaluation since last audit
* Access control lists
* Secrets management
* Penetration testing
* Validating fixes for previous high/medium issues
The full report can be found [here](/docs/Dapr-february-2021-security-audit-report.pdf).
One high issue was detected and fixed during the test.
As of February 16th 2021, Dapr has 0 criticals, 0 highs, 0 mediums, 2 lows, 2 infos.
### June 2020
In June 2020, Dapr underwent a security audit from Cure53, a CNCF-approved cybersecurity firm.
@ -129,5 +145,3 @@ The test focused on the following:
The full report can be found [here](/docs/Dapr-july-2020-security-audit-report.pdf).
Two issues, one critical and one high, were fixed during the test.
As of July 21st 2020, Dapr has 0 criticals, 2 highs, 2 mediums, 1 low, 1 info.

View File

@ -18,6 +18,14 @@ Fork the [docs repository](https://github.com/dapr/docs) to work on any changes
Follow the instructions in the repository [README.md](https://github.com/dapr/docs/blob/master/README.md#environment-setup) to install Hugo locally and build the docs website.
## Branch guidance
The Dapr docs repo handles branching differently than most code repositories. Instead of having a `master` or `main` branch, every branch is labeled to match the major and minor version of a runtime release. For the full list, visit the [Docs repo](https://github.com/dapr/docs#branch-guidance)
In general, all updates should go into the docs branch for the latest release of Dapr. You can find this directly at https://github.com/dapr/docs, as the latest release will be the default branch. For any docs changes that are applicable to a release candidate or a pre-release version of the docs, make your changes in that particular branch.
For example, if you are fixing a typo, adding notes, or clarifying a point, make your changes in the default Dapr branch. If you are documenting an upcoming change to a component or the runtime, make your changes in the pre-release branch. Branches can be found in the [Docs repo](https://github.com/dapr/docs#branch-guidance)
## Style and tone
These conventions should be followed throughout all Dapr documentation to ensure a consistent experience across all docs.
@ -26,11 +34,12 @@ These conventions should be followed throughout all Dapr documentation to ensure
- **Use simple sentences** - Easy-to-read sentences mean the reader can quickly use the guidance you share.
- **Avoid the first person** - Use 2nd person "you", "your" instead of "I", "we", "our".
- **Assume a new developer audience** - Some obvious steps can seem hard. E.g. "Now set an environment variable Dapr to a value X". It is better to give the reader the explicit command to do this, rather than having them figure it out.
- **Use present tense** - Avoid sentences like "this command will install redis", which implies the action is in the future. Instead use "This command installs redis" which is in the present tense.
## Contributing a new docs page
- Make sure the documentation you are writing is in the correct place in the hierarchy.
- Avoid creating new sections where possible; there is a good chance a proper place in the docs hierarchy already exists.
- Make sure to include a complete [Hugo front-matter](front-matter).
- Make sure to include a complete [Hugo front-matter](#front-matter).
### Contributing a new concept doc
- Ensure the reader can understand why they should care about this feature. What problems does it help them solve?
@ -104,8 +113,19 @@ This shortcode will link to a specific page:
```md
{{</* ref "page.md" */>}}
```
> Note that all pages and folders need to have globally unique names in order for the ref shortcode to work properly. If there are duplicate names the build will break and an error will be thrown.
> Note that all pages and folders need to have globally unique names in order for the ref shortcode to work properly.
#### Referencing sections in other pages
To reference a specific section in another page, add `#section-short-name` to the end of your reference.
As a general rule, the section short name is the text of the section title, all lowercase, with spaces changed to "-". You can check the section short name by visiting the website page, clicking the link icon (🔗) next to the section, and seeing how the URL renders in the address bar. The content after the "#" is your section short name.
As an example, for this specific section the complete reference to the page and section would be:
```md
{{</* ref "contributing-docs.md#referencing-sections-in-other-pages" */>}}
```
### Images
The markdown spec used by Docsy and Hugo does not give an option to resize images using markdown notation. Instead, raw HTML is used.

View File

@ -1,104 +0,0 @@
---
type: docs
title: "Introduction to actors"
linkTitle: "Actors background"
weight: 20
description: Learn more about the actor pattern
---
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes **actors** as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying **runtime** manages how, when and where each actor runs, and also routes messages between actors.
A large number of actors can execute simultaneously, and actors execute independently from each other.
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/en-us/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
## Quick links
- [Dapr Actor Features]({{< ref actors-overview.md >}})
- [Dapr Actor API Spec]({{< ref actors_api.md >}} )
### When to use actors
As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve.
The actor design pattern can be a good fit to a number of distributed systems problems and scenarios, but the first thing you should consider are the constraints of the pattern. Generally speaking, consider the actor pattern to model your problem or scenario if:
* Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
* You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
* Your actor instances won't block callers with unpredictable delays by issuing I/O operations.
## Actors in Dapr
Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.
<img src="/images/actor_background_game_example.png" width=400>
## Actor lifetime
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actors runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr Actors runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
Invocation of actor methods and reminders resets the idle time, e.g. a reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive; if fired for an inactive actor, it activates the actor first. Actor timers do not reset the idle time, so timer firing will not keep the actor active. Timers only fire while the actor is active.
The idle timeout and scan interval the Dapr runtime uses to see if an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again causes a new actor object to be constructed. An actor's state outlives the object's lifetime, as state is stored in the configured state provider for the Dapr runtime.
## Distribution and failover
To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
### Actor placement service
The Dapr actor runtime manages the distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create, and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment, and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.
<img src="/images/actors_background_placement_service_registration.png" width=600>
When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.
<img src="/images/actors_background_id_hashing_calling.png" width=600>
This simplifies some choices but also carries some considerations:
* By default, actors are randomly placed into pods resulting in uniform distribution.
* Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.
Note: The Dapr actor Placement service is only used for actor placement and therefore is not needed if your services are not using Dapr actors. The Placement service can run in all hosting environments, for example self-hosted or Kubernetes.
## Actor communication
You can interact with Dapr to invoke an actor method by calling the HTTP/gRPC endpoint:
```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
```
You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor call.
Refer to [Dapr Actor Features]({{< ref actors-overview.md >}}) for more details.
### Concurrency
The Dapr Actors runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.
<img src="/images/actors_background_communication.png" width=600>
### Turn-based access
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr Actors runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
<img src="/images/actors_background_concurrency.png" width=600>

View File

@ -1,137 +1,102 @@
---
type: docs
title: "Dapr actors overview"
title: "Actors overview"
linkTitle: "Overview"
weight: 10
description: Overview of Dapr support for actors
description: Overview of the actors building block
aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---
The Dapr actors runtime provides support for [virtual actors]({{< ref actors-background.md >}}) through the following capabilities:
## Introduction
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes actors as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
## Actor method invocation
While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying runtime manages how, when and where each actor runs, and also routes messages between actors.
You can interact with Dapr to invoke an actor method by calling the HTTP/gRPC endpoint
A large number of actors can execute simultaneously, and actors execute independently from each other.
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/en-us/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
### When to use actors
As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve.
The actor design pattern can be a good fit to a number of distributed systems problems and scenarios, but the first thing you should consider are the constraints of the pattern. Generally speaking, consider the actor pattern to model your problem or scenario if:
* Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
* You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
* Your actor instances won't block callers with unpredictable delays by issuing I/O operations.
## Actors in Dapr
Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.
<img src="/images/actor_background_game_example.png" width=400>
## Actor lifetime
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actors runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr Actors runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
Invocation of actor methods and reminders resets the idle time, e.g. a reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive; if fired for an inactive actor, it activates the actor first. Actor timers do not reset the idle time, so timer firing will not keep the actor active. Timers only fire while the actor is active.
The idle timeout and scan interval the Dapr runtime uses to see if an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again causes a new actor object to be constructed. An actor's state outlives the object's lifetime, as state is stored in the configured state provider for the Dapr runtime.
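As a sketch of how those garbage-collection settings can be supplied, an actor host can return them when the runtime asks for its supported actor types. The endpoint path and field names below reflect the Dapr actor configuration call, but treat the exact schema as an assumption to verify against the runtime reference:
```go
package main

import (
    "encoding/json"
    "net/http"
)

func main() {
    // The Dapr runtime calls GET /dapr/config on the app to learn which
    // actor types it hosts and how idle actors should be garbage-collected.
    http.HandleFunc("/dapr/config", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]interface{}{
            "entities":          []string{"calculator"}, // supported actor types
            "actorIdleTimeout":  "1h",                   // collect actors idle this long
            "actorScanInterval": "30s",                  // how often to scan for idle actors
        })
    })
    http.ListenAndServe(":3000", nil)
}
```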
## Distribution and failover
To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
### Actor placement service
The Dapr actor runtime manages the distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create, and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment, and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.
<img src="/images/actors_background_placement_service_registration.png" width=600>
When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.
<img src="/images/actors_background_id_hashing_calling.png" width=600>
This simplifies some choices, but also carries some considerations:
* By default, actors are randomly placed into pods resulting in uniform distribution.
* Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.
Note: The Dapr actor Placement service is only used for actor placement and therefore is not needed if your services are not using Dapr actors. The Placement service can run in all [hosting environments]({{< ref hosting >}}), including self-hosted and Kubernetes.
## Actor communication
You can interact with Dapr to invoke actor methods, or to manage actor state, timers, and reminders, by calling the HTTP/gRPC endpoints:
```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
```
You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor method call.
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) and to [Dapr Actor Features]({{< ref actors-overview.md >}}) for more details.
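For example, a minimal sketch with curl, assuming a sidecar on port 3500 and a hypothetical `MyActorType` actor exposing a `GetCount` method:
```bash
# Invoke the (hypothetical) GetCount method on actor "actor-123";
# the request body is the method input, the response body its output.
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/actor-123/method/GetCount \
  -H "Content-Type: application/json" \
  -d '{}'
```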
<img src="/images/actors_background_communication.png" width=600>

## Actor state management
Actors can save state reliably using the state management capability. You can interact with Dapr through HTTP/gRPC endpoints for state management.

To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface:

- Redis
- MongoDB
- PostgreSQL
- SQL Server
- Azure CosmosDB
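As an illustrative sketch of the HTTP surface for actor state (the actor type, id, and key name are hypothetical):
```bash
# Read the "count" key from the state of actor "actor-123" of type
# MyActorType through the local sidecar:
curl http://localhost:3500/v1.0/actors/MyActorType/actor-123/state/count
```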
### Concurrency
The Dapr Actors runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems, as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special consideration for the single-threaded access nature of each actor instance.

A single actor instance cannot process more than one request at a time. An actor instance can become a throughput bottleneck if it is expected to handle concurrent requests.

Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.

### Turn-based access
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr Actors runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished once the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.

The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.

The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.

<img src="/images/actors_background_concurrency.png" width=600>

## Actor timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.

### Actor timers
You can register a callback on an actor to be executed based on a timer.

The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.

The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.

The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.

All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
You can create a timer for an actor by making an HTTP/gRPC request to Dapr:
```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
The timer `dueTime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
"dueTime":"0h0m9s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a timer with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.
```json
{
"dueTime":"0h0m0s0ms",
"period":"0h0m3s0ms"
}
```
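Putting these together, here is a hedged sketch of registering the first timer above over HTTP (the actor type, id, and timer name are hypothetical):
```bash
# Register a timer named "checkTimer" on actor "actor-123": it first fires
# after 9 seconds, then every 3 seconds, until removed or deactivation.
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/actor-123/timers/checkTimer \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m9s0ms", "period": "0h0m3s0ms" }'
```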
You can remove the actor timer by calling
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers, but unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers, because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr:
```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
The reminder `dueTime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.
The following request body configures a reminder with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
"dueTime":"0h0m9s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.
```json
{
"dueTime":"0h0m0s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a reminder with a `dueTime` of 15 seconds and an empty `period`. This means it will first fire after 15 seconds, then never fire again.
```json
{
"dueTime":"0h0m15s0ms",
"period":""
}
```
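For instance, a sketch of registering the one-shot reminder above (the actor type, id, and reminder name are hypothetical):
```bash
# Register a reminder named "expire" on actor "actor-123" that fires once
# after 15 seconds; reminders survive deactivation and failover.
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/actor-123/reminders/expire \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m15s0ms", "period": "" }'
```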
#### Retrieve actor reminder
You can retrieve the actor reminder by calling
```http
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### Remove the actor reminder
You can remove the actor reminder by calling
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.

View File

@ -0,0 +1,131 @@
---
type: docs
title: "How-to: Use virtual actors in Dapr"
linkTitle: "How-To: Virtual actors"
weight: 20
description: Learn more about the actor pattern
---
The Dapr actors runtime provides support for [virtual actors]({{< ref actors-overview.md >}}) through the following capabilities:
## Actor method invocation
You can interact with Dapr to invoke actor methods by calling the HTTP/gRPC endpoint:
```http
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
```
You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor method call.
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
## Actor state management
Actors can save state reliably using state management capability.
You can interact with Dapr through HTTP/gRPC endpoints for state management.
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}).
## Actor timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.
### Actor timers
You can register a callback on an actor to be executed based on a timer.
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.
The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
You can create a timer for an actor by making an HTTP/gRPC request to Dapr:
```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
The timer `dueTime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
"dueTime":"0h0m9s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a timer with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.
```json
{
"dueTime":"0h0m0s0ms",
"period":"0h0m3s0ms"
}
```
You can remove the actor timer by calling
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers, but unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers, because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr:
```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
The reminder `dueTime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.
The following request body configures a reminder with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
"dueTime":"0h0m9s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.
```json
{
"dueTime":"0h0m0s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a reminder with a `dueTime` of 15 seconds and an empty `period`. This means it will first fire after 15 seconds, then never fire again.
```json
{
"dueTime":"0h0m15s0ms",
"period":""
}
```
#### Retrieve actor reminder
You can retrieve the actor reminder by calling
```http
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### Remove the actor reminder
You can remove the actor reminder by calling
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.

View File

@ -3,5 +3,5 @@ type: docs
title: "Bindings"
linkTitle: "Bindings"
weight: 40
description: Trigger code from and interface with a large array of external resources
description: Interface with or be triggered from external systems
---

View File

@ -3,7 +3,7 @@ type: docs
title: "Bindings overview"
linkTitle: "Overview"
weight: 100
description: Overview of the Dapr bindings building block
description: Overview of the bindings building block
---
## Introduction
@ -37,19 +37,18 @@ Read the [Create an event-driven app using input bindings]({{< ref howto-trigger
## Output bindings
Output bindings allow you to invoke external resources. An optional payload and metadata can be sent with the invocation request.
In order to invoke an output binding:
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload (see the sketch below)
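As a rough sketch of step 2, assuming a binding component named `myevent` has been defined (the name and payload here are illustrative):
```bash
# Invoke the output binding named "myevent" through the local sidecar.
# "operation" tells the binding what to do; "create" is a common operation.
curl -X POST http://localhost:3500/v1.0/bindings/myevent \
  -H "Content-Type: application/json" \
  -d '{ "data": { "status": "completed" }, "operation": "create" }'
```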
Read the [Use output bindings to interface with external resources]({{< ref howto-bindings.md >}}) page to get started with output bindings.
## Next Steps
* Follow these guides on:
* [How-To: Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
* [How-To: Use output bindings to interface with external resources]({{< ref howto-bindings.md >}})
* Try out the [bindings quickstart](https://github.com/dapr/quickstarts/tree/master/bindings/README.md) which shows how to bind to a Kafka queue
* Read the [bindings API specification]({{< ref bindings_api.md >}})

View File

@ -1,28 +1,35 @@
---
type: docs
title: "How-To: Use bindings to interface with external resources"
linkTitle: "How-To: Bindings"
description: "Invoke external systems with Dapr output bindings"
title: "How-To: Use output bindings to interface with external resources"
linkTitle: "How-To: Output bindings"
description: "Invoke external systems with output bindings"
weight: 300
---
Output bindings enable you to invoke external resources without taking dependencies on special SDK or libraries.
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## 1. Create a binding
An output binding represents a resource that Dapr uses to invoke and send messages to.
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref setup-bindings >}}).
Create a new binding component with the name of `myevent`.
Inside the `metadata` section, configure Kafka related properties such as the topic to publish the message to and the broker.
{{< tabs "Self-Hosted (CLI)" Kubernetes >}}
{{% codetab %}}
Create the following YAML file, named `binding.yaml`, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
@ -39,13 +46,36 @@ spec:
value: topic1
```
{{% /codetab %}}
{{% codetab %}}
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired binding component]({{< ref setup-bindings >}}) in the yaml below (in this case kafka), save as `binding.yaml`, and run `kubectl apply -f binding.yaml`.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myevent
  namespace: default
spec:
  type: bindings.kafka
  version: v1
  metadata:
  - name: brokers
    value: localhost:9092
  - name: publishTopic
    value: topic1
```
{{% /codetab %}}
{{< /tabs >}}
## 2. Send an event
All that's left now is to invoke the output bindings endpoint on a running Dapr instance.
You can do so using HTTP:
@ -57,8 +87,7 @@ As seen above, you invoked the `/binding` endpoint with the name of the binding
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
You'll also notice that there's an `operation` field that tells the binding what you need it to do.
You can check [here]({{< ref supported-bindings >}}) which operations are supported for every output binding.
## References

View File

@ -97,6 +97,7 @@ Event delivery guarantees are controlled by the binding implementation. Dependin
## References
* [Bindings building block]({{< ref bindings >}})
* [Bindings API]({{< ref bindings_api.md >}})
* [Components concept]({{< ref components-concept.md >}})
* [Supported bindings]({{< ref supported-bindings >}})

View File

@ -6,4 +6,4 @@ weight: 60
description: See and measure the message calls across components and networked services
---
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability-concept >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).

View File

@ -6,7 +6,7 @@ weight: 1000
description: "Use Dapr tracing to get visibility for distributed application"
---
Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due to the ubiquity of the Zipkin protocol, many backends are supported out of the box, for example [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io), [New Relic](https://newrelic.com) and others. Combined with the OpenTelemetry Collector, Dapr can export traces to many other backends including but not limited to [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
<img src="/images/tracing.png" width=600>
@ -14,10 +14,10 @@ Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces
Dapr adds a HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts all Dapr and application traffic and automatically injects correlation IDs to trace distributed transactions. This design has several benefits:
* No need for code instrumentation. All traffic is automatically traced with configurable tracing levels.
* Consistent tracing behavior across microservices. Tracing is configured and managed on Dapr sidecar so that it remains consistent across services made by different teams and potentially written in different programming languages.
* Configurable and extensible. By leveraging the Zipkin API and the OpenTelemetry Collector, Dapr tracing can be configured to work with popular tracing backends, including custom backends a customer may have.
* You can define and enable multiple exporters at the same time.
## W3C Correlation ID
@ -27,9 +27,9 @@ Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W
## Configuration
Dapr uses probabilistic sampling. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
To change the default tracing behavior, use a configuration file (in self-hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled), and sends traces using the Zipkin protocol to the Zipkin server at http://zipkin.default.svc.cluster.local:
```yaml
apiVersion: dapr.io/v1alpha1
@ -40,30 +40,14 @@ metadata:
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
Note: Changing `samplingRate` to 0 disables tracing altogether.
See the [References](#references) section for more details on how to configure tracing in local and Kubernetes environments.
## References
- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})

View File

@ -10,7 +10,7 @@ type: docs
# How to use trace context
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C distributed tracing]({{< ref w3c-tracing >}}) article to understand whether you need to propagate or create a trace context.
To view traces, read the [how to diagnose with tracing]({{< ref tracing-overview.md >}}) article.
## How to retrieve trace context from a response
`Note: There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use http/gRPC clients to propagate and retrieve trace headers through http headers and gRPC metadata.`

View File

@ -12,9 +12,19 @@ Pub/Sub is a common pattern in a distributed system with many services that want
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.
Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
Dapr provides components for pub/sub, that enable operators to use their preferred infrastructure, for example Redis Streams, Kafka, etc.
## Content Types
When publishing a message, it's important to specify the content type of the data being sent.
Unless specified, Dapr will assume `text/plain`. When using Dapr's HTTP API, the content type can be set in a `Content-Type` header.
gRPC clients and SDKs have a dedicated content type parameter.
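For example, a sketch of publishing JSON with an explicit content type over HTTP (assuming the default `pubsub` component and a sidecar on port 3500):
```bash
# The Content-Type header becomes the "datacontenttype" of the resulting
# CloudEvent; without it, Dapr assumes text/plain.
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus \
  -H "Content-Type: application/json" \
  -d '{ "status": "completed" }'
```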
## Step 1: Setup the Pub/Sub component
The following example creates applications to publish and subscribe to a topic called `deathStarStatus`.
<img src="/images/pubsub-publish-subscribe-example.png" width=1000>
<br></br>
The first step is to setup the Pub/Sub component:
@ -68,8 +78,14 @@ spec:
## Step 2: Subscribe to topics
Dapr allows two methods by which you can subscribe to topics:
- **Declaratively**, where subscriptions are defined in an external file.
- **Programmatically**, where subscriptions are defined in user code.
{{% alert title="Note" color="primary" %}}
Both declarative and programmatic approaches support the same features. The declarative approach removes the Dapr dependency from your code and allows, for example, existing applications to subscribe to topics, without having to change code. The programmatic approach implements the subscription in your code.
{{% /alert %}}
### Declarative subscriptions
@ -97,9 +113,9 @@ Set the component with:
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
{{% codetab %}}
Place the CRD in your `./components` directory. When Dapr starts up, it loads subscriptions along with components.
Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows.
You can also override the default directory by pointing the Dapr CLI to a components path:
@ -123,7 +139,7 @@ kubectl apply -f subscription.yaml
#### Example
{{< tabs Python Node PHP>}}
{{% codetab %}}
Create a file named `app1.py` and paste in the following:
@ -144,7 +160,7 @@ def ds_subscriber():
app.run()
```
After creating `app1.py` ensure flask and flask_cors are installed:
```bash
pip install flask
@ -183,19 +199,50 @@ dapr --app-id app2 --app-port 3000 run node app2.js
```
{{% /codetab %}}
{{% codetab %}}
Create a file named `app1.php` and paste in the following:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->post('/dsstatus', function(
#[\Dapr\Attributes\FromBody]
\Dapr\PubSub\CloudEvent $cloudEvent,
\Psr\Log\LoggerInterface $logger
) {
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
return ['status' => 'SUCCESS'];
}
);
$app->start();
```
After creating `app1.php`, and with the [SDK installed](https://github.com/dapr/php-sdk/blob/main/docs/getting-started.md),
go ahead and start the app:
```bash
dapr --app-id app1 --app-port 3000 run -- php -S 0.0.0.0:3000 app1.php
```
{{% /codetab %}}
{{< /tabs >}}
### Programmatic subscriptions
To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
The Dapr instance calls into your app at startup and expects a JSON response for the topic subscriptions (see the sketch below) with:
- `pubsubname`: Which pub/sub component Dapr should use.
- `topic`: Which topic to subscribe to.
- `route`: Which endpoint for Dapr to call on when a message comes to that topic.
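For illustration only, here is what that exchange could look like for a hypothetical app listening on port 3000 that subscribes to `deathStarStatus` via the `pubsub` component:
```bash
# Dapr issues this GET against your app at startup:
curl http://localhost:3000/dapr/subscribe
# Expected response shape (one object per subscription):
# [{"pubsubname": "pubsub", "topic": "deathStarStatus", "route": "/dsstatus"}]
```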
#### Example
{{< tabs Python Node PHP>}}
{{% codetab %}}
```python
@ -221,7 +268,7 @@ def ds_subscriber():
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
app.run()
```
After creating `app1.py` ensure flask and flask_cors are installed:
```bash
pip install flask
@ -268,27 +315,63 @@ dapr --app-id app2 --app-port 3000 run node app2.js
```
{{% /codetab %}}
{{% codetab %}}
Update `app1.php` with the following:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', route: '/dsstatus'),
]]));
$app->post('/dsstatus', function(
#[\Dapr\Attributes\FromBody]
\Dapr\PubSub\CloudEvent $cloudEvent,
\Psr\Log\LoggerInterface $logger
) {
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
return ['status' => 'SUCCESS'];
}
);
$app->start();
```
Run this app with:
```bash
dapr --app-id app1 --app-port 3000 run -- php -S 0.0.0.0:3000 app1.php
```
{{% /codetab %}}
{{< /tabs >}}
The `/dsstatus` endpoint matches the `route` defined in the subscriptions; this is where Dapr sends all topic messages.
## Step 3: Publish a topic
To publish to a topic, you need a running Dapr sidecar that uses the pub/sub Redis component. You can use the default Redis component installed into your local environment.
Start an instance of Dapr with an app-id called `testpubsub`:
```bash
dapr run --app-id testpubsub --dapr-http-port 3500
```
{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
Then publish a message to the `deathStarStatus` topic:
```bash
dapr publish --publish-app-id testpubsub --pubsub pubsub --topic deathStarStatus --data '{"status": "completed"}'
```
{{% /codetab %}}
{{% codetab %}}
Then publish a message to the `deathStarStatus` topic:
```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/json" -d '{"status": "completed"}'
@ -296,10 +379,6 @@ curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Conte
{{% /codetab %}}
{{% codetab %}}
Then publish a message to the `deathStarStatus` topic:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status": "completed"}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
@ -308,7 +387,7 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status":
{{< /tabs >}}
Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
## Step 4: ACK-ing a message
@ -337,7 +416,77 @@ app.post('/dsstatus', (req, res) => {
{{< /tabs >}}
## (Optional) Step 5: Publishing to a topic with code
{{< tabs Node PHP>}}
{{% codetab %}}
If you prefer to publish to a topic using code, here is an example.
```javascript
const express = require('express');
const path = require('path');
const request = require('request');
const bodyParser = require('body-parser');
const app = express();
app.use(bodyParser.json());
const daprPort = process.env.DAPR_HTTP_PORT || 3500;
const daprUrl = `http://localhost:${daprPort}/v1.0`;
const port = 8080;
const pubsubName = 'pubsub';
app.post('/publish', (req, res) => {
console.log("Publishing: ", req.body);
const publishUrl = `${daprUrl}/publish/${pubsubName}/deathStarStatus`;
request( { uri: publishUrl, method: 'POST', json: req.body } );
res.sendStatus(200);
});
app.listen(process.env.PORT || port, () => console.log(`Listening on port ${port}!`));
```
{{% /codetab %}}
{{% codetab %}}
If you prefer to publish to a topic using code, here is an example.
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\DI\FactoryInterface $factory, \Psr\Log\LoggerInterface $logger) {
$publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
$publisher->topic('deathStarStatus')->publish('operational');
$logger->alert('published!');
});
```
You can save this to `app2.php` and while `app1` is running in another terminal, execute:
```bash
dapr --app-id app2 run -- php app2.php
```
{{% /codetab %}}
{{< /tabs >}}
## Sending a custom CloudEvent
Dapr automatically takes the data sent on the publish request and wraps it in a CloudEvent 1.0 envelope.
If you want to use your own custom CloudEvent, make sure to specify the content type as `application/cloudevents+json`.
See info about content types [here](#Content-Types).
## Next steps
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- List of [pub/sub components]({{< ref setup-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})

View File

@ -0,0 +1,92 @@
---
type: docs
title: "Message Time-to-Live (TTL)"
linkTitle: "Message TTL"
weight: 6000
description: "Use time-to-live in Pub/Sub messages."
---
## Introduction
Dapr enables per-message time-to-live (TTL). This means that applications can set time-to-live per message, and subscribers do not receive those messages after expiration.
All Dapr [pub/sub components]({{< ref supported-pubsub >}}) are compatible with message TTL, as Dapr handles the TTL logic within the runtime. Simply set the `ttlInSeconds` metadata when publishing a message.
In some components, such as Kafka, time-to-live can be configured in the topic via `retention.ms` as per [documentation](https://kafka.apache.org/documentation/#topicconfigs_retention.ms). With message TTL in Dapr, applications using Kafka can now set time-to-live per message in addition to per topic.
## Native message TTL support
When message time-to-live has native support in the pub/sub component, Dapr simply forwards the time-to-live configuration without adding any extra logic, keeping the behavior predictable. This is helpful when expired messages are handled differently by the component. For example, Azure Service Bus stores expired messages in the dead-letter queue rather than simply deleting them.
### Supported components
#### Azure Service Bus
Azure Service Bus supports [entity-level time-to-live](https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-expiration). This means that messages have a default time-to-live but can also be set with a shorter timespan at publishing time. Dapr propagates the time-to-live metadata for the message and lets Azure Service Bus handle the expiration directly.
## Non-Dapr subscribers
If messages are consumed by subscribers not using Dapr, the expired messages are not automatically dropped, as expiration is handled by the Dapr runtime when a Dapr sidecar receives a message. However, subscribers can programmatically drop expired messages by adding logic to handle the `expiration` attribute in the cloud event, which follows the [RFC3339](https://tools.ietf.org/html/rfc3339) format.
When non-Dapr subscribers use components such as Azure Service Bus, which natively handle message TTL, they do not receive expired messages. Here, no extra logic is needed.
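For subscribers that do receive raw events, a hedged sketch of such a check in shell (this assumes `jq` is installed and the cloud event was saved to a hypothetical `event.json`; RFC3339 timestamps in UTC compare lexicographically):
```bash
# Drop the message if its "expiration" attribute lies in the past.
expiration=$(jq -r '.expiration // empty' event.json)
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
if [ -n "$expiration" ] && [ "$now" \> "$expiration" ]; then
  echo "Message expired; dropping."
fi
```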
## Example
Message TTL can be set in the metadata as part of the publishing request:
{{< tabs curl "Python SDK" "PHP SDK">}}
{{% codetab %}}
```bash
curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.ttlInSeconds=120 -H "Content-Type: application/json" -d '{"order-number": "345"}'
```
{{% /codetab %}}
{{% codetab %}}
```python
import json

from dapr.clients import DaprClient

with DaprClient() as d:
    req_data = {
        'order-number': '345'
    }
    # Create a typed message with content type and body,
    # setting a per-message TTL of 120 seconds via metadata
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic='TOPIC_A',
        data=json.dumps(req_data),
        metadata=(
            ('ttlInSeconds', '120'),
        )
    )
    # Print the request
    print(req_data, flush=True)
```
{{% /codetab %}}
{{% codetab %}}
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\DI\FactoryInterface $factory) {
$publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
$publisher->topic('TOPIC_A')->publish('data', ['ttlInSeconds' => '120']);
});
```
{{% /codetab %}}
{{< /tabs >}}
See [this guide]({{< ref pubsub_api.md >}}) for a reference on the pub/sub API.
## Related links
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})

View File

@ -3,48 +3,52 @@ type: docs
title: "Publish and subscribe overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of the Dapr Pub/Sub building block"
description: "Overview of the Pub/Sub building block"
---
## Introduction
The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows microservices to communicate with each other using messages. The **producer or publisher** sends messages to a **topic** without knowledge of what application will receive them. This involves writing them to an input channel. Similarly, a **consumer or subscriber** subscribes to the topic and receives its messages without any knowledge of what service produced these messages. This involves receiving messages from an output channel. An intermediary message broker is responsible for copying each message from an input channel to an output channel for all subscribers interested in that message. This pattern is especially useful when you need to decouple microservices from one another.
The publish/subscribe API in Dapr provides an at-least-once guarantee and integrates with various message brokers and queuing systems. The specific implementation used by your service is pluggable and configured as a Dapr pub/sub component at runtime. This approach removes the dependency from your service and, as a result, makes your service more portable and flexible to changes.
<img src="/images/pubsub-overview-pattern.png" width=1000>
<br></br>
The Dapr pub/sub building block provides a platform-agnostic API to send and receive messages. Your services publish messages to a named topic and also subscribe to a topic to consume the messages.
The service makes a network call to a Dapr pub/sub building block, exposed as a sidecar. This building block then makes calls into a Dapr pub/sub component that encapsulates a specific message broker product. To receive topics, Dapr subscribes to the Dapr pub/sub component on behalf of your service and delivers the messages to an endpoint when they arrive.
The diagram below shows an example of a "shipping" service and an "email" service that have both subscribed to topics that are published by the "cart" service. Each service loads pub/sub component configuration files that point to the same pub/sub message bus component, for example Redis Streams, NATS Streaming, Azure Service Bus, or GCP Pub/Sub.
<img src="/images/pubsub-overview-components.png" width=1000>
<br></br>
The diagram below has the same services, however this time showing the Dapr publish API that sends an "order" topic, and the order endpoints on the subscribing services to which Dapr delivers these topic messages.
<img src="/images/pubsub-overview-publish-API.png" width=1000>
<br></br>
## Features
The pub/sub building block provides several features to your application.
### Cloud Events message format
To enable message routing and to provide additional context with each message, Dapr uses the [CloudEvents 1.0 specification](https://github.com/cloudevents/spec/tree/v1.0) as its message format. Any message sent by an application to a topic using Dapr is automatically "wrapped" in a Cloud Events envelope, using `Content-Type` header value for `datacontenttype` attribute.
Dapr implements the following Cloud Events fields:
* `id`
* `source`
* `specversion`
* `type`
* `datacontenttype` (Optional)
The following example shows an XML content in CloudEvent v1.0 serialized as JSON:
```json
{
"specversion" : "1.0",
@ -58,11 +62,55 @@ The following example shows an XML content in CloudEvent v1.0 serialized as JSON
}
```
### Message subscription
Dapr applications can subscribe to published topics. Dapr allows two methods by which your applications can subscribe to topics:
- **Declarative**, where a subscription is defined in an external file,
- **Programmatic**, where a subscription is defined in the user code.
Both declarative and programmatic approaches support the same features. The declarative approach removes the Dapr dependency from your code and allows for existing applications to subscribe to topics, without having to change code. The programmatic approach implements the subscription in your code.
For more information read [How-To: Publish a message and subscribe to a topic]({{< ref howto-publish-subscribe >}}).
### Message delivery
In principle, Dapr considers a message successfully delivered when the subscriber responds with a non-error response after processing it. For more granular control, Dapr's publish/subscribe API also provides explicit statuses, defined in the response payload, which the subscriber can use to indicate specific handling instructions to Dapr (e.g. `RETRY` or `DROP`). For more information on message routing, read the [Dapr publish/subscribe API documentation]({{< ref "pubsub_api.md#provide-routes-for-dapr-to-deliver-topic-events" >}})
### At-least-once guarantee
Dapr guarantees "At-Least-Once" semantics for message delivery. That means that when an application publishes a message to a topic using the publish/subscribe API, Dapr ensures that this message will be delivered at least once to every subscriber.
### Consumer groups and competing consumers pattern
The burden of dealing with concepts like consumer groups and multiple application instances using a single consumer group is all handled automatically by Dapr. When multiple instances of the same application (running with the same app-ID) subscribe to a topic, Dapr delivers each message to *only one instance of **that** application*. This is commonly known as the competing consumers pattern and is illustrated in the diagram below.
<img src="/images/pubsub-overview-pattern-competing-consumers.png" width=1000>
<br></br>
Similarly, if two different applications (different app-IDs) subscribe to the same topic, Dapr delivers each message to *only one instance of **each** application*.
### Topic scoping
By default, all topics backing the Dapr pub/sub component (e.g. Kafka, Redis Streams, RabbitMQ) are available to every application configured with that component. To limit which applications can publish or subscribe to topics, Dapr provides topic scoping. This enables you to say which topics an application is allowed to publish to and which topics it is allowed to subscribe to. For more information read [publish/subscribe topic scoping]({{< ref pubsub-scopes.md >}}).
### Message Time-to-Live (TTL)
Dapr can set a timeout on a per-message basis, meaning that if the message is not read from the pub/sub component in time, then the message is discarded. This prevents the build-up of messages that are never read. A message that has been in the queue for longer than the configured TTL is said to be dead. For more information read [publish/subscribe message time-to-live]({{< ref pubsub-message-ttl.md >}}).
- Note: Message TTL can also be set for a given queue at the time of component creation. Look at the specific characteristics of the component that you are using.
### Publish/Subscribe API
The publish/subscribe API is located in the [API reference]({{< ref pubsub_api.md >}}).
## Next steps
* Follow these guides on:
* [How-To: Publish a message and subscribe to a topic]({{< ref howto-publish-subscribe.md >}})
* [How-To: Configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
* Try out the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
* Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
* Learn about [message time-to-live (TTL)]({{< ref pubsub-message-ttl.md >}})
* List of [pub/sub components]({{< ref supported-pubsub.md >}})
* Read the [pub/sub API reference]({{< ref pubsub_api.md >}})

View File

@ -158,4 +158,11 @@ The table below shows which application is allowed to subscribe to the topics:
## Demo
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VdWBBGcbHQ?start=513" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VdWBBGcbHQ?start=513" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Related links
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Secrets building block"
linkTitle: "Secrets"
title: "Secrets management"
linkTitle: "Secrets management"
weight: 70
description: Securely access secrets from your application
---

View File

@ -6,42 +6,66 @@ weight: 2000
description: "Use the secret store building block to securely retrieve a secret"
---
This article provides guidance on using Dapr's secrets API in your code to leverage the [secrets store building block]({{<ref secrets-overview>}}). The secrets API allows you to easily retrieve secrets in your application code from a configured secret store.
## Set up a secret store
Before retrieving secrets in your application's code, you must have a secret store component configured. For the purposes of this guide, as an example you will configure a local secret store which uses a local JSON file to store secrets.
>Note: The component used in this example is not secured and is not recommended for production deployments. You can find other alternatives [here]({{<ref supported-secret-stores >}}).
To make it easier for developers everywhere to consume application secrets, Dapr has a dedicated secrets building block API that allows developers to get secrets from a secret store.
Create a file named `secrets.json` with the following contents:
## Setting up a secret store component
The first step involves setting up a secret store, either in the cloud or in the hosting environment such as a cluster. This is done by using the relevant instructions from the cloud provider or secret store implementation.
The second step is to configure the secret store with Dapr.
To deploy in Kubernetes, save the file above to `aws_secret_manager.yaml` and then run:
```bash
kubectl apply -f aws_secret_manager.yaml
```json
{
"my-secret" : "I'm Batman"
}
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
Create a directory for your components file named `components` and inside it create a file named `localSecretStore.yaml` with the following contents:
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&feature=youtu.be&t=1818) for an example on how to use the secrets API. Or watch this [video](https://www.youtube.com/watch?v=8W-iBDNvCUM&feature=youtu.be&t=1765) for an example on how to component scopes with secret components and the secrets API.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: my-secrets-store
namespace: default
spec:
type: secretstores.local.file
version: v1
metadata:
- name: secretsFile
value: <PATH TO SECRETS FILE>/mysecrets.json
- name: nestedSeparator
value: ":"
```
Make sure to replace `<PATH TO SECRETS FILE>` with the path to the JSON file you just created.
To configure a different kind of secret store see the guidance on [how to configure a secret store]({{<ref secret-stores-overview>}}) and review [supported secret stores]({{<ref supported-secret-stores >}}) to see the specific details required for different secret store solutions.
## Get a secret
Now run the Dapr sidecar (with no application):
```bash
dapr run --app-id my-app --dapr-http-port 3500 --components-path ./components
```
And now you can get the secret by calling the Dapr sidecar using the secrets API:
```bash
curl http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret
```
For a full API reference, go [here]({{< ref secrets_api.md >}}).
## Calling the secrets API from your code
Once you have a secret store set up, you can call Dapr to get the secrets from your application code. Here are a few examples in different programming languages:
{{< tabs "Go" "Javascript" "Python" "Rust" "C#" "PHP" >}}
{{% codetab %}}
```go
import (
"fmt"
@ -49,7 +73,7 @@ import (
)
func main() {
url := "http://localhost:3500/v1.0/secrets/kubernetes/my-secret"
url := "http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret"
res, err := http.Get(url)
if err != nil {
@ -61,13 +85,16 @@ func main() {
fmt.Println(string(body))
}
```
{{% /codetab %}}
{{% codetab %}}
```javascript
require('isomorphic-fetch');
const secretsUrl = `http://localhost:3500/v1.0/secrets`;
fetch(`${secretsUrl}/my-secrets-store/my-secret`)
.then((response) => {
if (!response.ok) {
throw "Could not get secret";
@ -78,16 +105,21 @@ fetch(`${secretsUrl}/kubernetes/my-secret`)
});
```
{{% /codetab %}}
{{% codetab %}}
```python
import requests as req
resp = req.get("http://localhost:3500/v1.0/secrets/kubernetes/my-secret")
resp = req.get("http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret")
print(resp.text)
```
{{% /codetab %}}
{{% codetab %}}
```rust
#![deny(warnings)]
@ -95,7 +127,7 @@ use std::{thread};
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
let res = reqwest::get("http://localhost:3500/v1.0/secrets/kubernetes/my-secret").await?;
let res = reqwest::get("http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret").await?;
let body = res.text().await?;
println!("Secret:{}", body);
@ -105,13 +137,43 @@ async fn main() -> Result<(), reqwest::Error> {
}
```
{{% /codetab %}}
{{% codetab %}}
```csharp
var client = new HttpClient();
var response = await client.GetAsync("http://localhost:3500/v1.0/secrets/kubernetes/my-secret");
var response = await client.GetAsync("http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret");
response.EnsureSuccessStatusCode();
string secret = await response.Content.ReadAsStringAsync();
Console.WriteLine(secret);
```
{{% /codetab %}}
{{% codetab %}}
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\Dapr\SecretManager $secretManager, \Psr\Log\LoggerInterface $logger) {
$secret = $secretManager->retrieve(secret_store: 'my-secrets-store', name: 'my-secret');
$logger->alert('got secret: {secret}', ['secret' => $secret]);
});
```
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Dapr secrets overview]({{<ref secrets-overview>}})
- [Secrets API reference]({{<ref secrets_api>}})
- [Configure a secret store]({{<ref secret-stores-overview>}})
- [Supported secret stores]({{<ref supported-secret-stores >}})
- [Using secrets in components]({{<ref component-secrets>}})
- [Secret stores quickstart](https://github.com/dapr/quickstarts/tree/master/secretstore)

View File

@ -1,30 +1,25 @@
---
type: docs
title: "Secrets stores overview"
linkTitle: "Secrets stores overview"
title: "Secrets management overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of Dapr secrets management building block"
description: "Overview of secrets management building block"
---
It's common for applications to store sensitive information such as connection strings, keys and tokens that are used to authenticate with databases, services and external systems in secrets by using a dedicated secret store.
Usually this involves setting up a secret store such as Azure Key Vault, Hashicorp Vault and others and storing the application level secrets there. To access these secret stores, the application needs to import the secret store SDK, and use it to access the secrets. This may require a fair amount of boilerplate code that is not related to the actual business domain of the app, and so becomes an even greater challenge in multi-cloud scenarios where different vendor specific secret stores may be used.
To make it easier for developers everywhere to consume application secrets, Dapr has a dedicated secrets building block API that allows developers to get secrets from a secret store.
Using Dapr's secret store building block typically involves the following:
1. Setting up a component for a specific secret store solution.
1. Retrieving secrets using the Dapr secrets API in the application code.
1. Optionally, referencing secrets in Dapr component files.
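As a minimal sketch of the second step, once a secret store component is configured, application code retrieves a secret with a single call to the Dapr sidecar's secrets API. This assumes a sidecar listening on port 3500 and a configured secret store component named `my-secrets-store` holding a secret named `my-secret`:

```bash
# Ask the local Dapr sidecar for the secret; Dapr talks to the underlying store
curl http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret
```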
## Setting up a secret store
See [Setup secret stores]({{< ref howto-secrets.md >}}) for guidance on how to setup a secret store with Dapr.
## Using secrets in your application
@ -45,11 +40,18 @@ In Azure Dapr can be configured to use Managed Identities to authenticate with A
Notice that in all of the examples above the application code did not have to change to get the same secret. Dapr did all the heavy lifting here via the secrets building block API and using the secret components.
See [Access Application Secrets using the Secrets API]({{< ref howto-secrets.md >}}) for a How To guide to use secrets in your application.
For detailed API information read [Secrets API]({{< ref secrets_api.md >}}).
## Referencing secret stores in Dapr components
When configuring Dapr components such as state stores it is often required to include credentials in components files. Instead of doing that, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.
For more information read [referencing secret stores in components]({{< ref component-secrets.md >}})
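As an illustration, a component file can reference a secret rather than embedding it. The sketch below assumes a Redis state store and a secret store component named `my-secrets-store` that holds a secret named `redis-secret` with a `redis-password` key; the names are only examples:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  # Pull the password from the secret store instead of hard-coding it
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key: redis-password
auth:
  secretStore: my-secrets-store
```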
## Limiting access to secrets
To provide more granular control on access to secrets, Dapr provides the ability to define scopes and restrict access permissions. Learn more about [using secret scoping]({{<ref secrets-scopes>}})

View File

@ -3,21 +3,24 @@ type: docs
title: "How To: Use secret scoping"
linkTitle: "How To: Use secret scoping"
weight: 3000
description: "Use scoping to limit the secrets that can be read from secret stores"
description: "Use scoping to limit the secrets that can be read by your application from secret stores"
---
You can read [guidance on setting up secret store components]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, by default *any* secret defined within that store is accessible from the Dapr application.
To limit the secrets to which the Dapr application has access, you can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
The secret scoping policy applies to any [secret store]({{< ref supported-secret-stores.md >}}), whether that is a local secret store, a Kubernetes secret store or a public cloud secret store. For details on how to set up a [secret store]({{< ref secret-stores-overview.md >}}) read [How To: Retrieve a secret]({{< ref howto-secrets.md >}}).
Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to use secret scoping with your application.
<iframe width="688" height="430" src="https://www.youtube.com/embed/j99RN_nxExA?start=2272" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Scenario 1: Deny access to all secrets for a secret store
This example uses Kubernetes. The native Kubernetes secret store is added to your Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
Define the following `appconfig.yaml` configuration and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
```yaml
apiVersion: dapr.io/v1alpha1
@ -37,11 +40,11 @@ For applications that need to be denied access to the Kubernetes secret store, f
dapr.io/config: appconfig
```
With this defined, the application no longer has access to any secrets in the Kubernetes secret store.
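For reference, the secret scope policy elided above follows this general shape (a sketch assuming the configuration is named `appconfig` and targets the built-in `kubernetes` secret store):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      # Deny this application access to every secret in the kubernetes store
      - storeName: kubernetes
        defaultAccess: deny
```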
## Scenario 2: Allow access to only certain secrets in a secret store
This example uses a secret store that is named `vault`. For example this could be a Hashicorp secret store component that has been set on your application. To allow a Dapr application to have access to only certain secrets `secret1` and `secret2` in the `vault` secret store, define the following `appconfig.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
@ -56,9 +59,9 @@ spec:
allowedSecrets: ["secret1", "secret2"]
```
This example defines configuration for secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Scenario 3: Deny access to certain sensitive secrets in a secret store
Define the following `config.yaml`:
@ -75,11 +78,11 @@ spec:
deniedSecrets: ["secret1", "secret2"]
```
This example uses a secret store that is named `vault`. The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Permission priority
The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess` policy.
Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
---- | ------- | -----------| ----------| ------------
@ -90,6 +93,8 @@ Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
5 - Default deny with denied list | deny | empty | ["s1"] | deny
6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed
## Related links
* List of [secret stores]({{< ref supported-secret-stores.md >}})
* Overview of [secret stores]({{< ref secret-stores-overview.md >}})

View File

@ -1,8 +1,8 @@
---
type: docs
title: "How-To: Invoke and discover services"
title: "How-To: Invoke services using HTTP"
linkTitle: "How-To: Invoke services"
description: "How-to guide on how to use Dapr service invocation in a distributed application"
description: "Call between services using service invocation"
weight: 2000
---

View File

@ -8,26 +8,26 @@ description: "Overview of the service invocation building block"
## Introduction
Using service invocation, your application can reliably and securely communicate with other applications using the standard [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/) protocols.
In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions:
* How do I discover and invoke methods on different services?
* How do I call other services securely with encryption and apply access control on the methods?
* How do I handle retries and transient errors?
* How do I use tracing to see a call graph with metrics to diagnose issues in production?
Dapr addresses these challenges by providing a service invocation API that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling, encryption and more.
Dapr uses a sidecar architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
### Service invocation
The diagram below is an overview of how Dapr's service invocation works.
<img src="/images/service-invocation-overview.png" width=800 alt="Diagram showing the steps of service invocation">
1. Service A makes an HTTP or gRPC call targeting Service B. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) which is running on the given [hosting platform]({{< ref "hosting" >}}).
3. Dapr forwards the message to Service B's Dapr sidecar
@ -39,11 +39,7 @@ The diagram below is an overview of how Dapr's service invocation works.
7. Service A receives the response.
## Features
Service invocation provides several features to make it easy for you to call methods between applications.
### Namespaces scoping
@ -59,17 +55,6 @@ This is especially useful in cross namespace calls in a Kubernetes cluster. Watc
<iframe width="560" height="315" src="https://www.youtube.com/embed/LYYV_jouEuA?start=497" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Retries
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
Errors that cause retries are:
* Network errors including endpoint unavailability and refused connections
* Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars
Per call retries are performed with a backoff interval of 1 second up to a threshold of 3 times.
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
### Service-to-service security
@ -77,28 +62,55 @@ All calls between Dapr applications can be made secure with mutual (mTLS) authen
For more information read the [service-to-service security]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}) article.
<img src="/images/security-mTLS-sentry-selfhosted.png" width=800>
### Service access policies security
Applications can control which other applications are allowed to call them and what they are authorized to do via access policies. This enables you to restrict sensitive applications, for example those holding personnel information, from being accessed by unauthorized applications. Combined with service-to-service secure communication, this provides for soft multi-tenancy deployments.
For more information read the [access control allow lists for service invocation]({{< ref invoke-allowlist.md >}}) article.
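For illustration, an access control policy is defined in the application's Dapr configuration. A minimal sketch, assuming a hypothetical caller with app ID `ingress` and a `/orders` method; consult the access control article above for the full schema:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  accessControl:
    defaultAction: deny      # reject calls unless a policy allows them
    trustDomain: "public"
    policies:
    - appId: ingress         # only the ingress app may call in
      defaultAction: deny
      trustDomain: "public"
      namespace: "default"
      operations:
      - name: /orders        # and only on the /orders method
        httpVerb: ["POST"]
        action: allow
```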
#### Example service invocation security
The diagram below is an example deployment on a Kubernetes cluster with a Daprized `Ingress` service that calls onto `Service A` using service invocation with mTLS encryption and applies an access control policy. `Service A` then calls onto `Service B`, also using service invocation and mTLS. Each service is running in different namespaces for added isolation.
<img src="/images/service-invocation-security.png" width=800>
### Retries
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
Errors that cause retries are:
* Network errors including endpoint unavailability and refused connections.
* Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars.
Per call retries are performed with a backoff interval of 1 second up to a threshold of 3 times.
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
### Pluggable service discovery
Dapr can run on any [hosting platform]({{< ref hosting >}}). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. For local and multiple physical machines this uses the mDNS protocol.
### Round robin load balancing with mDNS
Dapr provides round robin load balancing of service invocation requests with the mDNS protocol, for example with a single machine or with multiple, networked, physical machines.
The diagram below shows an example of how this works. If you have 1 instance of an application with app ID `FrontEnd` and 3 instances of an application with app ID `Cart`, and you call from the `FrontEnd` app to the `Cart` app, Dapr round robins between the 3 instances. These instances can be on the same machine or on different machines.
<img src="/images/service-invocation-mdns-round-robin.png" width=800 alt="Diagram showing the steps of service invocation">
Note: The app ID is unique per application, not per application instance. Regardless of how many instances of an application exist, they all share the same app ID.
### Tracing and metrics with observability
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios. This gives you call graphs and metrics on the calls between your services. For more information read about [observability]({{< ref observability-concept.md >}}).
### Service invocation API
The API for service invocation can be found in the [service invocation API reference]({{< ref service_invocation_api.md >}}) which describes how to invoke a method on another service.
## Example
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A" and the Node.js app would be "Service B".
The diagram below shows sequence 1-7 again on a local machine showing the API calls:
<img src="/images/service-invocation-overview-example.png" width=800>
@ -108,13 +120,13 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, logging the incoming message and then persisting the order ID into Redis (not shown in the diagram)
5. The Node.js app sends a response to the Python app through the Node.js sidecar.
6. Dapr forwards the response to the Python Dapr sidecar
7. The Python app receives the response.
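To make step 1 concrete, the call the Python app makes to its local sidecar is a plain HTTP request against the invoke API. A sketch assuming the quickstart's `nodeapp` app ID and its `neworder` method, with an illustrative payload:

```bash
# Service A (the Python app) asks its own sidecar to invoke Service B (nodeapp)
curl -X POST http://localhost:3500/v1.0/invoke/nodeapp/method/neworder \
  -H "Content-Type: application/json" \
  -d '{"data": {"orderId": "42"}}'
```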
## Next steps
* Follow these guides on:
* [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
* [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
* Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers

View File

@ -14,127 +14,192 @@ Dealing with different databases libraries, testing them, handling retries and f
Dapr provides state management capabilities that include consistency and concurrency options.
In this guide we'll start off with the basics: using the key/value state API to allow an application to save, get and delete state.
## Pre-requisites
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
- Initialized [Dapr environment]({{< ref install-dapr-selfhost.md >}})
## Step 1: Setup a state store
A state store component represents a resource that Dapr uses to communicate with a database.
For the purpose of this guide we'll use a Redis state store, but any state store from the [supported list]({{< ref supported-state-stores >}}) will work.
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
{{% codetab %}}
When using `dapr init` in Standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML in a `components` directory, which for Linux/MacOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`
To optionally change the state store being used, replace the YAML file `statestore.yaml` under `/components` with the file of your choice.
{{% /codetab %}}
{{% codetab %}}
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired statestore component]({{< ref supported-state-stores >}}) in the yaml below, save as `statestore.yaml`, and run `kubectl apply -f statestore.yaml`.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
See the instructions [here]({{< ref "setup-state-store" >}}) on how to set up different state stores on Kubernetes.
{{% /codetab %}}
{{< /tabs >}}
## Step 2: Save and retrieve a single state
The following example shows how to save and retrieve a single key/value pair using the Dapr state building block.
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}}
{{% alert title="Note" color="warning" %}}
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
{{% /alert %}}
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
{{% codetab %}}
Begin by launching a Dapr sidecar:
```bash
dapr run --app-id myapp --dapr-http-port 3500
```
Then in a separate terminal save a key/value pair into your statestore:
```bash
curl -X POST -H "Content-Type: application/json" -d '{ "key": "key1", "value": "value1"}' http://localhost:3500/v1.0/state/statestore
```
Now get the state you just saved:
```bash
curl http://localhost:3500/v1.0/state/statestore/key1
```
You can also restart your sidecar and try retrieving state again to see that state persists separately from the app.
{{% /codetab %}}
{{% codetab %}}
Begin by launching a Dapr sidecar:
```bash
dapr run --app-id myapp --dapr-http-port 3500
```
Then in a separate terminal save a key/value pair into your statestore:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"key": "key1", "value": "value1"}' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
Now get the state you just saved:
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/state/statestore/key1'
```
You can also restart your sidecar and try retrieving state again to see that state persists separately from the app.
{{% /codetab %}}
{{% codetab %}}
Save the following to a file named `pythonState.py`:
```python
data = d.get_state(store_name="statestore",
key="key1",
state_metadata={"metakey": "metavalue"}).data
from dapr.clients import DaprClient
with DaprClient() as d:
d.save_state(store_name="statestore", key="myFirstKey", value="myFirstValue" )
print("State has been stored")
data = d.get_state(store_name="statestore", key="myFirstKey").data
print(f"Got value: {data}")
```
Once saved run the following command to launch a Dapr sidecar and run the Python application:
```bash
dapr run --app-id myapp -- python pythonState.py
```
You should get an output similar to the following, which will show both the Dapr and app logs:
```md
== DAPR == time="2021-01-06T21:34:33.7970377-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:34:33.8040378-08:00" level=info msg="standalone mode configured" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:34:33.8040378-08:00" level=info msg="app id: Braidbald-Boot" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:34:33.9750400-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:34:33.9760387-08:00" level=info msg="API gRPC server is running on port 51656" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:34:33.9770372-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 172.9994ms" app_id=Braidbald-Boot scope=dapr.
Checking if Dapr sidecar is listening on GRPC port 51656
Dapr sidecar is up and running.
Updating metadata for app command: python pythonState.py
You are up and running! Both Dapr and your app logs will appear here.
== APP == State has been stored
== APP == Got value: b'myFirstValue'
```
{{% /codetab %}}
{{% codetab %}}
Save the following in `state-example.php`:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
$stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
key: 'myFirstKey',
value: 'myFirstValue'
));
$logger->alert('State has been stored');
$data = $stateManager->load_state(store_name: 'statestore', key: 'myFirstKey')->value;
$logger->alert("Got value: {data}", ['data' => $data]);
});
```
Once saved run the following command to launch a Dapr sidecar and run the PHP application:
```bash
dapr run --app-id myapp -- php state-example.php
```
You should get an output similar to the following, which will show both the Dapr and app logs:
```md
✅ You're up and running! Both Dapr and your app logs will appear here.
== APP == [2021-02-12T16:30:11.078777+01:00] APP.ALERT: State has been stored [] []
== APP == [2021-02-12T16:30:11.082620+01:00] APP.ALERT: Got value: myFirstValue {"data":"myFirstValue"} []
```
{{% /codetab %}}
{{< /tabs >}}
## Step 4: Delete state
## Step 3: Delete state
The following example shows how to delete an item by using a key with the state management API:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}}
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
{{% codetab %}}
With the same dapr instance running from above run:
@ -153,16 +218,374 @@ Try getting state again and note that no value is returned.
{{% /codetab %}}
{{% codetab %}}
Update `pythonState.py` with:
```python
d.delete_state(store_name="statestore"",
key="key1",
state_metadata={"metakey": "metavalue"})
data = d.get_state(store_name="statestore",
key="key1",
state_metadata={"metakey": "metavalue"}).data
from dapr.clients import DaprClient
with DaprClient() as d:
d.save_state(store_name="statestore", key="key1", value="value1" )
print("State has been stored")
data = d.get_state(store_name="statestore", key="key1").data
print(f"Got value: {data}")
d.delete_state(store_name="statestore", key="key1")
data = d.get_state(store_name="statestore", key="key1").data
print(f"Got value after delete: {data}")
```
Now run your program with:
```bash
dapr run --app-id myapp -- python pythonState.py
```
You should see an output similar to the following:
```md
Starting Dapr with id Yakchocolate-Lord. HTTP Port: 59457. gRPC Port: 59458
== DAPR == time="2021-01-06T22:55:36.5570696-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:55:36.5690367-08:00" level=info msg="standalone mode configured" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:55:36.7220140-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:55:36.7230148-08:00" level=info msg="API gRPC server is running on port 59458" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:55:36.7240207-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 154.984ms" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
Checking if Dapr sidecar is listening on GRPC port 59458
Dapr sidecar is up and running.
Updating metadata for app command: python pythonState.py
You're up and running! Both Dapr and your app logs will appear here.
== APP == State has been stored
== APP == Got value: b'value1'
== APP == Got value after delete: b''
```
{{% /codetab %}}
{{% codetab %}}
Update `state-example.php` with the following contents:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
$stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
key: 'myFirstKey',
value: 'myFirstValue'
));
$logger->alert('State has been stored');
$data = $stateManager->load_state(store_name: 'statestore', key: 'myFirstKey')->value;
$logger->alert("Got value: {data}", ['data' => $data]);
$stateManager->delete_keys(store_name: 'statestore', keys: ['myFirstKey']);
$data = $stateManager->load_state(store_name: 'statestore', key: 'myFirstKey')->value;
$logger->alert("Got value after delete: {data}", ['data' => $data]);
});
```
Now run it with:
```bash
dapr run --app-id myapp -- php state-example.php
```
You should see something similar to the following output:
```md
✅ You're up and running! Both Dapr and your app logs will appear here.
== APP == [2021-02-12T16:38:00.839201+01:00] APP.ALERT: State has been stored [] []
== APP == [2021-02-12T16:38:00.841997+01:00] APP.ALERT: Got value: myFirstValue {"data":"myFirstValue"} []
== APP == [2021-02-12T16:38:00.845721+01:00] APP.ALERT: Got value after delete: {"data":null} []
```
{{% /codetab %}}
{{< /tabs >}}
## Step 4: Save and retrieve multiple states
Dapr also allows you to save and retrieve multiple states in the same call.
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
{{% codetab %}}
With the same dapr instance running from above save two key/value pairs into your statestore:
```bash
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' http://localhost:3500/v1.0/state/statestore
```
Now get the states you just saved:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"keys":["key1", "key2"]}' http://localhost:3500/v1.0/state/statestore/bulk
```
{{% /codetab %}}
{{% codetab %}}
With the same dapr instance running from above save two key/value pairs into your statestore:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
Now get the states you just saved:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"keys":["key1", "key2"]}' -Uri 'http://localhost:3500/v1.0/state/statestore/bulk'
```
{{% /codetab %}}
{{% codetab %}}
The `StateItem` object can be used to store multiple Dapr states with the `save_bulk_state` and `get_bulk_state` methods.
Update your `pythonState.py` file with the following code:
```python
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateItem
with DaprClient() as d:
s1 = StateItem(key="key1", value="value1")
s2 = StateItem(key="key2", value="value2")
d.save_bulk_state(store_name="statestore", states=[s1,s2])
print("States have been stored")
items = d.get_bulk_state(store_name="statestore", keys=["key1", "key2"]).items
print(f"Got items: {[i.data for i in items]}")
```
Now run your program with:
```bash
dapr run --app-id myapp -- python pythonState.py
```
You should see an output similar to the following:
```md
== DAPR == time="2021-01-06T21:54:56.7262358-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:54:56.7401933-08:00" level=info msg="standalone mode configured" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:54:56.8754240-08:00" level=info msg="Initialized name resolution to standalone" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:54:56.8844248-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:54:56.8854273-08:00" level=info msg="API gRPC server is running on port 60614" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T21:54:56.8854273-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 145.234ms" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
Checking if Dapr sidecar is listening on GRPC port 60614
Dapr sidecar is up and running.
Updating metadata for app command: python pythonState.py
You're up and running! Both Dapr and your app logs will appear here.
== APP == States have been stored
== APP == Got items: [b'value1', b'value2']
```
{{% /codetab %}}
{{% codetab %}}
To batch load and save state with PHP, just create a "Plain Ole' PHP Object" (POPO) and annotate it with
the StateStore annotation.
Update the `state-example.php` file:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\EventualLastWrite::class)]
class MyState {
public string $key1 = 'value1';
public string $key2 = 'value2';
}
$app = \Dapr\App::create();
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
$obj = new MyState();
$stateManager->save_object(item: $obj);
$logger->alert('States have been stored');
$stateManager->load_object(into: $obj);
$logger->alert("Got value: {data}", ['data' => $obj]);
});
```
Run the app:
```bash
dapr run --app-id myapp -- php state-example.php
```
And see the following output:
```md
✅ You're up and running! Both Dapr and your app logs will appear here.
== APP == [2021-02-12T16:55:02.913801+01:00] APP.ALERT: States have been stored [] []
== APP == [2021-02-12T16:55:02.917850+01:00] APP.ALERT: Got value: [object MyState] {"data":{"MyState":{"key1":"value1","key2":"value2"}}} []
```
{{% /codetab %}}
{{< /tabs >}}
## Step 5: Perform state transactions
{{% alert title="Note" color="warning" %}}
State transactions require a state store that supports multi-item transactions. Visit the [supported state stores page]({{< ref supported-state-stores >}}) page for a full list. Note that the default Redis container created in a self-hosted environment supports them.
{{% /alert %}}
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
{{% codetab %}}
With the same dapr instance running from above perform two state transactions:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"operations": [{"operation":"upsert", "request": {"key": "key1", "value": "newValue1"}}, {"operation":"delete", "request": {"key": "key2"}}]}' http://localhost:3500/v1.0/state/statestore/transaction
```
Now see the results of your state transactions:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"keys":["key1", "key2"]}' http://localhost:3500/v1.0/state/statestore/bulk
```
{{% /codetab %}}
{{% codetab %}}
With the same dapr instance running from above perform two state transactions:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"operations": [{"operation":"upsert", "request": {"key": "key1", "value": "newValue1"}}, {"operation":"delete", "request": {"key": "key2"}}]}' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
Now see the results of your state transactions:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"keys":["key1", "key2"]}' -Uri 'http://localhost:3500/v1.0/state/statestore/bulk'
```
{{% /codetab %}}
{{% codetab %}}
The `TransactionalStateOperation` type can be used to perform a state transaction; this requires a state store that supports transactions.
Update your `pythonState.py` file with the following code:
```python
from dapr.clients import DaprClient
from dapr.clients.grpc._state import StateItem
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType
with DaprClient() as d:
s1 = StateItem(key="key1", value="value1")
s2 = StateItem(key="key2", value="value2")
d.save_bulk_state(store_name="statestore", states=[s1,s2])
print("States have been stored")
d.execute_state_transaction(
store_name="statestore",
operations=[
TransactionalStateOperation(key="key1", data="newValue1", operation_type=TransactionOperationType.upsert),
TransactionalStateOperation(key="key2", data="value2", operation_type=TransactionOperationType.delete)
]
)
print("State transactions have been completed")
items = d.get_bulk_state(store_name="statestore", keys=["key1", "key2"]).items
print(f"Got items: {[i.data for i in items]}")
```
Now run your program with:
```bash
dapr run --app-id myapp -- python pythonState.py
```
You should see an output similar to the following:
```md
Starting Dapr with id Singerchecker-Player. HTTP Port: 59533. gRPC Port: 59534
== DAPR == time="2021-01-06T22:18:14.1246721-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:18:14.1346254-08:00" level=info msg="standalone mode configured" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:18:14.2747063-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:18:14.2757062-08:00" level=info msg="API gRPC server is running on port 59534" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
== DAPR == time="2021-01-06T22:18:14.2767059-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 142.0805ms" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
Checking if Dapr sidecar is listening on GRPC port 59534
Dapr sidecar is up and running.
Updating metadata for app command: python pythonState.py
You're up and running! Both Dapr and your app logs will appear here.
== APP == State transactions have been completed
== APP == Got items: [b'value1', b'']
```
{{% /codetab %}}
{{% codetab %}}
Transactional state is supported by extending the `TransactionalState` base object, which hooks into your object via setters and getters to provide a transaction. In the previous step you created your own transactional object; this time you ask the Dependency Injection framework to build one for you.
Modify the `state-example.php` file again:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\EventualLastWrite::class)]
class MyState extends \Dapr\State\TransactionalState {
public string $key1 = 'value1';
public string $key2 = 'value2';
}
$app = \Dapr\App::create();
$app->run(function(MyState $obj, \Psr\Log\LoggerInterface $logger, \Dapr\State\StateManager $stateManager) {
$obj->begin();
$obj->key1 = 'hello world';
$obj->key2 = 'value3';
$obj->commit();
$logger->alert('Transaction committed!');
// begin a new transaction which reloads from the store
$obj->begin();
$logger->alert("Got value: {key1}, {key2}", ['key1' => $obj->key1, 'key2' => $obj->key2]);
});
```
Run the application:
```bash
dapr run --app-id myapp -- php state-example.php
```
Observe the following output:
```md
✅ You're up and running! Both Dapr and your app logs will appear here.
== APP == [2021-02-12T17:10:06.837110+01:00] APP.ALERT: Transaction committed! [] []
== APP == [2021-02-12T17:10:06.840857+01:00] APP.ALERT: Got value: hello world, value3 {"key1":"hello world","key2":"value3"} []
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- Read the full [State API reference]({{< ref state_api.md >}})
- Try one of the [Dapr SDKs]({{< ref sdks >}})
- Build a [stateful service]({{< ref howto-stateful-service.md >}})

View File

@ -0,0 +1,95 @@
---
type: docs
title: "How-To: Share state between applications"
linkTitle: "How-To: Share state between applications"
weight: 400
description: "Choose different strategies for sharing state between different applications"
---
## Introduction
Dapr offers developers different ways to share state between applications.
Different architectures might have different needs when it comes to sharing state. For example, in one scenario you may want to encapsulate all state within a given application and have Dapr manage the access for you. In a different scenario, you may need two applications working on the same state to be able to get and save the same keys.
To enable state sharing, Dapr supports the following key prefix strategies:
* **`appid`** - This is the default strategy. The `appid` prefix allows state to be managed only by the app with the specified `appid`. All state keys are prefixed with the `appid` and are scoped for the application.
* **`name`** - This setting uses the name of the state store component as the prefix. Multiple applications can share the same state for a given state store.
* **`none`** - This setting uses no prefixing. Multiple applications share state across different state stores.
## Specifying a state prefix strategy
To specify a prefix strategy, add a metadata key named `keyPrefix` on a state component:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: production
spec:
type: state.redis
version: v1
metadata:
- name: keyPrefix
value: <key-prefix-strategy>
```
## Examples
The following examples show how state is stored with each of the supported prefix strategies:
### `appid` (default)
A Dapr application with app id `myApp` is saving state into a state store named `redis`:
```shell
curl -X POST http://localhost:3500/v1.0/state/redis \
-H "Content-Type: application/json"
-d '[
{
"key": "darth",
"value": "nihilus"
}
]'
```
The key will be saved as `myApp||darth`.
### `name`
A Dapr application with app id `myApp` is saving state into a state store named `redis`:
```shell
curl -X POST http://localhost:3500/v1.0/state/redis \
-H "Content-Type: application/json"
-d '[
{
"key": "darth",
"value": "nihilus"
}
]'
```
The key will be saved as `redis||darth`.
### `none`
A Dapr application with app id `myApp` is saving state into a state store named `redis`:
```shell
curl -X POST http://localhost:3500/v1.0/state/redis \
-H "Content-Type: application/json"
-d '[
{
"key": "darth",
"value": "nihilus"
}
]'
```
The key will be saved as `darth`.
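To sketch the effect, because the key is stored unprefixed, a second application with a different app ID that uses the same `redis` state store component can read the exact same entry:

```bash
# Any app using the same component can now retrieve the shared key
curl http://localhost:3500/v1.0/state/redis/darth
```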

View File

@ -2,7 +2,7 @@
type: docs
title: "Work with backend state stores"
linkTitle: "Backend stores"
weight: 500
description: "Guides for working with specific backend states stores"
---

View File

@ -6,7 +6,7 @@ weight: 2000
description: "Use Redis as a backend state store"
---
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{<ref state_api.md>}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
>**NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.
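For example, you can inspect what Dapr wrote with the Redis CLI. A sketch assuming the default self-hosted Redis store and a hypothetical application with app ID `myapp` that saved a key named `balance`:

```bash
# List every key written by the app (state keys are prefixed with the app ID)
redis-cli KEYS "myapp*"

# Inspect one record; the Redis state store keeps the value and its version in a hash
redis-cli HGETALL "myapp||balance"
```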

View File

@ -19,17 +19,17 @@ The easiest way to connect to your SQL Server instance is to use the [Azure Data
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] LIKE 'myapp||%'
```
The above query returns all rows with id containing "myapp-", which is the prefix of the state keys.
The above query returns all rows with id containing "myapp||", which is the prefix of the state keys.
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] = 'myapp||balance'
```
Then, read the **Data** field of the returned row.
@ -37,7 +37,7 @@ Then, read the **Data** field of the returned row.
To get the state version/ETag, use the command:
```sql
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp||balance'
```
## 4. Get filtered state data
@ -53,13 +53,13 @@ SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue'
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%'
SELECT * FROM states WHERE [Key] LIKE 'mypets||cat||leroy||%'
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food'
SELECT * FROM states WHERE [Key] = 'mypets||cat||leroy||food'
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.

View File

@ -8,65 +8,47 @@ description: "Overview of the state management building block"
## Introduction
Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores]({{< ref supported-state-stores.md >}}), without adding or learning a third party SDK.
Using state management, your application can store data as key/value pairs in the [supported state stores]({{< ref supported-state-stores.md >}}).
When using state management your application can leverage several features that would otherwise be complicated and error-prone to build yourself such as:
When using state management your application can leverage features that would otherwise be complicated and error-prone to build yourself such as:
- Distributed concurrency and data consistency
- Retry policies
- Bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations
See below for a diagram of state management's high level architecture.
Your application can use Dapr's state management API to save and read key/value pairs using a state store component, as shown in the diagram below. For example, by using HTTP POST you can save key/value pairs and by using HTTP GET you can read a key and have its value returned.
<img src="/images/state-management-overview.png" width=900>
## Features
- [State management API](#state-management-api)
- [State store behaviors](#state-store-behaviors)
- [Concurrency](#concurrency)
- [Consistency](#consistency)
- [Retry policies](#retry-policies)
- [Bulk operations](#bulk-operations)
- [Querying state store directly](#querying-state-store-directly)
### Pluggable state stores
### State management API
Dapr data stores are modeled as components, which can be swapped out without any changes to your service code. See [supported state stores]({{< ref supported-state-stores >}}) to see the list.
Developers can use the state management API to retrieve, save and delete state values by providing keys.
### Configurable state store behavior
Dapr data stores are components. Dapr ships with [Redis](https://redis.io) out-of-box for local development in self hosted mode. Dapr allows you to plug in other data stores as components such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/).
Dapr allows developers to attach additional metadata to a state operation request that describes how the request is expected to be handled. You can attach:
- Concurrency requirements
- Consistency requirements
Visit [State API]({{< ref state_api.md >}}) for more information.
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern.
> **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance. This allows multiple Dapr instances to share the same state store.
### State store behaviors
Dapr allows developers to attach to a state operation request additional metadata that describes how the request is expected to be handled. For example, you can attach concurrency requirements, consistency requirements, and retry policy to any state operation requests.
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfill the requests.
Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilities.
The following table summarizes the capabilities of existing data store implementations.
| Store | Strong consistent write | Strong consistent read | ETag |
|-------------------|-------------------------|------------------------|------|
| Cosmos DB | Yes | Yes | Yes |
| PostgreSQL | Yes | Yes | Yes |
| Redis | Yes | Yes | Yes |
| Redis (clustered) | Yes | No | Yes |
| SQL Server | Yes | Yes | Yes |
[Not all stores are created equal]({{< ref supported-state-stores.md >}}). To ensure portability of your application you can query the capabilities of the store and make your code adaptive to different store capabilities.
### Concurrency
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. And when the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an ETag property to the returned state. When the user code tries to update or delete a state, it's expected to attach the ETag either through the request body for updates or the `If-Match` header for deletes. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#Retry-Policies) to compensate for such conflicts when using ETags.
Dapr chooses OCC because, in many applications, data update conflicts are rare since clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a retry policy to compensate for such conflicts when using ETags.
If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
> **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
{{% alert title="Note on ETags" color="primary" %}}
For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
{{% /alert %}}
Read the [API reference]({{< ref state_api.md >}}) to learn how to set concurrency options.
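As a sketch of how this looks over HTTP, assuming a store named `statestore` and an ETag value of `1` returned by a previous read:

```shell
# Update only if the ETag still matches; for updates the ETag is part of the request body
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "planet", "value": "alderaan", "etag": "1" }]'

# Delete only if the ETag still matches; for deletes the ETag goes in the If-Match header
curl -X DELETE http://localhost:3500/v1.0/state/statestore/planet \
  -H "If-Match: 1"
```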
### Consistency
@ -74,15 +56,18 @@ Dapr supports both **strong consistency** and **eventual consistency**, with eve
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
### Retry policies
Dapr allows you to attach a retry policy to any write request. A policy is described by an **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
Read the [API reference]({{< ref state_api.md >}}) to learn how to set consistency options.
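For example, a write request can ask for strong consistency through the `options` field. The sketch below assumes a store named `statestore`:

```shell
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "planet",
          "value": "hoth",
          "options": { "consistency": "strong" }
        }
      ]'
```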
### Bulk operations
Dapr supports two types of bulk operations: **bulk** and **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
### Querying state store directly
Read the [API reference]({{< ref state_api.md >}}) to learn how to use the bulk and multi options.
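As a sketch, a multi-operation is submitted to the transaction endpoint, assuming a transactional store named `statestore`:

```shell
curl -X POST http://localhost:3500/v1.0/state/statestore/transaction \
  -H "Content-Type: application/json" \
  -d '{
        "operations": [
          { "operation": "upsert", "request": { "key": "key1", "value": "myData" } },
          { "operation": "delete", "request": { "key": "key2" } }
        ]
      }'
```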
### Actor state
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the value of the `actorStateStore` property to `true` in the metadata section of the state store component. Actor state is stored with a specific scheme in transactional state stores, which allows for consistent querying. Read the [API reference]({{< ref state_api.md >}}) to learn more about state stores for actors and the [actors API reference]({{< ref actors_api.md >}})
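For example, a Redis state store can be enabled for actors by adding a single metadata entry. This is a minimal sketch; the host and password values are placeholders for your environment:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "true"
```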
### Query state store directly
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the [underlying state store]({{< ref query-state-store >}}).
@ -92,8 +77,6 @@ For example, to get all state keys associated with an application ID "myApp" in
KEYS "myApp*"
```
> **NOTE:** See [How to query Redis store]({{< ref query-redis-store.md >}} ) for details on how to query a Redis store.
#### Querying actor state
If the data store supports SQL queries, you can query an actor's state using SQL queries. For example use:
@ -108,10 +91,20 @@ You can also perform aggregate queries across actor instances, avoiding the comm
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
```
> **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the actor instances.
{{% alert title="Note on direct queries" color="primary" %}}
Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the Dapr state management or actors APIs.
{{% /alert %}}
### State management API
The API for state management can be found in the [state management API reference]({{< ref state_api.md >}}) which describes how to retrieve, save and delete state values by providing keys.
## Next steps
* Follow the [state store setup guides]({{< ref setup-state-store >}})
* Read the [state management API specification]({{< ref state_api.md >}})
* Read the [actors API specification]({{< ref actors_api.md >}})
* Follow these guides on:
* [How-To: Save and get state]({{< ref howto-get-save-state.md >}})
* [How-To: Build a stateful service]({{< ref howto-stateful-service.md >}})
* [How-To: Share state between applications]({{< ref howto-share-state.md >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* List of [state store components]({{< ref supported-state-stores.md >}})
* Read the [state management API reference]({{< ref state_api.md >}})
* Read the [actors API reference]({{< ref actors_api.md >}})

View File

@ -0,0 +1,32 @@
---
type: docs
title: "Developing with GitHub Codespaces"
linkTitle: "GitHub Codespaces"
weight: 3000
description: "How to get up and running with Dapr in a GitHub Codespace"
---
[GitHub Codespaces](https://github.com/features/codespaces) are the easiest way to get up and running in a Dapr environment. With as little as a single click you have the environment, packages, code, samples, and documentation all ready to go in your browser.
{{% alert title="Private Beta" color="warning" %}}
GitHub Codespaces is currently in a private beta. Sign up [here](https://github.com/features/codespaces/signup).
{{% /alert %}}
## Features
- **Click and Run**: Get a dedicated and sandboxed environment with all of the required frameworks and packages ready to go.
- **Usage-based Billing**: Only pay for the time you spend developing in the Codespace. Environments are spun down automatically when not in use.
- **Portable**: Run in your browser or in Visual Studio Code
## Open a Dapr repo in a Codespace
To open a Dapr repository in a Codespace, simply select "Code" from the repo homepage and "Open with Codespaces":
<img src="/images/codespaces-create.png" alt="Screenshot of creating a Dapr Codespace" width="300">
### Supported repos
- [Python SDK](https://github.com/dapr/python-sdk)
## Related links
- [GitHub documentation](https://docs.github.com/en/github/developing-online-with-codespaces/about-codespaces)

View File

@ -2,7 +2,7 @@
type: docs
title: "IntelliJ"
linkTitle: "IntelliJ"
weight: 1000
weight: 2000
description: "Configuring IntelliJ community edition for debugging with Dapr"
---
@ -23,9 +23,44 @@ Let's get started!
## Add Dapr as an 'External Tool'
First, quit IntelliJ.
First, quit IntelliJ before modifying the configuration files directly.
Create or edit the file in `$HOME/.IdeaIC2019.3/config/tools/External\ Tools.xml` (change IntelliJ version in path if needed) to add a new `<tool></tool>` entry:
### IntelliJ configuration file location
For versions [2020.1](https://www.jetbrains.com/help/idea/2020.1/tuning-the-ide.html#config-directory) and above the configuration files for tools should be located in:
{{< tabs Windows Linux MacOS >}}
{{% codetab %}}
```powershell
%USERPROFILE%\AppData\Roaming\JetBrains\IntelliJIdea2020.1\tools\
```
{{% /codetab %}}
{{% codetab %}}
```shell
$HOME/.config/JetBrains/IntelliJIdea2020.1/tools/
```
{{% /codetab %}}
{{% codetab %}}
```shell
~/Library/Application Support/JetBrains/IntelliJIdea2020.1/tools/
```
{{% /codetab %}}
{{< /tabs >}}
> The configuration file location is different for version 2019.3 or prior. See [here](https://www.jetbrains.com/help/idea/2019.3/tuning-the-ide.html#config-directory) for more details.
Change the version of IntelliJ in the path if needed.
Create or edit the file in `<CONFIG PATH>/tools/External\ Tools.xml`. The `<CONFIG PATH>` is OS-dependent, as shown above.
Add a new `<tool></tool>` entry:
```xml
<toolSet name="External Tools">
@ -33,10 +68,10 @@ Create or edit the file in `$HOME/.IdeaIC2019.3/config/tools/External\ Tools.xml
<!-- 1. Each tool has its own app-id, so create one per application to be debugged -->
<tool name="dapr for DemoService in examples" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
<exec>
<!-- 2. For Linux or MacOS use: /usr/local/bin/daprd -->
<option name="COMMAND" value="C:\dapr\daprd.exe" />
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
<option name="COMMAND" value="C:\dapr\dapr.exe" />
<!-- 3. Choose app, http and grpc ports that do not conflict with other daprd command entries (placement address should not change). -->
<option name="PARAMETERS" value="-app-id demoservice -app-port 3000 -dapr-http-port 3005 -dapr-grpc-port 52000 -placement-host-address localhost:50005" />
<option name="PARAMETERS" value="run -app-id demoservice -app-port 3000 -dapr-http-port 3005 -dapr-grpc-port 52000 />
<!-- 4. Use the folder where the `components` folder is located -->
<option name="WORKING_DIRECTORY" value="C:/Code/dapr/java-sdk/examples" />
</exec>
@ -53,7 +88,7 @@ Optionally, you may also create a new entry for a sidecar tool that can be reuse
<!-- 1. Reusable entry for apps with app port. -->
<tool name="dapr with app-port" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
<exec>
<!-- 2. For Linux or MacOS use: /usr/bin/dapr -->
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
<option name="COMMAND" value="c:\dapr\dapr.exe" />
<!-- 3. Prompts user 4 times (in order): app id, app port, Dapr's http port, Dapr's grpc port. -->
<option name="PARAMETERS" value="run --app-id $Prompt$ --app-port $Prompt$ --dapr-http-port $Prompt$ --dapr-grpc-port $Prompt$" />
@ -64,7 +99,7 @@ Optionally, you may also create a new entry for a sidecar tool that can be reuse
<!-- 1. Reusable entry for apps without app port. -->
<tool name="dapr without app-port" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
<exec>
<!-- 2. For Linux or MacOS use: /usr/bin/dapr -->
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
<option name="COMMAND" value="c:\dapr\dapr.exe" />
<!-- 3. Prompts user 3 times (in order): app id, Dapr's http port, Dapr's grpc port. -->
<option name="PARAMETERS" value="run --app-id $Prompt$ --dapr-http-port $Prompt$ --dapr-grpc-port $Prompt$" />
@ -108,3 +143,7 @@ After debugging, make sure you stop both `dapr` and your app in IntelliJ.
>Note: Since you launched the service(s) using the **dapr** ***run*** CLI command, the **dapr** ***list*** command will show runs from IntelliJ in the list of apps that are currently running with Dapr.
Happy debugging!
## Related links
- [Changes to the IntelliJ configuration directory location](https://intellij-support.jetbrains.com/hc/en-us/articles/206544519-Directories-used-by-the-IDE-to-store-settings-caches-plugins-and-logs)

View File

@ -1,24 +0,0 @@
---
type: docs
title: "VS Code remote containers"
linkTitle: "VS Code remote containers"
weight: 3000
description: "Application development and debugging with Visual Studio Code remote containers"
---
## Using remote containers for your application development
The Visual Studio Code Remote - Containers extension lets you use a Docker container as a full-featured development environment enabling you to [develop inside a container](https://code.visualstudio.com/docs/remote/containers).
Dapr has pre-built Docker remote containers for each of the language SDKs. You can pick the one of your choice for a ready made environment. Note these pre-built containers automatically update to the latest Dapr release.
Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application.
These are the steps to use Dapr Remote Containers
1. Open your application workspace in VS Code
2. In the command command palette select the `Remote-Containers: Add Development Container Configuration Files...` command
3. Type `dapr` to filter the list to available Dapr remote containers and choose the language container that matches your application. See screen shot below.
4. Follow the prompts to rebuild your application in container.
<img src="../../../../static/images/vscode_remote_containers.png" width=800>

View File

@ -1,21 +1,64 @@
---
type: docs
title: "VS Code"
linkTitle: "VS Code"
weight: 2000
description: "Application development and debugging with Visual Studio Code"
title: "Visual Studio Code integrations with Dapr"
linkTitle: "Visual Studio Code"
weight: 1000
description: "Information on how to develop and run Dapr applications in VS Code"
---
## Visual Studio Code Dapr extension
It is recommended to use the *preview* of the [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) available in the Visual Studio marketplace for local development and debugging of your Dapr applications.
## Extension
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=85) on how to use the Dapr VS Code extension.
Dapr offers a *preview* [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) for local development and debugging of your Dapr applications.
<a href="vscode:extension/ms-azuretools.vscode-dapr" class="btn btn-primary" role="button">Open in VSCode</a>
### Feature overview
- Scaffold Dapr task, launch, and component assets
<br /><img src="/images/vscode-extension-scaffold.png" alt="Screenshot of the Dapr VSCode extension scaffold option" width="800">
- View running Dapr applications
<br /><img src="/images/vscode-extension-view.png" alt="Screenshot of the Dapr VSCode extension view running applications option" width="800">
- Invoke Dapr application methods
<br /><img src="/images/vscode-extension-invoke.png" alt="Screenshot of the Dapr VSCode extension invoke option" width="800">
- Publish events to Dapr applications
<br /><img src="/images/vscode-extension-publish.png" alt="Screenshot of the Dapr VSCode extension publish option" width="800">
#### Example
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=85) on how to use the Dapr VS Code extension:
<iframe width="560" height="315" src="https://www.youtube.com/embed/OtbYCBt9C34?start=85" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Remote Dev Containers
The Visual Studio Code Remote Containers extension lets you use a Docker container as a full-featured development environment enabling you to [develop inside a container](https://code.visualstudio.com/docs/remote/containers) without installing any additional frameworks or packages to your local filesystem.
Dapr has pre-built Docker remote containers for each of the language SDKs. You can pick the container of your choice for a ready-made environment. Note that these pre-built containers automatically update to the latest Dapr release.
### Setup a remote dev container
#### Prerequisites
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
- [Visual Studio Code](https://code.visualstudio.com/)
- [VSCode Remote Development extension pack](https://aka.ms/vscode-remote/download/extension)
#### Create remote Dapr container
1. Open your application workspace in VS Code
2. In the command palette (Ctrl+Shift+P) type and select `Remote-Containers: Add Development Container Configuration Files...`
<br /><img src="/images/vscode-remotecontainers-addcontainer.png" alt="Screenshot of adding a remote container" width="700">
3. Type `dapr` to filter the list to available Dapr remote containers and choose the language container that matches your application. Note you may need to select `Show All Definitions...`
<br /><img src="/images/vscode-remotecontainers-daprcontainers.png" alt="Screenshot of adding a Dapr container" width="700">
4. Follow the prompts to rebuild your application in container.
<br /><img src="/images/vscode-remotecontainers-reopen.png" alt="Screenshot of reopening an application in the dev container" width="700">
#### Example
Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application.
<iframe width="560" height="315" src="https://www.youtube.com/embed/D2dO4aGpHcg?start=120" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Troubleshooting
### Debugging multiple Dapr applications at the same time
Using the VS Code extension you can debug multiple Dapr applications at the same time with [multi-target debugging](https://code.visualstudio.com/docs/editor/debugging#_multitarget-debugging).
## Manually configuring Visual Studio Code for debugging with daprd
### Manually configuring Visual Studio Code for debugging with daprd
If you wish to configure a project to use Dapr via the [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) and [launch.json](https://code.visualstudio.com/Docs/editor/debugging) files instead of using the Dapr VS Code extension, these are the manual steps.
When developing Dapr applications, you typically use the Dapr CLI to start your daprized service, similar to this:
@ -26,7 +69,9 @@ dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
This will generate the components yaml files (if they don't exist) so that your service can interact with the local Redis container. This is great when you are just getting started, but what if you want to attach a debugger to your service and step through the code? This is where you can use the Dapr runtime (daprd) to help facilitate this.
>Note: The dapr runtime (daprd) will not automatically generate the components yaml files for Redis. These will need to be created manually or you will need to run the dapr cli (dapr) once in order to have them created automatically.
{{% alert title="Note" color="primary" %}}
The dapr runtime (daprd) will not automatically generate the components yaml files for Redis. These will need to be created manually or you will need to run the dapr cli (dapr) once in order to have them created automatically.
{{% /alert %}}
One approach to attaching the debugger to your service is to first run daprd with the correct arguments from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps and some instruction to developers who might want to clone your repo and hit the "play" button to begin debugging.
@ -34,7 +79,7 @@ Using the [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) and [lau
Let's get started!
### Modifying launch.json configurations to include a preLaunchTask
#### Modifying launch.json configurations to include a preLaunchTask
In your [launch.json](https://code.visualstudio.com/Docs/editor/debugging) file add a [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) for each configuration that you want daprd launched. The [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) will reference tasks that you define in your tasks.json file. Here is an example for both Node and .NET Core. Notice the [preLaunchTasks](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) referenced: daprd-web and daprd-leaderboard.
@ -76,7 +121,7 @@ In your [launch.json](https://code.visualstudio.com/Docs/editor/debugging) file
}
```
## Adding daprd tasks to tasks.json
#### Adding daprd tasks to tasks.json
You will need to define a task and problem matcher for daprd in your [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) file. Here are two examples (both referenced via the [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) members above). Notice that in the case of the .NET Core daprd task (daprd-leaderboard) there is also a [dependsOn](https://code.visualstudio.com/Docs/editor/tasks#_compound-tasks) member that references the build task to ensure the latest code is being run/debugged. The [problemMatcher](https://code.visualstudio.com/Docs/editor/tasks#_defining-a-problem-matcher) is used so that VSCode can understand when the daprd process is up and running.
@ -173,9 +218,10 @@ Let's take a quick look at the args that are being passed to the daprd command.
}
```
### Wrapping up
#### Wrapping up
Once you have made the required changes, you should be able to switch to the [debug](https://code.visualstudio.com/Docs/editor/debugging) view in VSCode and launch your daprized configurations by clicking the "play" button. If everything was configured correctly, you should see daprd launch in the VSCode terminal window and the [debugger](https://code.visualstudio.com/Docs/editor/debugging) should attach to your application (you should see its output in the debug window).
>Note: Since you didn't launch the service(s) using the **dapr** ***run*** cli command, but instead by running **daprd**, the **dapr** ***list*** command will not show a list of apps that are currently running.
{{% alert title="Note" color="primary" %}}
Since you didn't launch the service(s) using the **dapr** ***run*** cli command, but instead by running **daprd**, the **dapr** ***list*** command will not show a list of apps that are currently running.
{{% /alert %}}

View File

@ -3,5 +3,5 @@ type: docs
title: "Middleware"
linkTitle: "Middleware"
weight: 50
description: "Customize Dapr processing pipelines by adding middleware components"
description: "Customize processing pipelines by adding middleware components"
---

View File

@ -0,0 +1,85 @@
---
type: docs
title: "Overview"
linkTitle: "Overview"
description: "General overview on set up of middleware components for Dapr"
weight: 10000
type: docs
---
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. Middleware pipelines are defined in Dapr configuration files.
As with other [building block components]({{< ref component-schema.md >}}), middleware components are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib/tree/master/middleware/http).
Middleware in Dapr is described using a `Component` file with the following schema:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <COMPONENT NAME>
namespace: <NAMESPACE>
spec:
type: middleware.http.<MIDDLEWARE TYPE>
version: v1
metadata:
- name: <KEY>
value: <VALUE>
- name: <KEY>
value: <VALUE>
...
```
The type of middleware is determined by the `type` field. Component settings, such as rate limits and OAuth credentials, are specified in the `metadata` section.
Even though metadata values can contain secrets in plain text, it is recommended that you use a [secret store]({{< ref component-secrets.md >}}).
Next, a Dapr [configuration]({{< ref configuration-overview.md >}}) defines the pipeline of middleware components for your application.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: <COMPONENT NAME>
type: middleware.http.<MIDDLEWARE TYPE>
- name: <COMPONENT NAME>
type: middleware.http.<MIDDLEWARE TYPE>
```
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a **fasthttp.RequestHandler**:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
  return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
    return func(ctx *fasthttp.RequestCtx) {
      // inbound logic runs before the downstream handler
      h(ctx) // call the downstream handler
      // outbound logic runs after the downstream handler returns
    }
  }, nil
}
```
## Adding new middleware components
Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).
After the components-contrib change has been accepted, submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **[runtime.WithHTTPMiddleware](https://github.com/dapr/dapr/blob/f4d50b1369e416a8f7b93e3e226c4360307d1313/cmd/daprd/main.go#L394-L424)** method in [cmd/daprd/main.go](https://github.com/dapr/dapr/blob/master/cmd/daprd/main.go) to register your middleware with Dapr's runtime.
## Related links
* [Middleware pipelines concept]({{< ref middleware-concept.md >}})
* [Component schema]({{< ref component-schema.md >}})
* [Configuration overview]({{< ref configuration-overview.md >}})
* [Middleware quickstart](https://github.com/dapr/quickstarts/tree/master/middleware)

View File

@ -0,0 +1,19 @@
---
type: docs
title: "Supported middleware"
linkTitle: "Supported middleware"
weight: 30000
description: List of all the supported middleware components that can be injected into Dapr's processing pipeline.
no_list: true
---
### HTTP
| Name | Description | Status |
|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|----------------------------|
| [Rate limit]({{< ref middleware-rate-limit.md >}}) | Restricts the maximum number of allowed HTTP requests per second | Alpha |
| [OAuth2]({{< ref middleware-oauth2.md >}}) | Enables the [OAuth2 Authorization Grant flow](https://tools.ietf.org/html/rfc6749#section-4.1) on a Web API | Alpha |
| [OAuth2 client credentials]({{< ref middleware-oauth2clientcredentials.md >}}) | Enables the [OAuth2 Client Credentials Grant flow](https://tools.ietf.org/html/rfc6749#section-4.4) on a Web API | Alpha |
| [Bearer]({{< ref middleware-bearer.md >}}) | Verifies a [Bearer Token](https://tools.ietf.org/html/rfc6750) using [OpenID Connect](https://openid.net/connect/) on a Web API | Alpha |
| [Open Policy Agent]({{< ref middleware-opa.md >}}) | Applies [Rego/OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests | Alpha |
| [Uppercase]({{< ref middleware-uppercase.md >}}) | Converts the body of the request to uppercase letters | GA (For local development) |

View File

@ -0,0 +1,55 @@
---
type: docs
title: "Bearer"
linkTitle: "Bearer"
weight: 4000
description: "Use bearer middleware to secure HTTP endpoints by verifying bearer tokens"
type: docs
---
The bearer [HTTP middleware]({{< ref middleware-concept.md >}}) verifies a [Bearer Token](https://tools.ietf.org/html/rfc6750) using [OpenID Connect](https://openid.net/connect/) on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.
## Component format
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: bearer-token
spec:
type: middleware.http.bearer
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: issuerURL
value: "https://accounts.google.com"
```
## Spec metadata fields
| Field | Details | Example |
|----------------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------|
| clientId | The client ID of your application that is created as part of a credential hosted by an OpenID Connect platform | |
| issuerURL | URL identifier for the service. | `"https://accounts.google.com"`, `"https://login.salesforce.com"` |
## Dapr configuration
To be applied, the middleware must be referenced in [configuration]({{< ref configuration-concept.md >}}). See [middleware pipelines]({{< ref "middleware-concept.md#customize-processing-pipeline">}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: bearer-token
type: middleware.http.bearer
```
## Related links
- [Middleware concept]({{< ref middleware-concept.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})

View File

@ -0,0 +1,72 @@
---
type: docs
title: "OAuth2"
linkTitle: "OAuth2"
weight: 2000
description: "Use OAuth2 middleware to secure HTTP endpoints"
---
The OAuth2 [HTTP middleware]({{< ref middleware-concept.md >}}) enables the [OAuth2 Authorization Code flow](https://tools.ietf.org/html/rfc6749#section-4.1) on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.
## Component format
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2
spec:
type: middleware.http.oauth2
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "https://www.googleapis.com/auth/userinfo.email"
- name: authURL
value: "https://accounts.google.com/o/oauth2/v2/auth"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: redirectURL
value: "http://dummy.com"
- name: authHeaderName
value: "authorization"
- name: forceHTTPS
value: "false"
```
## Spec metadata fields
| Field | Details | Example |
|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|
| clientId | The client ID of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
| clientSecret | The client secret of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
| scopes | A list of space-delimited, case-sensitive strings of [scopes](https://tools.ietf.org/html/rfc6749#section-3.3) which are typically used for authorization in the application | `"https://www.googleapis.com/auth/userinfo.email"` |
| authURL | The endpoint of the OAuth2 authorization server | `"https://accounts.google.com/o/oauth2/v2/auth"` |
| tokenURL | The endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token | `"https://accounts.google.com/o/oauth2/token"` |
| redirectURL | The URL of your web application that the authorization server should redirect to once the user has authenticated | `"https://myapp.com"` |
| authHeaderName | The authorization header name to forward to your application | `"authorization"` |
| forceHTTPS | If true, enforces the use of TLS/SSL | `"true"`,`"false"` |
## Dapr configuration
To be applied, the middleware must be referenced in [configuration]({{< ref configuration-concept.md >}}). See [middleware pipelines]({{< ref "middleware-concept.md#customize-processing-pipeline">}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: oauth2
type: middleware.http.oauth2
```
## Related links
- [Configure API authorization with OAuth]({{< ref oauth >}})
- [Middleware OAuth quickstart](https://github.com/dapr/quickstarts/tree/master/middleware)
- [Middleware concept]({{< ref middleware-concept.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})

View File

@ -0,0 +1,72 @@
---
type: docs
title: "OAuth2 client credentials"
linkTitle: "OAuth2 client credentials"
weight: 3000
description: "Use OAuth2 client credentials middleware to secure HTTP endpoints"
---
The OAuth2 client credentials [HTTP middleware]({{< ref middleware-concept.md >}}) enables the [OAuth2 Client Credentials flow](https://tools.ietf.org/html/rfc6749#section-4.4) on a Web API without modifying the application. This design separates authentication/authorization concerns from the application, so that application operators can adopt and configure authentication/authorization providers without impacting the application code.
## Component format
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2clientcredentials
spec:
type: middleware.http.oauth2clientcredentials
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "https://www.googleapis.com/auth/userinfo.email"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: headerName
value: "authorization"
```
## Spec metadata fields
| Field | Details | Example |
|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------|
| clientId | The client ID of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
| clientSecret | The client secret of your application that is created as part of a credential hosted by an OAuth-enabled platform | |
| scopes | A list of space-delimited, case-sensitive strings of [scopes](https://tools.ietf.org/html/rfc6749#section-3.3) which are typically used for authorization in the application | `"https://www.googleapis.com/auth/userinfo.email"` |
| tokenURL | The endpoint is used by the client to obtain an access token by presenting its authorization grant or refresh token | `"https://accounts.google.com/o/oauth2/token"` |
| headerName | The authorization header name to forward to your application | `"authorization"` |
| endpointParamsQuery | Specifies additional parameters for requests to the token endpoint | `true` |
| authStyle | Optionally specifies how the endpoint wants the client ID & client secret sent. See the table of possible values below | `0` |
### Possible values for `authStyle`
| Value | Meaning |
|-------|---------|
| `1` | Sends the "client_id" and "client_secret" in the POST body as application/x-www-form-urlencoded parameters. |
| `2` | Sends the "client_id" and "client_secret" using HTTP Basic Authorization. This is an optional style described in the [OAuth2 RFC 6749 section 2.3.1](https://tools.ietf.org/html/rfc6749#section-2.3.1). |
| `0` | Means to auto-detect which authentication style the provider wants by trying both ways and caching the successful way for the future. |
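For example, to force HTTP Basic Authorization for the token request, you could add the following entry to the component's `metadata` section (a sketch):

```yaml
  - name: authStyle
    value: "2"
```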
## Dapr configuration
To be applied, the middleware must be referenced in a [configuration]({{< ref configuration-concept.md >}}). See [middleware pipelines]({{< ref "middleware-concept.md#customize-processing-pipeline">}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: oauth2clientcredentials
type: middleware.http.oauth2clientcredentials
```
## Related links
- [Middleware concept]({{< ref middleware-concept.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})

View File

@ -1,15 +1,15 @@
---
type: docs
title: "How-To: Apply OPA policies"
linkTitle: "How-To: Apply OPA policies"
weight: 1000
description: "Use Dapr middleware to apply Open Policy Agent (OPA) policies on incoming requests"
type: docs
title: "Apply Open Policy Agent (OPA) policies"
linkTitle: "Open Policy Agent (OPA)"
weight: 6000
description: "Use middleware to apply Open Policy Agent (OPA) policies on incoming requests"
---
The Dapr Open Policy Agent (OPA) [HTTP middleware](https://github.com/dapr/docs/blob/master/concepts/middleware/README.md) allows applying [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
The Open Policy Agent (OPA) [HTTP middleware]({{< ref middleware-concept.md >}}) applies [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
## Component format
## Middleware Component Definition
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
@ -20,8 +20,8 @@ spec:
type: middleware.http.opa
version: v1
metadata:
# `includedHeaders` is a comma-seperated set of case-insensitive headers to include in the request input.
# Request headers are not passed to the policy by default. Include to recieve incoming request headers in
# `includedHeaders` is a comma-separated set of case-insensitive headers to include in the request input.
# Request headers are not passed to the policy by default. Include to receive incoming request headers in
# the input
- name: includedHeaders
value: "x-my-custom-header, x-jwt-header"
@ -59,7 +59,6 @@ spec:
} {
my_claim := jwt.payload["my-claim"]
}
jwt = { "payload": payload } {
auth_header := input.request.headers["authorization"]
[_, jwt] := split(auth_header, " ")
@ -69,6 +68,30 @@ spec:
You can prototype and experiment with policies using the [official OPA playground](https://play.openpolicyagent.org). For example, [you can find the example policy above here](https://play.openpolicyagent.org/p/oRIDSo6OwE).
## Spec metadata fields
| Field | Details | Example |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------|
| rego | The Rego policy language | See above |
| defaultStatus | The status code to return for denied responses | `"403"` |
| includedHeaders | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"` |
## Dapr configuration
To be applied, the middleware must be referenced in [configuration]({{< ref configuration-concept.md >}}). See [middleware pipelines]({{< ref "middleware-concept.md#customize-processing-pipeline">}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: my-policy
type: middleware.http.opa
```
## Input
This middleware supplies a [`HTTPRequest`](#httprequest) as input.
@ -95,7 +118,7 @@ type HTTPRequest struct {
query map[string][]string
// The request headers
// NOTE: By default, no headers are included. You must specify what headers
// you want to recieve via `spec.metadata.includedHeaders` (see above)
// you want to receive via `spec.metadata.includedHeaders` (see above)
headers map[string]string
// The request scheme (e.g. http, https)
scheme string
@ -122,7 +145,7 @@ default allow = {
}
```
### Changing the Rejected Response Status Code
### Changing the rejected response status code
When rejecting a request, you can override the status code that gets returned. For example, if you wanted to return a `401` instead of a `403`, you could do the following:
@ -135,7 +158,7 @@ default allow = {
}
```
### Adding Response Headers
### Adding response headers
To redirect, add headers and set the `status_code` to the returned result:
@ -151,7 +174,7 @@ default allow = {
}
```
### Adding Request Headers
### Adding request headers
You can also set additional headers on the allowed request:
@ -162,12 +185,12 @@ default allow = false
allow = { "allow": true, "additional_headers": { "X-JWT-Payload": payload } } {
not input.path[0] == "forbidden"
# Where `jwt` is the result of another rule
payload := base64.encode(json.marshal(jwt.payload))
}
```
### Result Structure
### Result structure
```go
type Result bool
// or
@ -183,5 +206,8 @@ type Result struct {
## Related links
- Open Policy Agent: https://www.openpolicyagent.org
- HTTP API Example: https://www.openpolicyagent.org/docs/latest/http-api-authorization/
- [Open Policy Agent](https://www.openpolicyagent.org)
- [HTTP API example](https://www.openpolicyagent.org/docs/latest/http-api-authorization/)
- [Middleware concept]({{< ref middleware-concept.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})

View File

@ -0,0 +1,57 @@
---
type: docs
title: "Rate limiting"
linkTitle: "Rate limiting"
weight: 1000
description: "Use rate limit middleware to limit requests per second"
---
The rate limit [HTTP middleware]({{< ref middleware-concept.md >}}) restricts the maximum number of allowed HTTP requests per second. Rate limiting can protect your application from denial of service (DoS) attacks. DoS attacks can be initiated by malicious third parties, but also by bugs in your software (a.k.a. a "friendly fire" DoS attack).
## Component format
In the following definition, the maximum requests per second are set to 10:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ratelimit
spec:
type: middleware.http.ratelimit
version: v1
metadata:
- name: maxRequestsPerSecond
value: 10
```
## Spec metadata fields
| Field | Details | Example |
|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
| maxRequestsPerSecond | The maximum requests per second by remote IP and path. Something to consider is that **the limit is enforced independently in each Dapr sidecar and not cluster wide** | `10` |
Once the limit is reached, the request will return *HTTP Status code 429: Too Many Requests*.
Alternatively, the [max concurrency setting]({{< ref control-concurrency.md >}}) can be used to rate limit applications and applies to all traffic regardless of remote IP or path.
## Dapr configuration
To be applied, the middleware must be referenced in [configuration]({{< ref configuration-concept.md >}}). See [middleware pipelines]({{< ref "middleware-concept.md#customize-processing-pipeline">}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: ratelimit
type: middleware.http.ratelimit
```
## Related links
- [Control max concurrency]({{< ref control-concurrency.md >}})
- [Middleware concept]({{< ref middleware-concept.md >}})
- [Dapr configuration]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})

View File

@ -0,0 +1,45 @@
---
type: docs
title: "Uppercase request body"
linkTitle: "Uppercase"
weight: 9999
description: "Test your HTTP pipeline is functioning with the uppercase middleware"
---
The uppercase [HTTP middleware]({{< ref middleware-concept.md >}}) converts the body of the request to uppercase letters and is used for testing that the pipeline is functioning. It should only be used for local development.
## Component format
In the following definition, the uppercase middleware is registered:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: uppercase
spec:
type: middleware.http.uppercase
```
This component has no `metadata` to configure.
## Dapr configuration
To be applied, the middleware must be referenced in [configuration]({{< ref configuration-concept.md >}}). See [middleware pipelines]({{< ref "middleware-concept.md#customize-processing-pipeline">}}).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
httpPipeline:
handlers:
- name: uppercase
type: middleware.http.uppercase
```
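To verify the pipeline, you can send a request through the sidecar and confirm that your application receives the body in uppercase. This is a sketch, assuming an application with app-id `myapp` that exposes an `echo` method:

```shell
curl -X POST http://localhost:3500/v1.0/invoke/myapp/method/echo \
  -H "Content-Type: text/plain" \
  -d 'hello dapr'
# the application should receive the body as "HELLO DAPR"
```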
## Related links
- [Middleware concept]({{< ref middleware-concept.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})

View File

@ -1,22 +1,45 @@
---
type: docs
title: "SDKs"
title: "Dapr Software Development Kits (SDKs)"
linkTitle: "SDKs"
weight: 20
description: "Use your favorite languages with Dapr"
no_list: true
---
### .NET
See the [.NET SDK repository](https://github.com/dapr/dotnet-sdk)
The Dapr SDKs are the easiest way for you to get Dapr into your application. Choose your favorite language and get up and running with Dapr in minutes.
### Java
See the [Java SDK repository](https://github.com/dapr/java-sdk)
## SDK packages
### Go
See the [Go SDK repository](https://github.com/dapr/go-sdk)
- **Client SDK**: The Dapr client allows you to invoke Dapr building block APIs and perform actions such as:
- [Invoke]({{< ref service-invocation >}}) methods on other services
- Store and get [state]({{< ref state-management >}})
- [Publish and subscribe]({{< ref pubsub >}}) to message topics
- Interact with external resources through input and output [bindings]({{< ref bindings >}})
- Get [secrets]({{< ref secrets >}}) from secret stores
- Interact with [virtual actors]({{< ref actors >}})
- **Service extensions**: The Dapr service extensions allow you to create services that can:
- Be [invoked]({{< ref service-invocation >}}) by other services
- [Subscribe]({{< ref pubsub >}}) to topics
- **Actor SDK**: The Dapr Actor SDK allows you to build virtual actors with:
- Methods that can be [invoked]({{< ref "howto-actors.md#actor-method-invocation" >}}) by other services
- [State]({{< ref "howto-actors.md#actor-state-management" >}}) that can be stored and retrieved
- [Timers]({{< ref "howto-actors.md#actor-timers" >}}) with callbacks
- Persistent [reminders]({{< ref "howto-actors.md#actor-reminders" >}})
### Python
See the [Python SDK repository](https://github.com/dapr/python-sdk)
## SDK languages
### Javascript
See the [Javascript SDK repository](https://github.com/dapr/js-sdk)
| Language | Status | Client SDK | Service Extensions | Actor SDK |
|----------|:-----:|:----------:|:-----------:|:---------:|
| [.NET](https://github.com/dapr/dotnet-sdk) | Stable | ✔ | ✔ <br />ASP.NET Core | ✔ |
| [Python]({{< ref python >}}) | Stable | ✔ | ✔ <br />[gRPC]({{< ref python-grpc.md >}}) | ✔ <br />[FastAPI]({{< ref python-fastapi.md >}})<br />[Flask]({{< ref python-flask.md >}}) |
| [Java](https://github.com/dapr/java-sdk) | Stable | ✔ | ✔ <br />Spring Boot | ✔ |
| [Go](https://github.com/dapr/go-sdk) | Stable | ✔ | ✔ | |
| [PHP](https://github.com/dapr/php-sdk) | Stable | ✔ | ✔ | ✔ |
| [C++](https://github.com/dapr/cpp-sdk) | In development | ✔ | | |
| [Rust](https://github.com/dapr/rust-sdk) | In development | ✔ | | |
| [JavaScript](https://github.com/dapr/js-sdk) | In development | ✔ | | |
## Further reading
- [Serialization in the Dapr SDKs]({{< ref sdk-serialization.md >}})

View File

@ -2,7 +2,10 @@
type: docs
title: "Serialization in Dapr's SDKs"
linkTitle: "Serialization"
weight: 1000
description: "How Dapr serializes data within the SDKs"
weight: 2000
aliases:
- '/developing-applications/sdks/serialization/'
---
An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization.

View File

@ -3,7 +3,24 @@ type: docs
title: "Getting started with Dapr"
linkTitle: "Getting started"
weight: 20
description: "Get up and running with Dapr"
type: docs
description: "How to get up and running with Dapr in minutes"
no_list: true
---
Welcome to the Dapr getting started guide!
{{% alert title="Dapr Concepts" color="primary" %}}
If you are looking for an introductory overview of Dapr and want to learn more about basic Dapr terminology, visit the [concepts section]({{<ref concepts>}}).
{{% /alert %}}
This guide will walk you through a series of steps to install, initialize and start using Dapr. The recommended way to get started with Dapr is to set up a local development environment (also referred to as [_self-hosted_ mode]({{< ref self-hosted >}})), which includes the Dapr CLI, Dapr sidecar binaries, and some default components that can help you start using Dapr quickly.
The following steps in this guide are:
1. Install the Dapr CLI
1. Initialize Dapr
1. Use the Dapr API
1. Configure a component
1. Explore Dapr quickstarts
<a class="btn btn-primary" href="{{< ref install-dapr-cli.md >}}" role="button">First step: Install the Dapr CLI >></a>

View File

@ -1,228 +0,0 @@
---
type: docs
title: "How-To: Setup Redis"
linkTitle: "How-To: Setup Redis"
weight: 30
description: "Configure Redis for Dapr state management or Pub/Sub"
---
Dapr can use Redis in two ways:
1. As state store component (state.redis) for persistence and restoration
2. As pub/sub component (pubsub.redis) for async style message delivery
## Create a Redis store
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [configuration](#configure-dapr-components) section.
{{< tabs "Self-Hosted" "Kubernetes (Helm)" "Azure Redis Cache" "AWS Redis" "GCP Memorystore" >}}
{{% codetab %}}
Redis is automatically installed in self-hosted environments by the Dapr CLI as part of the initialization process.
{{% /codetab %}}
{{% codetab %}}
You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis
```
Note that you will need Redis version 5 or above, which is what Dapr's pub/sub functionality requires. If you intend to use Redis only as a state store (and not for pub/sub), a lower version can be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster:
```bash
$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
redis-master-0   1/1     Running   0          69s
redis-slave-0    1/1     Running   0          69s
redis-slave-1    1/1     Running   0          22s
```
3. Add `redis-master.default.svc.cluster.local:6379` as the `redisHost` in your [redis.yaml](#configure-dapr-components) file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master.default.svc.cluster.local:6379
```
4. Securely reference the Redis password in your [redis.yaml](#configure-dapr-components) file. For example:
```yaml
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
5. (Alternative) It is **not recommended**, but you can hard code a password instead of using a secretKeyRef. First you'll get the Redis password, which is retrieved slightly differently depending on the OS you're using:
- **Windows**: In Powershell run:
```powershell
PS C:\> $base64pwd=kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}"
PS C:\> $redispassword=[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64pwd))
PS C:\> $base64pwd=""
PS C:\> $redispassword
```
- **Linux/MacOS**: Run:
```bash
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
```
Add this password as the `redisPassword` value in your [redis.yaml](#configure-dapr-components) file. For example:
```yaml
metadata:
- name: redisPassword
value: lhDOkwTlp0
```
{{% /codetab %}}
{{% codetab %}}
This method requires having an Azure Subscription.
1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
1. Fill out the necessary information
1. Click "Create" to kickoff deployment of your Redis instance.
1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key.
1. You'll need the hostname of your Redis instance, which you can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`.
1. Finally, you'll need to add your key and your host to a `redis.yaml` file that Dapr can apply to your cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configure-dapr-components).
As the connection to Azure is encrypted, make sure to add the following block to the `metadata` section of your `redis.yaml` file.
```yaml
metadata:
- name: enableTLS
value: "true"
```
> **NOTE:** Dapr pub/sub uses [Redis streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0 and aren't currently available on Azure Cache for Redis. Consequently, you can use Azure Cache for Redis only for state persistence.
{{% /codetab %}}
{{% codetab %}}
Visit [AWS Redis](https://aws.amazon.com/redis/).
{{% /codetab %}}
{{% codetab %}}
Visit [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/).
{{% /codetab %}}
{{< /tabs >}}
## Configure Dapr components
Dapr can use Redis as a [`statestore` component]({{< ref setup-state-store >}}) for state persistence (`state.redis`) or as a [`pubsub` component]({{< ref setup-pubsub >}}) (`pubsub.redis`). The following YAML files demonstrate how to define each component using either a secretKeyRef (which is preferred) or a plain text password.
### Create component files
#### State store component with secret reference
Create a file called redis-state.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <HOST e.g. redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
#### Pub/sub component with secret reference
Create a file called redis-pubsub.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: <HOST e.g. redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
#### State store component with hard coded password (not recommended)
For development purposes only, create a file called redis-state.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
```
#### Pub/Sub component with hard coded password (not recommended)
For development purposes only, create a file called redis-pubsub.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
```
### Apply the configuration
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
If you initialized Dapr using `dapr init --slim`, the Dapr CLI did not create a Redis instance or a default configuration file for it. Follow [the instructions above](#create-a-redis-store) to create a Redis store. Create the `redis.yaml` following the configuration [instructions](#configure-dapr-components) in a `components` dir and provide the path to the `dapr run` command with the flag `--components-path`.
{{% /codetab %}}
{{% codetab %}}
Run `kubectl apply -f <FILENAME>` for both state and pubsub files:
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml
```
{{% /codetab %}}
{{< /tabs >}}

View File

@ -0,0 +1,231 @@
---
type: docs
title: "How-To: Configure state store and pub/sub message broker"
linkTitle: "(optional) Configure state & pub/sub"
weight: 80
description: "Configure state store and pub/sub message broker components for Dapr"
aliases:
- /getting-started/configure-redis/
---
In order to get up and running with the state and pub/sub building blocks, two components are needed:
1. A state store component for persistence and restoration
2. A pub/sub message broker component for async-style message delivery
A full list of supported components can be found here:
- [Supported state stores]({{< ref supported-state-stores >}})
- [Supported pub/sub message brokers]({{< ref supported-pubsub >}})
The rest of this page describes how to get up and running with Redis.
{{% alert title="Self-hosted mode" color="warning" %}}
When initialized in self-hosted mode, Dapr automatically runs a Redis container and sets up the required component yaml files. You can skip this page and go to the [next steps](#next-steps).
{{% /alert %}}
## Create a Redis store
Dapr can use any Redis instance - either containerized on your local dev machine or a managed cloud service. If you already have a Redis store, move on to the [configuration](#configure-dapr-components) section.
{{< tabs "Self-Hosted" "Kubernetes" "Azure" "AWS" "GCP" >}}
{{% codetab %}}
Redis is automatically installed in self-hosted environments by the Dapr CLI as part of the initialization process. You are all set and can skip to the [next steps](#next-steps).
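If you'd like to double check, you can list the container (a sketch; `dapr_redis` is the container name the CLI uses by default):
```bash
docker ps --filter "name=dapr_redis"
```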
{{% /codetab %}}
{{% codetab %}}
You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis
```
Note that you will need Redis version 5 or above, which is what Dapr's pub/sub functionality requires. If you intend to use Redis only as a state store (and not for pub/sub), a lower version can be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster:
```bash
$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
redis-master-0   1/1     Running   0          69s
redis-slave-0    1/1     Running   0          69s
redis-slave-1    1/1     Running   0          22s
```
Note that the hostname is `redis-master.default.svc.cluster.local:6379`, and a Kubernetes secret, `redis`, is created automatically.
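If you ever need the password itself, you can decode it from that secret:
```bash
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
```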
{{% /codetab %}}
{{% codetab %}}
This method requires having an Azure Subscription.
1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
1. Fill out the necessary information
- Dapr pub/sub uses [Redis streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0. If you would like to use Azure Redis Cache for pub/sub, make sure to set the version to (PREVIEW) 6.
1. Click "Create" to kickoff deployment of your Redis instance.
1. You'll need the hostname of your Redis instance, which you can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`. Note this for later.
1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{% codetab %}}
1. Visit [AWS Redis](https://aws.amazon.com/redis/) to deploy a Redis instance
1. Note the Redis hostname in the AWS portal for use later
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{% codetab %}}
1. Visit [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) to deploy a MemoryStore instance
1. Note the Redis hostname in the GCP portal for use later
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{< /tabs >}}
## Configure Dapr components
Dapr uses components to define what resources to use for building block functionality. These steps go through how to connect the resources you created above to Dapr for state and pub/sub.
In self-hosted mode, component files are automatically created under:
- **Windows**: `%USERPROFILE%\.dapr\components\`
- **Linux/MacOS**: `$HOME/.dapr/components`
For Kubernetes, files can be created in any directory, as they are applied with `kubectl`.
### Create State store component
Create a file named `redis-state.yaml`, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <REPLACE WITH HOSTNAME FROM ABOVE - for Redis on Kubernetes it is redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
This example uses the Kubernetes secret that was created when setting up a cluster with the above instructions.
{{% alert title="Other stores" color="primary" %}}
If using a state store other than Redis, refer to the [supported state stores]({{< ref supported-state-stores >}}) for information on what options to set.
{{% /alert %}}
### Create Pub/sub message broker component
Create a file called redis-pubsub.yaml, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: <REPLACE WITH HOSTNAME FROM ABOVE - for Redis on Kubernetes it is redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
```
This example uses the Kubernetes secret that was created when setting up a cluster with the above instructions.
{{% alert title="Other stores" color="primary" %}}
If using a pub/sub message broker other than Redis, refer to the [supported pub/sub message brokers]({{< ref supported-pubsub >}}) for information on what options to set.
{{% /alert %}}
### Hard coded passwords (not recommended)
For development purposes only, you can skip creating Kubernetes secrets and place passwords directly into the Dapr component file:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
```
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
```
## Apply the configuration
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance you can either:
- Update the existing component files or create new ones in the default components directory
- **Linux/MacOS:** `$HOME/.dapr/components`
- **Windows:** `%USERPROFILE%\.dapr\components`
- Create a new `components` directory in your app folder containing the YAML files and provide the path to the `dapr run` command with the flag `--components-path` (see the sketch below)
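For example, a run using a custom components directory might look like this (a sketch; `myapp` and `./my-components` are placeholder names):
```bash
# Point the sidecar at a custom components directory
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components
```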
{{% alert title="Self-hosted slim mode" color="primary" %}}
If you initialized Dapr in [slim mode]({{< ref self-hosted-no-docker.md >}}) (without Docker) you need to manually create the default directory, or always specify a components directory using `--components-path`.
{{% /alert %}}
{{% /codetab %}}
{{% codetab %}}
Run `kubectl apply -f <FILENAME>` for both state and pubsub files:
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})

View File

@ -0,0 +1,104 @@
---
type: docs
title: "Use the Dapr API"
linkTitle: "Use the Dapr API"
weight: 30
---
After running the `dapr init` command in the [previous step]({{<ref install-dapr-selfhost.md>}}), your local environment has the Dapr sidecar binaries as well as default component definitions for both state management and a message broker (both using Redis). You can now try out some of what Dapr has to offer by using the Dapr CLI to run a Dapr sidecar and calling the state API, which allows you to store and retrieve state. You can learn more about the state building block and how it works in [these docs]({{< ref state-management >}}).
You will now run the sidecar and call the API directly (simulating what an application would do).
## Step 1: Run the Dapr sidecar
One of the most useful Dapr CLI commands is [`dapr run`]({{< ref dapr-run.md >}}). This command launches an application together with a sidecar. For the purpose of this tutorial you'll run the sidecar without an application.
Run the following command to launch a Dapr sidecar that will listen on port 3500 for a blank application named myapp:
```bash
dapr run --app-id myapp --dapr-http-port 3500
```
With this command, no custom component folder was defined, so Dapr uses the default component definitions that were created during the init flow (these can be found under `$HOME/.dapr/components` on Linux or MacOS and under `%USERPROFILE%\.dapr\components` on Windows). These tell Dapr to use the local Redis Docker container as a state store and message broker.
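You can take a quick look at those defaults (Linux/MacOS sketch):
```bash
ls $HOME/.dapr/components
```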
## Step 2: Save state
In a separate terminal run:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
```bash
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "name", "value": "Bruce Wayne"}]' http://localhost:3500/v1.0/state/statestore
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "name", "value": "Bruce Wayne"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
{{% /codetab %}}
{{< /tabs >}}
## Step 3: Get state
Now get the state you just stored using a key with the state management API:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
With the same Dapr instance running from above run:
```bash
curl http://localhost:3500/v1.0/state/statestore/name
```
{{% /codetab %}}
{{% codetab %}}
With the same Dapr instance running from above run:
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/state/statestore/name'
```
{{% /codetab %}}
{{< /tabs >}}
## Step 4: See how the state is stored in Redis
You can look in the Redis container and verify Dapr is using it as a state store. Run the following to use the Redis CLI:
```bash
docker exec -it dapr_redis redis-cli
```
List the redis keys to see how Dapr created a key value pair (with the app-id you provided to `dapr run` as a prefix to the key):
```bash
keys *
```
```
1) "myapp||name"
```
View the state value by running:
```bash
hgetall "myapp||name"
```
```
1) "data"
2) "\"Bruce Wayne\""
3) "version"
4) "1"
```
Exit the redis-cli with:
```bash
exit
```
<a class="btn btn-primary" href="{{< ref get-started-component.md >}}" role="button">Next step: Define a component >></a>

View File

@ -0,0 +1,93 @@
---
type: docs
title: "Define a component"
linkTitle: "Define a component"
weight: 40
---
In the [previous step]({{<ref get-started-api.md>}}) you called the Dapr HTTP API to store and retrieve state from a Redis-backed state store. Dapr knew to use the Redis instance that was configured locally on your machine through default component definition files that were created when Dapr was initialized.
When building an app, you will most likely create your own component file definitions, depending on the building block and the specific component that you'd like to use.
As an example of how to define custom components for your application, you will now create a component definition file to interact with the [secrets building block]({{< ref secrets >}}).
In this guide you will:
- Create a local JSON secret store
- Register the secret store with Dapr using a component definition file
- Obtain the secret using the Dapr HTTP API
## Step 1: Create a JSON secret store
While Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}}), the easiest way to get started is a local JSON file with your secret (note this secret store is meant for development purposes and is not recommended for production use cases as it is not secured).
Begin by saving the following JSON contents into a file named `mysecrets.json`:
```json
{
"my-secret" : "I'm Batman"
}
```
## Step 2: Create a secret store Dapr component
Create a new directory named `my-components` to hold the new component file:
```bash
mkdir my-components
```
Inside this directory create a new file `localSecretStore.yaml` with the following contents:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: my-secret-store
namespace: default
spec:
type: secretstores.local.file
version: v1
metadata:
- name: secretsFile
value: <PATH TO SECRETS FILE>/mysecrets.json
- name: nestedSeparator
value: ":"
```
You can see that the above file definition has a `type: secretstores.local.file`, which tells Dapr to use the local file component as a secret store. The metadata fields provide component-specific information needed to work with this component (in this case, the path to the secret store JSON file).
## Step 3: Run the Dapr sidecar
Run the following command to launch a Dapr sidecar that will listen on port 3500 for a blank application named myapp:
```bash
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components
```
## Step 4: Get a secret
In a separate terminal run:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
```bash
curl http://localhost:3500/v1.0/secrets/my-secret-store/my-secret
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/secrets/my-secret-store/my-secret'
```
{{% /codetab %}}
{{< /tabs >}}
You should see output with the secret you stored in the JSON file.
```
"I'm Batman"
```
<a class="btn btn-primary" href="{{< ref quickstarts.md >}}" role="button">Next step: Explore Dapr quickstarts >></a>

View File

@ -0,0 +1,114 @@
---
type: docs
title: "Install the Dapr CLI"
linkTitle: "Install Dapr CLI"
weight: 10
---
The Dapr CLI is the main tool you'll be using for various Dapr related tasks. You can use it to run an application with a Dapr sidecar, as well as review sidecar logs, list running services, and run the Dapr dashboard. The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments.
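For instance, here are the kinds of commands you'll use once the CLI is installed (a sketch; `myapp` is a placeholder app ID):
```bash
dapr run --app-id myapp --dapr-http-port 3500   # run an app (or a bare sidecar) with Dapr
dapr list                                       # list running Dapr instances
dapr dashboard                                  # start the Dapr dashboard
```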
### Step 1: Install the CLI
Begin by downloading and installing the Dapr CLI:
{{< tabs Linux Windows MacOS Binaries>}}
{{% codetab %}}
This command installs the latest Linux Dapr CLI to `/usr/local/bin`:
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
This Command Prompt command installs the latest Windows Dapr CLI to `C:\dapr` and adds this directory to the User PATH environment variable.
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
{{% /codetab %}}
{{% codetab %}}
This command installs the latest darwin Dapr CLI to `/usr/local/bin`:
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
Or you can install via [Homebrew](https://brew.sh):
```bash
brew install dapr/tap/dapr-cli
```
{{% alert title="Note for M1 Macs" color="primary" %}}
For M1 Macs, Homebrew is not supported. You will need to use the Dapr install script and have the Rosetta 2 amd64 compatibility layer installed. If you do not have it installed already, you can run the following:
```bash
softwareupdate --install-rosetta
```
{{% /alert %}}
{{% /codetab %}}
{{% codetab %}}
Each release of the Dapr CLI includes binaries for various OSes and architectures. These binary versions can be manually downloaded and installed.
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location (see the example after this list).
- For Linux/MacOS - `/usr/local/bin`
- For Windows, create a directory and add this to your System PATH. For example create a directory called `C:\dapr` and add this directory to your User PATH, by editing your system environment variable.
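For example, steps 2 and 3 on Linux might look like this (a sketch; it assumes the amd64 archive was downloaded to the current directory and contains the `dapr` binary):
```bash
tar -xzf dapr_linux_amd64.tar.gz
sudo mv ./dapr /usr/local/bin/dapr
```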
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Verify the installation
You can verify the CLI is installed by restarting your terminal/command prompt and running the following:
```bash
dapr
```
The output should look like this:
```md
         __
    ____/ /___ _____  _____
   / __  / __ '/ __ \/ ___/
  / /_/ / /_/ / /_/ / /
  \__,_/\__,_/ .___/_/
            /_/
===============================
Distributed Application Runtime
Usage:
  dapr [command]

Available Commands:
  completion      Generates shell completion scripts
  components      List all Dapr components. Supported platforms: Kubernetes
  configurations  List all Dapr configurations. Supported platforms: Kubernetes
  dashboard       Start Dapr dashboard. Supported platforms: Kubernetes and self-hosted
  help            Help about any command
  init            Install Dapr on supported hosting platforms. Supported platforms: Kubernetes and self-hosted
  invoke          Invoke a method on a given Dapr application. Supported platforms: Self-hosted
  list            List all Dapr instances. Supported platforms: Kubernetes and self-hosted
  logs            Get Dapr sidecar logs for an application. Supported platforms: Kubernetes
  mtls            Check if mTLS is enabled. Supported platforms: Kubernetes
  publish         Publish a pub-sub event. Supported platforms: Self-hosted
  run             Run Dapr and (optionally) your application side by side. Supported platforms: Self-hosted
  status          Show the health status of Dapr services. Supported platforms: Kubernetes
  stop            Stop Dapr instances and their associated apps. Supported platforms: Self-hosted
  uninstall       Uninstall Dapr runtime. Supported platforms: Kubernetes and self-hosted
  upgrade         Upgrades a Dapr control plane installation in a cluster. Supported platforms: Kubernetes

Flags:
  -h, --help      help for dapr
  -v, --version   version for dapr

Use "dapr [command] --help" for more information about a command.
```
<a class="btn btn-primary" href="{{< ref install-dapr-selfhost.md >}}" role="button">Next step: Initialize Dapr >></a>

View File

@ -0,0 +1,113 @@
---
type: docs
title: "Initialize Dapr in your local environment"
linkTitle: "Init Dapr locally"
weight: 20
aliases:
- /getting-started/install-dapr/
---
Now that you have the [Dapr CLI installed]({{<ref install-dapr-cli.md>}}), it's time to initialize Dapr on your local machine using the CLI.
Dapr runs as a sidecar alongside your application, and in self-hosted mode this means it is a process on your local machine. Therefore, initializing Dapr includes fetching the Dapr sidecar binaries and installing them locally.
In addition, the default initialization process also creates a development environment that helps streamline application development with Dapr. This includes the following steps:
1. Running a **Redis container instance** to be used as a local state store and message broker
1. Running a **Zipkin container instance** for observability
1. Creating a **default components folder** with component definitions for the above
1. Running a **Dapr placement service container instance** for local actor support
{{% alert title="Docker" color="primary" %}}
This recommended development environment requires [Docker](https://docs.docker.com/install/). It is possible to initialize Dapr without a dependency on Docker (see [this guidance]({{<ref self-hosted-no-docker.md>}})) but next steps in this guide assume the recommended development environment.
{{% /alert %}}
### Step 1: Open an elevated terminal
{{< tabs "Linux/MacOS" "Windows">}}
{{% codetab %}}
If you run your Docker commands with sudo, or the install path is `/usr/local/bin` (default install path), you will need to use `sudo` below.
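In that case, prefix the commands in the following steps with `sudo`, for example:
```bash
sudo dapr init
```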
{{% /codetab %}}
{{% codetab %}}
Make sure that you run Command Prompt as administrator (right click, run as administrator)
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Run the init CLI command
Install the latest Dapr runtime binaries:
```bash
dapr init
```
### Step 3: Verify Dapr version
```bash
dapr --version
```
Output should look like this:
```
CLI version: 1.0.0
Runtime version: 1.0.0
```
### Step 4: Verify containers are running
As mentioned above, the `dapr init` command launches several containers that will help you get started with Dapr. Verify this by running:
```bash
docker ps
```
Make sure that instances with `daprio/dapr`, `openzipkin/zipkin`, and `redis` images are all running:
```
CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS         PORTS                              NAMES
0dda6684dc2e   openzipkin/zipkin   "/busybox/sh run.sh"     2 minutes ago   Up 2 minutes   9410/tcp, 0.0.0.0:9411->9411/tcp   dapr_zipkin
9bf6ef339f50   redis               "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:6379->6379/tcp             dapr_redis
8d993e514150   daprio/dapr         "./placement"            2 minutes ago   Up 2 minutes   0.0.0.0:6050->50005/tcp            dapr_placement
```
### Step 5: Verify components directory has been initialized
On `dapr init`, the CLI also creates a default components folder that includes several YAML files with definitions for a state store, pub/sub, and Zipkin. These will be read by the Dapr sidecar, telling it to use the Redis container for state management and messaging and the Zipkin container for collecting traces.
- On Linux/MacOS, Dapr is initialized with default components and files in `$HOME/.dapr`.
- On Windows, Dapr is initialized to `%USERPROFILE%\.dapr\`.
{{< tabs "Linux/MacOS" "Windows">}}
{{% codetab %}}
Run:
```bash
ls $HOME/.dapr
```
You should see:
```
bin components config.yaml
```
{{% /codetab %}}
{{% codetab %}}
Open `%USERPROFILE%\.dapr\` in file explorer:
```powershell
explorer "%USERPROFILE%\.dapr\"
```
You will see the Dapr config, Dapr binaries directory, and the default components directory for Dapr:
<img src="/images/install-dapr-selfhost-windows.png" width=500>
{{% /codetab %}}
{{< /tabs >}}
<a class="btn btn-primary" href="{{< ref get-started-api.md >}}" role="button">Next step: Use the Dapr API >></a>

View File

@ -1,262 +0,0 @@
---
type: docs
title: "How-To: Install Dapr into your environment"
linkTitle: "How-To: Install Dapr"
weight: 20
description: "Install Dapr in your preferred environment"
---
This guide will get you up and running to evaluate Dapr and develop applications. Visit [this page]({{< ref hosting >}}) for a full list of supported platforms with instructions and best practices on running in production.
## Install the Dapr CLI
Begin by downloading and installing the Dapr CLI. This will be used to initialize your environment on your desired platform.
{{< tabs Linux Windows MacOS Binaries>}}
{{% codetab %}}
This command will install the latest Linux Dapr CLI to `/usr/local/bin`:
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
This command will install the latest Windows Dapr CLI to `%USERPROFILE%\.dapr\` and add this directory to the User PATH environment variable:
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
Verify by opening Explorer and entering `%USERPROFILE%\.dapr\` into the address bar. You should see folders for bin, components, and a config file.
{{% /codetab %}}
{{% codetab %}}
This command will install the latest darwin Dapr CLI to `/usr/local/bin`:
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
Or you can install via [Homebrew](https://brew.sh):
```bash
brew install dapr/tap/dapr-cli
```
{{% /codetab %}}
{{% codetab %}}
Each release of Dapr CLI includes various OSes and architectures. These binary versions can be manually downloaded and installed.
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location.
- For Linux/MacOS - `/usr/local/bin`
- For Windows, create a directory and add this to your System PATH. For example create a directory called `c:\dapr` and add this directory to your path, by editing your system environment variable.
{{% /codetab %}}
{{< /tabs >}}
## Install Dapr in self-hosted mode
Running the Dapr runtime in self-hosted mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other Dapr supported environments.
### Prerequisites
- Install [Docker Desktop](https://docs.docker.com/install/)
- Windows users: ensure that `Docker Desktop For Windows` uses Linux containers.
By default Dapr will install with a developer environment using Docker containers to get you started easily. This getting started guide assumes Docker is installed to ensure the best experience. However, Dapr does not depend on Docker to run. Read [this page]({{< ref self-hosted-no-docker.md >}}) for instructions on installing Dapr locally without Docker using slim init.
### Initialize Dapr using the CLI
This step will install the latest Dapr Docker containers and setup a developer environment to help you get started easily with Dapr.
1. Ensure you are in an elevated terminal:
- **Linux/MacOS:** if you run your Docker commands with `sudo`, or the install path is `/usr/local/bin` (the default install path), you need to use `sudo`
- **Windows:** make sure that you run the command terminal as administrator
2. Run `dapr init`
```bash
$ dapr init
⌛ Making the jump to hyperspace...
Downloading binaries and setting up components
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
```
3. Verify installation
From a command prompt run the `docker ps` command and check that the `daprio/dapr`, `openzipkin/zipkin`, and `redis` container images are running:
```bash
$ docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED              STATUS              PORTS                              NAMES
67bc611a118c   daprio/dapr         "./placement"            About a minute ago   Up About a minute   0.0.0.0:6050->50005/tcp            dapr_placement
855f87d10249   openzipkin/zipkin   "/busybox/sh run.sh"     About a minute ago   Up About a minute   9410/tcp, 0.0.0.0:9411->9411/tcp   dapr_zipkin
71cccdce0e8f   redis               "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6379->6379/tcp             dapr_redis
```
4. Visit our [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world) or dive into the [Dapr building blocks]({{< ref building-blocks >}})
### (optional) Install a specific runtime version
You can install or upgrade to a specific version of the Dapr runtime using `dapr init --runtime-version`. You can find the list of versions in [Dapr Release](https://github.com/dapr/dapr/releases).
```bash
# Install v0.11.0 runtime
$ dapr init --runtime-version 0.11.0
# Check the versions of cli and runtime
$ dapr --version
cli version: v0.11.0
runtime version: v0.11.2
```
### Uninstall Dapr in self-hosted mode
This command will remove the placement Dapr container:
```bash
$ dapr uninstall
```
{{% alert title="Warning" color="warning" %}}
This command won't remove the Redis or Zipkin containers by default, just in case you were using them for other purposes. To remove Redis, Zipkin, Actor Placement container, as well as the default Dapr directory located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\`, run:
```bash
$ dapr uninstall --all
```
{{% /alert %}}
> For Linux/MacOS users: if you run your Docker commands with `sudo`, or the install path is `/usr/local/bin` (the default install path), you need to use `sudo dapr uninstall` to remove the Dapr binaries and/or the containers.
## Install Dapr on a Kubernetes cluster
When setting up Kubernetes you can use either the Dapr CLI or Helm.
The following pods will be installed:
- dapr-operator: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)
- dapr-sidecar-injector: Injects Dapr into annotated deployment pods
- dapr-placement: Used for actors only. Creates mapping tables that map actor instances to pods
- dapr-sentry: Manages mTLS between services and acts as a certificate authority
### Setup cluster
You can install Dapr on any Kubernetes cluster. Here are some helpful links:
- [Setup Minikube Cluster]({{< ref setup-minikube.md >}})
- [Setup Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}})
- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
{{% alert title="Note" color="primary" %}}
Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes, but most users should not need to. For more information see [Deploying to a hybrid Linux/Windows Kubernetes cluster]({{<ref kubernetes-hybrid-clusters>}}).
{{% /alert %}}
### Install with Dapr CLI
You can install Dapr to a Kubernetes cluster using the Dapr CLI.
#### Install Dapr
The `-k` flag will initialize Dapr on the Kubernetes cluster in your current context.
```bash
$ dapr init -k
⌛ Making the jump to hyperspace...
Note: To install Dapr using Helm, see here: https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md#using-helm-advanced
✅ Deploying the Dapr control plane to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
```
#### Install to a custom namespace:
The default namespace when initializing Dapr is `dapr-system`. You can override this with the `-n` flag.
```
dapr init -k -n mynamespace
```
#### Install in highly available mode:
For [production scenarios]({{< ref kubernetes-production.md >}}), you can run Dapr with 3 replicas of each control plane pod (with the exception of the Placement pod) in the dapr-system namespace.
```
dapr init -k --enable-ha=true
```
#### Disable mTLS:
Dapr is initialized by default with [mTLS]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}). You can disable it with:
```
dapr init -k --enable-mtls=false
```
#### Uninstall Dapr on Kubernetes
```bash
$ dapr uninstall --kubernetes
```
### Install with Helm (advanced)
You can install Dapr on a Kubernetes cluster using a Helm 3 chart.
{{% alert title="Note" color="primary" %}}
The latest Dapr helm chart no longer supports Helm v2. Please migrate from helm v2 to helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
{{% /alert %}}
#### Install Dapr on Kubernetes
1. Make sure Helm 3 is installed on your machine
2. Add Helm repo
```bash
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
```
3. Create the `dapr-system` namespace on your Kubernetes cluster
```bash
kubectl create namespace dapr-system
```
4. Install the Dapr chart on your cluster in the `dapr-system` namespace.
```bash
helm install dapr dapr/dapr --namespace dapr-system
```
#### Verify installation
Once the chart installation is complete, verify the dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace:
```bash
$ kubectl get pods -n dapr-system -w
NAME                                     READY   STATUS    RESTARTS   AGE
dapr-dashboard-7bd6cbf5bf-xglsr          1/1     Running   0          40s
dapr-operator-7bd6cbf5bf-xglsr           1/1     Running   0          40s
dapr-placement-7f8f76778f-6vhl2          1/1     Running   0          40s
dapr-sidecar-injector-8555576b6f-29cqm   1/1     Running   0          40s
dapr-sentry-9435776c7f-8f7yd             1/1     Running   0          40s
```
#### Uninstall Dapr on Kubernetes
```bash
helm uninstall dapr -n dapr-system
```
> **Note:** See [this page](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr helm charts.
### Sidecar annotations
To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this]({{<ref "kubernetes-annotations.md">}}) how to guide.
### Configure Redis
Unlike Dapr self-hosted, Redis is not pre-installed out of the box on Kubernetes. To install Redis as a state store or as a pub/sub message bus in your Kubernetes cluster, see [How-To: Setup Redis]({{< ref configure-redis.md >}}).

View File

@ -0,0 +1,27 @@
---
type: docs
title: "Try out Dapr quickstarts to learn core concepts"
linkTitle: "Dapr Quickstarts"
weight: 60
description: "Tutorials with code samples that are aimed to get you started quickly with Dapr"
---
The [Dapr Quickstarts](https://github.com/dapr/quickstarts/tree/v1.0.0) are a collection of tutorials with code samples that are aimed to get you started quickly with Dapr, each highlighting a different Dapr capability.
- A good place to start is the hello-world quickstart. It demonstrates how to run Dapr in standalone mode locally on your machine, and highlights state management and service invocation in a simple application.
- Next, if you are familiar with Kubernetes and want to see how to run the same application in a Kubernetes environment, look for the hello-kubernetes quickstart. Other quickstarts, such as pub-sub, bindings, and the distributed-calculator quickstart, explore different Dapr capabilities, include instructions for running both locally and on Kubernetes, and can be completed in any order. A full list of the quickstarts can be found below.
- At any time, you can explore the Dapr documentation or SDK specific samples and come back to try additional quickstarts.
- When you're done, consider exploring the [Dapr samples repository](https://github.com/dapr/samples) for additional code samples contributed by the community that show more advanced or specific usages of Dapr.
## Quickstarts
| Quickstart | Description |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Hello World](https://github.com/dapr/quickstarts/tree/v1.0.0/hello-world) | Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
| [Hello Kubernetes](https://github.com/dapr/quickstarts/tree/v1.0.0/hello-kubernetes) | Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/v1.0.0/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
| [Pub/Sub](https://github.com/dapr/quickstarts/tree/v1.0.0/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
| [Bindings](https://github.com/dapr/quickstarts/tree/v1.0.0/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
| [Middleware](https://github.com/dapr/quickstarts/tree/v1.0.0/middleware) | Demonstrates use of Dapr middleware to enable OAuth 2.0 authorization. |
| [Observability](https://github.com/dapr/quickstarts/tree/v1.0.0/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
| [Secret Store](https://github.com/dapr/quickstarts/tree/v1.0.0/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |

View File

@ -0,0 +1,77 @@
---
type: docs
title: "Component schema"
linkTitle: "Component schema"
weight: 100
description: "The basic schema for a Dapr component"
---
Dapr defines and registers components using a [CustomResourceDefinition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a CRD and can be applied to any hosting environment where Dapr is running, not just Kubernetes.
## Format
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: [COMPONENT-NAME]
namespace: [COMPONENT-NAMESPACE]
spec:
type: [COMPONENT-TYPE]
version: v1
initTimeout: [TIMEOUT-DURATION]
ignoreErrors: [BOOLEAN]
metadata:
- name: [METADATA-NAME]
value: [METADATA-VALUE]
```
## Fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| apiVersion | Y | The version of the Dapr (and Kubernetes if applicable) API you are calling | `dapr.io/v1alpha1`
| kind | Y | The type of CRD. For components it must always be `Component` | `Component`
| **metadata** | - | **Information about the component registration** |
| metadata.name | Y | The name of the component | `prod-statestore`
| metadata.namespace | N | The namespace for the component for hosting environments with namespaces | `myapp-namespace`
| **spec** | - | **Detailed information on the component resource**
| spec.type | Y | The type of the component | `state.redis`
| spec.version | Y | The version of the component | `v1`
| spec.initTimeout | N | The timeout duration for the initialization of the component. Default is 30s | `5m`, `1h`, `20s`
| spec.ignoreErrors | N | Tells the Dapr sidecar to continue initialization if the component fails to load. Default is false | `false`
| **spec.metadata** | - | **A key/value pair of component specific configuration. See your component definition for fields**|
### Special metadata values
Metadata values can contain a `{uuid}` tag that is replaced with a randomly generated UUID when the Dapr sidecar starts up. A new UUID is generated on every start up. It can be used, for example, to have a pod on Kubernetes with multiple application instances consuming a [shared MQTT subscription]({{< ref "setup-mqtt.md" >}}). Below is an example of using the `{uuid}` tag.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.mqtt
version: v1
metadata:
- name: consumerID
value: "{uuid}"
- name: url
value: "tcp://admin:public@localhost:1883"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
```
## Further reading
- [Components concept]({{< ref components-concept.md >}})
- [Reference secrets in component definitions]({{< ref component-secrets.md >}})
- [Supported state stores]({{< ref supported-state-stores >}})
- [Supported pub/sub brokers]({{< ref supported-pubsub >}})
- [Supported secret stores]({{< ref supported-secret-stores >}})
- [Supported bindings]({{< ref supported-bindings >}})
- [Set component scopes]({{< ref component-scopes.md >}})

View File

@ -2,17 +2,55 @@
type: docs
title: "How-To: Scope components to one or more applications"
linkTitle: "How-To: Set component scopes"
weight: 200
description: "Limit component access to particular Dapr instances"
---
Dapr components are namespaced (separate from the Kubernetes namespace concept), meaning a Dapr runtime instance can only access components that have been deployed to the same namespace.
When Dapr runs, it matches its own configured namespace with the namespace of the components that it loads, and initializes only the ones matching its namespace. All other components in a different namespace are not loaded.
## Namespaces
Namespaces can be used to limit component access to particular Dapr instances.
### Example of component namespacing in Kubernetes
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
In self hosted mode, a developer can specify the namespace to a Dapr instance by setting the `NAMESPACE` environment variable.
If the `NAMESPACE` environment variable is set, Dapr does not load any component that does not specify the same namespace in its metadata.
For example, given this component in the `production` namespace:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: production
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: redis-master:6379
```
To tell Dapr which namespace it is deployed to, set the environment variable:
MacOS/Linux:
```bash
export NAMESPACE=production
# run Dapr as usual
```
Windows:
```powershell
setx NAMESPACE "production"
# run Dapr as usual
```
{{% /codetab %}}
{{% codetab %}}
Let's consider the following component in Kubernetes:
```yaml
@ -30,32 +68,30 @@ spec:
```
In this example, the Redis component is only accessible to Dapr instances running inside the `production` namespace.
{{% /codetab %}}
{{< /tabs >}}
## Using namespaces with service invocation
When using service invocation to call an application in a namespace, you have to qualify the app ID with the namespace. For example, calling the `ping` method on `myapp`, which is scoped to the `production` namespace, looks like this:
```bash
https://localhost:3500/v1.0/invoke/myapp.production/method/ping
```
Or, using a curl command from an external DNS address (in this case `api.demo.dapr.team`):
MacOS/Linux:
```
curl -i -d '{ "message": "hello" }' \
 -H "Content-type: application/json" \
 -H "dapr-api-token: ${API_TOKEN}" \
 https://api.demo.dapr.team/v1.0/invoke/myapp.production/method/ping
```
## Using namespaces with pub/sub
Read [Pub/Sub and namespaces]({{< ref "pubsub-namespaces.md" >}}) for more information on scoping pub/sub components.
## Application access to components with scopes
@ -83,4 +119,10 @@ scopes:
## Example
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=1763" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=1763" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Related links
- [Configure Pub/Sub components with multiple namespaces]({{< ref "pubsub-namespaces.md" >}})
- [Use secret scoping]({{< ref "secrets-scopes.md" >}})
- [Limit the secrets that can be read from secret stores]({{< ref "secret-scope.md" >}})

View File

@ -1,8 +1,8 @@
---
type: docs
title: "How-To: Reference secret stores in components"
title: "How-To: Reference secrets in components"
linkTitle: "How-To: Reference secrets"
weight: 300
description: "How to securly reference secrets from a component definition"
---
@ -18,9 +18,98 @@ When running in Kubernetes, if the `auth.secretStore` is empty, the Kubernetes s
Go to [this]({{< ref "howto-secrets.md" >}}) link to see all the secret stores supported by Dapr, along with information on how to configure and use them.
## Referencing secrets
While you have the option to use plain text secrets, this is not recommended for production:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: MyPassword
```
Instead create the secret in your secret store and reference it in the component definition:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
secretKeyRef:
name: redis-secret
key: redis-password
auth:
secretStore: <SECRET_STORE_NAME>
```
`SECRET_STORE_NAME` is the name of the configured [secret store component]({{< ref supported-secret-stores >}}). When running in Kubernetes and using a Kubernetes secret store, the field `auth.secretStore` defaults to `kubernetes` and can be left empty.
The above component definition tells Dapr to extract a secret named `redis-secret` from the defined secret store and assign the value of the `redis-password` key in the secret to the `redisPassword` field in the Component.
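For instance, with a Kubernetes secret store, the `redis-secret` referenced above could be created like this (the password value is a placeholder):
```bash
kubectl create secret generic redis-secret --from-literal=redis-password=*********
```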
## Example
### Referencing a Kubernetes secret
The following example shows you how to create a Kubernetes secret to hold the connection string for an Event Hubs binding.
1. First, create the Kubernetes secret:
```bash
kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
```
2. Next, reference the secret in your binding:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: eventhubs
namespace: default
spec:
type: bindings.azure.eventhubs
version: v1
metadata:
- name: connectionString
secretKeyRef:
name: eventhubs-secret
key: connectionString
```
3. Finally, apply the component to the Kubernetes cluster:
```bash
kubectl apply -f ./eventhubs.yaml
```
## Scoping access to secrets
Dapr can restrict access to secrets in a secret store using its configuration. Read [How To: Use secret scoping]({{< ref "secrets-scopes.md" >}}) and [How-To: Limit the secrets that can be read from secret stores]({{< ref "secret-scope.md" >}}) for more information. This is the recommended way to limit access to secrets using Dapr.
## Kubernetes permissions
### Default namespace
When running in Kubernetes, Dapr defines a default Role and RoleBinding during installation that grant access to secrets from the Kubernetes secret store in the `default` namespace. For Dapr enabled apps that fetch secrets from the `default` namespace, a secret can be defined and referenced in components as shown in the example above.
### Non-default namespaces
If your Dapr enabled apps are using components that fetch secrets from non-default namespaces, apply the following resources to that namespace:
```yaml
---
@ -49,82 +138,13 @@ roleRef:
apiGroup: rbac.authorization.k8s.io
```
These resources grant Dapr permissions to get secrets from the Kubernetes secret store for the namespace defined in the Role and RoleBinding.
{{% alert title="Note" color="warning" %}}
In production scenarios, to limit Dapr's access to certain secret resources alone, you can use the `resourceNames` field. See this [link](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) for further explanation.
{{% /alert %}}
## Related links
- [Use secret scoping]({{< ref "secrets-scopes.md" >}})
- [Limit the secrets that can be read from secret stores]({{< ref "secret-scope.md" >}})

View File

@ -3,56 +3,62 @@ type: docs
title: "Supported external bindings"
linkTitle: "Supported bindings"
weight: 200
description: The supported external systems that interface with Dapr as input/output bindings
no_list: true
---
Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.
### Generic
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Apple Push Notifications (APN)]({{< ref apns.md >}}) | | ✅ | Alpha |
| [Cron (Scheduler)]({{< ref cron.md >}}) | ✅ | ✅ | Alpha |
| [HTTP]({{< ref http.md >}}) | | ✅ | Alpha |
| [InfluxDB]({{< ref influxdb.md >}}) | | ✅ | Alpha |
| [Kafka]({{< ref kafka.md >}}) | ✅ | ✅ | Alpha |
| [Kubernetes Events]({{< ref "kubernetes-binding.md" >}}) | ✅ | | Alpha |
| [MQTT]({{< ref mqtt.md >}}) | ✅ | ✅ | Alpha |
| [MySQL]({{< ref mysql.md >}}) | | ✅ | Alpha |
| [PostgreSql]({{< ref postgres.md >}}) | | ✅ | Alpha |
| [Postmark]({{< ref postmark.md >}}) | | ✅ | Alpha |
| [RabbitMQ]({{< ref rabbitmq.md >}}) | ✅ | ✅ | Alpha |
| [Redis]({{< ref redis.md >}}) | | ✅ | Alpha |
| [SMTP]({{< ref smtp.md >}}) | | ✅ | Alpha |
| [Twilio]({{< ref twilio.md >}}) | | ✅ | Alpha |
| [Twitter]({{< ref twitter.md >}}) | ✅ | ✅ | Alpha |
| [SendGrid]({{< ref sendgrid.md >}}) | | ✅ | Alpha |
### Amazon Web Service (AWS)
### Alibaba Cloud
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [AWS DynamoDB]({{< ref dynamodb.md >}}) | | ✅ | Experimental |
| [AWS S3]({{< ref s3.md >}}) | | ✅ | Experimental |
| [AWS SNS]({{< ref sns.md >}}) | | ✅ | Experimental |
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Experimental |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Experimental |
| [Alibaba Cloud OSS]({{< ref alicloudoss.md >}}) | | ✅ | Alpha |
### Amazon Web Services (AWS)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [AWS DynamoDB]({{< ref dynamodb.md >}}) | | ✅ | Alpha |
| [AWS S3]({{< ref s3.md >}}) | | ✅ | Alpha |
| [AWS SNS]({{< ref sns.md >}}) | | ✅ | Alpha |
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Alpha |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Alpha |
### Google Cloud Platform (GCP)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [GCP Cloud Pub/Sub]({{< ref gcppubsub.md >}}) | ✅ | ✅ | Experimental |
| [GCP Storage Bucket]({{< ref gcpbucket.md >}}) | | ✅ | Experimental |
| [GCP Cloud Pub/Sub]({{< ref gcppubsub.md >}}) | ✅ | ✅ | Alpha |
| [GCP Storage Bucket]({{< ref gcpbucket.md >}}) | | ✅ | Alpha |
### Microsoft Azure
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Azure Blob Storage]({{< ref blobstorage.md >}}) | | ✅ | Experimental |
| [Azure EventHubs]({{< ref eventhubs.md >}}) | ✅ | ✅ | Experimental |
| [Azure CosmosDB]({{< ref cosmosdb.md >}}) | | ✅ | Experimental |
| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Experimental |
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Experimental |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Experimental |
| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Experimental |
| [Azure Blob Storage]({{< ref blobstorage.md >}}) | | ✅ | Alpha |
| [Azure CosmosDB]({{< ref cosmosdb.md >}}) | | ✅ | Alpha |
| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Alpha |
| [Azure Event Hubs]({{< ref eventhubs.md >}}) | ✅ | ✅ | Alpha |
| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Alpha |
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Alpha |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Alpha |

View File

@ -0,0 +1,139 @@
---
type: docs
title: "Alibaba Cloud Object Storage Service binding spec"
linkTitle: "Alibaba Cloud Object Storage"
description: "Detailed documentation on the Alibaba Cloud Object Storage binding component"
---
## Component format
To set up an Alibaba Cloud Object Storage binding, create a component of type `bindings.alicloud.oss`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: alicloudobjectstorage
namespace: default
spec:
type: bindings.alicloud.oss
version: v1
metadata:
- name: endpoint
value: "[endpoint]"
- name: accessKeyID
value: "[key-id]"
- name: accessKey
value: "[access-key]"
- name: bucket
value: "[bucket]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|---------------|----------|---------|---------|---------|
| `endpoint` | Y | Output | Alicloud OSS endpoint. | `"https://oss-cn-hangzhou.aliyuncs.com"` |
| `accessKeyID` | Y | Output | Access key ID credential. |
| `accessKey` | Y | Output | Access key credential. |
| `bucket` | Y | Output | Name of the storage bucket. |
## Binding support
This component supports **output binding** with the following operations:
- `create`: [Create object](#create-object)
### Create object
To perform a create object operation, invoke the binding with a `POST` method and the following JSON body:
```json
{
"operation": "create",
"data": "YOUR_CONTENT"
}
```
{{% alert title="Note" color="primary" %}}
By default, a random UUID is auto-generated as the object key. See the [metadata section](#metadata-information) below on how to set the key for the object.
{{% /alert %}}
#### Example
**Saving to a randomly generated UUID file**
{{< tabs "Windows" "Linux/MacOS" >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
<br />
**Saving to a specific file**
{{< tabs "Windows" "Linux/MacOS" >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-key\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-key" } }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
{{% alert title="Note" color="primary" %}}
Windows CMD requires escaping the `"` character.
{{% /alert %}}
## Metadata information
### Object key
By default, the Alicloud OSS output binding will auto-generate a UUID as the object key.
You can set the key with the following metadata:
```json
{
"data": "file content",
"metadata": {
"key": "my-key"
},
"operation": "create"
}
```
## Related links
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -1,4 +1,3 @@
---
type: docs
title: "Apple Push Notification Service binding spec"
@ -6,7 +5,9 @@ linkTitle: "Apple Push Notification Service"
description: "Detailed documentation on the Apple Push Notification Service binding component"
---
## Configuration
## Component format
To set up the Apple Push Notifications binding, create a component of type `bindings.apns`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -19,7 +20,7 @@ spec:
version: v1
metadata:
- name: development
value: <true | false>
value: <bool>
- name: key-id
value: <APPLE_KEY_ID>
- name: team-id
@ -29,13 +30,68 @@ spec:
name: <SECRET>
key: <SECRET-KEY-NAME>
```
## Spec metadata fields
- `development` tells the binding which APNs service to use. Set to `true` to use the development service or `false` to use the production service. If not specified, the binding will default to production.
- `key-id` is the identifier for the private key from the Apple Developer Portal.
- `team-id` is the identifier for the organization or author from the Apple Developer Portal.
- `private-key` is a PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration.
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:| ----------------|---------|---------|
| development | Y | Output | Tells the binding which APNs service to use. Set to `"true"` to use the development service or `"false"` to use the production service. Default: `"true"` | `"true"` |
| key-id | Y | Output | The identifier for the private key from the Apple Developer Portal | `"private-key-id"` |
| team-id | Y | Output | The identifier for the organization or author from the Apple Developer Portal | `"team-id"` |
| private-key | Y | Output | A PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration. See [here](#private-key) for more details | `"pem file"` |
## Request Format
### Private key
The APNS binding needs a cryptographic private key in order to generate authentication tokens for the APNS service.
The private key can be generated from the Apple Developer Portal and is provided as a PKCS #8 file with the private key stored in PEM format.
The private key should be stored in the Dapr secret store and not stored directly in the binding's configuration file.
A sample configuration file for the APNS binding is shown below:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: apns
namespace: default
spec:
type: bindings.apns
metadata:
- name: development
value: false
- name: key-id
value: PUT-KEY-ID-HERE
- name: team-id
value: PUT-APPLE-TEAM-ID-HERE
- name: private-key
secretKeyRef:
name: apns-secrets
key: private-key
```
If using Kubernetes, a sample secret configuration may look like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: apns-secrets
namespace: default
stringData:
private-key: |
-----BEGIN PRIVATE KEY-----
KEY-DATA-GOES-HERE
-----END PRIVATE KEY-----
```
## Binding support
This component supports **output binding** with the following operations:
- `create`
## Push notification format
The APNS binding is a pass-through wrapper over the Apple Push Notification Service: it sends requests directly to the APNs service without any translation.
It is therefore important to understand the payload for push notifications expected by the APNS service.
The payload format is documented [here](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/generating_a_remote_notification).
### Request format
```json
{
@ -61,7 +117,7 @@ The `data` object contains a complete push notification specification as describ
Besides the `device-token` value, the HTTP headers specified in the [Apple documentation](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/sending_notification_requests_to_apns) can be sent as metadata fields and will be included in the HTTP request to the APNs service.
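As an illustrative sketch only (the port, binding name, device token, and alert text are placeholders), a complete request to the binding could look like:

```bash
curl -d '{
      "operation": "create",
      "metadata": {
        "device-token": "PUT-DEVICE-TOKEN-HERE"
      },
      "data": {
        "aps": {
          "alert": "A sample push notification"
        }
      }
    }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```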
## Response Format
### Response format
```json
{
@ -69,6 +125,10 @@ Besides the `device-token` value, the HTTP headers specified in the [Apple docum
}
```
## Output Binding Supported Operations
## Related links
* `create`
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "Azure Blob Storage"
description: "Detailed documentation on the Azure Blob Storage binding component"
---
## Setup Dapr component
## Component format
To set up the Azure Blob Storage binding, create a component of type `bindings.azure.blobstorage`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -23,39 +26,130 @@ spec:
value: ***********
- name: container
value: container1
- name: decodeBase64
value: <bool>
- name: getBlobRetryCount
value: <integer>
```
- `storageAccount` is the Blob Storage account name.
- `storageAccessKey` is the Blob Storage access key.
- `container` is the name of the Blob Storage container to write to.
- `decodeBase64` optional configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). "true" is the only allowed positive value. Other positive variations like "True" are not acceptable.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
### Create Blob
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|--------|---------|---------|
| storageAccount | Y | Output | The Blob Storage account name | `"myexampleaccount"` |
| storageAccessKey | Y | Output | The Blob Storage access key | `"access-key"` |
| container | Y | Output | The name of the Blob Storage container to write to | `"myexamplecontainer"` |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `"true"` is the only allowed positive value. Other positive variations like `"True"` are not acceptable. Defaults to `"false"` | `"true"`, `"false"` |
| getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to `"10"` | `"1"`, `"2"` |
To perform a get blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
## Binding support
This component supports **output binding** with the following operations:
- `create` : [Create blob](#create-blob)
- `get` : [Get blob](#get-blob)
### Create blob
To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
> Note: by default, a random UUID is generated as the blob name. See the [metadata section](#metadata-information) below on how to set the blob name.
```json
{
"operation": "create",
"data": {
"field1": "value1"
}
"data": "YOUR_CONTENT"
}
```
#### Example:
#### Examples
{{< tabs Windows Linux >}}
**Saving to a randomly generated UUID file**
{{% codetab %}}
On Windows, use the command prompt (PowerShell has a different escaping mechanism)
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
curl -d '{ "operation": "create", "data": { "field1": "value1" }}' \
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
**Saving to a specific file**
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"blobName\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "blobName": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
**Saving a file**
To upload a file, encode it as Base64 and let the binding know to deserialize it:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: bindings.azure.blobstorage
version: v1
metadata:
- name: storageAccount
value: myStorageAccountName
- name: storageAccessKey
value: ***********
- name: container
value: container1
- name: decodeBase64
value: "true"
```
Then you can upload it as you would normally:
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"blobName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "blobName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
@ -68,7 +162,7 @@ The response body will contain the following JSON:
```
### Get Blob
### Get blob
To perform a get blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
@ -81,22 +175,32 @@ To perform a get blob operation, invoke the Azure Blob Storage binding with a `P
}
```
#### Example:
#### Example
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}

#### Response
The response body will contain the value stored in the blob object.
The response body contains the value stored in the blob object.
## Metadata information
By default the Azure Blob Storage output binding will auto generate a UUID as blob filename and not assign any system or custom metadata to it. It is configurable in the Metadata property of the message (all optional).
By default the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. Both are configurable in the metadata property of the message (all optional).
Applications publishing to an Azure Blob Storage output binding should send a message with the following contract:
Applications publishing to an Azure Blob Storage output binding should send a message with the following format:
```json
{
"data": "file content",
@ -115,6 +219,8 @@ Applications publishing to an Azure Blob Storage output binding should send a me
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "Azure CosmosDB"
description: "Detailed documentation on the Azure CosmosDB binding component"
---
## Setup Dapr component
## Component format
To set up the Azure CosmosDB binding, create a component of type `bindings.azure.cosmosdb`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -29,22 +32,32 @@ spec:
value: message
```
- `url` is the CosmosDB url.
- `masterKey` is the CosmosDB account master key.
- `database` is the name of the CosmosDB database.
- `collection` is name of the collection inside the database.
- `partitionKey` is the name of the partitionKey to extract from the payload.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|--------|---------|---------|
| url | Y | Output | The CosmosDB url | `"https://******.documents.azure.com:443/"` |
| masterKey | Y | Output | The CosmosDB account master key | `"master-key"` |
| database | Y | Output | The name of the CosmosDB database | `"OrderDb"` |
| collection | Y | Output | The name of the container inside the database. | `"Orders"` |
| partitionKey | Y | Output | The name of the partition key to extract from the payload; it is used as the partition key in the container | `"OrderId"`, `"message"` |
For more information see [Azure Cosmos DB resource model](https://docs.microsoft.com/en-us/azure/cosmos-db/account-databases-containers-items).
## Binding support
This component supports **output binding** with the following operations:
- `create`
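As a sketch, assuming a component configured with `partitionKey: message` as above, a document can be created by invoking the binding with a payload that contains that partition key field (all values are placeholders):

```bash
curl -d '{ "operation": "create", "data": { "id": "order-1", "message": "my-partition-value" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```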
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "Cron"
description: "Detailed documentation on the cron binding component"
---
## Setup Dapr component
## Component format
To set up the cron binding, create a component of type `bindings.cron`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -21,7 +24,13 @@ spec:
value: "@every 15m" # valid cron schedule
```
## Schedule Format
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-------|--------|---------|
| schedule | Y | Input/Output | The valid cron schedule to use. See [this](#schedule-format) for more details | `"@every 15m"`
### Schedule Format
The Dapr cron binding supports the following formats:
@ -48,8 +57,18 @@ For ease of use, the Dapr cron binding also supports few shortcuts:
* `@every 15s` where `s` is seconds, `m` minutes, and `h` hours
* `@daily` or `@hourly` which runs at that period from the time the binding is initialized
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `delete`
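As a sketch (the port and binding name are placeholders), the `delete` operation can be invoked to stop the schedule:

```bash
curl -d '{ "operation": "delete" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```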
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "AWS DynamoDB"
description: "Detailed documentation on the AWS DynamoDB binding component"
---
## Setup Dapr component
## Component format
To set up the AWS DynamoDB binding, create a component of type `bindings.aws.dynamodb`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.

See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes.
```yaml
@ -30,21 +33,33 @@ spec:
value: *****************
```
- `table` is the DynamoDB table name.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| table | Y | Output | The DynamoDB table name | `"items"` |
| region | Y | Output | The specific AWS region the AWS DynamoDB instance is deployed in | `"us-east-1"` |
| accessKey | Y | Output | The AWS Access Key to access this resource | `"key"` |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| sessionToken | N | Output | The AWS session token to use | `"sessionToken"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
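As a sketch (the item fields are placeholders and should match your table's key schema), a `create` operation writes the JSON in `data` as an item to the configured table:

```bash
curl -d '{ "operation": "create", "data": { "id": "123", "description": "example item" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```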
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -5,9 +5,11 @@ linkTitle: "Azure Event Grid"
description: "Detailed documentation on the Azure Event Grid binding component"
---
See [this](https://docs.microsoft.com/en-us/azure/event-grid/) for Azure Event Grid documentation.
## Component format
## Setup Dapr component
To set up the Azure Event Grid binding, create a component of type `bindings.azure.eventgrid`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [this](https://docs.microsoft.com/en-us/azure/event-grid/) for Azure Event Grid documentation.
```yml
apiVersion: dapr.io/v1alpha1
@ -47,28 +49,36 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Input Binding Metadata
- `tenantId` is the Azure tenant id in which this Event Grid Event Subscription should be created
- `subscriptionId` is the Azure subscription id in which this Event Grid Event Subscription should be created
- `clientId` is the client id that should be used by the binding to create or update the Event Grid Event Subscription
- `clientSecret` is the client secret that should be used by the binding to create or update the Event Grid Event Subscription
- `subscriberEndpoint` is the https (required) endpoint in which Event Grid will handshake and send Cloud Events. If you aren't re-writing URLs on ingress, it should be in the form of: `https://[YOUR HOSTNAME]/api/events` If testing on your local machine, you can use something like [ngrok](https://ngrok.com) to create a public endpoint.
- `handshakePort` is the container port that the input binding will listen on for handshakes and events
- `scope` is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, or a resource group, or a top level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
- '/subscriptions/{subscriptionId}/' for a subscription
- '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for a resource group
- '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}' for a resource
- '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}' for an Event Grid topic
> Values in braces {} should be replaced with actual values.
- `eventSubscriptionName` (Optional) is the name of the event subscription. Event subscription names must be between 3 and 64 characters in length and should use alphanumeric letters only.
## Spec metadata fields
## Output Binding Metadata
- `accessKey` is the Access Key to be used for publishing an Event Grid Event to a custom topic
- `topicEndpoint` is the topic endpoint in which this output binding should publish events
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| tenantId | Y | Input | The Azure tenant id in which this Event Grid Event Subscription should be created | `"tenantId"` |
| subscriptionId | Y | Input | The Azure subscription id in which this Event Grid Event Subscription should be created | `"subscriptionId"` |
| clientId | Y | Input | The client id that should be used by the binding to create or update the Event Grid Event Subscription | `"clientId"` |
| clientSecret | Y | Input | The client secret that should be used by the binding to create or update the Event Grid Event Subscription | `"clientSecret"` |
| subscriberEndpoint | Y | Input | The https endpoint on which Event Grid will handshake and send Cloud Events. If you aren't re-writing URLs on ingress, it should be in the form of `https://[YOUR HOSTNAME]/api/events`. If testing on your local machine, you can use something like [ngrok](https://ngrok.com) to create a public endpoint. | `"https://[YOUR HOSTNAME]/api/events"` |
| handshakePort | Y | Input | The container port that the input binding will listen on for handshakes and events | `"9000"` |
| scope | Y | Input | The identifier of the resource to which the event subscription needs to be created or updated. See [here](#scope) for more details | `"/subscriptions/{subscriptionId}/"` |
| eventSubscriptionName | N | Input | The name of the event subscription. Event subscription names must be between 3 and 64 characters in length and should use alphanumeric letters only | `"name"` |
| accessKey | Y | Output | The Access Key to be used for publishing an Event Grid Event to a custom topic | `"accessKey"` |
| topicEndpoint | Y | Output | The topic endpoint in which this output binding should publish events | `"topic-endpoint"` |
## Output Binding Supported Operations
- create
### Scope
Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, or a resource group, or a top level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
- `'/subscriptions/{subscriptionId}/'` for a subscription
- `'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}'` for a resource group
- `'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}'` for a resource
- `'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}'` for an Event Grid topic
> Values in braces {} should be replaced with actual values.
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`
## Additional information
Event Grid Binding creates an [event subscription](https://docs.microsoft.com/en-us/azure/event-grid/concepts#event-subscriptions) when Dapr initializes. Your Service Principal needs to have the RBAC permissions to enable this.
@ -101,7 +111,7 @@ ngrok http -host-header=localhost 9000
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```
### Testing om Kubernetes
### Testing on Kubernetes
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks. Self signed certificates won't do. In order to enable traffic from public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
@ -238,7 +248,9 @@ $ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,9 +5,11 @@ linkTitle: "Azure Event Hubs"
description: "Detailed documentation on the Azure Event Hubs binding component"
---
See [this](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub.
## Component format
## Setup Dapr component
To set up the Azure Event Hubs binding, create a component of type `bindings.azure.eventhubs`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [this](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub.
```yaml
apiVersion: dapr.io/v1alpha1
@ -33,23 +35,31 @@ spec:
value: 0
```
- `connectionString` is the [EventHubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature). Note that this is the EventHub itself and not the EventHubs namespace. Make sure to use the child EventHub shared access policy connection string.
- `consumerGroup` is the name of an [EventHubs Consumer Group](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#consumer-groups) to listen on.
- `storageAccountName` Is the name of the account of the Azure Storage account to persist checkpoints data on.
- `storageAccountKey` Is the account key for the Azure Storage account to persist checkpoints data on.
- `storageContainerName` Is the name of the container in the Azure Storage account to persist checkpoints data on.
- `partitionID` (Optional) ID of the partition to send and receive events.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| connectionString | Y | Output | The [EventHubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature). Note that this is the EventHub itself and not the EventHubs namespace. Make sure to use the child EventHub shared access policy connection string | `"Endpoint=sb://****"` |
| consumerGroup | Y | Output | The name of an [EventHubs Consumer Group](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#consumer-groups) to listen on | `"group1"` |
| storageAccountName | Y | Output | The name of the Azure Storage account to persist checkpoint data on | `"accountName"` |
| storageAccountKey | Y | Output | The account key for the Azure Storage account to persist checkpoint data on | `"accountKey"` |
| storageContainerName | Y | Output | The name of the container in the Azure Storage account to persist checkpoint data on | `"containerName"` |
| partitionID | N | Output | ID of the partition to send and receive events | `"0"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
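As a sketch (the port and binding name are placeholders), a `create` operation publishes the `data` payload as an event to the configured Event Hub:

```bash
curl -d '{ "operation": "create", "data": "Hello Event Hubs" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```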
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "GCP Storage Bucket"
description: "Detailed documentation on the GCP Storage Bucket binding component"
---
## Setup Dapr component
## Component format
To set up the GCP Storage Bucket binding, create a component of type `bindings.gcp.bucket`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -41,28 +44,36 @@ spec:
value: PRIVATE KEY
```
- `bucket` is the bucket name.
- `type` is the GCP credentials type.
- `project_id` is the GCP project id.
- `private_key_id` is the GCP private key id.
- `client_email` is the GCP client email.
- `client_id` is the GCP client id.
- `auth_uri` is Google account oauth endpoint.
- `token_uri` is Google account token uri.
- `auth_provider_x509_cert_url` is the GCP credentials cert url.
- `client_x509_cert_url` is the GCP credentials project x509 cert url.
- `private_key` is the GCP credentials private key.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| bucket | Y | Output | The bucket name | `"mybucket"` |
| type | Y | Output | The GCP credentials type | `"service_account"` |
| project_id | Y | Output | GCP project id | `"projectId"` |
| private_key_id | Y | Output | GCP private key id | `"privateKeyId"` |
| private_key | Y | Output | GCP credentials private key. Replace with x509 cert | `"12345-12345"` |
| client_email | Y | Output | GCP client email | `"client@email.com"` |
| client_id | Y | Output | GCP client id | `"0123456789-0123456789"` |
| auth_uri | Y | Output | Google account OAuth endpoint | `"https://accounts.google.com/o/oauth2/auth"` |
| token_uri | Y | Output | Google account token uri | `"https://oauth2.googleapis.com/token"` |
| auth_provider_x509_cert_url | Y | Output | GCP credentials cert url | `"https://www.googleapis.com/oauth2/v1/certs"` |
| client_x509_cert_url | Y | Output | GCP credentials project x509 cert url | `"https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
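As a sketch (the port and binding name are placeholders), a `create` operation uploads the `data` payload as a new object in the configured bucket:

```bash
curl -d '{ "operation": "create", "data": "Hello World" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```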
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "GCP Pub/Sub"
description: "Detailed documentation on the GCP Pub/Sub binding component"
---
## Setup Dapr component
## Component format
To set up the GCP Pub/Sub binding, create a component of type `bindings.gcp.pubsub`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -42,30 +45,37 @@ spec:
- name: private_key
value: PRIVATE KEY
```
- `topic` is the Pub/Sub topic name.
- `subscription` is the Pub/Sub subscription name.
- `type` is the GCP credentials type.
- `project_id` is the GCP project id.
- `private_key_id` is the GCP private key id.
- `client_email` is the GCP client email.
- `client_id` is the GCP client id.
- `auth_uri` is Google account OAuth endpoint.
- `token_uri` is Google account token uri.
- `auth_provider_x509_cert_url` is the GCP credentials cert url.
- `client_x509_cert_url` is the GCP credentials project x509 cert url.
- `private_key` is the GCP credentials private key.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-----------| -----|---------|
| topic | Y | Output | GCP Pub/Sub topic name | `"topic1"` |
| subscription | Y | Output | GCP Pub/Sub subscription name | `"name1"` |
| type | Y | Output | GCP credentials type | `"service_account"` |
| project_id | Y | Output | GCP project id | `"projectId"` |
| private_key_id | Y | Output | GCP private key id | `"privateKeyId"` |
| private_key | Y | Output | GCP credentials private key. Replace with x509 cert | `"12345-12345"` |
| client_email | Y | Output | GCP client email | `"client@email.com"` |
| client_id | Y | Output | GCP client id | `"0123456789-0123456789"` |
| auth_uri | Y | Output | Google account OAuth endpoint | `"https://accounts.google.com/o/oauth2/auth"` |
| token_uri | Y | Output | Google account token uri | `"https://oauth2.googleapis.com/token"` |
| auth_provider_x509_cert_url | Y | Output | GCP credentials cert url | `"https://www.googleapis.com/oauth2/v1/certs"` |
| client_x509_cert_url | Y | Output | GCP credentials project x509 cert url | `"https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
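As a sketch (the port and binding name are placeholders), a `create` operation publishes the `data` payload to the configured Pub/Sub topic:

```bash
curl -d '{ "operation": "create", "data": "Hello Pub/Sub" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```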
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -19,19 +19,158 @@ spec:
metadata:
- name: url
value: http://something.com
- name: method
value: GET
```
- `url` is the HTTP url to invoke.
- `method` is the HTTP verb to use for the request.
## Spec metadata fields
## Output Binding Supported Operations
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|--------|--------|---------|
| url | Y | Output | The base URL of the HTTP endpoint to invoke | `http://host:port/path`, `http://myservice:8000/customers` |
* create
## Binding support
This component supports **output binding** with the following [HTTP methods/verbs](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html):

- `create` : For backward compatibility; treated like a `post`
- `get` : Read data/records
- `head` : Identical to `get`, except that the server does not return a response body
- `post` : Typically used to create records or send commands
- `put` : Update data/records
- `patch` : Sometimes used to update a subset of fields of a record
- `delete` : Delete a data/record
- `options` : Requests information about the communication options available (not commonly used)
- `trace` : Used to invoke a remote, application-layer loopback of the request message (not commonly used)
### Request
#### Operation metadata fields
All of the operations above support the following metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| path | N | The path to append to the base URL. Used for accessing specific URIs | `"/1234"`, `"/search?lastName=Jones"`
| Headers* | N | Any fields that have a capital first letter are sent as request headers | `"Content-Type"`, `"Accept"`
#### Retrieving data
To retrieve data from the HTTP endpoint, invoke the HTTP binding with a `GET` method and the following JSON body:
```json
{
"operation": "get"
}
```
Optionally, a path can be specified to interact with resource URIs:
```json
{
"operation": "get",
"metadata": {
"path": "/things/1234"
}
}
```
### Response
The response body contains the data returned by the HTTP endpoint. The `data` field contains the HTTP response body as a byte slice (Base64 encoded via curl). The `metadata` field contains:
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| statusCode | Y | The [HTTP status code](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html) | `200`, `404`, `503`
| status | Y | The status description | `"200 OK"`, `"201 Created"`
| Headers* | N | Any fields that have a capital first letter are returned as response headers | `"Content-Type"`
#### Example
**Requesting the base URL**
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"get\" }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "get" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
**Requesting a specific path**
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"path\": \"/things/1234\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "get", "metadata": { "path": "/things/1234" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
### Sending and updating data
To send data to the HTTP endpoint, invoke the HTTP binding with a `POST`, `PUT`, or `PATCH` method and the following JSON body:
{{% alert title="Note" color="primary" %}}
Any metadata field that starts with a capital letter is passed as a request header.
For example, the default content type is `application/json; charset=utf-8`. This can be overridden by setting the `Content-Type` metadata field.
{{% /alert %}}
```json
{
"operation": "post",
"data": "content (default is JSON)",
"metadata": {
"path": "/things",
"Content-Type": "application/json; charset=utf-8"
}
}
```
#### Example
**Posting a new record**
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"post\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"path\": \"/things\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "post", "data": "YOUR_BASE_64_CONTENT", "metadata": { "path": "/things" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "InfluxDB"
description: "Detailed documentation on the InfluxDB binding component"
---
## Setup Dapr component
## Component format
To set up the InfluxDB binding, create a component of type `bindings.influx`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -27,21 +30,29 @@ spec:
value: <BUCKET>
```
- `url` is the URL for the InfluxDB instance. eg. http://localhost:8086
- `token` is the authorization token for InfluxDB.
- `org` is the InfluxDB organization.
- `bucket` bucket name to write to.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| url | Y | Output | The URL for the InfluxDB instance| `"http://localhost:8086"` |
| token | Y | Output | The authorization token for InfluxDB | `"mytoken"` |
| org | Y | Output | The InfluxDB organization | `"myorg"` |
| bucket | Y | Output | Bucket name to write to | `"mybucket"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -2,10 +2,13 @@
type: docs
title: "Kafka binding spec"
linkTitle: "Kafka"
description: "Detailed documentation on the kafka binding component"
description: "Detailed documentation on the Kafka binding component"
---
## Setup Dapr component
## Component format
To set up the Kafka binding, create a component of type `bindings.kafka`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -27,23 +30,38 @@ spec:
value: topic3
- name: authRequired # Required. default: "true"
value: "false"
- name: saslUsername # Optional.
- name: saslUsername # Optional.
value: "user"
- name: saslPassword # Optional.
- name: saslPassword # Optional.
value: "password"
- name: maxMessageBytes # Optional.
value: 1024
```
- `topics` is a comma separated string of topics for an input binding.
- `brokers` is a comma separated string of kafka brokers.
- `consumerGroup` is a kafka consumer group to listen on.
- `publishTopic` is the topic to publish for an output binding.
- `authRequired` determines whether to use SASL authentication or not.
- `saslUsername` is the SASL username for authentication. Only used if `authRequired` is set to - `"true"`.
- `saslPassword` is the SASL password for authentication. Only used if `authRequired` is set to - `"true"`.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| topics | N | Input | A comma separated string of topics | `"mytopic1,topic2"` |
| brokers | Y | Input/Output | A comma separated string of Kafka brokers | `"localhost:9092,localhost:9093"` |
| consumerGroup | N | Input | A Kafka consumer group to listen on | `"group1"` |
| publishTopic | Y | Output | The topic to publish to | `"mytopic"` |
| authRequired | Y | Input/Output | Determines whether to use SASL authentication or not. Defaults to `"true"` | `"true"`, `"false"` |
| saslUsername | N | Input/Output | The SASL username for authentication. Only used if `authRequired` is set to `"true"` | `"user"` |
| saslPassword | N | Input/Output | The SASL password for authentication. Only used if `authRequired` is set to `"true"` | `"password"` |
| maxMessageBytes | N | Input/Output | The maximum size allowed for a single Kafka message. Defaults to 1024 | `2048` |
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`
## Specifying a partition key
@ -67,12 +85,11 @@ curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
}'
```
## Output Binding Supported Operations
* create
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -4,10 +4,11 @@ title: "AWS Kinesis binding spec"
linkTitle: "AWS Kinesis"
description: "Detailed documentation on the AWS Kinesis binding component"
---
## Component format
To set up the AWS Kinesis binding, create a component of type `bindings.aws.kinesis`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.

See [this](https://aws.amazon.com/kinesis/data-streams/getting-started/) for instructions on how to set up AWS Kinesis data streams.
## Setup Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes.
```yaml
@ -36,22 +37,34 @@ spec:
value: *****************
```
- `mode` Accepted values: shared, extended. shared - Shared throughput, extended - Extended/Enhanced fanout methods. More details are [here](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html)
- `streamName` is the AWS Kinesis Stream Name.
- `consumerName` is the AWS Kinesis Consumer Name.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Output Binding Supported Operations
## Spec metadata fields
* create
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| mode | N | Input | The Kinesis stream mode. `shared` - Shared throughput, `extended` - Extended/Enhanced fanout methods. More details are [here](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html). Defaults to `"shared"` | `"shared"`, `"extended"` |
| streamName | Y | Input/Output | The AWS Kinesis Stream Name | `"stream"` |
| consumerName | Y | Input | The AWS Kinesis Consumer Name | `"myconsumer"` |
| region | Y | Output | The specific AWS region the AWS Kinesis instance is deployed in | `"us-east-1"` |
| accessKey | Y | Output | The AWS Access Key to access this resource | `"key"` |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| sessionToken | N | Output | The AWS session token to use | `"sessionToken"` |
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`
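As a sketch (the port and binding name are placeholders), a `create` operation puts the `data` payload onto the configured Kinesis stream:

```bash
curl -d '{ "operation": "create", "data": "Hello Kinesis" }' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```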
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "Kubernetes Events"
description: "Detailed documentation on the Kubernetes Events binding component"
---
## Setup Dapr component
## Component format
To set up the Kubernetes Events binding, create a component of type `bindings.kubernetes`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -23,8 +26,18 @@ spec:
value: "<seconds>"
```
- `namespace` (required) is the Kubernetes namespace to read events from.
- `resyncPeriodInSec` (optional, default `10`) the period of time to refresh event list from Kubernetes API server.
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| namespace | Y | Input | The Kubernetes namespace to read events from | `"default"` |
| resyncPeriodInSec | N | Input | The period of time to refresh the event list from the Kubernetes API server. Defaults to `"10"` | `"15"`
## Binding support
This component supports the **input** binding interface.
## Output format
Output received from the binding is of format `bindings.ReadResponse` with the `Data` field populated with the following structure:
@ -63,7 +76,7 @@ Three different event types are available:
- Delete : Only the `oldVal` field is populated, `newVal` field is an empty `v1.Event`, `event` is `delete`
- Update : Both the `oldVal` and `newVal` fields are populated, `event` is `update`
## Required permisiions
## Required permissions
For consuming `events` from Kubernetes, permissions need to be assigned to a User/Group/ServiceAccount using [RBAC Auth] mechanism of Kubernetes.
@ -102,7 +115,9 @@ roleRef:
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "MQTT"
description: "Detailed documentation on the MQTT binding component"
---
## Component format
To setup MQTT binding create a component of type `bindings.mqtt`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -22,20 +25,28 @@ spec:
- name: topic
value: topic1
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| url | Y | Input/Output | The MQTT broker url | `"mqtt[s]://[username][:password]@host.domain[:port]"` |
| topic | Y | Input/Output | The topic to listen on or send events to | `"mytopic"` |
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`
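As a minimal usage sketch, the `create` operation publishes a message to the configured topic through the Dapr HTTP bindings endpoint. The binding name `myMQTT` and the default Dapr HTTP port 3500 are assumptions for illustration:
```bash
# Publish a message to the MQTT topic configured in the component
curl -X POST http://localhost:3500/v1.0/bindings/myMQTT \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "operation": "create"
      }'
```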
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -0,0 +1,153 @@
---
type: docs
title: "MySQL binding spec"
linkTitle: "MySQL"
description: "Detailed documentation on the MySQL binding component"
---
## Component format
To setup MySQL binding create a component of type `bindings.mysql`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
The MySQL binding uses [Go-MySQL-Driver](https://github.com/go-sql-driver/mysql) internally.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: bindings.mysql
version: v1
metadata:
- name: url # Required, define DB connection in DSN format
value: <CONNECTION_STRING>
- name: pemPath # Optional
value: <PEM PATH>
- name: maxIdleConns
value: <MAX_IDLE_CONNECTIONS>
- name: maxOpenConns
value: <MAX_OPEN_CONNECTIONS>
- name: connMaxLifetime
value: <CONNECTION_MAX_LIFE_TIME>
- name: connMaxIdleTime
value: <CONNECTION_MAX_IDLE_TIME>
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| url | Y | Output | Represents the DB connection in Data Source Name (DSN) format. See [here](#ssl-connection) for SSL details | `"user:password@tcp(localhost:3306)/dbname"` |
| pemPath | N | Output | Path to the PEM file. Used with SSL connections | `"path/to/pem/file"` |
| maxIdleConns | N | Output | The maximum number of idle connections. Integer greater than 0 | `"10"` |
| maxOpenConns | N | Output | The maximum number of open connections. Integer greater than 0 | `"10"` |
| connMaxLifetime | N | Output | The maximum connection lifetime. Duration string | `"12s"` |
| connMaxIdleTime | N | Output | The maximum connection idle time. Duration string | `"12s"` |
### SSL connection
If your server requires SSL, your connection string must end with `&tls=custom`, for example:
```bash
"<user>:<password>@tcp(<server>:3306)/<database>?allowNativePasswords=true&tls=custom"
```
You must replace the `<PEM PATH>` with a full path to the PEM file. If you are using [MySQL on Azure](http://bit.ly/AzureMySQLSSL), see the Azure [documentation on SSL database connections](http://bit.ly/MySQLSSL) for information on how to download the required certificate. The connection to MySQL requires a minimum TLS version of 1.2.
## Binding support
This component supports **output binding** with the following operations:
- `exec`
- `query`
- `close`
### exec
The `exec` operation can be used for DDL operations (like table creation), as well as `INSERT`, `UPDATE`, `DELETE` operations which return only metadata (e.g. number of affected rows).
**Request**
```json
{
"operation": "exec",
"metadata": {
"sql": "INSERT INTO foo (id, c1, ts) VALUES (1, 'demo', '2020-09-24T11:45:05Z07:00')"
}
}
```
**Response**
```json
{
"metadata": {
"operation": "exec",
"duration": "294µs",
"start-time": "2020-09-24T11:13:46.405097Z",
"end-time": "2020-09-24T11:13:46.414519Z",
"rows-affected": "1",
"sql": "INSERT INTO foo (id, c1, ts) VALUES (1, 'demo', '2020-09-24T11:45:05Z07:00')"
}
}
```
### query
The `query` operation is used for `SELECT` statements, which return the metadata along with the data in the form of an array of row values.
**Request**
```json
{
"operation": "query",
"metadata": {
"sql": "SELECT * FROM foo WHERE id < 3"
}
}
```
**Response**
```json
{
"metadata": {
"operation": "query",
"duration": "432µs",
"start-time": "2020-09-24T11:13:46.405097Z",
"end-time": "2020-09-24T11:13:46.420566Z",
"sql": "SELECT * FROM foo WHERE id < 3"
},
"data": "[
[0,\"test-0\",\"2020-09-24T04:13:46Z\"],
[1,\"test-1\",\"2020-09-24T04:13:46Z\"],
[2,\"test-2\",\"2020-09-24T04:13:46Z\"]
]"
}
```
### close
Finally, the `close` operation can be used to explicitly close the DB connection and return it to the pool. This operation doesn't have any response.
**Request**
```json
{
"operation": "close"
}
```
> Note: the MySQL binding itself doesn't prevent SQL injection; as with any database application, validate the input before executing a query.
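As a usage sketch, any of these operations can be sent through the Dapr HTTP bindings endpoint. The component name `my-mysql` and the default Dapr HTTP port 3500 are assumptions for illustration:
```bash
# Run a SELECT through the MySQL output binding's query operation
curl -X POST http://localhost:3500/v1.0/bindings/my-mysql \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "query",
        "metadata": {
          "sql": "SELECT * FROM foo WHERE id < 3"
        }
      }'
```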
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -1,11 +1,14 @@
---
type: docs
title: "PostgrSQL binding spec"
linkTitle: "PostgrSQL"
description: "Detailed documentation on the PostgrSQL binding component"
title: "PostgreSQL binding spec"
linkTitle: "PostgreSQL"
description: "Detailed documentation on the PostgreSQL binding component"
---
## Component format
To setup PostgreSQL binding create a component of type `bindings.postgres`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -25,7 +28,15 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| url | Y | Output | Postgres connection string. See [here](#url-format) for more details | `"user=dapr password=secret host=dapr.example.com port=5432 dbname=dapr sslmode=verify-ca"` |
### URL format
The PostgreSQL binding uses [pgx connection pool](https://github.com/jackc/pgx) internally so the `url` parameter can be any valid connection string, either in a `DSN` or `URL` format:
**Example DSN**
@ -47,7 +58,10 @@ Both methods also support connection pool configuration variables:
- `pool_max_conn_idle_time`: duration string
- `pool_health_check_period`: duration string
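For instance, a URL-format connection string that also tunes the pool could look like the following sketch (the host, credentials and parameter values are illustrative):
```bash
"postgres://dapr:secret@dapr.example.com:5432/dapr?sslmode=verify-ca&pool_max_conn_idle_time=5m&pool_health_check_period=1m"
```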
## Binding support
This component supports **output binding** with the following operations:
- `exec`
- `query`
@ -133,7 +147,9 @@ Finally, the `close` operation can be used to explicitly close the DB connection
> Note: the PostgreSQL binding itself doesn't prevent SQL injection; as with any database application, validate the input before executing a query.
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -0,0 +1,80 @@
---
type: docs
title: "Postmark binding spec"
linkTitle: "Postmark"
description: "Detailed documentation on the Postmark binding component"
---
## Component format
To setup Postmark binding create a component of type `bindings.postmark`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: postmark
namespace: default
spec:
type: bindings.postmark
metadata:
- name: accountToken
value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
- name: serverToken
value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
- name: emailFrom
value: "testapp@dapr.io" # optional
- name: emailTo
value: "dave@dapr.io" # optional
- name: subject
value: "Hello!" # optional
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| accountToken | Y | Output | The Postmark account token; this should be considered a secret value | `"account token"` |
| serverToken | Y | Output | The Postmark server token; this should be considered a secret value | `"server token"` |
| emailFrom | N | Output | If set, this specifies the 'from' email address of the email message | `"me@example.com"` |
| emailTo | N | Output | If set, this specifies the 'to' email address of the email message | `"me@example.com"` |
| emailCc | N | Output | If set, this specifies the 'cc' email address of the email message | `"me@example.com"` |
| emailBcc | N | Output | If set, this specifies the 'bcc' email address of the email message | `"me@example.com"` |
| subject | N | Output | If set, this specifies the subject of the email message | `"Hello!"` |
You can also specify any of the optional metadata properties on the output binding request (e.g. `emailFrom`, `emailTo`, `subject`, etc.)
Combined, the optional metadata properties in the component configuration and the request payload should at least contain the `emailFrom`, `emailTo` and `subject` fields, as all three are required to successfully send an email.
## Binding support
This component supports **output binding** with the following operations:
- `create`
## Example request payload
```json
{
"operation": "create",
"metadata": {
"emailTo": "changeme@example.net",
"subject": "An email from Dapr Postmark binding"
},
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
```
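A sketch of sending this payload through the Dapr HTTP bindings endpoint, assuming the component is named `postmark` as in the example above and Dapr listens on the default HTTP port 3500:
```bash
# Send an email via the Postmark output binding
curl -X POST http://localhost:3500/v1.0/bindings/postmark \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "changeme@example.net",
          "subject": "An email from Dapr Postmark binding"
        },
        "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
      }'
```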
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "RabbitMQ"
description: "Detailed documentation on the RabbitMQ binding component"
---
## Component format
To setup RabbitMQ binding create a component of type `bindings.rabbitmq`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -29,20 +32,37 @@ spec:
value: 60
- name: prefetchCount
value: 0
- name: exclusive
value: false
- name: maxPriority
value: 5
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| queueName | Y | Input/Output | The RabbitMQ queue name | `"myqueue"` |
| host | Y | Input/Output | The RabbitMQ host address | `"amqp://[username][:password]@host.domain[:port]"` |
| durable | N | Output | Tells RabbitMQ to persist messages in storage. Defaults to `"false"` | `"true"`, `"false"` |
| deleteWhenUnused | N | Input/Output | Enables or disables auto-delete. Defaults to `"false"` | `"true"`, `"false"` |
| ttlInSeconds | N | Output | Set the [default message time to live at RabbitMQ queue level](https://www.rabbitmq.com/ttl.html). If this parameter is omitted, messages won't expire, continuing to exist on the queue until processed. See [also](#specifying-a-ttl-per-message) | `60` |
| prefetchCount | N | Input | Set the [Channel Prefetch Setting (QoS)](https://www.rabbitmq.com/confirms.html#channel-qos-prefetch). If this parameter is omitted, QoS defaults to 0 (no limit) | `0` |
| exclusive | N | Input/Output | Determines whether the queue is an exclusive queue or not. Defaults to `"false"` | `"true"`, `"false"` |
| maxPriority | N | Input/Output | Parameter to set the [priority queue](https://www.rabbitmq.com/priority.html). If this parameter is omitted, the queue will be created as a general queue instead of a priority queue. Value between 1 and 255. See [also](#specifying-a-priority-per-message) | `"1"`, `"10"` |
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`
## Specifying a TTL per message
Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
@ -52,7 +72,25 @@ The field name is `ttlInSeconds`.
Example:
{{< tabs Windows Linux >}}
{{% codetab %}}
```shell
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d "{
\"data\": {
\"message\": \"Hi\"
},
\"metadata\": {
\"ttlInSeconds\": "60"
},
\"operation\": \"create\"
}"
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d '{
@ -65,13 +103,58 @@ curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
"operation": "create"
}'
```
{{% /codetab %}}
{{< /tabs >}}
## Specifying a priority per message
Priority can be defined at the message level. If the `maxPriority` parameter is set, high-priority messages take precedence over lower-priority ones.
To set priority at message level use the `metadata` section in the request body during the binding invocation.
The field name is `priority`.
Example:
{{< tabs Windows Linux >}}
{{% codetab %}}
```shell
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d "{
\"data\": {
\"message\": \"Hi\"
},
\"metadata\": {
"priority": \"5\"
},
\"operation\": \"create\"
}"
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},
"metadata": {
"priority": "5"
},
"operation": "create"
}'
```
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,7 +5,10 @@ linkTitle: "Redis"
description: "Detailed documentation on the Redis binding component"
---
## Component format
To setup Redis binding create a component of type `bindings.redis`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -25,20 +28,91 @@ spec:
value: <bool>
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| redisHost | Y | Output | The Redis host address | `"localhost:6379"` |
| redisPassword | Y | Output | The Redis password | `"password"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
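As a minimal usage sketch, the `create` operation sets a Redis key to the request data. The binding name `myRedis`, the `key` metadata field and the default Dapr HTTP port 3500 are assumptions for illustration:
```bash
# Set the Redis key "key1" to the given JSON payload
curl -X POST http://localhost:3500/v1.0/bindings/myRedis \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "key": "key1"
        },
        "data": {
          "Hello": "World"
        }
      }'
```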
## Create a Redis instance
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later.
{{< tabs "Self-Hosted" "Kubernetes" "AWS" "GCP" "Azure">}}
{{% codetab %}}
The Dapr CLI will automatically create and set up a Redis Streams instance for you.
The Redis instance will be installed via Docker when you run `dapr init`, and the component file will be created in the default components directory: `$HOME/.dapr/components` on Mac/Linux or `%USERPROFILE%\.dapr\components` on Windows.
{{% /codetab %}}
{{% codetab %}}
You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install).
1. Install Redis into your cluster.
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
```
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your redis.yaml file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master:6379
```
4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your Redis password in a text file called `password.txt`. Copy the password and delete the two files.
- **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the output password.
Add this password as the `redisPassword` value in your redis.yaml file. For example:
```yaml
- name: redisPassword
value: "lhDOkwTlp0"
```
{{% /codetab %}}
{{% codetab %}}
[AWS Redis](https://aws.amazon.com/redis/)
{{% /codetab %}}
{{% codetab %}}
[GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
{{% /codetab %}}
{{% codetab %}}
[Azure Redis](https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis)
{{% /codetab %}}
{{< /tabs >}}
{{% alert title="Note" color="primary" %}}
The Dapr CLI automatically deploys a local Redis instance in self-hosted mode as part of the `dapr init` command.
{{% /alert %}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -5,16 +5,14 @@ linkTitle: "RethinkDB"
description: "Detailed documentation on the RethinkDB binding component"
---
## Component format
The [RethinkDB state store]({{<ref setup-rethinkdb.md>}}) supports transactions, which means it can be used to support Dapr actors. Dapr persists only the actor's current state, which doesn't allow users to track how an actor's state may have changed over time.
To enable users to track changes to actor state, this binding leverages RethinkDB's built-in capability to monitor a table and emit change events containing both the `old` and `new` state. This binding creates a subscription on the Dapr state table and streams these changes using the Dapr input binding interface.
To setup RethinkDB statechange binding create a component of type `bindings.rethinkdb.statechange`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -31,10 +29,22 @@ spec:
value: <REPLACE-RETHINKDB-DB-NAME> # Required, e.g. dapr (alpha-numerics only)
```
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| address | Y | Input | Address of the RethinkDB server | `"127.0.0.1:28015"`, `"rethinkdb.default.svc.cluster.local:28015"` |
| database | Y | Input | RethinkDB database name | `"dapr"` |
## Binding support
This component only supports the **input** binding interface.
## Related links
- [Combine this binding with Dapr Pub/Sub](https://github.com/mchmarny/dapr-state-store-change-handler) to stream state changes to a topic
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

Some files were not shown because too many files have changed in this diff.