mirror of https://github.com/dapr/docs.git
Merge branch 'dapr:v1.2' into master
This commit is contained in: commit a6248e9ae1
@@ -0,0 +1,10 @@
# Documentation and examples for what this does:
#
# https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners

# This file is a list of rules, with the last rule being most specific
# All of the people (and only those people) from the matching rule will be notified

# Default rule: anything that doesn't match a more specific rule goes here

* @dapr/approvers-docs @dapr/maintainers-docs
@@ -1,31 +0,0 @@
---
name: Bug report
about: Report a bug in Dapr docs
title: ''
labels: kind/bug
assignees: ''

---

## Expected Behavior

<!-- Briefly describe what you expect to happen -->

## Actual Behavior

<!-- Briefly describe what is actually happening -->

## Steps to Reproduce the Problem

<!-- How can a maintainer reproduce this issue (be detailed) -->

## Release Note
<!-- How should the fix for this issue be communicated in our release notes? It can be populated later. -->
<!-- Keep it as a single line. Examples: -->

<!-- RELEASE NOTE: **ADD** New feature in Dapr. -->
<!-- RELEASE NOTE: **FIX** Bug in runtime. -->
<!-- RELEASE NOTE: **UPDATE** Runtime dependency. -->

RELEASE NOTE:
@ -1,8 +0,0 @@
|
|||
---
|
||||
name: Feature Request
|
||||
about: Start a discussion for Dapr docs
|
||||
title: ''
|
||||
labels: kind/discussion
|
||||
assignees: ''
|
||||
|
||||
---
|
|
@ -1,19 +0,0 @@
|
|||
---
|
||||
name: Feature Request
|
||||
about: Create a Feature Request for Dapr docs
|
||||
title: ''
|
||||
labels: kind/enhancement
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
## Describe the feature
|
||||
|
||||
## Release Note
|
||||
<!-- How should this new feature be announced in our release notes? It can be populated later. -->
|
||||
<!-- Keep it as a single line. Examples: -->
|
||||
|
||||
<!-- RELEASE NOTE: **ADD** New feature in Dapr. -->
|
||||
<!-- RELEASE NOTE: **FIX** Bug in runtime. -->
|
||||
<!-- RELEASE NOTE: **UPDATE** Runtime dependency. -->
|
||||
|
||||
RELEASE NOTE:
|
|
@ -0,0 +1,27 @@
|
|||
---
|
||||
name: New Content Needed
|
||||
about: Topic is missing and needs to be written
|
||||
title: ''
|
||||
labels: needs-triage,content/missing-information
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
**What content needs to be created or modified?**
|
||||
<!--A clear and concise description of what the problem is. Ex. There should be docs on how pub/sub works...-->
|
||||
|
||||
**Describe the solution you'd like**
|
||||
<!--A clear and concise description of what you want to happen-->
|
||||
|
||||
**Where should the new material be placed?**
|
||||
<!--Please suggest where in the docs structure the new content should be created-->
|
||||
|
||||
**The associated pull request from dapr/dapr, dapr/components-contrib, or other Dapr code repos**
|
||||
<!--
|
||||
Specify the URL to the associated pull request, if applicable
|
||||
|
||||
For example: https://github.com/dapr/dapr/pull/3277
|
||||
-->
|
||||
|
||||
**Additional context**
|
||||
<!--Add any other context or screenshots about the feature request here-->
|
|
@ -1,9 +0,0 @@
|
|||
---
|
||||
name: Proposal
|
||||
about: Create a proposal for Dapr docs
|
||||
title: ''
|
||||
labels: kind/proposal
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
## Describe the proposal
|
|
@ -1,9 +0,0 @@
|
|||
---
|
||||
name: Question
|
||||
about: Ask a question about Dapr docs
|
||||
title: ''
|
||||
labels: kind/question
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
## Ask your question here
|
|
@ -0,0 +1,23 @@
|
|||
---
|
||||
name: Typo
|
||||
about: Report incorrect language/small updates to fix readability
|
||||
title: ''
|
||||
labels: needs-triage,content/typo
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
**URL of the docs page**
|
||||
<!--The URL(s) on docs.dapr.io where the typo occurs-->
|
||||
|
||||
**How is it currently worded?**
|
||||
<!--Please copy and paste the sentence where the typo occurs-->
|
||||
|
||||
**How should it be worded?**
|
||||
<!--Please correct the sentence-->
|
||||
|
||||
**Screenshots**
|
||||
<!--If applicable, add screenshots to help explain your problem-->
|
||||
|
||||
**Additional context**
|
||||
<!--Add any other context about the problem here-->
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
name: Website Issue
|
||||
about: The website is broken or not working correctly.
|
||||
title: ''
|
||||
labels: needs-triage,website/functionality
|
||||
assignees: AaronCrawfis
|
||||
|
||||
---
|
||||
|
||||
**Describe the bug**
|
||||
<!--A clear and concise description of what the bug is.-->
|
||||
|
||||
**Steps to reproduce**
|
||||
<!--
|
||||
Steps to reproduce the behavior:
|
||||
1. Go to '...'
|
||||
2. Click on '....'
|
||||
3. Scroll down to '....'
|
||||
4. See error
|
||||
-->
|
||||
|
||||
**Expected behavior**
|
||||
<!--A clear and concise description of what you expected to happen.-->
|
||||
|
||||
**Screenshots**
|
||||
<!--If applicable, add screenshots to help explain your problem.-->
|
||||
|
||||
**Desktop (please complete the following information):**
|
||||
- OS: <!--[e.g. iOS]-->
|
||||
- Browser <!--[e.g. chrome, safari]-->
|
||||
- Version <!--[e.g. 22]-->
|
||||
|
||||
**Smartphone (please complete the following information):**
|
||||
- Device: <!--[e.g. iPhone6]-->
|
||||
- OS: <!--[e.g. iOS8.1]-->
|
||||
- Browser <!--[e.g. stock browser, safari]-->
|
||||
- Version <!--[e.g. 22]-->
|
||||
|
||||
**Additional context**
|
||||
<!--Add any other context about the problem here-->
|
|
@ -0,0 +1,23 @@
|
|||
---
|
||||
name: Wrong Information/Code/Steps
|
||||
about: Something in the docs is incorrect
|
||||
title: ''
|
||||
labels: needs-triage,content/incorrect-information
|
||||
assignees: ''
|
||||
|
||||
---
|
||||
|
||||
**Describe the issue**
|
||||
<!--A clear and concise description of what the bug is-->
|
||||
|
||||
**URL of the docs**
|
||||
<!--Paste the URL (docs.dapr.io/concepts/......) of the page-->
|
||||
|
||||
**Expected content**
|
||||
<!--A clear and concise description of what you expected to happen-->
|
||||
|
||||
**Screenshots**
|
||||
<!--If applicable, add screenshots to help explain your problem-->
|
||||
|
||||
**Additional context**
|
||||
<!--Add any other context about the problem here-->
|
|
@@ -1,15 +1,20 @@
Thank you for helping make the Dapr documentation better!

If you are a new contributor, please see [this contribution guidance](https://docs.dapr.io/contributing/contributing-docs/) which helps keep the Dapr documentation readable, consistent, and useful for Dapr users.
**Please follow this checklist before submitting:**

Note that you must verify that the suggested changes do not break the website [docs.dapr.io](https://docs.dapr.io). See [this overview](https://github.com/dapr/docs/blob/master/README.md#overview) on how to set up a local version of the website and make sure the website is built correctly.
- [ ] [Read the contribution guide](https://docs.dapr.io/contributing/contributing-docs/)
- [ ] Commands include options for Linux, MacOS, and Windows within codetabs
- [ ] New file and folder names are globally unique
- [ ] Page references use shortcodes instead of markdown or URL links
- [ ] Images use HTML style and have alternative text
- [ ] Places where multiple code/command options are given have codetabs

In addition, please fill out the following to help reviewers understand this pull request:

## Description

_Please explain the changes you've made_
<!--Please explain the changes you've made-->

## Issue reference

_Please reference the issue this PR will close: #[issue number]_
<!--Please reference the issue this PR will close: #[issue number]-->
@@ -0,0 +1,60 @@
# ------------------------------------------------------------
# Copyright (c) Microsoft Corporation and Dapr Contributors.
# Licensed under the MIT License.
# ------------------------------------------------------------

# This script automerges PRs in Dapr.

import os

from github import Github


g = Github(os.getenv("GITHUB_TOKEN"))
repo = g.get_repo(os.getenv("GITHUB_REPOSITORY"))
maintainers = [m.strip() for m in os.getenv("MAINTAINERS").split(',')]


def fetch_pulls(mergeable_state):
    # Open PRs labeled 'automerge' that are in the requested mergeable state.
    return [pr for pr in repo.get_pulls(state='open', sort='created')
            if pr.mergeable_state == mergeable_state and 'automerge' in [l.name for l in pr.labels]]


def is_approved(pr):
    # Approved by at least one maintainer who holds admin or write permission.
    approvers = [r.user.login for r in pr.get_reviews() if r.state == 'APPROVED' and r.user.login in maintainers]
    return len([a for a in approvers if repo.get_collaborator_permission(a) in ['admin', 'write']]) > 0


# First, find a PR that can be merged.
pulls = fetch_pulls('clean')
print(f"Detected {len(pulls)} open pull requests in {repo.name} to be automerged.")
merged = False
for pr in pulls:
    if is_approved(pr):
        # Merge only one PR per run.
        print(f"Merging PR {pr.html_url}")
        try:
            pr.merge(merge_method='squash')
            merged = True
            break
        except Exception:
            print(f"Failed to merge PR {pr.html_url}")

if len(pulls) > 0 and not merged:
    print("No PR was automerged.")

# Now, update all PRs that are behind.
pulls = fetch_pulls('behind')
print(f"Detected {len(pulls)} open pull requests in {repo.name} to be updated.")
for pr in pulls:
    if is_approved(pr):
        # Update all PRs since there is no guarantee they will all pass.
        print(f"Updating PR {pr.html_url}")
        try:
            pr.update_branch()
        except Exception:
            print(f"Failed to update PR {pr.html_url}")

pulls = fetch_pulls('dirty')
print(f"Detected {len(pulls)} open pull requests in {repo.name} to be automerged but are in dirty state.")
for pr in pulls:
    print(f"PR is in dirty state: {pr.html_url}")

print("Done.")
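The approval check in the script above depends on live GitHub API calls. As a rough sketch, the same gating rule (at least one APPROVED review from a maintainer with admin or write permission) can be exercised locally with stub objects; `Review`, `PullRequest`, `MAINTAINERS`, and `PERMISSIONS` here are hypothetical stand-ins for the PyGithub types and environment the script actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the PyGithub objects used by automerge.py.
@dataclass
class Review:
    login: str
    state: str

@dataclass
class PullRequest:
    labels: list = field(default_factory=list)
    reviews: list = field(default_factory=list)

MAINTAINERS = {'alice', 'bob'}
PERMISSIONS = {'alice': 'write', 'bob': 'read'}  # stub repo permissions

def is_approved(pr):
    # Mirror of the script's rule: an APPROVED review from a maintainer
    # who also has admin or write permission on the repository.
    approvers = [r.login for r in pr.reviews
                 if r.state == 'APPROVED' and r.login in MAINTAINERS]
    return any(PERMISSIONS.get(a) in ('admin', 'write') for a in approvers)

pr = PullRequest(labels=['automerge'],
                 reviews=[Review('alice', 'APPROVED'), Review('bob', 'COMMENTED')])
print(is_approved(pr))  # True: alice approved and has write access
```

Note that a COMMENTED review, or an approval from a read-only maintainer, does not satisfy the check, which matches why the script only squash-merges one approved `clean` PR per run.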
@@ -0,0 +1,26 @@
# ------------------------------------------------------------
# Copyright (c) Microsoft Corporation and Dapr Contributors.
# Licensed under the MIT License.
# ------------------------------------------------------------

name: dapr-automerge

on:
  schedule:
    - cron: '*/10 * * * *'
  workflow_dispatch:
jobs:
  automerge:
    if: github.repository_owner == 'dapr'
    name: Automerge and update PRs.
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Install dependencies
        run: pip install PyGithub
      - name: Automerge and update
        env:
          MAINTAINERS: AaronCrawfis,orizohar,msfussell
          GITHUB_TOKEN: ${{ secrets.DAPR_BOT_TOKEN }}
        run: python ./.github/scripts/automerge.py
@@ -0,0 +1,29 @@
name: validate-links

on:
  push:
    branches:
      - v*
    tags:
      - v*
  pull_request:
    branches:
      - v*
jobs:
  validate:
    runs-on: ubuntu-latest
    env:
      PYTHON_VER: 3.7
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ env.PYTHON_VER }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ env.PYTHON_VER }}
      - name: Install dependencies
        run: |
          python3 -m pip install --upgrade pip
          pip3 install setuptools wheel twine tox mechanical-markdown
      - name: Check Markdown Files
        run: |
          for name in `find . -name "*.md"`; do echo -e "------\n$name"; mm.py -l $name || exit 1; done
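The link-check step above shells out to mechanical-markdown's `mm.py` once per Markdown file and aborts on the first failure. As a rough, dependency-free sketch of that fail-fast loop, the file selection and short-circuit behavior can be expressed in Python; the `check` callable here is a hypothetical stand-in for invoking `mm.py -l`.

```python
import pathlib

def collect_markdown(root='.'):
    # Same file selection as `find . -name "*.md"` in the workflow step.
    return sorted(str(p) for p in pathlib.Path(root).rglob('*.md'))

def check_all(files, check):
    # Run the checker on each file and stop at the first failure,
    # mirroring the workflow's `|| exit 1`.
    for f in files:
        print(f"------\n{f}")
        if not check(f):
            return False
    return True

# Example with a stub checker that rejects one file.
ok = check_all(['README.md', 'broken.md', 'other.md'],
               check=lambda f: f != 'broken.md')
print(ok)  # False: the loop stops at broken.md
```

The short-circuit matters in CI: a single broken link fails the job immediately instead of scanning the remaining files.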
@@ -17,5 +17,5 @@ jobs:
        repo-token: ${{ secrets.GITHUB_TOKEN }}
        stale-pr-message: 'Stale PR, paging all reviewers'
        stale-pr-label: 'stale'
        exempt-pr-labels: 'question,"help wanted",do-not-merge'
        exempt-pr-labels: 'question,"help wanted",do-not-merge,waiting-on-code-pr'
        days-before-stale: 5
@@ -0,0 +1,53 @@
name: Azure Static Web App Root

on:
  push:
    branches:
      - v1.2
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - v1.2

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
      - name: Setup Docsy
        run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        env:
          HUGO_ENV: production
          HUGO_VERSION: "0.74.3"
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_PROUD_BAY_0E9E0E81E }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          skip_deploy_on_missing_secrets: true
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/daprdocs" # App source code path
          api_location: "api" # Api source code path - optional
          output_location: "public" # Built app content directory - optional
          app_build_command: "hugo"
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_PROUD_BAY_0E9E0E81E }}
          skip_deploy_on_missing_secrets: true
          action: "close"
@@ -1,53 +1,53 @@
name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - website
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - website

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
      - name: Setup Docsy
        run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        env:
          HUGO_ENV: production
          HUGO_VERSION: "0.74.3"
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          skip_deploy_on_missing_secrets: true
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "daprdocs" # App source code path
          api_location: "api" # Api source code path - optional
          app_artifact_location: 'public' # Built app content directory - optional
          app_build_command: "hugo"
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
          skip_deploy_on_missing_secrets: true
          action: "close"

name: Azure Static Web App v1.2

on:
  push:
    branches:
      - v1.2
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - v1.2

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
      - name: Setup Docsy
        run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        env:
          HUGO_ENV: production
          HUGO_VERSION: "0.74.3"
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_WONDERFUL_ISLAND_07C05FD1E }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          skip_deploy_on_missing_secrets: true
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/daprdocs" # App source code path
          api_location: "api" # Api source code path - optional
          output_location: "public" # Built app content directory - optional
          app_build_command: "hugo"
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_WONDERFUL_ISLAND_07C05FD1E }}
          skip_deploy_on_missing_secrets: true
          action: "close"
@@ -1,5 +1,6 @@
# Visual Studio 2015/2017/2019 cache/options directory
.vs/
.idea/
node_modules/
daprdocs/public
daprdocs/resources/_gen
@@ -1,3 +1,19 @@
[submodule "daprdocs/themes/docsy"]
	path = daprdocs/themes/docsy
	url = https://github.com/google/docsy.git
[submodule "sdkdocs/python"]
	path = sdkdocs/python
	url = https://github.com/dapr/python-sdk.git
[submodule "sdkdocs/php"]
	path = sdkdocs/php
	url = https://github.com/dapr/php-sdk.git
[submodule "sdkdocs/dotnet"]
	path = sdkdocs/dotnet
	url = https://github.com/dapr/dotnet-sdk.git
[submodule "translations/docs-zh"]
	path = translations/docs-zh
	url = https://github.com/dapr/docs-zh.git
	branch = v1.0_content
[submodule "sdkdocs/go"]
	path = sdkdocs/go
	url = https://github.com/dapr/go-sdk.git
Binary file not shown.
@@ -1,20 +0,0 @@
# Dapr presentations

Here you can find previous Dapr presentations, as well as a PowerPoint & guidance on how you can give your own Dapr presentation.

## Previous Dapr presentations

| Presentation | Recording | Deck |
|--------------|-----------|------|
| Ignite 2019: Mark Russinovich Presents the Future of Cloud Native Applications | [Link](https://www.youtube.com/watch?v=LAUDVk8PaCY) | [Link](./PastPresentations/2019IgniteCloudNativeApps.pdf) |
| Azure Community Live: Build microservice applications using DAPR with Mark Fussell | [Link](https://www.youtube.com/watch?v=CgqI7nen-Ng) | N/A |

There are other Dapr resources on the [community](https://github.com/dapr/dapr#community) page.

## Giving a Dapr presentation

- Begin by downloading the [Dapr Presentation Deck](./Dapr%20Presentation%20Deck.pptx). This contains slides and diagrams needed to give a Dapr presentation.

- Next, review the [Docs](../README.md) to make sure you understand the [concepts](../concepts) and [best-practices](../best-practices).

- Use the Dapr [quickstarts](https://github.com/dapr/quickstarts) repo and [samples](https://github.com/dapr/samples) repo to show demos of how to use Dapr
README.md
@@ -6,6 +6,19 @@ If you are looking to explore the Dapr documentation, please go to the documenta

This repo contains the markdown files which generate the above website. See below for guidance on running with a local environment to contribute to the docs.

## Branch guidance

The Dapr docs handles branching differently than most code repositories. Instead of having a `master` or `main` branch, every branch is labeled to match the major and minor version of a runtime release.

The following branches are currently maintained:

| Branch | Website | Description |
|--------|---------|-------------|
| [v1.2](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation go here. |
| [v1.3](https://github.com/dapr/docs/tree/v1.3) (pre-release) | https://v1-3.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.3+ go here. |

For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/contributing-docs/#branch-guidance) document.

## Contribution guidelines

Before making your first contribution, make sure to review the [contributing section](http://docs.dapr.io/contributing/) in the docs.
@@ -28,35 +41,32 @@ The [daprdocs](./daprdocs) directory contains the hugo project, markdown files,
```sh
git clone https://github.com/dapr/docs.git
```
3. Change to daprdocs directory:
3. Change to daprdocs directory:
```sh
cd daprdocs
cd ./docs/daprdocs
```
4. Add Docsy submodule:
```sh
git submodule add https://github.com/google/docsy.git themes/docsy
```
5. Update submodules:
4. Update submodules:
```sh
git submodule update --init --recursive
```
6. Install npm packages:
5. Install npm packages:
```sh
npm install
```

## Run local server
1. Make sure you're still in the `daprdocs` directory
2. Run
2. Run
```sh
hugo server --disableFastRender
hugo server
```
3. Navigate to `http://localhost:1313/docs`
3. Navigate to `http://localhost:1313/`

## Update docs
1. Fork repo into your account
1. Create new branch
1. Commit and push changes to content
1. Submit pull request to `master`
1. Commit and push changes to forked branch
1. Submit pull request from downstream branch to the upstream branch for the correct version you are targeting
1. Staging site will automatically get created and linked to PR to review and test

## Code of Conduct
@@ -0,0 +1 @@
node_modules/
@@ -0,0 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="206px" height="206px" viewBox="0 0 206 206" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<!-- Generator: Sketch 51.3 (57544) - http://www.bohemiancoding.com/sketch -->
<title>dark on white</title>
<desc>Created with Sketch.</desc>
<defs></defs>
<g id="dark-on-white" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<path d="M63.08125,128 L51.55,128 L51.55,124.378906 C50.448432,125.761726 49.3351619,126.769528 48.2101562,127.402344 C46.2413964,128.503912 44.0031375,129.054688 41.4953125,129.054688 C37.4406047,129.054688 33.8312658,127.66017 30.6671875,124.871094 C26.8937311,121.542952 25.0070312,117.136746 25.0070312,111.652344 C25.0070312,106.074191 26.9406057,101.62111 30.8078125,98.2929688 C33.8781404,95.644518 37.4054488,94.3203125 41.3898437,94.3203125 C43.7101679,94.3203125 45.8898336,94.8124951 47.9289062,95.796875 C49.1007871,96.3593778 50.3078063,97.2851498 51.55,98.5742188 L51.55,75.1953125 L63.08125,75.1953125 L63.08125,128 Z M51.9015625,111.6875 C51.9015625,109.62499 51.1750073,107.873054 49.721875,106.431641 C48.2687427,104.990227 46.5109478,104.269531 44.4484375,104.269531 C42.151551,104.269531 40.2648511,105.13671 38.7882812,106.871094 C37.5929628,108.277351 36.9953125,109.882803 36.9953125,111.6875 C36.9953125,113.492197 37.5929628,115.097649 38.7882812,116.503906 C40.2414135,118.23829 42.1281134,119.105469 44.4484375,119.105469 C46.5343854,119.105469 48.2980397,118.390632 49.7394531,116.960938 C51.1808666,115.531243 51.9015625,113.773448 51.9015625,111.6875 Z M106.329687,128 L94.7984375,128 L94.7984375,124.378906 C93.6968695,125.761726 92.5835994,126.769528 91.4585937,127.402344 C89.4898339,128.503912 87.251575,129.054688 84.74375,129.054688 C80.6890422,129.054688 77.0797033,127.66017 73.915625,124.871094 C70.1421686,121.542952 68.2554687,117.136746 68.2554687,111.652344 C68.2554687,106.074191 70.1890432,101.62111 74.05625,98.2929688 C77.1265779,95.644518 80.6538863,94.3203125 84.6382812,94.3203125 C86.9586054,94.3203125 89.1382711,94.8124951 91.1773437,95.796875 C92.3492246,96.3593778 93.5562438,97.2851498 94.7984375,98.5742188 L94.7984375,95.375 L106.329687,95.375 L106.329687,128 Z M95.15,111.6875 C95.15,109.62499 94.4234448,107.873054 92.9703125,106.431641 C91.5171802,104.990227 89.7593853,104.269531 87.696875,104.269531 C85.3999885,104.269531 
83.5132886,105.13671 82.0367187,106.871094 C80.8414003,108.277351 80.24375,109.882803 80.24375,111.6875 C80.24375,113.492197 80.8414003,115.097649 82.0367187,116.503906 C83.489851,118.23829 85.3765509,119.105469 87.696875,119.105469 C89.7828229,119.105469 91.5464772,118.390632 92.9878906,116.960938 C94.4293041,115.531243 95.15,113.773448 95.15,111.6875 Z M150.878906,111.722656 C150.878906,117.300809 148.945332,121.75389 145.078125,125.082031 C142.007797,127.730482 138.480489,129.054688 134.496094,129.054688 C132.17577,129.054688 129.996104,128.562505 127.957031,127.578125 C126.78515,127.015622 125.578131,126.08985 124.335937,124.800781 L124.335937,144.3125 L112.804687,144.3125 L112.804687,95.375 L124.335937,95.375 L124.335937,98.9960938 C125.367193,97.636712 126.480463,96.6289095 127.675781,95.9726562 C129.644541,94.8710882 131.8828,94.3203125 134.390625,94.3203125 C138.445333,94.3203125 142.054672,95.7148298 145.21875,98.5039062 C148.992206,101.832048 150.878906,106.238254 150.878906,111.722656 Z M138.890625,111.6875 C138.890625,109.835928 138.304693,108.230476 137.132812,106.871094 C135.656243,105.13671 133.757824,104.269531 131.4375,104.269531 C129.351552,104.269531 127.587898,104.984368 126.146484,106.414062 C124.705071,107.843757 123.984375,109.601552 123.984375,111.6875 C123.984375,113.75001 124.71093,115.501946 126.164062,116.943359 C127.617195,118.384773 129.37499,119.105469 131.4375,119.105469 C133.757824,119.105469 135.644524,118.23829 137.097656,116.503906 C138.292975,115.097649 138.890625,113.492197 138.890625,111.6875 Z M180.521875,106.027344 C178.904679,105.253902 177.264071,104.867188 175.6,104.867188 C171.803106,104.867188 169.342193,106.414047 168.217187,109.507812 C167.79531,110.632818 167.584375,112.144522 167.584375,114.042969 L167.584375,128 L156.053125,128 L156.053125,95.375 L167.584375,95.375 L167.584375,100.71875 C168.803131,98.820303 170.115618,97.449223 171.521875,96.6054688 C173.420322,95.4804631 175.670299,94.9179688 
178.271875,94.9179688 C178.881253,94.9179688 179.631246,94.9531246 180.521875,95.0234375 L180.521875,106.027344 Z" id="dapr" fill="#000000"></path>
<polygon id="tie" fill="#000000" fill-rule="nonzero" points="112.713867 128.237305 124.324219 128.237305 125.324219 155.49707 118.519043 160.265625 111.713867 155.49707"></polygon>
<rect id="Rectangle-4" fill="#000000" fill-rule="nonzero" x="86.6816586" y="46" width="44.0478543" height="31" rx="2"></rect>
<rect id="Rectangle-4" fill="#FFFFFF" fill-rule="nonzero" opacity="0.15" x="86.6816586" y="46" width="16.2935291" height="31"></rect>
<rect id="Rectangle-3" fill="#000000" fill-rule="nonzero" x="72.7718099" y="75" width="71.2879747" height="7.44032012" rx="3.72016"></rect>
<rect id="Rectangle-4" fill="#FFFFFF" fill-rule="nonzero" opacity="0.15" x="72.7718099" y="75" width="22.0566132" height="9.15731707"></rect>
</g>
</svg>
@@ -0,0 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="480px" height="480px" viewBox="0 0 480 480" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<!-- Generator: Sketch 51.3 (57544) - http://www.bohemiancoding.com/sketch -->
<title>logo large</title>
<desc>Created with Sketch.</desc>
<defs></defs>
<g id="logo-large" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<path d="M147.109839,298.504854 L120.218166,298.504854 L120.218166,290.060213 C117.649233,293.285044 115.05301,295.635309 112.429419,297.111079 C107.838135,299.680011 102.618361,300.964459 96.7699392,300.964459 C87.3140802,300.964459 78.8968522,297.712348 71.5180026,291.208029 C62.7180414,283.446572 58.3181267,273.170995 58.3181267,260.380989 C58.3181267,247.372351 62.827356,236.987459 71.8459499,229.226003 C79.0061668,223.049632 87.2320942,219.961493 96.5239788,219.961493 C101.935135,219.961493 107.018266,221.109297 111.773525,223.404939 C114.506432,224.716735 117.321284,226.875699 120.218166,229.881897 L120.218166,175.36067 L147.109839,175.36067 L147.109839,298.504854 Z M121.038034,260.462976 C121.038034,255.653059 119.343657,251.567424 115.954852,248.205948 C112.566047,244.844472 108.466748,243.16376 103.656831,243.16376 C98.3003328,243.16376 93.9004182,245.186081 90.456955,249.230783 C87.6693897,252.510272 86.2756279,256.254299 86.2756279,260.462976 C86.2756279,264.671653 87.6693897,268.41568 90.456955,271.695169 C93.84576,275.739871 98.2456746,277.762192 103.656831,277.762192 C108.521406,277.762192 112.63437,276.095144 115.995845,272.760997 C119.357321,269.42685 121.038034,265.327551 121.038034,260.462976 Z M247.968187,298.504854 L221.076514,298.504854 L221.076514,290.060213 C218.507581,293.285044 215.911358,295.635309 213.287767,297.111079 C208.696483,299.680011 203.476709,300.964459 197.628287,300.964459 C188.172428,300.964459 179.7552,297.712348 172.376351,291.208029 C163.576389,283.446572 159.176475,273.170995 159.176475,260.380989 C159.176475,247.372351 163.685704,236.987459 172.704298,229.226003 C179.864515,223.049632 188.090442,219.961493 197.382327,219.961493 C202.793483,219.961493 207.876614,221.109297 212.631873,223.404939 C215.36478,224.716735 218.179632,226.875699 221.076514,229.881897 L221.076514,222.421098 L247.968187,222.421098 L247.968187,298.504854 Z M221.896382,260.462976 C221.896382,255.653059 220.202005,251.567424 216.8132,248.205948 
C213.424395,244.844472 209.325096,243.16376 204.515179,243.16376 C199.158681,243.16376 194.758766,245.186081 191.315303,249.230783 C188.527738,252.510272 187.133976,256.254299 187.133976,260.462976 C187.133976,264.671653 188.527738,268.41568 191.315303,271.695169 C194.704108,275.739871 199.104023,277.762192 204.515179,277.762192 C209.379754,277.762192 213.492717,276.095144 216.854193,272.760997 C220.215669,269.42685 221.896382,265.327551 221.896382,260.462976 Z M351.860046,260.544963 C351.860046,273.553601 347.350817,283.938493 338.332223,291.699949 C331.172006,297.87632 322.946079,300.964459 313.654194,300.964459 C308.243038,300.964459 303.159907,299.816655 298.404648,297.521013 C295.671741,296.209217 292.856889,294.050253 289.960007,291.044055 L289.960007,336.546733 L263.068334,336.546733 L263.068334,222.421098 L289.960007,222.421098 L289.960007,230.865739 C292.364966,227.695566 294.961188,225.345301 297.748754,223.814873 C302.340038,221.24594 307.559812,219.961493 313.408234,219.961493 C322.864093,219.961493 331.281321,223.213604 338.66017,229.717923 C347.460132,237.47938 351.860046,247.754957 351.860046,260.544963 Z M323.902545,260.462976 C323.902545,256.144983 322.536112,252.400956 319.803205,249.230783 C316.359742,245.186081 311.932498,243.16376 306.521342,243.16376 C301.656767,243.16376 297.543804,244.830808 294.182328,248.164955 C290.820852,251.499102 289.140139,255.598401 289.140139,260.462976 C289.140139,265.272893 290.834516,269.358528 294.223321,272.720004 C297.612126,276.081479 301.711425,277.762192 306.521342,277.762192 C311.932498,277.762192 316.332413,275.739871 319.721218,271.695169 C322.508783,268.41568 323.902545,264.671653 323.902545,260.462976 Z M420.9895,247.2631 C417.218088,245.459381 413.392075,244.557535 409.511347,244.557535 C400.656728,244.557535 394.917709,248.164919 392.294118,255.379794 C391.310271,258.003385 390.818355,261.528782 390.818355,265.956092 L390.818355,298.504854 L363.926682,298.504854 L363.926682,222.421098 
L390.818355,222.421098 L390.818355,234.883092 C393.660579,230.455782 396.721389,227.258329 400.000877,225.290636 C404.428187,222.667045 409.67529,221.355269 415.742344,221.355269 C417.163456,221.355269 418.912491,221.437255 420.9895,221.601229 L420.9895,247.2631 Z" id="dapr" fill="#0D2192"></path>
<polygon id="tie" fill="#0D2192" fill-rule="nonzero" points="262.856535 299.058265 289.932678 299.058265 292.264747 362.629924 276.394607 373.750524 260.524466 362.629924"></polygon>
<rect id="Rectangle-4" fill="#0D2192" fill-rule="nonzero" x="202.147624" y="107.275182" width="102.722643" height="72.2941444" rx="2"></rect>
<rect id="Rectangle-4" fill="#FFFFFF" fill-rule="nonzero" opacity="0.0799999982" x="202.147624" y="107.275182" width="37.9976369" height="72.2941444"></rect>
<rect id="Rectangle-3" fill="#0D2192" fill-rule="nonzero" x="169.708895" y="174.905188" width="166.248488" height="17.3513412" rx="3.72016"></rect>
<rect id="Rectangle-4" fill="#FFFFFF" fill-rule="nonzero" opacity="0.0799999982" x="169.708895" y="174.905188" width="51.4375478" height="21.3554969"></rect>
</g>
</svg>

After Width: | Height: | Size: 5.4 KiB
@ -0,0 +1 @@
// Intentionally blank
@ -1,14 +1,55 @@
// Code formatting.

.copy-code-button {
  color: #272822;
  background-color: #FFF;
  border-color: #0D2192;
  border: 2px solid;
  border-radius: 3px 3px 0px 0px;

  /* right-align */
  display: block;
  margin-left: auto;
  margin-right: 0;

  margin-bottom: -2px;
  padding: 3px 8px;
  font-size: 0.8em;
}

.copy-code-button:hover {
  cursor: pointer;
  background-color: #F2F2F2;
}

.copy-code-button:focus {
  /* Avoid an ugly focus outline on click in Chrome,
     but darken the button for accessibility.
     See https://stackoverflow.com/a/25298082/1481479 */
  background-color: #E6E6E6;
  outline: 0;
}

.copy-code-button:active {
  background-color: #D9D9D9;
}

.highlight pre {
  /* Avoid pushing up the copy buttons. */
  margin: 0;
}

.td-content {
  // Highlighted code.
  .highlight {
    @extend .card;

    margin: 2rem 0;
    padding: 0;
    margin: 0rem 0;
    padding: 0rem;

    max-width: 80%;
    margin-bottom: 2rem;

    max-width: 100%;

    pre {
      margin: 0;

@ -37,7 +78,8 @@
      word-wrap: normal;
      background-color: $gray-100;
      padding: $spacer;

      max-width: 100%;

      > code {
        background-color: inherit !important;
@ -1,58 +0,0 @@
|
|||
//
|
||||
// Right side toc
|
||||
//
|
||||
.td-toc {
|
||||
border-left: 1px solid $border-color;
|
||||
|
||||
@supports (position: sticky) {
|
||||
position: sticky;
|
||||
top: 4rem;
|
||||
height: calc(100vh - 10rem);
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
order: 2;
|
||||
padding-top: 2.75rem;
|
||||
padding-bottom: 1.5rem;
|
||||
vertical-align: top;
|
||||
|
||||
a {
|
||||
display: block;
|
||||
font-weight: $font-weight-medium;
|
||||
padding-bottom: .25rem;
|
||||
}
|
||||
|
||||
li {
|
||||
list-style: none;
|
||||
display: block;
|
||||
font-size: 1.1rem;
|
||||
}
|
||||
|
||||
li li {
|
||||
margin-left: 1.5rem;
|
||||
font-size: 1.1rem;
|
||||
}
|
||||
|
||||
.td-page-meta {
|
||||
a {
|
||||
font-weight: $font-weight-medium;
|
||||
}
|
||||
}
|
||||
|
||||
#TableOfContents {
|
||||
// Hugo's ToC is a mouthful, this can be used to style the top level h2 entries.
|
||||
> ul > li > ul > li > a {}
|
||||
|
||||
a {
|
||||
color: rgb(68, 68, 68);
|
||||
&:hover {
|
||||
color: $blue;
|
||||
text-decoration: none;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
ul {
|
||||
padding-left: 0;
|
||||
}
|
||||
}
|
|
@ -93,9 +93,9 @@
@include media-breakpoint-up(md) {
  padding-top: 4rem;
  background-color: $td-sidebar-bg-color;
  padding-right: 1rem;
  padding-right: .5rem;
  padding-left: .5rem;
  border-right: 1px solid $td-sidebar-border-color;
  min-width: 18rem;
}
@ -2,14 +2,25 @@
baseURL = "https://docs.dapr.io/"
title = "Dapr Docs"
theme = "docsy"
disableFastRender = true

enableRobotsTXT = true
enableGitInfo = true

# Language Configuration
languageCode = "en-us"
contentDir = "content/en"
defaultContentLanguage = "en"

[languages]
  [languages.en]
    title = "Dapr Docs"
    weight = 1
    contentDir = "content/en"
    languageName = "English"
  [languages.zh-hans]
    title = "Dapr 文档库"
    weight = 2
    contentDir = "content/zh-hans"
    languageName = "简体中文"

# Disable categories & tags
disableKinds = ["taxonomy", "term"]

@ -18,6 +29,78 @@ disableKinds = ["taxonomy", "term"]
[services.googleAnalytics]
id = "UA-149338238-3"

# Mounts
[module]
  [[module.mounts]]
    source = "content/en"
    target = "content"
    lang = "en"
  [[module.mounts]]
    source = "static"
    target = "static"
  [[module.mounts]]
    source = "layouts"
    target = "layouts"
  [[module.mounts]]
    source = "data"
    target = "data"
  [[module.mounts]]
    source = "assets"
    target = "assets"
  [[module.mounts]]
    source = "archetypes"
    target = "archetypes"

  [[module.mounts]]
    source = "../sdkdocs/python/daprdocs/content/en/python-sdk-docs"
    target = "content/developing-applications/sdks/python"
    lang = "en"
  [[module.mounts]]
    source = "../sdkdocs/python/daprdocs/content/en/python-sdk-contributing"
    target = "content/contributing/"
    lang = "en"
  [[module.mounts]]
    source = "../sdkdocs/php/daprdocs/content/en/php-sdk-docs"
    target = "content/developing-applications/sdks/php"
    lang = "en"
  [[module.mounts]]
    source = "../sdkdocs/dotnet/daprdocs/content/en/dotnet-sdk-docs"
    target = "content/developing-applications/sdks/dotnet"
    lang = "en"
  [[module.mounts]]
    source = "../sdkdocs/dotnet/daprdocs/content/en/dotnet-sdk-contributing"
    target = "content/contributing/"
    lang = "en"
  [[module.mounts]]
    source = "../sdkdocs/go/daprdocs/content/en/go-sdk-docs"
    target = "content/developing-applications/sdks/go"
    lang = "en"
  [[module.mounts]]
    source = "../sdkdocs/go/daprdocs/content/en/go-sdk-contributing"
    target = "content/contributing/"
    lang = "en"

  [[module.mounts]]
    source = "../translations/docs-zh/content/zh-hans"
    target = "content"
    lang = "zh-hans"
  [[module.mounts]]
    source = "../translations/docs-zh/content/contributing"
    target = "content/contributing/"
    lang = "zh-hans"
  [[module.mounts]]
    source = "../translations/docs-zh/content/sdks_python"
    target = "content/developing-applications/sdks/python"
    lang = "zh-hans"
  [[module.mounts]]
    source = "../translations/docs-zh/content/sdks_php"
    target = "content/developing-applications/sdks/php"
    lang = "zh-hans"
  [[module.mounts]]
    source = "../translations/docs-zh/content/sdks_dotnet"
    target = "content/developing-applications/sdks/dotnet"
    lang = "zh-hans"

# Markdown Engine - Allow inline html
[markup]
  [markup.goldmark]

@ -26,25 +109,25 @@ id = "UA-149338238-3"

# Top Nav Bar
[[menu.main]]
    name = "Home"
    name = "Homepage"
    weight = 40
    url = "https://dapr.io"
[[menu.main]]
    name = "About"
    name = "GitHub"
    weight = 50
    url = "https://dapr.io/#about"
[[menu.main]]
    name = "Download"
    weight = 60
    url = "https://dapr.io/#download"
    url = "https://github.com/dapr"
[[menu.main]]
    name = "Blog"
    weight = 70
    weight = 60
    url = "https://blog.dapr.io/posts"
[[menu.main]]
    name = "Discord"
    weight = 70
    url = "https://aka.ms/dapr-discord"
[[menu.main]]
    name = "Community"
    weight = 80
    url = "https://dapr.io/#community"
    url = "https://github.com/dapr/community/blob/master/README.md"

[params]
copyright = "Dapr"

@ -58,16 +141,26 @@ offlineSearch = false
github_repo = "https://github.com/dapr/docs"
github_project_repo = "https://github.com/dapr/dapr"
github_subdir = "daprdocs"
github_branch = "website"
github_branch = "v1.2"

# Versioning
version_menu = "Releases"
version = "v0.11"
version_menu = "v1.2 (latest)"
version = "v1.2"
archived_version = false
url_latest_version = "https://docs.dapr.io"

[[params.versions]]
  version = "v0.11"
  version = "v1.2 (latest)"
  url = "#"
[[params.versions]]
  version = "v1.1"
  url = "https://v1-1.docs.dapr.io"
[[params.versions]]
  version = "v1.0"
  url = "https://v1-0.docs.dapr.io"
[[params.versions]]
  version = "v0.11"
  url = "https://v0-11.docs.dapr.io"
[[params.versions]]
  version = "v0.10"
  url = "https://github.com/dapr/docs/tree/v0.10.0"

@ -108,9 +201,9 @@ sidebar_search_disable = true
    icon = "fab fa-github"
    desc = "Development takes place here!"
[[params.links.developer]]
    name = "Gitter"
    url = "https://gitter.im/Dapr/community"
    icon = "fab fa-gitter"
    name = "Discord"
    url = "https://aka.ms/dapr-discord"
    icon = "fab fa-discord"
    desc = "Conversations happen here!"
[[params.links.developer]]
    name = "Zoom"
@ -1,7 +1,121 @@
---
type: docs
no_list: true
---

# <img src="/images/home-title.png" alt="Dapr Docs" width=400>

Welcome to the Dapr documentation site!

### Sections

<div class="card-deck">
  <div class="card">
    <div class="card-body">
      <h5 class="card-title"><b>Concepts</b></h5>
      <p class="card-text">Learn about Dapr, including its main features and capabilities.</p>
      <a href="{{< ref concepts >}}" class="stretched-link"></a>
    </div>
  </div>
  <div class="card">
    <div class="card-body">
      <h5 class="card-title"><b>Getting started</b></h5>
      <p class="card-text">How to get up and running with Dapr in your environment in minutes.</p>
      <a href="{{< ref getting-started >}}" class="stretched-link"></a>
    </div>
  </div>
  <div class="card">
    <div class="card-body">
      <h5 class="card-title"><b>Developing applications</b></h5>
      <p class="card-text">Tools, tips, and information on how to build your application with Dapr.</p>
      <a href="{{< ref developing-applications >}}" class="stretched-link"></a>
    </div>
  </div>
</div>
<br />
<div class="card-deck">
  <div class="card">
    <div class="card-body">
      <h5 class="card-title"><b>Operations</b></h5>
      <p class="card-text">Hosting options, best practices, and other guides on running your application on Dapr.</p>
      <a href="{{< ref operations >}}" class="stretched-link"></a>
    </div>
  </div>
  <div class="card">
    <div class="card-body">
      <h5 class="card-title"><b>Reference</b></h5>
      <p class="card-text">Detailed documentation on the Dapr API, CLI, bindings and more.</p>
      <a href="{{< ref reference >}}" class="stretched-link"></a>
    </div>
  </div>
  <div class="card">
    <div class="card-body">
      <h5 class="card-title"><b>Contributing</b></h5>
      <p class="card-text">How to contribute to the Dapr project and the various repositories.</p>
      <a href="{{< ref contributing >}}" class="stretched-link"></a>
    </div>
  </div>
</div>

### Tooling

<div class="media">
  <a class="pr-1" href="{{< ref ides >}}">
    <img class="mr-3" src="/images/homepage/vscode.svg" alt="Visual studio code icon" width=40>
  </a>
  <div class="media-body">
    <h5 class="mt-0"><b>IDE Integrations</b></h5>
    <p>Learn how to get up and running with Dapr in your preferred integrated development environment.</p>
  </div>
</div>
<div class="media">
  <a class="pr-1" href="{{< ref sdks >}}">
    <img class="mr-3" src="/images/homepage/code.svg" alt="Code icon" width=40>
  </a>
  <div class="media-body">
    <h5 class="mt-0"><b>Language SDKs</b></h5>
    <p>Create Dapr applications in your preferred language using the Dapr SDKs.</p>
    <div class="media mt-3">
      <a class="pr-3" href="{{< ref dotnet >}}">
        <img src="/images/homepage/dotnet.png" alt=".NET logo" width=30>
      </a>
      <div class="media-body">
        <h5 class="mt-0"><b>.NET</b></h5>
      </div>
    </div>
    <div class="media mt-3">
      <a class="pr-3" href="{{< ref python >}}">
        <img src="/images/homepage/python.png" alt="Python logo" width=30>
      </a>
      <div class="media-body">
        <h5 class="mt-0"><b>Python</b></h5>
      </div>
    </div>
    <div class="media mt-3">
      <a class="pr-4" href="{{< ref sdks >}}">
        <img src="/images/homepage/java.png" alt="Java logo" width=20>
      </a>
      <div class="media-body">
        <h5 class="mt-0"><b>Java</b></h5>
      </div>
    </div>
    <div class="media mt-3">
      <a class="pr-4" href="{{< ref go >}}">
        <img src="/images/homepage/golang.svg" alt="Go logo" width=30>
      </a>
      <div class="media-body">
        <h5 class="mt-0"><b>Go</b></h5>
      </div>
    </div>
    <div class="media mt-3">
      <a class="pr-4" href="{{< ref php >}}">
        <img src="/images/homepage/php.png" alt="PHP logo" width=30>
      </a>
      <div class="media-body">
        <h5 class="mt-0"><b>PHP</b></h5>
      </div>
    </div>
  </div>
</div>
<br />
@ -6,14 +6,14 @@ weight: 200
description: "Modular best practices accessible over standard HTTP or gRPC APIs"
---

A [building block]({{< ref building-blocks >}}) is an HTTP or gRPC API that can be called from your code and uses one or more Dapr components.

Building blocks address common challenges in building resilient microservices applications and codify best practices and patterns. Dapr consists of a set of building blocks, with extensibility to add new building blocks.

The diagram below shows how building blocks expose a public API that is called from your code, using components to implement the building blocks' capability.

<img src="/images/concepts-building-blocks.png" width=250>

The following are the building blocks provided by Dapr:

<img src="/images/building_blocks.png" width=1000>

@ -24,6 +24,6 @@ The following are the building blocks provided by Dapr:
| [**State management**]({{<ref "state-management-overview.md">}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state API with pluggable state stores for persistence.
| [**Publish and subscribe**]({{<ref "pubsub-overview.md">}}) | `/v1.0/publish` `/v1.0/subscribe` | Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Resource bindings**]({{<ref "bindings-overview.md">}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{<ref "actors-overview.md">}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
| [**Observability**]({{<ref "observability-concept.md">}}) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
| [**Secrets**]({{<ref "secrets-overview.md">}}) | `/v1.0/secrets` | Dapr offers a secrets building block API and integrates with secret stores such as Azure Key Vault and Kubernetes to store the secrets. Service code can call the secrets API to retrieve secrets out of the Dapr supported secret stores.
@ -7,26 +7,49 @@ description: "Modular functionality used by building blocks and applications"
---

Dapr uses a modular design where functionality is delivered as a component. Each component has an interface definition. All of the components are pluggable so that you can swap out one component for another with the same interface. The [components contrib repo](https://github.com/dapr/components-contrib) is where you can contribute implementations for the component interfaces and extend Dapr's capabilities.
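To illustrate this pluggable design (the class and method names below are invented for illustration; Dapr's real component interfaces are defined in Go in the components-contrib repo), any implementation that satisfies a shared interface can be swapped in without changing the calling code:

```python
from abc import ABC, abstractmethod
from typing import Dict, List, Optional


class StateStore(ABC):
    """Hypothetical component interface; not Dapr's actual SPI."""

    @abstractmethod
    def set(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[str]: ...


class InMemoryStateStore(StateStore):
    """One pluggable implementation, backed by a dict."""

    def __init__(self) -> None:
        self._data: Dict[str, str] = {}

    def set(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)


class LoggingStateStore(StateStore):
    """A second implementation with the same interface, wrapping another store."""

    def __init__(self, inner: StateStore) -> None:
        self._inner = inner
        self.log: List[str] = []

    def set(self, key: str, value: str) -> None:
        self.log.append(f"set {key}")
        self._inner.set(key, value)

    def get(self, key: str) -> Optional[str]:
        self.log.append(f"get {key}")
        return self._inner.get(key)


def save_order(store: StateStore) -> None:
    # Calling code depends only on the interface, so implementations can be swapped.
    store.set("order-1", "pending")
```

Because `save_order` only sees the interface, either store (or any future implementation) can be plugged in.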

A building block can use any combination of components. For example the [actors]({{<ref "actors-overview.md">}}) building block and the [state management]({{<ref "state-management-overview.md">}}) building block both use [state components](https://github.com/dapr/components-contrib/tree/master/state). As another example, the [Pub/Sub]({{<ref "pubsub-overview.md">}}) building block uses [Pub/Sub components](https://github.com/dapr/components-contrib/tree/master/pubsub).

You can get a list of the components available in the current hosting environment using the `dapr components` CLI command.

The following are the component types provided by Dapr:

* [Bindings](https://github.com/dapr/components-contrib/tree/master/bindings)
* [Pub/sub](https://github.com/dapr/components-contrib/tree/master/pubsub)
* [Middleware](https://github.com/dapr/components-contrib/tree/master/middleware)
* [Service discovery name resolution](https://github.com/dapr/components-contrib/tree/master/nameresolution)
* [Secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores)
* [State](https://github.com/dapr/components-contrib/tree/master/state)
* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters)

## State stores

State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.

- [List of state stores]({{< ref supported-state-stores >}})
- [State store implementations](https://github.com/dapr/components-contrib/tree/master/state)

## Service discovery

### Service invocation and service discovery components
Service discovery components are used with the [service invocation]({{<ref "service-invocation-overview.md">}}) building block to integrate with the hosting environment to provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, and self-hosted mode uses mDNS.

### Service invocation and middleware components
Dapr allows custom [middleware]({{<ref "middleware-concept.md">}}) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [service invocation]({{<ref "service-invocation-overview.md">}}) building block.
- [Service discovery name resolution implementations](https://github.com/dapr/components-contrib/tree/master/nameresolution)

### Secret store components
In Dapr, a [secret]({{<ref "secrets-overview.md">}}) is any piece of private information that you want to guard against unwanted users. Secrets stores, used to store secrets, are Dapr components and can be used by any of the building blocks.

## Middleware

Dapr allows custom [middleware]({{<ref "middleware.md">}}) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [service invocation]({{<ref "service-invocation-overview.md">}}) building block.

- [Middleware implementations](https://github.com/dapr/components-contrib/tree/master/middleware)
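The pipeline idea can be sketched generically (this is the general middleware pattern, not Dapr's actual middleware API; all names here are illustrative): each middleware wraps the next handler and may transform the request or short-circuit it before the application code runs.

```python
from typing import Callable, Dict

# A handler takes a request and returns a response (both plain dicts here).
Handler = Callable[[Dict], Dict]


def auth_middleware(next_handler: Handler) -> Handler:
    """Reject requests without a token before they reach the app (illustrative)."""
    def handle(request: Dict) -> Dict:
        if "token" not in request:
            return {"status": 401}
        return next_handler(request)
    return handle


def uppercase_middleware(next_handler: Handler) -> Handler:
    """Transform the request before it reaches the app (illustrative)."""
    def handle(request: Dict) -> Dict:
        request = {**request, "user": request.get("user", "").upper()}
        return next_handler(request)
    return handle


def app(request: Dict) -> Dict:
    # The user code at the end of the pipeline.
    return {"status": 200, "user": request["user"]}


# Build the pipeline outside-in: auth runs first, then the transform, then the app.
pipeline = auth_middleware(uppercase_middleware(app))
```

A request without a token is short-circuited with a 401; an authenticated request is transformed and then handled by the app.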

## Pub/sub brokers

Pub/sub broker components are message brokers that can pass messages to/from services as part of the [publish & subscribe]({{< ref pubsub-overview.md >}}) building block.

- [List of pub/sub brokers]({{< ref supported-pubsub >}})
- [Pub/sub broker implementations](https://github.com/dapr/components-contrib/tree/master/pubsub)
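As a rough illustration of the pattern a broker implements (a toy in-memory broker, not any Dapr component), publishers and subscribers only agree on a topic name and never address each other directly:

```python
from collections import defaultdict
from typing import Callable, Dict, List


class ToyBroker:
    """Toy in-memory pub/sub broker (illustrative only)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        """Register a callback to receive every message published to the topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        # Deliver to every subscriber of the topic; the sender never sees them.
        for callback in self._subscribers[topic]:
            callback(message)
```

Real brokers add durability, ordering, and delivery guarantees, which is exactly what the pluggable broker components above provide.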

## Bindings

External resources can connect to Dapr in order to trigger a service or be called from a service as part of the [bindings]({{< ref bindings-overview.md >}}) building block.

- [List of supported bindings]({{< ref supported-bindings >}})
- [Binding implementations](https://github.com/dapr/components-contrib/tree/master/bindings)

## Secret stores

In Dapr, a [secret]({{<ref "secrets-overview.md">}}) is any piece of private information that you want to guard against unwanted users. Secrets stores are used to store secrets that can be retrieved and used in services.

- [List of supported secret stores]({{< ref supported-secret-stores >}})
- [Secret store implementations](https://github.com/dapr/components-contrib/tree/master/secretstores)
@ -6,7 +6,23 @@ weight: 400
description: "Change the behavior of Dapr sidecars or globally on Dapr system services"
---

Dapr configurations are settings that enable you to change the behavior of individual Dapr application sidecars or globally on the system services in the Dapr control plane.
An example of a per Dapr application sidecar setting is configuring trace settings. An example of a Dapr control plane setting is mutual TLS, which is a global setting on the Sentry system service.
Dapr configurations are settings that enable you to change either the behavior of individual Dapr applications or the global behavior of the system services in the Dapr control plane.

Configurations are defined and deployed as a YAML file. For example, an application configuration looks like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"
```

This configuration enables tracing for telemetry recording. It can be loaded in self-hosted mode by editing the default configuration file called `config.yaml` in your `.dapr` directory, or by applying it to your Kubernetes cluster with kubectl/helm.
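As a side note on `samplingRate` (the semantics below follow common probabilistic trace-sampling conventions and are an assumption, not stated on this page): `"1"` records every trace, `"0"` disables sampling, and values in between sample a corresponding fraction. A minimal sketch:

```python
import random


def should_sample(sampling_rate: str, rng: random.Random) -> bool:
    """Illustrative probabilistic sampler: "1" keeps every trace, "0" keeps none."""
    # samplingRate is a string in the YAML, so convert before comparing.
    return rng.random() < float(sampling_rate)
```

With a rate of `"0.5"`, roughly half of all requests would be traced.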

Read [this page]({{<ref "configuration-overview.md">}}) for a list of all configuration options.
@ -6,55 +6,34 @@ weight: 1000
|
|||
description: "Common questions asked about Dapr"
|
||||
---
|
||||
|
||||
## Networking and service meshes
|
||||
|
||||
### Understanding how Dapr works with service meshes
|
||||
|
||||
Dapr is a distributed application runtime. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric versus service meshes being infrastructure-centric.
|
||||
|
||||
Dapr can be used alongside any service mesh such as Istio and Linkerd. A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. A service mesh doesn’t introduce new functionality to an application.
|
||||
|
||||
That is where Dapr comes in. Dapr is a language agnostic programming model built on http and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app’s runtime. Both service meshes and Dapr run as side-car services to your application, one giving network features and the other distributed application capabilities.
|
||||
|
||||
Watch this [video](https://www.youtube.com/watch?v=xxU68ewRmz8&feature=youtu.be&t=140) on how Dapr and service meshes work together.
|
||||
|
||||
### Understanding how Dapr interoperates with the service mesh interface (SMI)
|
||||
|
||||
SMI is an abstraction layer that provides a common API surface across different service mesh technology. Dapr can leverage any service mesh technology including SMI.
|
||||
|
||||
### Differences between Dapr, Istio and Linkerd
|
||||
|
||||
Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on them.
|
||||
|
||||
Istio is not a programming model and does not focus on application level features such as state management, pub-sub, bindings etc. That is where Dapr comes in.
|
||||
## How does Dapr compare to service meshes such as Istio, Linkerd or OSM?
|
||||
Dapr is not a service mesh. While service meshes focus on fine-grained network control, Dapr is focused on helping developers build distributed applications. Both Dapr and service meshes use the sidecar pattern and run alongside the application. They do have some overlapping features, but also offer unique benefits. For more information please read the [Dapr & service meshes]({{<ref service-mesh>}}) concept page.
|
||||
|
||||
## Performance Benchmarks
|
||||
The Dapr project is focused on performance due to the inherent discussion of Dapr being a sidecar to your application. This [performance benchmark video](https://youtu.be/4kV3SHs1j2k?t=783) discusses and demos the work that has been done so far. The performance benchmark data is planned to be published on a regular basis. You can also run the perf tests in your own environment to get perf numbers.
|
||||
The Dapr project is focused on performance due to the inherent discussion of Dapr being a sidecar to your application. See [here]({{< ref perf-service-invocation.md >}}) for updated performance numbers.
|
||||
|
||||
## Actors

### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors?

The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms, such as Kubernetes or other on-premises environments.

Moreover, Dapr is about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview]({{< ref overview.md >}}).

### Differences between Dapr and an actor framework

Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. Because Dapr is programming-language agnostic with an HTTP/gRPC API, the actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.

Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors/<actorType>/<actorId>/…`. For example, `http://localhost:3500/v1.0/actors/myactor/50/method/getData` calls the `getData` method on the newly created `myactor` with id `50`.
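As an illustration, the invocation URL can be assembled from its parts. The port, actor type, id and method names below are the values from the example above; the helper function itself is not part of any Dapr SDK:

```go
package main

import "fmt"

// actorMethodURL builds the sidecar URL for invoking a method on a virtual actor.
func actorMethodURL(port int, actorType, actorID, method string) string {
	return fmt.Sprintf("http://localhost:%d/v1.0/actors/%s/%s/method/%s",
		port, actorType, actorID, method)
}

func main() {
	// Matches the example above: call getData on myactor with id 50.
	fmt.Println(actorMethodURL(3500, "myactor", "50", "getData"))
}
```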

The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently the .NET, Java and Python SDKs have actor frameworks.

## Developer language SDKs and frameworks

### Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?

To make using Dapr more natural for different languages, it includes [language-specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the HTTP/gRPC API directly. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.

### What frameworks does Dapr integrate with?

Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.

Dapr is integrated with the following frameworks:

---
type: docs
title: "Middleware pipelines"
linkTitle: "Middleware"
weight: 400
description: "Custom processing pipelines of chained middleware components"
---

Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. A request goes through all defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram.

<img src="/images/middleware.png" width=400>
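The inbound-then-outbound ordering above can be modeled with plain function wrapping. This is an illustrative sketch of the chaining pattern, not Dapr's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// handler is a simplified stand-in for an HTTP request handler.
type handler func(log *[]string)

// wrap adds inbound logic before, and outbound logic after, the next handler.
func wrap(name string, next handler) handler {
	return func(log *[]string) {
		*log = append(*log, name+" inbound") // runs in declaration order
		next(log)
		*log = append(*log, name+" outbound") // runs in reverse order
	}
}

// runPipeline chains two middleware around user code and records the order.
func runPipeline() []string {
	userCode := handler(func(log *[]string) { *log = append(*log, "user code") })
	pipeline := wrap("oauth2", wrap("uppercase", userCode))
	var log []string
	pipeline(&log)
	return log
}

func main() {
	fmt.Println(strings.Join(runPipeline(), "\n"))
}
```

Running this prints the inbound entries in declaration order, then user code, then the outbound entries in reverse order.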

## Customize processing pipeline

When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware]({{< ref tracing.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.

> **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test/verify that your custom pipeline is in place.

The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware]({{< ref oauth.md >}}) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol and transformed to uppercase text before they are forwarded to user code.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase
```

## Writing a custom middleware

Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a **fasthttp.RequestHandler** wrapper and an error:

```go
type Middleware interface {
	GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```

Your handler implementation can include any inbound logic, outbound logic, or both:

```go
func GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
	return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
		return func(ctx *fasthttp.RequestCtx) {
			// inbound logic
			h(ctx) // call the downstream handler
			// outbound logic
		}
	}, nil
}
```

## Adding new middleware components

Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).

Then submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **Load()** method in [registry.go](https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go) to register your middleware using the **Register** method.

## Next steps

* [How-To: Configure API authorization with OAuth]({{< ref oauth.md >}})

---
type: docs
title: "Observability"
linkTitle: "Observability"
weight: 500
description: >
  Monitor applications through tracing, metrics, logs and health
---

When building an application, understanding how the system is behaving is an important part of operating it. This includes having the ability to observe the internal calls of an application, gauging its performance and becoming aware of problems as soon as they occur. This is challenging for any system, but even more so for a distributed system comprised of multiple microservices, where a flow made of several calls may start in one microservice and continue in another. Observability is critical in production environments, but also useful during development to understand bottlenecks, improve performance and perform basic debugging across the span of microservices.

While some data points about an application can be gathered from the underlying infrastructure (e.g. memory consumption, CPU usage), other meaningful information must be collected from an "application-aware" layer, one that can show how an important series of calls is executed across microservices. This usually means a developer must add some code to instrument an application for this purpose. Often, instrumentation code simply sends collected data, such as traces and metrics, to an external monitoring tool or service that can help store, visualize and analyze all this information.

Having to maintain this code, which is not part of the core logic of the application, is another burden on the developer, sometimes requiring understanding the monitoring tools' APIs, using additional SDKs, etc. This instrumentation may also add to the portability challenges of an application, which may require different instrumentation depending on where the application is deployed. For example, different cloud providers offer different monitoring solutions and an on-premises deployment might require an on-premises solution.

## Observability for your application with Dapr

When building an application which leverages Dapr building blocks to perform service-to-service calls and pub/sub messaging, Dapr offers an advantage with respect to [distributed tracing]({{<ref tracing>}}). Because this inter-service communication flows through the Dapr sidecar, the sidecar is in a unique position to offload the burden of application-level instrumentation.

### Distributed tracing

Dapr can be [configured to emit tracing data]({{<ref setup-tracing.md>}}), and because Dapr does so using widely adopted protocols such as the [Zipkin](https://zipkin.io) protocol, it can be easily integrated with multiple [monitoring backends]({{<ref supported-tracing-backends>}}).

<img src="/images/observability-tracing.png" width=1000 alt="Distributed tracing with Dapr">

### OpenTelemetry collector

Dapr can also be configured to work with the [OpenTelemetry Collector]({{<ref open-telemetry-collector>}}), which offers even more compatibility with external monitoring tools.

<img src="/images/observability-opentelemetry-collector.png" width=1000 alt="Distributed tracing via OpenTelemetry collector">

### Tracing context

Dapr uses the [W3C tracing]({{<ref w3c-tracing>}}) specification for tracing context, and can either generate and propagate the context header itself or propagate user-provided context headers.

## Observability for the Dapr sidecar and system services

As for other parts of your system, you will want to be able to observe Dapr itself, and to collect metrics and logs emitted by the Dapr sidecar that runs alongside each microservice, as well as the Dapr-related services in your environment, such as the control plane services that are deployed for a Dapr-enabled Kubernetes cluster.

<img src="/images/observability-sidecar.png" width=1000 alt="Dapr sidecar metrics, logs and health checks">

### Logging

Dapr generates [logs]({{<ref "logs.md">}}) to provide visibility into sidecar operation and to help users identify issues and perform debugging. Log events contain warning, error, info, and debug messages produced by Dapr system services. Dapr can also be configured to send logs to collectors such as [Fluentd]({{< ref fluentd.md >}}) and [Azure Monitor]({{< ref azure-monitor.md >}}) so they can be easily searched and analyzed to provide insights.

### Metrics

Metrics are the series of measured values and counts that are collected and stored over time. [Dapr metrics]({{<ref "metrics">}}) provide monitoring capabilities to understand the behavior of the Dapr sidecar and system services. For example, the metrics between a Dapr sidecar and the user application show call latency, traffic failures, error rates of requests, etc. Dapr [system services metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) show sidecar injection failures and the health of system services, including CPU usage, number of actor placements made, etc.

### Health checks

The Dapr sidecar exposes an HTTP endpoint for [health checks]({{<ref sidecar-health.md>}}). With this API, user code or hosting environments can probe the Dapr sidecar to determine its status and identify issues with sidecar readiness.

---
description: >
  Introduction to the Distributed Application Runtime
---

Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.

<iframe width="1120" height="630" src="https://www.youtube.com/embed/9o9iDAgYBA8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Any language, any framework, anywhere

Today we are experiencing a wave of cloud adoption. Developers are comfortable with web + database application architectures (for example classic 3-tier designs) but not with microservice application architectures, which are inherently distributed. It’s hard to become a distributed systems expert, nor should you have to. Developers want to focus on business logic, while leaning on the platforms to imbue their applications with scale, resiliency, maintainability, elasticity and the other attributes of cloud-native architectures.

This is where Dapr comes in. Dapr codifies the *best practices* for building microservice applications into open, independent building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is completely independent and you can use one, some, or all of them in your application.

In addition, Dapr is platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and in other hosting environments that Dapr integrates with. This enables you to build microservice applications that can run on the cloud and edge.

Using Dapr you can easily build microservice applications using any language and any framework, and run them anywhere.

## Microservice building blocks for cloud and edge

<img src="/images/building_blocks.png" width=1000>

There are many considerations when architecting microservices applications. Dapr provides best practices for common capabilities when building microservice applications that developers can use in a standard way, and deploy to any environment. It does this by providing distributed system building blocks.

Each of these building blocks is independent, meaning that you can use one, some, or all of them in your application. Today, the following building blocks are available:

| Building Block | Description |
|----------------|-------------|
| [**Service-to-service invocation**]({{<ref "service-invocation-overview.md">}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
| [**State management**]({{<ref "state-management-overview.md">}}) | With state management for storing key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis, among others.
| [**Publish and subscribe**]({{<ref "pubsub-overview.md">}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee.
| [**Resource bindings**]({{<ref "bindings-overview.md">}}) | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source, such as databases, queues, file systems, etc.
| [**Actors**]({{<ref "actors-overview.md">}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
| [**Observability**]({{<ref "observability-concept.md">}}) | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
| [**Secrets**]({{<ref "secrets-overview.md">}}) | Dapr provides secrets management, and integrates with public-cloud and local-secret stores to retrieve the secrets for use in application code.

## Sidecar architecture

Dapr exposes its HTTP and gRPC APIs as a sidecar architecture, either as a container or as a process, not requiring the application code to include any Dapr runtime code. This makes integration with Dapr easy from other runtimes, as well as providing separation of the application logic for improved supportability.

<img src="/images/overview-sidecar-model.png" width=700>

## Hosting environments

Dapr can be hosted in multiple environments, including self-hosted on a Windows/Linux/macOS machine and on Kubernetes.

### Self-hosted

In [self-hosted mode]({{< ref self-hosted-overview.md >}}) Dapr runs as a separate sidecar process which your service code can call via HTTP or gRPC. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.

You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr-enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}).

<img src="/images/overview_standalone.png" width=1000 alt="Architecture diagram of Dapr in self-hosted mode">

### Kubernetes hosted

In container hosting environments such as Kubernetes, Dapr runs as a sidecar container with the application container in the same pod.

The `dapr-sidecar-injector` and `dapr-operator` services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service container, and provide notifications of Dapr component updates provisioned in the cluster.

The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service, read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}}).

Deploying and running a Dapr-enabled application in your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. Visit the [Dapr on Kubernetes docs]({{< ref kubernetes >}}).

<img src="/images/overview_kubernetes.png" width=1000 alt="Architecture diagram of Dapr in Kubernetes mode">

## Developer language SDKs and frameworks

Dapr offers a variety of SDKs and frameworks to make it easy to begin developing with Dapr in your preferred language.

### Dapr SDKs

To make using Dapr more natural for different languages, it also includes [language-specific SDKs]({{<ref sdks>}}) for:
- [C++](https://github.com/dapr/cpp-sdk)
- [Go](https://github.com/dapr/go-sdk)
- [Java](https://github.com/dapr/java-sdk)
- [JavaScript](https://github.com/dapr/js-sdk)
- [Python](https://github.com/dapr/python-sdk)
- [Rust](https://github.com/dapr/rust-sdk)
- [.NET](https://github.com/dapr/dotnet-sdk)
- PHP

These SDKs expose the functionality of the Dapr building blocks through a typed language API, rather than calling the http/gRPC API directly. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and function support.

> Note: Dapr is language agnostic and provides a [RESTful HTTP API]({{< ref api >}}) in addition to the protobuf clients.

### Developer frameworks

Dapr can be used from any developer framework. Here are some that have been integrated with Dapr:

#### Web

| Language | Frameworks | Description |
|----------|------------|-------------|
| [.NET]({{< ref dotnet >}}) | [ASP.NET]({{< ref dotnet-aspnet.md >}}) | Brings stateful routing controllers that respond to pub/sub events from other services. Can also take advantage of [ASP.NET Core gRPC Services](https://docs.microsoft.com/en-us/aspnet/core/grpc/).
| [Java](https://github.com/dapr/java-sdk) | [Spring Boot](https://spring.io/)
| [Python]({{< ref python >}}) | [Flask]({{< ref python-flask.md >}})
| [Javascript](https://github.com/dapr/js-sdk) | [Express](http://expressjs.com/)
| [PHP]({{< ref php >}}) | | You can serve with Apache, Nginx, or Caddyserver.

#### Actors

Dapr SDKs support [virtual actors]({{< ref actors >}}), which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications.

#### Azure Functions

Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information, read [Azure Functions extension for Dapr](https://cloudblogs.microsoft.com/opensource/2020/07/01/announcing-azure-functions-extension-for-dapr/) and visit the [Azure Functions extension](https://github.com/dapr/azure-functions-extension) repo to try out the samples.

#### Dapr workflows

To enable developers to easily build workflow applications that use Dapr's capabilities, including diagnostics and multi-language support, you can use Dapr workflows. Dapr integrates with workflow engines such as Logic Apps. For more information, read [cloud-native workflows using Dapr and Logic Apps](https://cloudblogs.microsoft.com/opensource/2020/05/26/announcing-cloud-native-workflows-dapr-logic-apps/) and visit the [Dapr workflows](https://github.com/dapr/workflows) repo to try out the samples.

#### Integrations and extensions

Visit the [integrations]({{< ref integrations >}}) page to learn about some of the first-class support Dapr has for various frameworks and external products, including:

- Azure Functions runtime
- Azure Logic Apps runtime
- Azure API Management
- KEDA
- Visual Studio Code

## Designed for operations

Dapr is designed for [operations]({{< ref operations >}}) and security. The Dapr sidecars, runtime, components, and configuration can all be managed and deployed easily and securely to match your organization's needs.

The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs, and more for the Dapr sidecars.

## Run anywhere

### Running Dapr on a local developer machine in self-hosted mode

Dapr can be configured to run on your local developer machine in [self-hosted mode]({{< ref self-hosted >}}). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components, and the other building blocks.
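
In self-hosted mode these components are declared in plain YAML files that the sidecar loads at startup. As an illustrative sketch (the component name and the Redis address are placeholders you would adapt to your environment), a Redis state store component looks like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore        # name your application uses to address this store
spec:
  type: state.redis       # a Redis-backed state store component
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379 # placeholder: address of your Redis instance
  - name: redisPassword
    value: ""
```

Swapping the backing store for another (for example, a different database) is a matter of changing `spec.type` and its metadata; the application code calling the state API stays the same.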

You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr-enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}).

<img src="/images/overview_standalone.png" width=800>

### Running Dapr in Kubernetes mode

Dapr can be configured to run on any [Kubernetes cluster]({{< ref kubernetes >}}). In Kubernetes, the `dapr-sidecar-injector` and `dapr-operator` services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service container, and provide notifications of Dapr component updates provisioned in the cluster.

The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service, read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}}).

<img src="/images/overview_kubernetes.png" width=800>

Deploying and running a Dapr-enabled application in your Kubernetes cluster is as simple as adding a few annotations to the deployment manifests. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. Try this out with the [Kubernetes quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes).
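
A minimal sketch of such an annotated deployment is shown below. The app id, port, and image are illustrative (the image shown is from the hello-kubernetes quickstart); the `dapr.io/*` annotations are what trigger the sidecar injection:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
      annotations:
        dapr.io/enabled: "true"   # instructs the sidecar injector to add the daprd container
        dapr.io/app-id: "nodeapp" # unique Dapr id used for service invocation and pub/sub
        dapr.io/app-port: "3000"  # port your app listens on, so Dapr can call into it
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node:latest
        ports:
        - containerPort: 3000
```

No changes to the container image itself are required; the annotations alone opt the pod into Dapr.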

The [monitoring tools support]({{< ref monitoring >}}) provides deeper visibility into the Dapr system services and sidecars, and the [observability capabilities]({{<ref "observability-concept.md">}}) of Dapr provide insights into your application, such as tracing and metrics.


Several of the areas above are addressed through encryption of data in transit. One of the security mechanisms that Dapr employs for encrypting data in transit is [mutual authentication TLS](https://en.wikipedia.org/wiki/Mutual_authentication), or mTLS. mTLS offers a few key features for network traffic inside your application:

- Two-way authentication, with the client proving its identity to the server, and vice-versa
- An encrypted channel for all in-flight communication, after two-way authentication is established

Mutual TLS is useful in almost all scenarios, but especially so for systems subject to regulations such as [HIPAA](https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act) and [PCI](https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard).

## Sidecar-to-app communication

The Dapr sidecar runs close to the application through **localhost**, and is recommended to run under the same network boundary as the app. While many cloud-native systems today consider the pod level (on Kubernetes, for example) as a trusted security boundary, Dapr provides the user with API-level authentication using tokens. This feature guarantees that, even on localhost, only an authenticated caller may call into Dapr.

## Sidecar-to-sidecar communication

To achieve this, Dapr leverages a system service named `Sentry`, which acts as a certificate authority (CA).

Dapr also manages workload certificate rotation, and does so with zero downtime to the application.

Sentry, the CA service, automatically creates and persists self-signed root certificates valid for one year, unless existing root certs have been provided by the user.

When root certs are replaced (a secret in Kubernetes mode and the filesystem for self-hosted mode), Sentry picks them up and rebuilds the trust chain without needing to restart, with zero downtime to Sentry.

When a new Dapr sidecar initializes, it first checks if mTLS is enabled. If it is, an ECDSA private key and certificate signing request are generated and sent to Sentry via a gRPC interface. The communication between the Dapr sidecar and Sentry is authenticated using the trust chain cert, which is injected into each Dapr instance by the Dapr Sidecar Injector system service.

In a Kubernetes cluster, the secret that holds the root certificates is scoped to the namespace in which the Dapr components are deployed and is only accessible by the Dapr system pods.

Dapr also supports strong identities when deployed on Kubernetes, relying on a pod's Service Account token, which is sent as part of the certificate signing request (CSR) to Sentry.

By default, a workload cert is valid for 24 hours and the clock skew is set to 15 minutes.

Mutual TLS can be turned on or off by editing the default configuration that is deployed with Dapr via the `spec.mtls.enabled` field. This can be done for both Kubernetes and self-hosted modes. Details for how to do this can be found [here]({{< ref mtls.md >}}).
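
As a sketch, the relevant section of the control plane `Configuration` resource looks like this (the TTL and clock skew values shown match the defaults described above; the resource name and namespace are the usual defaults and may differ in your installation):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: true            # turn sidecar-to-sidecar mTLS on or off
    workloadCertTTL: "24h"   # workload certificate validity
    allowedClockSkew: "15m"  # tolerated clock skew when validating certs
```

Editing `enabled` and re-applying the configuration is how mTLS is toggled for the cluster.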

### mTLS self-hosted

The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service and stored in a file.

<img src="/images/security-mTLS-sentry-selfhosted.png" width=1000>

### mTLS in Kubernetes

The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service and stored as a Kubernetes secret.

<img src="/images/security-mTLS-sentry-kubernetes.png" width=1000>

## Sidecar to system services communication

In addition to automatic mTLS between Dapr sidecars, Dapr offers mandatory mTLS between the Dapr sidecar and the Dapr system services, namely the Sentry service (certificate authority), the Placement service (actor placement), and the Kubernetes Operator.

When mTLS is enabled, Sentry writes the root and issuer certificates to a Kubernetes secret that is scoped to the namespace where the control plane is installed. In self-hosted mode, Sentry writes the certificates to a configurable filesystem path.

In Kubernetes, when Dapr system services start, they automatically mount the secret containing the root and issuer certs and use those to secure the gRPC server that is used by the Dapr sidecar.

In self-hosted mode, each system service can be mounted to a filesystem path to get the credentials.

When the Dapr sidecar initializes, it authenticates with the system pods using the mounted leaf certificates and issuer private key. These are mounted as environment variables on the sidecar container.

### mTLS to system services in Kubernetes

The diagram below shows secure communication between the Dapr sidecar and the Dapr Sentry (certificate authority), Placement (actor placement), and Kubernetes Operator system services.

Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace.

Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secret store overview]({{<ref "secrets-overview.md">}}) for more details.

In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application-level scoping, see [here]({{<ref "component-scopes.md#application-access-to-components-with-scopes">}}).

## Network security

You can adopt common network security technologies, such as network security groups (NSGs), demilitarized zones (DMZs), and firewalls, to provide layers of protection over your networked resources. For example, unless configured to talk to an external binding target, Dapr sidecars don't open connections to the internet, and most binding implementations use outbound connections only. You can design your firewall rules to allow outbound connections only through designated ports.

## Bindings security

Authentication with a binding target is configured by the binding's configuration file. Generally, you should configure the minimum required access rights. For example, if you only read from a binding target, you should configure the binding to use an account with read-only access rights.

## State store security

Dapr doesn't transform the state data from applications. This means Dapr doesn't attempt to encrypt/decrypt state data, and applications can adopt the encryption/decryption methods of their choice, with the state data remaining opaque to Dapr.

Dapr does not store any data at rest.

Dapr uses the configured authentication method to authenticate with the underlying state store. Many state store implementations use official client libraries that generally use secured communication channels with the servers.

## Management security

When deploying on Kubernetes, you can use regular [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to control access to management activities.

When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.

## Threat model

Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified and enumerated, and mitigations can be prioritized. The Dapr threat model is below.

<img src="/images/security-threat-model.png" alt="Dapr threat model" width=1000>

## Security audit

### February 2021

In February 2021, Dapr went through a second security audit by Cure53, targeting its 1.0 release.

The test focused on the following:

* Dapr runtime codebase evaluation since last audit
* Access control lists
* Secrets management
* Penetration testing
* Validating fixes for previous high/medium issues

The full report can be found [here](/docs/Dapr-february-2021-security-audit-report.pdf).

One high issue was detected and fixed during the test.

As of February 16th 2021, Dapr has 0 criticals, 0 highs, 0 mediums, 2 lows, and 2 infos.

### June 2020

In June 2020, Dapr underwent a security audit from Cure53, a CNCF-approved cybersecurity firm.

The test focused on the following:

* Dapr runtime codebase evaluation
* Dapr components codebase evaluation
* Dapr CLI codebase evaluation
* Privilege escalation
* Traffic spoofing
* Secrets management

The full report can be found [here](/docs/Dapr-july-2020-security-audit-report.pdf).

Two issues, one critical and one high, were fixed during the test.

As of July 21st 2020, Dapr has 0 criticals, 2 highs, 2 mediums, 1 low, and 1 info.

## Reporting a security issue

Visit [this page]({{< ref support-security-issues.md >}}) to report a security issue to the Dapr maintainers.

---
type: docs
title: "Dapr and service meshes"
linkTitle: "Service meshes"
weight: 700
description: >
  How Dapr compares to, and works with, service meshes
---

Dapr uses a sidecar architecture, running as a separate process alongside the application, and includes features such as service invocation, network security, and distributed tracing. This often raises the question: how does Dapr compare to service mesh solutions such as Linkerd, Istio, and Open Service Mesh (OSM)?

## How Dapr and service meshes compare

While Dapr and service meshes do offer some overlapping capabilities, **Dapr is not a service mesh**, where a service mesh is defined as a *networking* service mesh. Unlike a service mesh, which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, whereas service meshes are infrastructure-centric.

In most cases, developers do not need to be aware that the application they are building will be deployed in an environment which includes a service mesh, since a service mesh intercepts network traffic. Service meshes are mostly managed and deployed by system operators, whereas Dapr building block APIs are intended to be used by developers explicitly in their code.

Some common capabilities that Dapr shares with service meshes include:

- Secure service-to-service communication with mTLS encryption
- Service-to-service metric collection
- Service-to-service distributed tracing
- Resiliency through retries

Importantly, Dapr provides service discovery and invocation via names, which is a developer-centric concern. This means that, through Dapr's service invocation API, developers call a method on a service name, whereas service meshes deal with network concepts such as IP addresses and DNS addresses. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. Traffic routing is often addressed with ingress proxies to an application and does not have to use a service mesh. In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more.

Another difference between Dapr and service meshes is observability (tracing and metrics). Service meshes operate at the network level and trace the network calls between services. Dapr does this with service invocation. Moreover, Dapr also provides observability (tracing and metrics) over pub/sub calls using trace IDs written into the Cloud Events envelope. This means that metrics and tracing with Dapr are more extensive than with a service mesh for applications that use both service-to-service invocation and pub/sub to communicate.

The illustration below captures the overlapping features and unique capabilities that Dapr and service meshes offer:

<img src="/images/service-mesh.png" width=1000>

## Using Dapr with a service mesh

Dapr does work with service meshes. In the case where both are deployed together, both the Dapr and service mesh sidecars are running in the application environment. In this case, it is recommended to configure only Dapr or only the service mesh to perform mTLS encryption and distributed tracing.

Watch these recordings from the Dapr community calls showing presentations on running Dapr together with different service meshes:

- General overview and a demo of [Dapr and Linkerd](https://youtu.be/xxU68ewRmz8?t=142)
- Demo of running [Dapr and Istio](https://youtu.be/ngIDOQApx8g?t=335)

## When to choose using Dapr, a service mesh, or both

Should you use Dapr, a service mesh, or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub, and you are considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and that a service mesh is not required.

Typically you would use a service mesh with Dapr where there is a corporate policy that traffic on the network must be encrypted for all applications. For example, you may be using Dapr in only part of your application, and other services and processes that are not using Dapr also need their traffic encrypted. In this scenario a service mesh is the better option, and most likely you should use mTLS and distributed tracing on the service mesh and disable these on Dapr.

If you need traffic splitting for A/B testing scenarios, you would benefit from using a service mesh, since Dapr does not provide these capabilities.

In some cases, where you require capabilities that are unique to each, you will find it useful to leverage both Dapr and a service mesh; as mentioned above, there is no limitation to using them together.

---
type: docs
title: "Dapr terminology and definitions"
linkTitle: "Terminology"
weight: 800
description: Definitions for common terms and acronyms in the Dapr documentation
---

This page details all of the common terms you may come across in the Dapr docs.

| Term | Definition | More information |
|:-----|------------|------------------|
| App/Application | A running service/binary, usually one that you as the user create and run. | |
| Building block | An API that Dapr provides to users to help in the creation of microservices and applications. | [Dapr building blocks]({{< ref building-blocks-concept.md >}}) |
| Component | Modular types of functionality that are used either individually or with a collection of other components, by a Dapr building block. | [Dapr components]({{< ref components-concept.md >}}) |
| Configuration | A YAML file declaring all of the settings for Dapr sidecars or the Dapr control plane. This is where you can configure control plane mTLS settings, or the tracing and middleware settings for an application instance. | [Dapr configuration]({{< ref configuration-concept.md >}}) |
| Dapr | Distributed Application Runtime. | [Dapr overview]({{< ref overview.md >}}) |
| Dapr control plane | A collection of services that are part of a Dapr installation on a hosting platform such as a Kubernetes cluster. This allows Dapr-enabled applications to run on the platform and handles Dapr capabilities such as actor placement, Dapr sidecar injection, or certificate issuance/rollover. | [Self-hosted overview]({{< ref self-hosted-overview >}})<br />[Kubernetes overview]({{< ref kubernetes-overview >}}) |
| Self-hosted | Windows/macOS/Linux machine(s) where you can run your applications with Dapr. Dapr provides the capability to run on machines in "self-hosted" mode. | [Self-hosted mode]({{< ref self-hosted-overview.md >}}) |
| Service | A running application or binary. This can refer to your application or to a Dapr application. | |
| Sidecar | A program that runs alongside your application as a separate process or container. | [Sidecar pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar) |
---
type: docs
title: "Contributing with GitHub Codespaces"
linkTitle: "GitHub Codespaces"
weight: 2500
description: "How to work with Dapr repos in GitHub Codespaces"
aliases:
  - "/developing-applications/ides/codespaces/"
---

[GitHub Codespaces](https://github.com/features/codespaces) are the easiest way to get up and running for contributing to a Dapr repo. In as little as a single click, you can have an environment with all of the prerequisites ready to go in your browser.

{{% alert title="Private Beta" color="warning" %}}
GitHub Codespaces is currently in a private beta. Sign up [here](https://github.com/features/codespaces/signup).
{{% /alert %}}

## Features

- **Click and run**: Get a dedicated and sandboxed environment with all of the required frameworks and packages ready to go.
- **Usage-based billing**: Only pay for the time you spend developing in the Codespace. Environments are spun down automatically when not in use.
- **Portable**: Run in your browser or in Visual Studio Code.

## Open a Dapr repo in a Codespace

To open a Dapr repository in a Codespace, simply select "Code" from the repo homepage and "Open with Codespaces":

<img src="/images/codespaces-create.png" alt="Screenshot of creating a Dapr Codespace" width="300">

If you haven't already forked the repo, creating the Codespace will also create a fork for you and use it inside the Codespace.

### Supported repos

- [Dapr](https://github.com/dapr/dapr)
- [Components-contrib](https://github.com/dapr/components-contrib)
- [Python SDK](https://github.com/dapr/python-sdk)

### Developing Dapr components in a Codespace

Developing a new Dapr component requires working with both the [components-contrib](https://github.com/dapr/components-contrib) and [dapr](https://github.com/dapr/dapr) repos together under the `$GOPATH` tree for testing purposes. To facilitate this, the `/go/src/github.com/dapr` folder in the components-contrib Codespace will already be set up with your fork of components-contrib, and a clone of the dapr repo as described in the [component development documentation](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md). A few things to note in this configuration:

- The components-contrib and dapr repos only define Codespaces for the Linux amd64 environment at the moment.
- The `/go/src/github.com/dapr/components-contrib` folder is a soft link to the Codespace's default `/workspace/components-contrib` folder, so changes in one will be automatically reflected in the other.
- Since the `/go/src/github.com/dapr/dapr` folder uses a clone of the official dapr repo rather than a fork, you will not be able to make a pull request from changes made in that folder directly. You can use the dapr Codespace separately for that PR, or if you would like to use the same Codespace for the dapr changes as well, you should remap the dapr repo origin to your fork in the components-contrib Codespace. For example, to use a dapr fork under `my-git-alias`:

```bash
cd /go/src/github.com/dapr/dapr
git remote set-url origin https://github.com/my-git-alias/dapr
git fetch
git reset --hard
```

## Related links

- [GitHub documentation](https://docs.github.com/en/github/developing-online-with-codespaces/about-codespaces)

description: >
  Guidelines for contributing to the Dapr Docs
---

This guide contains information about contributions to the [Dapr docs repository](https://github.com/dapr/docs). Please review the guidelines below before making a contribution to the Dapr docs. This guide assumes you have already reviewed the [general guidance]({{< ref contributing-overview >}}), which applies to any Dapr project contributions.

Dapr docs are published to [docs.dapr.io](https://docs.dapr.io). Therefore, any contribution must ensure the docs can be compiled and published correctly.

## Prerequisites

The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme. To verify docs are built correctly before submitting a contribution, you should set up your local environment to build and display the docs locally.

- Fork the [docs repository](https://github.com/dapr/docs) to work on any changes.
- Follow the instructions in the repository [README.md](https://github.com/dapr/docs/blob/master/README.md#environment-setup) to install Hugo locally and build the docs website.

## Branch guidance

The Dapr docs repository handles branching differently than most code repositories. Instead of having a `master` or `main` branch, every branch is labeled to match the major and minor version of a runtime release. For the full list, visit the [docs repo](https://github.com/dapr/docs#branch-guidance).

Overall, all updates should go into the docs branch for the latest release of Dapr. You can find this directly at [https://github.com/dapr/docs](https://github.com/dapr/docs), as the latest release will be the default branch. For any docs changes that are applicable to a release candidate or a pre-release version of the docs, make your changes in that particular branch.

For example, if you are fixing a typo, adding notes, or clarifying a point, make your changes in the default Dapr branch. If you are documenting an upcoming change to a component or the runtime, make your changes in the pre-release branch. Branches can be found in the [docs repo](https://github.com/dapr/docs#branch-guidance).

## Style and tone
|
||||
These conventions should be followed throughout all Dapr documentation to ensure a consistent experience across all docs.
|
||||
|
||||
- **Casing** - Use upper case only at the start of a sentence or for proper nouns including names of technologies (Dapr, Redis, Kubernetes etc.).
|
||||
- **Headers and titles** - Headers and titles must be descriptive and clear. Use sentence casing, i.e. apply the above casing guidance to headers and titles as well.
|
||||
- **Use simple sentences** - Easy-to-read sentences mean the reader can quickly use the guidance you share.
|
||||
- **Avoid the first person** - Use 2nd person "you", "your" instead of "I", "we", "our".
|
||||
- **Assume a new developer audience** - Some obvious steps can seem hard. For example: "Now set an environment variable Dapr to a value X." It is better to give the reader the explicit command to do this, rather than having them figure it out.
|
||||
- **Use present tense** - Avoid sentences like "This command will install Redis", which implies the action is in the future. Instead use "This command installs Redis", which is in the present tense.
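Putting the explicit-command guidance into practice, a sketch of what to show a reader instead of a vague "set an environment variable" instruction (the variable is `DAPR_HTTP_PORT`, which Dapr-enabled apps read for the sidecar's HTTP port; the value here is illustrative):

```shell
# Give the reader the exact command, not a vague instruction.
# The port value 3500 is illustrative.
export DAPR_HTTP_PORT=3500
echo "Dapr HTTP port: $DAPR_HTTP_PORT"
```

Following this pattern, a new developer can copy and paste the command directly instead of working out the syntax themselves.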
|
||||
|
||||
## Contributing a new docs page
|
||||
- Make sure the documentation you are writing is in the correct place in the hierarchy.
|
||||
- Avoid creating new sections where possible; there is a good chance a proper place in the docs hierarchy already exists.
|
||||
- Make sure to include a complete [Hugo front-matter](#front-matter).
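A complete front matter block for a new page might look like the following sketch (the field values are illustrative; the fields mirror those used on existing docs pages):

```yaml
---
type: docs
title: "My new docs page"
linkTitle: "My new page"
weight: 1000
description: "A one-line description shown in navigation and search"
---
```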
|
||||
|
||||
### Contributing a new concept doc
|
||||
- Ensure the reader can understand why they should care about this feature. What problems does it help them solve?
|
||||
|
This shortcode will link to a specific page:
|
|||
```md
|
||||
{{</* ref "page.md" */>}}
|
||||
```
|
||||
> Note that all pages and folders need to have globally unique names in order for the ref shortcode to work properly. If there are duplicate names the build will break and an error will be thrown.
|
||||
|
||||
#### Referencing sections in other pages
|
||||
|
||||
To reference a specific section in another page, add `#section-short-name` to the end of your reference.
|
||||
|
||||
As a general rule, the section short name is the text of the section title, all lowercase, with spaces changed to "-". You can check the section short name by visiting the website page, clicking the link icon (🔗) next to the section, and seeing how the URL renders in the address bar. The content after the "#" is your section short name.
|
||||
|
||||
As an example, for this specific section the complete reference to the page and section would be:
|
||||
|
||||
```md
|
||||
{{</* ref "contributing-docs.md#referencing-sections-in-other-pages" */>}}
|
||||
```
|
||||
|
||||
## Shortcodes
|
||||
|
||||
The following are useful shortcodes for writing Dapr documentation:
|
||||
|
||||
### Images
|
||||
The markdown spec used by Docsy and Hugo does not give an option to resize images using markdown notation. Instead, raw HTML is used.
|
||||
|
||||
Begin by placing images under `/daprdocs/static/images` with the naming convention of `[page-name]-[image-name].[png|jpg|svg]`.
|
||||
|
||||
|
This HTML will display the `dapr-overview.png` image on the `overview.md` page:

```md
<img src="/images/overview-dapr-overview.png" width=1000 alt="Overview diagram of Dapr">
```
|
||||
|
||||
### Tabbed content
|
||||
Tabs are made possible through [Hugo shortcodes](https://gohugo.io/content-management/shortcodes/).
|
||||
|
||||
The overall format is:
|
||||
```
|
||||
|
brew install dapr/tap/dapr-cli
|
|||
|
||||
{{< /tabs >}}
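As a sketch, a complete tab group follows this pattern (the tab names and contents are illustrative, and the `codetab` inner shortcode is assumed to pair with `tabs` as in the example above):

```md
{{</* tabs Windows Linux */>}}

{{%/* codetab */%}}
Instructions for Windows go here.
{{%/* /codetab */%}}

{{%/* codetab */%}}
Instructions for Linux go here.
{{%/* /codetab */%}}

{{</* /tabs */>}}
```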
|
||||
|
||||
### Embedded code snippets
|
||||
|
||||
Use the `code-snippet` shortcode to reference code snippets from the `static/code` directory.
|
||||
|
||||
```
|
||||
{{</* code-snippet file="myfile.py" lang="python" */>}}
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
All Dapr sample code should be self-contained in separate files, not in markdown. Use the techniques described here to highlight the parts of the sample code users should focus on.
|
||||
{{% /alert %}}
|
||||
|
||||
Use the `lang` (default `txt`) parameter to configure the language used for syntax highlighting.
|
||||
|
||||
Use the `marker` parameter to limit the embedded snippet to a portion of the sample file. This is useful when you want to show just a portion of a larger file. The typical way to do this is to surround the interesting code with comments, and then pass the comment text into `marker`.
|
||||
|
||||
The shortcode below and code sample:
|
||||
|
||||
```
|
||||
{{</* code-snippet file="./contributing-1.py" lang="python" marker="#SAMPLE" */>}}
|
||||
```
|
||||
|
||||
```python
|
||||
import json
|
||||
import time
|
||||
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
#SAMPLE
|
||||
with DaprClient() as d:
|
||||
req_data = {
|
||||
'id': 1,
|
||||
'message': 'hello world'
|
||||
}
|
||||
|
||||
while True:
|
||||
# Create a typed message with content type and body
|
||||
resp = d.invoke_method(
|
||||
'invoke-receiver',
|
||||
'my-method',
|
||||
data=json.dumps(req_data),
|
||||
)
|
||||
|
||||
# Print the response
|
||||
print(resp.content_type, flush=True)
|
||||
print(resp.text(), flush=True)
|
||||
|
||||
time.sleep(2)
|
||||
#SAMPLE
|
||||
```
|
||||
|
||||
Will result in the following output:
|
||||
|
||||
{{< code-snippet file="contributing-1.py" lang="python" marker="#SAMPLE" >}}
|
||||
|
||||
Use the `replace-key-[token]` and `replace-value-[token]` parameters to replace a portion of the sample file in the embedded snippet. This is useful when you want to abbreviate a portion of the code sample. Multiple replacements are supported with multiple values of `token`.
|
||||
|
||||
The shortcode below and code sample:
|
||||
|
||||
```
|
||||
{{</* code-snippet file="./contributing-2.py" lang="python" replace-key-imports="#IMPORTS" replace-value-imports="# Import statements" */>}}
|
||||
```
|
||||
|
||||
```python
|
||||
#IMPORTS
|
||||
import json
|
||||
import time
|
||||
#IMPORTS
|
||||
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
with DaprClient() as d:
|
||||
req_data = {
|
||||
'id': 1,
|
||||
'message': 'hello world'
|
||||
}
|
||||
|
||||
while True:
|
||||
# Create a typed message with content type and body
|
||||
resp = d.invoke_method(
|
||||
'invoke-receiver',
|
||||
'my-method',
|
||||
data=json.dumps(req_data),
|
||||
)
|
||||
|
||||
# Print the response
|
||||
print(resp.content_type, flush=True)
|
||||
print(resp.text(), flush=True)
|
||||
|
||||
time.sleep(2)
|
||||
```
|
||||
|
||||
Will result in the following output:
|
||||
|
||||
{{< code-snippet file="./contributing-2.py" lang="python" replace-key-imports="#IMPORTS" replace-value-imports="# Import statements" >}}
|
||||
|
||||
### YouTube videos
|
||||
Hugo can automatically embed YouTube videos using a shortcode:
|
||||
For example, for the video with ID `dQw4w9WgXcQ`, the shortcode would be:

```
{{</* youtube dQw4w9WgXcQ */>}}
```
|
||||
|
||||
### Buttons
|
||||
|
||||
To create a button in a webpage, use the `button` shortcode.
|
||||
|
||||
#### Link to an external page
|
||||
|
||||
```
|
||||
{{</* button text="My Button" link="https://example.com" */>}}
|
||||
```
|
||||
|
||||
{{< button text="My Button" link="https://example.com" >}}
|
||||
|
||||
#### Link to another docs page
|
||||
|
||||
You can also reference docs pages in your button:
|
||||
```
|
||||
{{</* button text="My Button" page="contributing" */>}}
|
||||
```
|
||||
|
||||
{{< button text="My Button" page="contributing" >}}
|
||||
|
||||
#### Button colors
|
||||
|
||||
You can customize the colors using the Bootstrap colors:
|
||||
```
|
||||
{{</* button text="My Button" link="https://example.com" color="primary" */>}}
|
||||
{{</* button text="My Button" link="https://example.com" color="secondary" */>}}
|
||||
{{</* button text="My Button" link="https://example.com" color="success" */>}}
|
||||
{{</* button text="My Button" link="https://example.com" color="danger" */>}}
|
||||
{{</* button text="My Button" link="https://example.com" color="warning" */>}}
|
||||
{{</* button text="My Button" link="https://example.com" color="info" */>}}
|
||||
```
|
||||
|
||||
{{< button text="My Button" link="https://example.com" color="primary" >}}
|
||||
{{< button text="My Button" link="https://example.com" color="secondary" >}}
|
||||
{{< button text="My Button" link="https://example.com" color="success" >}}
|
||||
{{< button text="My Button" link="https://example.com" color="danger" >}}
|
||||
{{< button text="My Button" link="https://example.com" color="warning" >}}
|
||||
{{< button text="My Button" link="https://example.com" color="info" >}}
|
||||
|
||||
### References
|
||||
- [Docsy authoring guide](https://www.docsy.dev/docs/adding-content/)
|
||||
|
||||
## Translations
|
||||
|
||||
The Dapr docs support adding language translations using git submodules and Hugo's built-in language support.
|
||||
|
||||
You can find an example PR of adding Chinese language support in [PR 1286](https://github.com/dapr/docs/pull/1286).
|
||||
|
||||
Steps to add a language:
|
||||
- Open an issue in the Docs repo requesting to create a new language-specific docs repo
|
||||
- Once created, create a git submodule within the docs repo:
|
||||
```sh
|
||||
git submodule add <remote_url> translations/<language_code>
|
||||
```
|
||||
- Add a language entry within `daprdocs/config.toml`:
|
||||
```toml
|
||||
[languages.<language_code>]
|
||||
title = "Dapr Docs"
|
||||
weight = 3
|
||||
contentDir = "content/<language_code>"
|
||||
languageName = "<language_name>"
|
||||
```
|
||||
- Create a mount within `daprdocs/config.toml`:
|
||||
```toml
|
||||
[[module.mounts]]
|
||||
source = "../translations/docs-<language_code>/content/<language_code>"
|
||||
target = "content"
|
||||
lang = "<language_code>"
|
||||
```
|
||||
- Repeat the above step as necessary for all other translation directories
|
|
description: >
|
|||
General guidance for contributing to any of the Dapr project repositories
|
||||
---
|
||||
|
||||
Thank you for your interest in Dapr!
|
||||
This document provides the guidelines for how to contribute to the [Dapr project](https://github.com/dapr) through issues and pull requests. Contributions can also come in other ways, such as engaging with the community in community calls, commenting on issues or pull requests, and more.
|
||||
|
||||
See the [Dapr community repository](https://github.com/dapr/community) for more information on community engagement and community membership.
|
||||
|
||||
|
Before you submit an issue, make sure you've checked the following:
|
|||
- 👎 down-vote
|
||||
1. For bugs
|
||||
- Check it's not an environment issue. For example, if running on Kubernetes, make sure prerequisites are in place. (state stores, bindings, etc.)
|
||||
- You have as much data as possible. This usually comes in the form of logs and/or stacktrace. If running on Kubernetes or other environment, look at the logs of the Dapr services (runtime, operator, placement service). More details on how to get logs can be found [here]({{< ref "logs-troubleshooting.md" >}}).
|
||||
1. For proposals
|
||||
- Many changes to the Dapr runtime may require changes to the API. In that case, the best place to discuss the potential feature is the main [Dapr repo](https://github.com/dapr/dapr).
|
||||
- Other examples could include bindings, state stores or entirely new components.
|
||||
|
All contributions come through pull requests. To submit a proposed change, follow this workflow:
|
|||
|
||||
1. Make sure there's an issue (bug or proposal) raised, which sets the expectations for the contribution you are about to make.
|
||||
1. Fork the relevant repo and create a new branch
|
||||
- Some Dapr repos support [Codespaces]({{< ref codespaces.md >}}) to provide an instant environment for you to build and test your changes.
|
||||
1. Create your change
|
||||
- Code changes require tests
|
||||
1. Update relevant documentation for the change
|
||||
|
|
|
|
|||
---
|
||||
type: docs
|
||||
title: "Giving a presentation on Dapr"
|
||||
linkTitle: "Presentations"
|
||||
weight: 1500
|
||||
description: How to give a presentation on Dapr and examples
|
||||
---
|
||||
|
||||
We welcome community members giving presentations on Dapr and spreading the word about all the awesome Dapr features! We offer a template PowerPoint file to get started.
|
||||
|
||||
{{< button text="Download the Dapr Presentation Deck" link="/presentations/dapr-slidedeck.zip" >}}
|
||||
|
||||
## Giving a Dapr presentation
|
||||
|
||||
- Begin by downloading the [Dapr Presentation Deck](/presentations/dapr-slidedeck.zip). This contains slides and diagrams needed to give a Dapr presentation.
|
||||
- Next, review the docs to make sure you understand the [concepts]({{< ref concepts >}}).
|
||||
- Use the Dapr [quickstarts](https://github.com/dapr/quickstarts) repo and [samples](https://github.com/dapr/samples) repo to show demos of how to use Dapr.
|
||||
|
||||
## Previous Dapr presentations
|
||||
|
||||
| Presentation | Recording | Deck |
|
||||
|--------------|-----------|------|
|
||||
| Ignite 2019: Mark Russinovich Presents the Future of Cloud Native Applications | [Link](https://www.youtube.com/watch?v=LAUDVk8PaCY) | [Link](/presentations/2019IgniteCloudNativeApps.pdf)
|
||||
| Azure Community Live: Build microservice applications using DAPR with Mark Fussell | [Link](https://www.youtube.com/watch?v=CgqI7nen-Ng) | N/A
|
||||
| Ready 2020: Mark Russinovich Presents Cloud Native Applications | [Link](https://youtu.be/eJCu6a-x9uo?t=1614) | [Link](/presentations/2020ReadyCloudNativeApps.pdf)
|
||||
| Ignite 2021: Mark Russinovich Presents Dapr v1.0 Release | [Link](https://youtu.be/69PrhWQorEM?t=3789) | N/A
|
||||
|
||||
## Additional resources
|
||||
|
||||
There are other Dapr resources on the [community](https://github.com/dapr/community) repo.
|
|
|
|||
---
|
||||
type: docs
|
||||
title: "Dapr Roadmap"
|
||||
linkTitle: "Roadmap"
|
||||
description: "The Dapr Roadmap is a tool to help with visibility into investments across the Dapr project"
|
||||
weight: 1100
|
||||
no_list: true
|
||||
---
|
||||
|
||||
|
||||
Dapr encourages the community to help with prioritization. A GitHub project board is available to view and provide feedback on proposed issues and track them across development.
|
||||
|
||||
[<img src="/images/roadmap.png" alt="Screenshot of the Dapr Roadmap board" width=500 >](https://aka.ms/dapr/roadmap)
|
||||
|
||||
{{< button text="View the backlog" link="https://aka.ms/dapr/roadmap" color="primary" >}}
|
||||
<br />
|
||||
|
||||
Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see Dapr support. This will help the Dapr maintainers understand which features will provide the most value.
|
||||
|
||||
Contributions from the community are also welcome. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
The Dapr roadmap includes issues only from the v1.2 release and onwards. Issues closed and released prior to v1.2 are not included.
|
||||
{{% /alert %}}
|
||||
|
||||
## Stages
|
||||
|
||||
The Dapr Roadmap progresses through the following stages:
|
||||
|
||||
{{< cardpane >}}
|
||||
{{< card title="**[📄 Backlog](https://github.com/orgs/dapr/projects/52#column-14691591)**" >}}
|
||||
Issues (features) that need 👍 votes from the community to prioritize. Updated by Dapr maintainers.
|
||||
{{< /card >}}
|
||||
{{< card title="**[⏳ Planned (Committed)](https://github.com/orgs/dapr/projects/52#column-14561691)**" >}}
|
||||
Issues with a proposal and/or targeted release milestone. This is where design proposals are discussed and designed.
|
||||
{{< /card >}}
|
||||
{{< card title="**[👩💻 In Progress (Development)](https://github.com/orgs/dapr/projects/52#column-14561696)**" >}}
|
||||
Implementation specifics have been agreed upon and the feature is under active development.
|
||||
{{< /card >}}
|
||||
{{< /cardpane >}}
|
||||
{{< cardpane >}}
|
||||
{{< card title="**[☑ Done](https://github.com/orgs/dapr/projects/52#column-14561700)**" >}}
|
||||
The feature capability has been completed and is scheduled for an upcoming release.
|
||||
{{< /card >}}
|
||||
{{< card title="**[✅ Released](https://github.com/orgs/dapr/projects/52#column-14659973)**" >}}
|
||||
The feature is released and available for use.
|
||||
{{< /card >}}
|
||||
{{< /cardpane >}}
|
|
|
|||
---
|
||||
type: docs
|
||||
title: "How-to: Enable and use actor reentrancy in Dapr"
|
||||
linkTitle: "How-To: Actor reentrancy"
|
||||
weight: 30
|
||||
description: Learn more about actor reentrancy
|
||||
---
|
||||
|
||||
{{% alert title="Preview feature" color="warning" %}}
|
||||
Actor reentrancy is currently in [preview]({{< ref preview-features.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Actor reentrancy
|
||||
A core tenet of the virtual actor pattern is the single-threaded nature of actor execution. Before reentrancy, this caused the Dapr runtime to lock an actor on any given request. A second request could not start until the first had completed. This behavior means an actor cannot call itself, or have another actor call into it even if it is part of the same chain. Reentrancy solves this by allowing requests from the same chain or context to re-enter into an already locked actor. Examples of chains that reentrancy allows can be seen below:
|
||||
|
||||
```
|
||||
Actor A -> Actor A
|
||||
Actor A -> Actor B -> Actor A
|
||||
```
|
||||
|
||||
With reentrancy, there can be more complex actor calls without sacrificing the single-threaded behavior of virtual actors.
|
||||
|
||||
## Enabling actor reentrancy
|
||||
Actor reentrancy is currently in preview, so enabling it is a two-step process.
|
||||
|
||||
### Preview feature configuration
|
||||
Before using reentrancy, the feature must be enabled in Dapr. For more information on preview configurations, see [the full guide on opting into preview features in Dapr]({{< ref preview-features.md >}}). Below is an example of the configuration for actor reentrancy:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: reentrantconfig
|
||||
spec:
|
||||
features:
|
||||
- name: Actor.Reentrancy
|
||||
enabled: true
|
||||
```
|
||||
|
||||
### Actor runtime configuration
|
||||
Once actor reentrancy is enabled as an opt-in preview feature, the actor that will be reentrant must also provide the appropriate configuration to use reentrancy. This is done through the actor's endpoint for `GET /dapr/config`, similar to other actor configuration elements. Here is a snippet of an actor written in Go providing the configuration:
|
||||
|
||||
```go
|
||||
type daprConfig struct {
|
||||
Entities []string `json:"entities,omitempty"`
|
||||
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
|
||||
ActorScanInterval string `json:"actorScanInterval,omitempty"`
|
||||
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
|
||||
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
|
||||
Reentrancy config.ReentrancyConfig `json:"reentrancy,omitempty"`
|
||||
}
|
||||
|
||||
var daprConfigResponse = daprConfig{
|
||||
[]string{defaultActorType},
|
||||
actorIdleTimeout,
|
||||
actorScanInterval,
|
||||
drainOngoingCallTimeout,
|
||||
drainRebalancedActors,
|
||||
config.ReentrancyConfig{Enabled: true, MaxStackDepth: &maxStackDepth},
|
||||
}
|
||||
|
||||
func configHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
json.NewEncoder(w).Encode(daprConfigResponse)
|
||||
}
|
||||
```
|
||||
|
||||
### Handling reentrant requests
|
||||
The key to a reentrant request is the `Dapr-Reentrancy-Id` header. The value of this header is used to match requests to their call chain and allow them to bypass the actor's lock.
|
||||
|
||||
The header is generated by the Dapr runtime for any actor request that has a reentrant configuration specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below is a snippet of code from an actor handling this in Go:
|
||||
|
||||
```go
|
||||
func reentrantCallHandler(w http.ResponseWriter, r *http.Request) {
|
||||
/*
|
||||
* Omitted.
|
||||
*/
|
||||
|
||||
req, _ := http.NewRequest("PUT", url, bytes.NewReader(nextBody))
|
||||
|
||||
reentrancyID := r.Header.Get("Dapr-Reentrancy-Id")
|
||||
req.Header.Add("Dapr-Reentrancy-Id", reentrancyID)
|
||||
|
||||
client := http.Client{}
|
||||
resp, err := client.Do(req)
|
||||
|
||||
/*
|
||||
* Omitted.
|
||||
*/
|
||||
}
|
||||
```
|
||||
|
||||
Currently, no SDK supports actor reentrancy. In the future, the method for handling the reentrancy ID may differ based on the SDK that is being used.
|
|
|
|||
---
|
||||
type: docs
|
||||
title: "Introduction to actors"
|
||||
linkTitle: "Actors background"
|
||||
weight: 20
|
||||
description: Learn more about the actor pattern
|
||||
---
|
||||
|
||||
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes **actors** as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
|
||||
|
||||
While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying **runtime** manages how, when and where each actor runs, and also routes messages between actors.
|
||||
|
||||
A large number of actors can execute simultaneously, and actors execute independently from each other.
|
||||
|
||||
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/en-us/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
|
||||
|
||||
## Quick links
|
||||
|
||||
- [Dapr Actor Features]({{< ref actors-overview.md >}})
|
||||
- [Dapr Actor API Spec]({{< ref actors_api.md >}} )
|
||||
|
||||
### When to use actors
|
||||
|
||||
As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve.
|
||||
|
||||
The actor design pattern can be a good fit to a number of distributed systems problems and scenarios, but the first thing you should consider are the constraints of the pattern. Generally speaking, consider the actor pattern to model your problem or scenario if:
|
||||
|
||||
* Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
|
||||
* You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
|
||||
* Your actor instances won't block callers with unpredictable delays by issuing I/O operations.
|
||||
|
||||
## Actors in Dapr
|
||||
|
||||
Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.
|
||||
|
||||
<img src="/images/actor_background_game_example.png" width=400>
|
||||
|
||||
## Actor lifetime
|
||||
|
||||
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actors runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr Actors runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
|
||||
|
||||
Invocation of actor methods and reminders resets the idle time; for example, a reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive; if fired for an inactive actor, the actor is activated first. Actor timers do not reset the idle time, so timer firing will not keep the actor active. Timers only fire while the actor is active.
|
||||
|
||||
The idle timeout and scan interval the Dapr runtime uses to see if an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
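A sketch of such a configuration payload, as returned by the actor service (the field names follow the Dapr actor configuration; the actor type and durations are illustrative):

```json
{
  "entities": ["myactortype"],
  "actorIdleTimeout": "1h",
  "actorScanInterval": "30s",
  "drainOngoingCallTimeout": "30s",
  "drainRebalancedActors": true
}
```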
|
||||
|
||||
This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.
|
||||
|
||||
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. Using the actor ID again later causes a new actor object to be constructed. An actor's state outlives the object's lifetime, as state is stored in the state provider configured for the Dapr runtime.
|
||||
|
||||
## Distribution and failover
|
||||
|
||||
To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
|
||||
|
||||
Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
|
||||
|
||||
### Actor placement service
|
||||
The Dapr actor runtime manages the distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create, and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment, and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.
|
||||
|
||||
<img src="/images/actors_background_placement_service_registration.png" width=600>
|
||||
|
||||
When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.
|
||||
|
||||
<img src="/images/actors_background_id_hashing_calling.png" width=600>
|
||||
|
||||
This simplifies some choices, but also carries some considerations:
|
||||
|
||||
* By default, actors are randomly placed into pods resulting in uniform distribution.
|
||||
* Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.
|
||||
|
||||
Note: The Dapr actor Placement service is only used for actor placement and is therefore not needed if your services are not using Dapr actors. The Placement service can run in all hosting environments, for example self-hosted or Kubernetes.
|
||||
|
||||
## Actor communication
|
||||
|
||||
You can interact with Dapr to invoke the actor method by calling the HTTP/gRPC endpoint:
|
||||
|
||||
```bash
|
||||
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
|
||||
```
|
||||
|
||||
You can provide any data for the actor method in the request body, and the response body contains the data returned from the actor call.
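For example, a hypothetical invocation of a `GetData` method on actor ID `123` of type `myactortype` might look like the following (the actor type, ID, method name, and payload are illustrative, and the command assumes a Dapr sidecar running locally on port 3500):

```bash
curl -X POST http://localhost:3500/v1.0/actors/myactortype/123/method/GetData \
  -H "Content-Type: application/json" \
  -d '{ "key": "value" }'
```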
|
||||
|
||||
Refer to [Dapr Actor Features]({{< ref actors-overview.md >}}) for more details.
|
||||
|
||||
### Concurrency
|
||||
|
||||
The Dapr Actors runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
|
||||
|
||||
A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
|
||||
|
||||
Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.
|
||||
|
||||
<img src="/images/actors_background_communication.png" width=600>
|
||||
|
||||
|
||||
### Turn-based access
|
||||
|
||||
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr Actors runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
|
||||
|
||||
The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
|
||||
|
||||
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
|
||||
|
||||
<img src="/images/actors_background_concurrency.png" width=600>
|
||||
|
|
@ -1,137 +1,104 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Dapr actors overview"
|
||||
title: "Actors overview"
|
||||
linkTitle: "Overview"
|
||||
weight: 10
|
||||
description: Overview of Dapr support for actors
|
||||
description: Overview of the actors building block
|
||||
aliases:
|
||||
- "/developing-applications/building-blocks/actors/actors-background"
|
||||
---
|
||||
|
||||
The Dapr actors runtime provides support for [virtual actors]({{< ref actors-background.md >}}) through the following capabilities:
|
||||
## Introduction
|
||||
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes actors as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
|
||||
|
||||
## Actor method invocation
|
||||
While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying runtime manages how, when and where each actor runs, and also routes messages between actors.
|
||||
|
||||
You can invoke actor methods by calling the Dapr HTTP/gRPC endpoints.
|
||||
A large number of actors can execute simultaneously, and actors execute independently from each other.
|
||||
|
||||
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/en-us/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
|
||||
|
||||
### When to use actors
|
||||
|
||||
As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve.
|
||||
|
||||
The actor design pattern can be a good fit for a number of distributed systems problems and scenarios, but the first thing you should consider is the constraints of the pattern. Generally speaking, consider the actor pattern to model your problem or scenario if:
|
||||
|
||||
* Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
|
||||
* You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
|
||||
* Your actor instances won't block callers with unpredictable delays by issuing I/O operations.
|
||||
|
||||
## Actors in Dapr
|
||||
|
||||
Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.
|
||||
|
||||
<img src="/images/actor_background_game_example.png" width=400>
|
||||
|
||||
## Actor lifetime
|
||||
|
||||
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actors runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr Actors runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
|
||||
|
||||
Invoking actor methods and reminders resets the idle time; for example, a reminder firing keeps the actor active. Actor reminders fire whether an actor is active or inactive; if a reminder fires for an inactive actor, it activates the actor first. Actor timers do not reset the idle time, so timer firing does not keep the actor active. Timers only fire while the actor is active.
|
||||
|
||||
The idle timeout and scan interval the Dapr runtime uses to see if an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
|
||||
|
||||
This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.
|
||||
|
||||
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again causes a new actor object to be constructed. An actor's state outlives the object's lifetime because the state is stored in the state provider configured for the Dapr runtime.
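The activation and idle-collection behavior described above can be sketched as follows. This is an illustrative model only; the class name and bookkeeping are assumptions, and state persistence is omitted:

```python
import time

class VirtualActorHost:
    """Illustrative sketch of virtual actor activation and idle collection.

    Not the actual Dapr runtime; persisted state (which outlives the
    in-memory object) is omitted for brevity.
    """

    def __init__(self, idle_timeout=3600.0):
        self.idle_timeout = idle_timeout
        self.active = {}  # actor_id -> (actor_object, last_used_timestamp)

    def invoke(self, actor_id, method_name):
        # Activate on first use; every invocation resets the idle clock.
        obj, _ = self.active.get(actor_id, ({"id": actor_id}, None))
        self.active[actor_id] = (obj, time.monotonic())
        return f"{actor_id}.{method_name} handled"

    def scan(self, now=None):
        # Periodic scan: deactivate actors idle for longer than the timeout.
        now = time.monotonic() if now is None else now
        for actor_id, (_, last_used) in list(self.active.items()):
            if now - last_used > self.idle_timeout:
                del self.active[actor_id]
```

A later `invoke` with the same actor ID would simply re-activate the actor, matching the virtual lifetime model.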
|
||||
|
||||
## Distribution and failover
|
||||
|
||||
To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
|
||||
|
||||
Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
|
||||
|
||||
### Actor placement service
|
||||
The Dapr actor runtime manages the distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create, and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment, and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.
|
||||
|
||||
<img src="/images/actors_background_placement_service_registration.png" width=600>
|
||||
|
||||
When a client calls an actor with a particular ID (for example, actor ID 123), the Dapr instance for the client hashes the actor type and ID, and uses that information to call the corresponding Dapr instance that can serve the requests for that particular actor ID. As a result, the same partition (or service instance) is always called for any given actor ID. This is shown in the diagram below.
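The stable mapping from actor type and ID to a partition can be illustrated with a simple hash. This is a naive modulo sketch for intuition only; the real `Placement` service maintains and distributes partition tables rather than computing placement this way:

```python
import hashlib

def place(actor_type, actor_id, num_partitions):
    # Hashing the actor type and ID deterministically selects a partition,
    # so the same actor ID always lands on the same service instance.
    key = f"{actor_type}||{actor_id}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions
```

Every caller that computes `place("Calculator", "123", n)` reaches the same partition, which is what makes single-instance actor semantics possible.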
|
||||
|
||||
<img src="/images/actors_background_id_hashing_calling.png" width=600>
|
||||
|
||||
This simplifies some choices, but also carries some considerations:
|
||||
|
||||
* By default, actors are randomly placed into pods resulting in uniform distribution.
|
||||
* Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.
|
||||
|
||||
Note: The Dapr actor Placement service is only used for actor placement and therefore is not needed if your services are not using Dapr actors. The Placement service can run in all [hosting environments]({{< ref hosting >}}), including self-hosted and Kubernetes.
|
||||
|
||||
## Actor communication
|
||||
|
||||
You can invoke actor methods by calling the Dapr HTTP/gRPC endpoints.
|
||||
|
||||
```bash
|
||||
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
|
||||
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
|
||||
```
|
||||
|
||||
You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor method call.
|
||||
You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor method call.
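As a sketch, these endpoints can be called from Python with the standard library. The helper names are hypothetical, and a Dapr sidecar must be running on the default HTTP port 3500 for the actual call to succeed:

```python
import json
import urllib.request

DAPR_PORT = 3500  # default Dapr sidecar HTTP port; adjust if yours differs

def actor_method_url(actor_type, actor_id, method):
    # Mirrors the /v1.0/actors/<actorType>/<actorId>/method/<method> endpoint.
    return (f"http://localhost:{DAPR_PORT}/v1.0/actors/"
            f"{actor_type}/{actor_id}/method/{method}")

def invoke_actor_method(actor_type, actor_id, method, data=None):
    # Sends optional JSON data in the request body; the response body is
    # the data returned by the actor method call.
    body = None if data is None else json.dumps(data).encode()
    req = urllib.request.Request(
        actor_method_url(actor_type, actor_id, method),
        data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```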
|
||||
|
||||
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
|
||||
Refer to [Dapr Actor Features]({{< ref howto-actors.md >}}) for more details.
|
||||
|
||||
## Actor state management
|
||||
### Concurrency
|
||||
|
||||
Actors can save state reliably using the state management capability.
|
||||
The Dapr Actors runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
|
||||
|
||||
You can interact with Dapr through HTTP/gRPC endpoints for state management.
|
||||
A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
|
||||
|
||||
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface:
|
||||
Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out actor calls and throws an exception to the caller to interrupt possible deadlock situations.
|
||||
|
||||
- Redis
|
||||
- MongoDB
|
||||
- PostgreSQL
|
||||
- SQL Server
|
||||
- Azure CosmosDB
|
||||
<img src="/images/actors_background_communication.png" width=600>
|
||||
|
||||
## Actor timers and reminders
|
||||
#### Reentrancy
|
||||
As an enhancement to the base actors in Dapr, reentrancy can now be enabled as a preview feature. To learn more about it, see [actor reentrancy]({{<ref actor-reentrancy.md>}}).
|
||||
|
||||
Actors can schedule periodic work on themselves by registering either timers or reminders.
|
||||
### Turn-based access
|
||||
|
||||
### Actor timers
|
||||
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr Actors runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
|
||||
|
||||
You can register a callback on an actor to be executed based on a timer.
|
||||
The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
|
||||
|
||||
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
|
||||
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
|
||||
|
||||
The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.
|
||||
<img src="/images/actors_background_concurrency.png" width=600>
|
||||
|
||||
The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
|
||||
|
||||
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
|
||||
|
||||
You can create a timer for an actor by making an HTTP/gRPC request to Dapr.
|
||||
|
||||
```http
|
||||
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
|
||||
```
|
||||
|
||||
The timer `dueTime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.
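The duration strings used in these request bodies (such as `0h0m9s0ms`) can be produced from a `timedelta` with a small helper. This is an illustrative sketch, not part of any Dapr SDK:

```python
from datetime import timedelta

def dapr_duration(td):
    # Formats a timedelta into the h/m/s/ms duration string used by the
    # actor timer and reminder request bodies, e.g. 9 seconds -> "0h0m9s0ms".
    total_ms = int(td.total_seconds() * 1000)
    hours, rem = divmod(total_ms, 3600 * 1000)
    minutes, rem = divmod(rem, 60 * 1000)
    seconds, ms = divmod(rem, 1000)
    return f"{hours}h{minutes}m{seconds}s{ms}ms"
```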
|
||||
|
||||
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m9s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a timer with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m0s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
You can remove the actor timer by calling
|
||||
|
||||
```http
|
||||
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
|
||||
```
|
||||
|
||||
Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
|
||||
|
||||
### Actor reminders
|
||||
|
||||
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.
|
||||
|
||||
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr.
|
||||
|
||||
```http
|
||||
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
The reminder `dueTime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.
|
||||
|
||||
The following request body configures a reminder with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m9s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m0s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a reminder with a `dueTime` of 15 seconds and an empty `period`. This means it will first fire after 15 seconds, then never fire again.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m15s0ms",
|
||||
"period":""
|
||||
}
|
||||
```
|
||||
|
||||
#### Retrieve actor reminder
|
||||
|
||||
You can retrieve the actor reminder by calling
|
||||
|
||||
```http
|
||||
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
#### Remove the actor reminder
|
||||
|
||||
You can remove the actor reminder by calling
|
||||
|
||||
```http
|
||||
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.
|
||||
|
|
|
@ -0,0 +1,205 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-to: Use virtual actors in Dapr"
|
||||
linkTitle: "How-To: Virtual actors"
|
||||
weight: 20
|
||||
description: Learn more about the actor pattern
|
||||
---
|
||||
|
||||
The Dapr actors runtime provides support for [virtual actors]({{< ref actors-overview.md >}}) through the following capabilities:
|
||||
|
||||
## Actor method invocation
|
||||
|
||||
You can invoke actor methods by calling the Dapr HTTP/gRPC endpoints.
|
||||
|
||||
```http
|
||||
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
|
||||
```
|
||||
|
||||
You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor method call.
|
||||
|
||||
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
|
||||
|
||||
## Actor state management
|
||||
|
||||
Actors can save state reliably using the state management capability.
|
||||
You can interact with Dapr through HTTP/gRPC endpoints for state management.
|
||||
|
||||
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}). Only a single state store component can be used as the state store for all actors.
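As a sketch, a multi-item transaction payload for the actors state endpoint (`POST /v1.0/actors/<actorType>/<actorId>/state`, see the actors API spec) can be built as follows; the helper name is hypothetical:

```python
import json

def state_transaction_body(upserts, deletes=()):
    # Builds the list of upsert/delete operations that the actors state
    # endpoint applies as a single transaction against the state store.
    ops = [{"operation": "upsert", "request": {"key": k, "value": v}}
           for k, v in upserts.items()]
    ops += [{"operation": "delete", "request": {"key": k}} for k in deletes]
    return json.dumps(ops)
```

Because the store is transactional, either all of these operations are applied or none are.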
|
||||
|
||||
## Actor timers and reminders
|
||||
|
||||
Actors can schedule periodic work on themselves by registering either timers or reminders.
|
||||
|
||||
### Actor timers
|
||||
|
||||
You can register a callback on an actor to be executed based on a timer.
|
||||
|
||||
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
|
||||
|
||||
The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.
|
||||
|
||||
The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
|
||||
|
||||
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
|
||||
|
||||
You can create a timer for an actor by making an HTTP/gRPC request to Dapr.
|
||||
|
||||
```http
|
||||
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
|
||||
```
|
||||
|
||||
The timer `dueTime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.
|
||||
|
||||
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m9s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a timer with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m0s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
You can remove the actor timer by calling
|
||||
|
||||
```http
|
||||
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
|
||||
```
|
||||
|
||||
Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
|
||||
|
||||
### Actor reminders
|
||||
|
||||
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.
|
||||
|
||||
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr.
|
||||
|
||||
```http
|
||||
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
The reminder `dueTime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.
|
||||
|
||||
The following request body configures a reminder with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m9s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m0s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a reminder with a `dueTime` of 15 seconds and an empty `period`. This means it will first fire after 15 seconds, then never fire again.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m15s0ms",
|
||||
"period":""
|
||||
}
|
||||
```
|
||||
|
||||
#### Retrieve actor reminder
|
||||
|
||||
You can retrieve the actor reminder by calling
|
||||
|
||||
```http
|
||||
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
#### Remove the actor reminder
|
||||
|
||||
You can remove the actor reminder by calling
|
||||
|
||||
```http
|
||||
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.
|
||||
|
||||
## Actor runtime configuration
|
||||
|
||||
You can configure the Dapr actor runtime to modify the default runtime behavior.
|
||||
|
||||
### Configuration parameters
|
||||
- `actorIdleTimeout` - The timeout before deactivating an idle actor. Checks for timeouts occur every `actorScanInterval`. **Default: 60 minutes**
|
||||
- `actorScanInterval` - The duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than `actorIdleTimeout` will be deactivated. **Default: 30 seconds**
|
||||
- `drainOngoingCallTimeout` - The timeout to wait for the currently active actor method to finish when draining rebalanced actors. If there is no current actor method call, this is ignored. **Default: 60 seconds**
|
||||
- `drainRebalancedActors` - If true, Dapr will wait for `drainOngoingCallTimeout` duration to allow a current actor call to complete before trying to deactivate an actor. **Default: true**
|
||||
- `reentrancy` (ActorReentrancyConfig) - Configure the reentrancy behavior for an actor. If not provided, reentrancy is disabled. **Default: disabled**
|
||||
|
||||
{{< tabs Java Dotnet Python >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```java
|
||||
// import io.dapr.actors.runtime.ActorRuntime;
|
||||
// import java.time.Duration;
|
||||
|
||||
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
|
||||
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
|
||||
ActorRuntime.getInstance().getConfig().setDrainOngoingCallTimeout(Duration.ofSeconds(60));
|
||||
ActorRuntime.getInstance().getConfig().setDrainBalancedActors(true);
|
||||
ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(false, null);
|
||||
```
|
||||
|
||||
See [this example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
// In Startup.cs
|
||||
public void ConfigureServices(IServiceCollection services)
|
||||
{
|
||||
// Register actor runtime with DI
|
||||
services.AddActors(options =>
|
||||
{
|
||||
// Register actor types and configure actor settings
|
||||
options.Actors.RegisterActor<MyActor>();
|
||||
|
||||
// Configure default settings
|
||||
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
|
||||
options.ActorScanInterval = TimeSpan.FromSeconds(30);
|
||||
options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(60);
|
||||
options.DrainRebalancedActors = true;
|
||||
// reentrancy not implemented in the .NET SDK at this time
|
||||
});
|
||||
|
||||
// Register additional services for use with actors
|
||||
services.AddSingleton<BankService>();
|
||||
}
|
||||
```
|
||||
See the .NET SDK [documentation](https://github.com/dapr/dotnet-sdk/blob/master/daprdocs/content/en/dotnet-sdk-docs/dotnet-actors/dotnet-actors-usage.md#registering-actors).
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
from datetime import timedelta
|
||||
from dapr.actor.runtime.config import ActorRuntimeConfig, ActorReentrancyConfig
from dapr.actor.runtime.runtime import ActorRuntime
|
||||
|
||||
ActorRuntime.set_actor_config(
|
||||
ActorRuntimeConfig(
|
||||
actor_idle_timeout=timedelta(hours=1),
|
||||
actor_scan_interval=timedelta(seconds=30),
|
||||
drain_ongoing_call_timeout=timedelta(minutes=1),
|
||||
drain_rebalanced_actors=True,
|
||||
reentrancy=ActorReentrancyConfig(enabled=False)
|
||||
)
|
||||
)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
Refer to the documentation and examples of the [Dapr SDKs]({{< ref "developing-applications/sdks/#sdk-languages" >}}) for more details.
|
|
@ -3,5 +3,5 @@ type: docs
|
|||
title: "Bindings"
|
||||
linkTitle: "Bindings"
|
||||
weight: 40
|
||||
description: Trigger code from and interface with a large array of external resources
|
||||
description: Interface with or be triggered from external systems
|
||||
---
|
||||
|
|
|
@ -3,7 +3,7 @@ type: docs
|
|||
title: "Bindings overview"
|
||||
linkTitle: "Overview"
|
||||
weight: 100
|
||||
description: Overview of the Dapr bindings building block
|
||||
description: Overview of the bindings building block
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
@ -37,19 +37,18 @@ Read the [Create an event-driven app using input bindings]({{< ref howto-trigger
|
|||
|
||||
## Output bindings
|
||||
|
||||
Output bindings allow users to invoke external resources.
|
||||
An optional payload and metadata can be sent with the invocation request.
|
||||
Output bindings allow you to invoke external resources. An optional payload and metadata can be sent with the invocation request.
|
||||
|
||||
In order to invoke an output binding:
|
||||
|
||||
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
|
||||
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload
|
||||
|
||||
Read the [Send events to external systems using output bindings]({{< ref howto-bindings.md >}}) page to get started with output bindings.
|
||||
|
||||
|
||||
|
||||
## Related Topics
|
||||
- [Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
|
||||
- [Invoke different resources using output bindings]({{< ref howto-bindings.md >}})
|
||||
Read the [Use output bindings to interface with external resources]({{< ref howto-bindings.md >}}) page to get started with output bindings.
|
||||
|
||||
## Next Steps
|
||||
* Follow these guides on:
|
||||
* [How-To: Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
|
||||
* [How-To: Use output bindings to interface with external resources]({{< ref howto-bindings.md >}})
|
||||
* Try out the [bindings quickstart](https://github.com/dapr/quickstarts/tree/master/bindings/README.md) which shows how to bind to a Kafka queue
|
||||
* Read the [bindings API specification]({{< ref bindings_api.md >}})
|
||||
|
|
|
@ -1,28 +1,35 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Use bindings to interface with external resources"
|
||||
linkTitle: "How-To: Bindings"
|
||||
description: "Invoke external systems with Dapr output bindings"
|
||||
title: "How-To: Use output bindings to interface with external resources"
|
||||
linkTitle: "How-To: Output bindings"
|
||||
description: "Invoke external systems with output bindings"
|
||||
weight: 300
|
||||
---
|
||||
|
||||
Using bindings, it is possible to invoke external resources without tying in to special SDK or libraries.
|
||||
Output bindings enable you to invoke external resources without taking dependencies on special SDKs or libraries.
|
||||
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
|
||||
|
||||
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
|
||||
## 1. Create a binding
|
||||
|
||||
An output binding represents a resource that Dapr will use invoke and send messages to.
|
||||
An output binding represents a resource that Dapr uses to invoke and send messages to.
|
||||
|
||||
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref bindings >}}).
|
||||
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref setup-bindings >}}).
|
||||
|
||||
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
|
||||
Create a new binding component with the name of `myevent`.
|
||||
|
||||
Inside the `metadata` section, configure Kafka related properties such as the topic to publish the message to and the broker.
|
||||
|
||||
{{< tabs "Self-Hosted (CLI)" Kubernetes >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Create the following YAML file, named `binding.yaml`, and save this to a `components` sub-folder in your application directory.
|
||||
(Use the `--components-path` flag with `dapr run` to point to your custom components directory.)
|
||||
|
||||
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
|
@ -39,29 +46,51 @@ spec:
|
|||
value: topic1
|
||||
```
|
||||
|
||||
Here, create a new binding component with the name of `myevent`.
|
||||
{{% /codetab %}}
|
||||
|
||||
Inside the `metadata` section, configure Kafka related properties such as the topic to publish the message to and the broker.
|
||||
{{% codetab %}}
|
||||
|
||||
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired binding component]({{< ref setup-bindings >}}) in the yaml below (in this case kafka), save as `binding.yaml`, and run `kubectl apply -f binding.yaml`.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: myevent
|
||||
namespace: default
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
- name: brokers
|
||||
value: localhost:9092
|
||||
- name: publishTopic
|
||||
value: topic1
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## 2. Send an event
|
||||
|
||||
All that's left now is to invoke the bindings endpoint on a running Dapr instance.
|
||||
All that's left now is to invoke the output bindings endpoint on a running Dapr instance.
|
||||
|
||||
You can do so using HTTP:
|
||||
|
||||
```bash
|
||||
curl -X POST -H http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
|
||||
curl -X POST -H 'Content-Type: application/json' http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
|
||||
```
|
||||
|
||||
As seen above, you invoked the `/bindings` endpoint with the name of the binding to invoke, in this case `myevent`.
|
||||
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
|
||||
|
||||
You'll also notice that there's an `operation` field that tells the binding what you need it to do.
|
||||
You can check [here]({{< ref bindings >}}) which operations are supported for every output binding.
|
||||
|
||||
You can check [here]({{< ref supported-bindings >}}) which operations are supported for every output binding.
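The HTTP call above can also be sketched in application code. This is a minimal sketch (not an official SDK helper; the sidecar address on port 3500 and the binding name `myevent` are assumptions) of building the request:

```python
import json

# Hypothetical helper: build the URL and body for invoking an output
# binding through the Dapr HTTP API (sidecar assumed on localhost:3500).
def build_binding_request(name, data, operation="create"):
    url = f"http://localhost:3500/v1.0/bindings/{name}"
    # The payload goes in the mandatory "data" field; "operation" tells
    # the binding what to do with it.
    body = json.dumps({"data": data, "operation": operation})
    return url, body

url, body = build_binding_request("myevent", {"message": "Hi!"})
print(url)  # http://localhost:3500/v1.0/bindings/myevent
```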
|
||||
|
||||
## References
|
||||
|
||||
- [Binding API]({{< ref bindings_api.md >}})
|
||||
- [Binding components]({{< ref bindings >}})
|
||||
- [Binding detailed specifications]({{< ref supported-bindings >}})
|
||||
- [Binding detailed specifications]({{< ref supported-bindings >}})
|
||||
|
|
|
@ -97,6 +97,7 @@ Event delivery guarantees are controlled by the binding implementation. Dependin
|
|||
|
||||
## References
|
||||
|
||||
* Binding [API](https://github.com/dapr/docs/blob/master/reference/api/bindings_api.md)
|
||||
* Binding [Components](https://github.com/dapr/docs/tree/master/concepts/bindings)
|
||||
* Binding [Detailed specifications](https://github.com/dapr/docs/tree/master/reference/specs/bindings)
|
||||
* [Bindings building block]({{< ref bindings >}})
|
||||
* [Bindings API]({{< ref bindings_api.md >}})
|
||||
* [Components concept]({{< ref components-concept.md >}})
|
||||
* [Supported bindings]({{< ref supported-bindings >}})
|
||||
|
|
|
@ -6,4 +6,4 @@ weight: 60
|
|||
description: See and measure the message calls across components and networked services
|
||||
---
|
||||
|
||||
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).
|
||||
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability-concept >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).
|
||||
|
|
|
@ -1,42 +0,0 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Metrics"
|
||||
linkTitle: "Metrics"
|
||||
weight: 4000
|
||||
description: "Observing Dapr metrics"
|
||||
---
|
||||
|
||||
Dapr exposes a [Prometheus](https://prometheus.io/) metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving and to set up alerts for specific conditions.
|
||||
|
||||
## Configuration
|
||||
|
||||
The metrics endpoint is enabled by default. You can disable it by passing the command line argument `--enable-metrics=false` to Dapr system processes.
|
||||
|
||||
The default metrics port is `9090`. This can be overridden by passing the command line argument `--metrics-port` to Daprd.
|
||||
|
||||
To disable metrics in the Dapr sidecar, you can also use the `metric` spec configuration and set `enabled: false` in the Dapr runtime.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: tracing
|
||||
namespace: default
|
||||
spec:
|
||||
tracing:
|
||||
samplingRate: "1"
|
||||
metric:
|
||||
enabled: false
|
||||
```
|
||||
|
||||
## Metrics
|
||||
|
||||
Each Dapr system process emits Go runtime/process metrics by default and has its own metrics:
|
||||
|
||||
- [Dapr metric list](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md)
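As a rough illustration of what a scrape of the metrics endpoint returns, here is a sketch that parses a few lines of the Prometheus text exposition format (the sample metric names and values below are illustrative, not actual Dapr output):

```python
# Minimal parser for the Prometheus text format exposed on the metrics
# endpoint (default port 9090); sample lines are illustrative only.
sample = """\
# HELP dapr_runtime_component_loaded Number of loaded components
# TYPE dapr_runtime_component_loaded gauge
dapr_runtime_component_loaded 3
process_cpu_seconds_total 12.5
"""

metrics = {}
for line in sample.splitlines():
    if line.startswith("#") or not line.strip():
        continue  # skip HELP/TYPE comments and blank lines
    name, value = line.rsplit(" ", 1)
    metrics[name] = float(value)

print(metrics)
```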
|
||||
|
||||
## References
|
||||
|
||||
* [Howto: Run Prometheus locally]({{< ref prometheus.md >}})
|
||||
* [Howto: Set up Prometheus and Grafana for metrics]({{< ref grafana.md >}})
|
||||
* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr]({{< ref azure-monitor.md >}})
|
|
@ -9,9 +9,9 @@ description: Dapr sidecar health checks.
|
|||
Dapr provides a way to determine its health using an HTTP `/healthz` endpoint.
|
||||
With this endpoint, the Dapr process, or sidecar, can be probed for its health to determine its readiness and liveness. See the [health API]({{< ref health_api.md >}}).
|
||||
|
||||
The Dapr `/healthz` endpoint can be used by health probes from the application hosting platform. This topic describes how Dapr integrates with probes from different hosting platforms.
|
||||
The Dapr `/healthz` endpoint can be used by health probes from the application hosting platform. This topic describes how Dapr integrates with probes from different hosting platforms.
|
||||
|
||||
As a user, when deploying Dapr to a hosting platform (for example Kubernetes), the Dapr health endpoint is automatically configured for you. There is nothing you need to configure.
|
||||
As a user, when deploying Dapr to a hosting platform (for example Kubernetes), the Dapr health endpoint is automatically configured for you. There is nothing you need to configure.
|
||||
|
||||
Note: Dapr actors also have a health API endpoint, which Dapr uses to probe the application and verify that the actor application is healthy and running. See the [actor health API]({{< ref "actors_api.md#health-check" >}}).
|
||||
|
||||
|
@ -24,7 +24,7 @@ For example, liveness probes could catch a deadlock, where an application is run
|
|||
|
||||
The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this readiness signal is to control which Pods are used as backends for Kubernetes services. When a pod is not ready, it is removed from Kubernetes service load balancers.
|
||||
|
||||
When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr healthz endpoint. This is done by the `Sidecar Injector` system service. The integration with the kubelet is shown in the diagram below.
|
||||
When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr healthz endpoint. This is done by the `Sidecar Injector` system service. The integration with the kubelet is shown in the diagram below.
|
||||
|
||||
<img src="/images/security-mTLS-dapr-system-services.png" width=600>
|
||||
|
||||
|
|
|
@ -6,7 +6,7 @@ weight: 1000
|
|||
description: "Use Dapr tracing to get visibility for distributed application"
|
||||
---
|
||||
|
||||
Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces and metrics collection. OpenTelemetry supports various backends including [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), [SignalFX](https://www.signalfx.com/), [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io) and others.
|
||||
Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due to the ubiquity of the Zipkin protocol, many backends are supported out of the box, for example [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io), [New Relic](https://newrelic.com) and others. Combined with the OpenTelemetry Collector, Dapr can export traces to many other backends including but not limited to [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
|
||||
|
||||
<img src="/images/tracing.png" width=600>
|
||||
|
||||
|
@ -14,10 +14,10 @@ Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces
|
|||
|
||||
Dapr adds a HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts all Dapr and application traffic and automatically injects correlation IDs to trace distributed transactions. This design has several benefits:
|
||||
|
||||
* No need for code instrumentation. All traffic is automatically traced (with configurable tracing levels).
|
||||
* No need for code instrumentation. All traffic is automatically traced with configurable tracing levels.
|
||||
* Consistent tracing behavior across microservices. Tracing is configured and managed on Dapr sidecar so that it remains consistent across services made by different teams and potentially written in different programming languages.
|
||||
* Configurable and extensible. By leveraging OpenTelemetry, Dapr tracing can be configured to work with popular tracing backends, including custom backends a customer may have.
|
||||
* OpenTelemetry exporters are defined as first-class Dapr components. You can define and enable multiple exporters at the same time.
|
||||
* Configurable and extensible. By leveraging the Zipkin API and the OpenTelemetry Collector, Dapr tracing can be configured to work with popular tracing backends, including custom backends a customer may have.
|
||||
* You can define and enable multiple exporters at the same time.
|
||||
|
||||
## W3C Correlation ID
|
||||
|
||||
|
@ -27,9 +27,9 @@ Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W
|
|||
|
||||
## Configuration
|
||||
|
||||
Dapr uses [probabilistic sampling](https://opencensus.io/tracing/sampling/probabilistic/) as defined by OpenCensus. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The deafault sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
|
||||
Dapr uses probabilistic sampling. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
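Probabilistic sampling can be pictured as a weighted coin flip per span. This is a simplified model of the behavior described above, not Dapr's actual implementation:

```python
import random

# Each span is kept with probability equal to the sampling rate; at the
# default rate of 0.0001, roughly 1 in 10,000 spans is sampled.
def should_sample(sampling_rate, rng=random.random):
    return rng() < sampling_rate

random.seed(0)  # fixed seed so the count below is reproducible
kept = sum(should_sample(0.0001) for _ in range(1_000_000))
print(kept)  # on the order of 100 spans out of a million
```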
|
||||
|
||||
To change the default tracing behavior, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled):
|
||||
To change the default tracing behavior, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled), and sends traces using the Zipkin protocol to the Zipkin server at `http://zipkin.default.svc.cluster.local`:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -40,30 +40,14 @@ metadata:
|
|||
spec:
|
||||
tracing:
|
||||
samplingRate: "1"
|
||||
zipkin:
|
||||
endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
|
||||
```
|
||||
|
||||
Similarly, changing `samplingRate` to 0 will disable tracing altogether.
|
||||
Note: Changing `samplingRate` to 0 disables tracing altogether.
|
||||
|
||||
See the [References](#references) section for more details on how to configure tracing on local environment and Kubernetes environment.
|
||||
|
||||
Dapr supports pluggable exporters, defined by configuration files (in self hosted mode) or a Kubernetes custom resource object (in Kubernetes mode). For example, the following manifest defines a Zipkin exporter:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: zipkin
|
||||
namespace: default
|
||||
spec:
|
||||
type: exporters.zipkin
|
||||
version: v1
|
||||
metadata:
|
||||
- name: enabled
|
||||
value: "true"
|
||||
- name: exporterAddress
|
||||
value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})
|
|
@ -10,7 +10,7 @@ type: docs
|
|||
# How to use trace context
|
||||
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C distributed tracing]({{< ref w3c-tracing >}}) article to understand whether you need to propagate or create a trace context.
|
||||
|
||||
To view traces, read the [how to diagnose with tracing]({{< ref tracing.md >}}) article.
|
||||
To view traces, read the [how to diagnose with tracing]({{< ref tracing-overview.md >}}) article.
|
||||
|
||||
## How to retrieve trace context from a response
|
||||
`Note: There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use http/gRPC clients to propagate and retrieve trace headers through http headers and gRPC metadata.`
|
||||
|
@ -98,7 +98,7 @@ f.SpanContextToRequest(traceContext, req)
|
|||
traceContext := span.SpanContext()
|
||||
traceContextBinary := propagation.Binary(traceContext)
|
||||
```
|
||||
|
||||
|
||||
You can then pass the trace context through [gRPC metadata](https://google.golang.org/grpc/metadata) through `grpc-trace-bin` header.
|
||||
|
||||
```go
|
||||
|
|
|
@ -72,7 +72,7 @@ In these scenarios Dapr does some of the work for you and you need to either cre
|
|||
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context]({{< ref w3c-tracing >}}) article.
|
||||
|
||||
2. You have chosen to generate your own trace context headers.
|
||||
This is much more unusual. There may be occassions where you specifically chose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done :
|
||||
This is much more unusual. There may be occasions where you specifically chose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done :
|
||||
|
||||
1. You can use the industry standard OpenCensus/OpenTelemetry SDKs to generate trace headers and pass these trace headers to a Dapr enabled service. This is the preferred recommendation.
|
||||
|
||||
|
|
|
@ -12,9 +12,19 @@ Pub/Sub is a common pattern in a distributed system with many services that want
|
|||
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.
|
||||
|
||||
Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
|
||||
Dapr provides different implementation of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.
|
||||
Dapr provides components for pub/sub, that enable operators to use their preferred infrastructure, for example Redis Streams, Kafka, etc.
|
||||
|
||||
## Content Types
|
||||
|
||||
When publishing a message, it's important to specify the content type of the data being sent.
|
||||
Unless specified, Dapr will assume `text/plain`. When using Dapr's HTTP API, the content type can be set in a `Content-Type` header.
|
||||
gRPC clients and SDKs have a dedicated content type parameter.
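As a sketch of how that works over HTTP (not an official SDK call; the sidecar port and the `text/plain` default are assumptions based on the text above), the `Content-Type` header carries the content type of the published data:

```python
import json

# Hypothetical helper: prepare a publish request for Dapr's HTTP API.
# The Content-Type header becomes the cloud event's datacontenttype
# attribute, defaulting to text/plain when not specified.
def build_publish_request(pubsub, topic, data, content_type=None):
    url = f"http://localhost:3500/v1.0/publish/{pubsub}/{topic}"
    headers = {"Content-Type": content_type or "text/plain"}
    body = data if isinstance(data, str) else json.dumps(data)
    return url, headers, body

url, headers, body = build_publish_request(
    "pubsub", "deathStarStatus", {"status": "completed"},
    content_type="application/json")
print(headers["Content-Type"])  # application/json
```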
|
||||
|
||||
## Step 1: Setup the Pub/Sub component
|
||||
The following example creates applications to publish and subscribe to a topic called `deathStarStatus`.
|
||||
|
||||
<img src="/images/pubsub-publish-subscribe-example.png" width=1000>
|
||||
<br></br>
|
||||
|
||||
The first step is to set up the Pub/Sub component:
|
||||
|
||||
|
@ -68,8 +78,14 @@ spec:
|
|||
## Step 2: Subscribe to topics
|
||||
|
||||
Dapr allows two methods by which you can subscribe to topics:
|
||||
- **Declaratively**, where subscriptions are are defined in an external file.
|
||||
- **Programatically**, where subscriptions are defined in user code
|
||||
|
||||
- **Declaratively**, where subscriptions are defined in an external file.
|
||||
- **Programmatically**, where subscriptions are defined in user code.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Both declarative and programmatic approaches support the same features. The declarative approach removes the Dapr dependency from your code and allows, for example, existing applications to subscribe to topics, without having to change code. The programmatic approach implements the subscription in your code.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
### Declarative subscriptions
|
||||
|
||||
|
@ -97,9 +113,9 @@ Set the component with:
|
|||
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
|
||||
|
||||
{{% codetab %}}
|
||||
Place the CRD in your `./components` directory. When Dapr starts up, it will load subscriptions along with components.
|
||||
Place the CRD in your `./components` directory. When Dapr starts up, it loads subscriptions along with components.
|
||||
|
||||
*Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows.*
|
||||
Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows.
|
||||
|
||||
You can also override the default directory by pointing the Dapr CLI to a components path:
|
||||
|
||||
|
@ -123,7 +139,7 @@ kubectl apply -f subscription.yaml
|
|||
|
||||
#### Example
|
||||
|
||||
{{< tabs Python Node>}}
|
||||
{{< tabs Python Node PHP>}}
|
||||
|
||||
{{% codetab %}}
|
||||
Create a file named `app1.py` and paste in the following:
|
||||
|
@ -140,11 +156,11 @@ CORS(app)
|
|||
@app.route('/dsstatus', methods=['POST'])
|
||||
def ds_subscriber():
|
||||
print(request.json, flush=True)
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
|
||||
app.run()
|
||||
```
|
||||
After creating `app1.py` ensute flask and flask_cors are installed:
|
||||
After creating `app1.py` ensure flask and flask_cors are installed:
|
||||
|
||||
```bash
|
||||
pip install flask
|
||||
|
@ -183,19 +199,50 @@ dapr --app-id app2 --app-port 3000 run node app2.js
|
|||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Create a file named `app1.php` and paste in the following:
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->post('/dsstatus', function(
|
||||
#[\Dapr\Attributes\FromBody]
|
||||
\Dapr\PubSub\CloudEvent $cloudEvent,
|
||||
\Psr\Log\LoggerInterface $logger
|
||||
) {
|
||||
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
|
||||
return ['status' => 'SUCCESS'];
|
||||
}
|
||||
);
|
||||
$app->start();
|
||||
```
|
||||
|
||||
After creating `app1.php`, and with the [SDK installed](https://docs.dapr.io/developing-applications/sdks/php/),
|
||||
go ahead and start the app:
|
||||
|
||||
```bash
|
||||
dapr --app-id app1 --app-port 3000 run -- php -S 0.0.0.0:3000 app1.php
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
### Programmatic subscriptions
|
||||
### Programmatic subscriptions
|
||||
|
||||
To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
|
||||
The Dapr instance will call into your app at startup and expect a JSON response for the topic subscriptions with:
|
||||
- `pubsubname`: Which pub/sub component Dapr should use
|
||||
- `topic`: Which topic to subscribe to
|
||||
- `route`: Which endpoint for Dapr to call on when a message comes to that topic
|
||||
The Dapr instance calls into your app at startup and expects a JSON response for the topic subscriptions with:
|
||||
- `pubsubname`: Which pub/sub component Dapr should use.
|
||||
- `topic`: Which topic to subscribe to.
|
||||
- `route`: Which endpoint for Dapr to call on when a message comes to that topic.
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Python Node>}}
|
||||
{{< tabs Python Node PHP>}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
|
@ -218,10 +265,10 @@ def subscribe():
|
|||
@app.route('/dsstatus', methods=['POST'])
|
||||
def ds_subscriber():
|
||||
print(request.json, flush=True)
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
app.run()
|
||||
```
|
||||
After creating `app1.py` ensute flask and flask_cors are installed:
|
||||
After creating `app1.py` ensure flask and flask_cors are installed:
|
||||
|
||||
```bash
|
||||
pip install flask
|
||||
|
@ -249,7 +296,7 @@ app.get('/dapr/subscribe', (req, res) => {
|
|||
{
|
||||
pubsubname: "pubsub",
|
||||
topic: "deathStarStatus",
|
||||
route: "dsstatus"
|
||||
route: "dsstatus"
|
||||
}
|
||||
]);
|
||||
})
|
||||
|
@ -268,27 +315,63 @@ dapr --app-id app2 --app-port 3000 run node app2.js
|
|||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Update `app1.php` with the following:
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
|
||||
new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', route: '/dsstatus'),
|
||||
]]));
|
||||
$app->post('/dsstatus', function(
|
||||
#[\Dapr\Attributes\FromBody]
|
||||
\Dapr\PubSub\CloudEvent $cloudEvent,
|
||||
\Psr\Log\LoggerInterface $logger
|
||||
) {
|
||||
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
|
||||
return ['status' => 'SUCCESS'];
|
||||
}
|
||||
);
|
||||
$app->start();
|
||||
```
|
||||
|
||||
Run this app with:
|
||||
|
||||
```bash
|
||||
dapr --app-id app1 --app-port 3000 run -- php -S 0.0.0.0:3000 app1.php
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
The `/dsstatus` endpoint matches the `route` defined in the subscriptions, and this is where Dapr sends all topic messages.
|
||||
|
||||
## Step 3: Publish a topic
|
||||
|
||||
To publish a message to a topic, invoke the following endpoint on a Dapr instance:
|
||||
To publish a message to a topic, you need to run an instance of a Dapr sidecar that uses the pubsub Redis component. You can use the default Redis component installed into your local environment.
|
||||
|
||||
Start an instance of Dapr with an app-id called `testpubsub`:
|
||||
|
||||
```bash
|
||||
dapr run --app-id testpubsub --dapr-http-port 3500
|
||||
```
|
||||
{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Then publish a message to the `deathStarStatus` topic:
|
||||
|
||||
```bash
|
||||
dapr publish --pubsub pubsub --topic deathStarStatus --data '{"status": "completed"}'
|
||||
dapr publish --publish-app-id testpubsub --pubsub pubsub --topic deathStarStatus --data '{"status": "completed"}'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Begin by ensuring a Dapr sidecar is running:
|
||||
```bash
|
||||
dapr --app-id myapp --port 3500 run
|
||||
```
|
||||
Then publish a message to the `deathStarStatus` topic:
|
||||
```bash
|
||||
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/json" -d '{"status": "completed"}'
|
||||
|
@ -296,10 +379,6 @@ curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Conte
|
|||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Begin by ensuring a Dapr sidecar is running:
|
||||
```bash
|
||||
dapr --app-id myapp --port 3500 run
|
||||
```
|
||||
Then publish a message to the `deathStarStatus` topic:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status": "completed"}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
|
||||
|
@ -308,7 +387,7 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status":
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope.
|
||||
Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
|
||||
|
||||
## Step 4: ACK-ing a message
|
||||
|
||||
|
@ -323,7 +402,7 @@ In order to tell Dapr that a message was processed successfully, return a `200 O
|
|||
@app.route('/dsstatus', methods=['POST'])
|
||||
def ds_subscriber():
|
||||
print(request.json, flush=True)
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
@ -337,7 +416,77 @@ app.post('/dsstatus', (req, res) => {
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
## (Optional) Step 5: Publishing a topic with code
|
||||
|
||||
{{< tabs Node PHP>}}
|
||||
|
||||
{{% codetab %}}
|
||||
If you prefer to publish to a topic using code, here is an example.
|
||||
|
||||
```javascript
|
||||
const express = require('express');
|
||||
const path = require('path');
|
||||
const request = require('request');
|
||||
const bodyParser = require('body-parser');
|
||||
|
||||
const app = express();
|
||||
app.use(bodyParser.json());
|
||||
|
||||
const daprPort = process.env.DAPR_HTTP_PORT || 3500;
|
||||
const daprUrl = `http://localhost:${daprPort}/v1.0`;
|
||||
const port = 8080;
|
||||
const pubsubName = 'pubsub';
|
||||
|
||||
app.post('/publish', (req, res) => {
|
||||
console.log("Publishing: ", req.body);
|
||||
const publishUrl = `${daprUrl}/publish/${pubsubName}/deathStarStatus`;
|
||||
request( { uri: publishUrl, method: 'POST', json: req.body } );
|
||||
res.sendStatus(200);
|
||||
});
|
||||
|
||||
app.listen(process.env.PORT || port, () => console.log(`Listening on port ${port}!`));
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
If you prefer to publish to a topic using code, here is an example.
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(\DI\FactoryInterface $factory, \Psr\Log\LoggerInterface $logger) {
|
||||
$publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
|
||||
$publisher->topic('deathStarStatus')->publish('operational');
|
||||
$logger->alert('published!');
|
||||
});
|
||||
```
|
||||
|
||||
You can save this to `app2.php` and while `app1` is running in another terminal, execute:
|
||||
|
||||
```bash
|
||||
dapr --app-id app2 run -- php app2.php
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Sending a custom CloudEvent
|
||||
|
||||
Dapr automatically takes the data sent on the publish request and wraps it in a CloudEvent 1.0 envelope.
|
||||
If you want to use your own custom CloudEvent, make sure to specify the content type as `application/cloudevents+json`.
|
||||
|
||||
Read about content types [here](#content-types), and about the [Cloud Events message format]({{< ref "pubsub-overview.md#cloud-events-message-format" >}}).
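A minimal sketch of such a custom envelope (the event `type` and `source` here are hypothetical; only the CloudEvents 1.0 required attributes plus `datacontenttype` are shown):

```python
import json
import uuid

# A custom CloudEvents 1.0 envelope; publish it with the content type
# application/cloudevents+json so Dapr does not wrap it again.
event = {
    "specversion": "1.0",
    "type": "com.example.status",  # hypothetical event type
    "source": "app1",
    "id": str(uuid.uuid4()),
    "datacontenttype": "application/json",
    "data": {"status": "completed"},
}
body = json.dumps(event)
print(event["specversion"])  # 1.0
```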
|
||||
|
||||
## Next steps
|
||||
- [Scope access to your pub/sub topics]({{< ref pubsub-scopes.md >}})
|
||||
- [Pub/Sub quickstart](https://github.com/dapr/quickstarts/tree/master/pub-sub)
|
||||
- [Pub/sub components]({{< ref setup-pubsub >}})
|
||||
|
||||
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
|
||||
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
|
||||
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
|
||||
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
|
||||
- List of [pub/sub components]({{< ref setup-pubsub >}})
|
||||
- Read the [API reference]({{< ref pubsub_api.md >}})
|
||||
|
|
|
@ -0,0 +1,92 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Message Time-to-Live (TTL)"
|
||||
linkTitle: "Message TTL"
|
||||
weight: 6000
|
||||
description: "Use time-to-live in Pub/Sub messages."
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Dapr enables per-message time-to-live (TTL). This means that applications can set time-to-live per message, and subscribers do not receive those messages after expiration.
|
||||
|
||||
All Dapr [pub/sub components]({{< ref supported-pubsub >}}) are compatible with message TTL, as Dapr handles the TTL logic within the runtime. Simply set the `ttlInSeconds` metadata when publishing a message.
|
||||
|
||||
In some components, such as Kafka, time-to-live can be configured in the topic via `retention.ms` as per [documentation](https://kafka.apache.org/documentation/#topicconfigs_retention.ms). With message TTL in Dapr, applications using Kafka can now set time-to-live per message in addition to per topic.
|
||||
|
||||
## Native message TTL support
|
||||
|
||||
When message time-to-live has native support in the pub/sub component, Dapr simply forwards the time-to-live configuration without adding any extra logic, keeping predictable behavior. This is helpful when the expired messages are handled differently by the component. For example, with Azure Service Bus, expired messages are stored in the dead letter queue rather than simply deleted.
|
||||
|
||||
### Supported components
|
||||
|
||||
#### Azure Service Bus
|
||||
|
||||
Azure Service Bus supports [entity level time-to-live](https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-expiration). This means that messages have a default time-to-live but can also be set with a shorter timespan at publishing time. Dapr propagates the time-to-live metadata for the message and lets Azure Service Bus handle the expiration directly.
|
||||
|
||||
## Non-Dapr subscribers

If messages are consumed by subscribers not using Dapr, expired messages are not automatically dropped, as expiration is handled by the Dapr runtime when a Dapr sidecar receives a message. However, subscribers can programmatically drop expired messages by adding logic to handle the `expiration` attribute in the cloud event, which follows the [RFC3339](https://tools.ietf.org/html/rfc3339) format.

When non-Dapr subscribers use components such as Azure Service Bus, which natively handle message TTL, they do not receive expired messages. Here, no extra logic is needed.
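As an illustration, a non-Dapr subscriber could drop expired messages with a check like the following (a minimal sketch; the shape of the incoming CloudEvent dictionary and the sample timestamps are assumptions):

```python
from datetime import datetime, timezone

def is_expired(cloud_event: dict) -> bool:
    """Return True if the event's 'expiration' attribute lies in the past."""
    expiration = cloud_event.get("expiration")
    if expiration is None:
        return False  # no TTL was set on this message
    # Dapr writes the expiration timestamp in RFC3339 format, e.g. "2021-06-02T10:00:00Z"
    expires_at = datetime.fromisoformat(expiration.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) >= expires_at

# A message that expired in the past should be dropped by the subscriber
event = {"id": "1", "data": {"order-number": "345"}, "expiration": "2000-01-01T00:00:00Z"}
print(is_expired(event))  # True
```

The subscriber simply skips processing when `is_expired` returns `True`.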
## Example

Message TTL can be set in the metadata as part of the publishing request:

{{< tabs curl "Python SDK" "PHP SDK">}}

{{% codetab %}}
```bash
curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.ttlInSeconds=120 -H "Content-Type: application/json" -d '{"order-number": "345"}'
```
{{% /codetab %}}
{{% codetab %}}
```python
import json

from dapr.clients import DaprClient

with DaprClient() as d:
    req_data = {
        'order-number': '345'
    }
    # Publish the message with a 120 second time-to-live
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic='TOPIC_A',
        data=json.dumps(req_data),
        metadata=(
            ('ttlInSeconds', '120'),
        )
    )
    # Print the request
    print(req_data, flush=True)
```
{{% /codetab %}}
{{% codetab %}}

```php
<?php

require_once __DIR__.'/vendor/autoload.php';

$app = \Dapr\App::create();
$app->run(function(\DI\FactoryInterface $factory) {
    $publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
    $publisher->topic('TOPIC_A')->publish('data', ['ttlInSeconds' => '120']);
});
```

{{% /codetab %}}

{{< /tabs >}}

See [this guide]({{< ref pubsub_api.md >}}) for a reference on the pub/sub API.
## Related links

- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})
@ -3,48 +3,52 @@ type: docs
title: "Publish and subscribe overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of the Pub/Sub building block"
---
## Introduction

The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows microservices to communicate with each other using messages. The **producer or publisher** sends messages to a **topic** without knowledge of what application will receive them. This involves writing them to an input channel. Similarly, a **consumer or subscriber** subscribes to the topic and receives its messages without any knowledge of what service produced these messages. This involves receiving messages from an output channel. An intermediary message broker is responsible for copying each message from an input channel to an output channel for all subscribers interested in that message. This pattern is especially useful when you need to decouple microservices from one another.

The publish/subscribe API in Dapr provides an at-least-once guarantee and integrates with various message brokers and queuing systems. The specific implementation used by your service is pluggable and configured as a Dapr pub/sub component at runtime. This approach removes the dependency from your service and, as a result, makes your service more portable and flexible to changes.

The complete list of Dapr pub/sub components is [here]({{< ref supported-pubsub >}}).
<img src="/images/pubsub-overview-pattern.png" width=1000>

<br></br>

The Dapr pub/sub building block provides a platform-agnostic API to send and receive messages. Your services publish messages to a named topic and also subscribe to a topic to consume the messages.

The service makes a network call to a Dapr pub/sub building block, exposed as a sidecar. This building block then makes calls into a Dapr pub/sub component that encapsulates a specific message broker product. To receive messages, Dapr subscribes to the Dapr pub/sub component on behalf of your service and delivers the messages to an endpoint when they arrive.

The diagram below shows an example of a "shipping" service and an "email" service that have both subscribed to topics that are published by the "cart" service. Each service loads pub/sub component configuration files that point to the same pub/sub message bus component, for example Redis Streams, NATS Streaming, Azure Service Bus, or GCP Pub/Sub.

<img src="/images/pubsub-overview-components.png" width=1000>
<br></br>

The diagram below shows the same services, this time with the Dapr publish API sending to an "order" topic, and the order endpoints on the subscribing services to which Dapr delivers these topic messages.

<img src="/images/pubsub-overview-publish-API.png" width=1000>
<br></br>
## Features
The pub/sub building block provides several features to your application.

### Cloud Events message format

To enable message routing and to provide additional context with each message, Dapr uses the [CloudEvents 1.0 specification](https://github.com/cloudevents/spec/tree/v1.0) as its message format. Any message sent by an application to a topic using Dapr is automatically "wrapped" in a CloudEvents envelope, using the `Content-Type` header value for the `datacontenttype` attribute.

Dapr implements the following Cloud Events fields:

* `id`
* `source`
* `specversion`
* `type`
* `datacontenttype` (Optional)

The following example shows an XML content in CloudEvent v1.0 serialized as JSON:
```json
{
    "specversion" : "1.0",
@ -58,11 +62,59 @@ The following example shows an XML content in CloudEvent v1.0 serialized as JSON
}
```

### Message subscription

Dapr applications can subscribe to published topics. Dapr allows two methods by which your applications can subscribe to topics:

- **Declarative**, where a subscription is defined in an external file.
- **Programmatic**, where a subscription is defined in the user code.

Both approaches support the same features. The declarative approach removes the Dapr dependency from your code and allows existing applications to subscribe to topics without having to change code. The programmatic approach implements the subscription in your code.

For more information read [How-To: Publish a message and subscribe to a topic]({{< ref howto-publish-subscribe >}}).
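As an illustration, a declarative subscription is defined in a YAML file that Dapr loads from the components directory (a minimal sketch; the topic `orders`, route `/orders`, and app ID `orderprocessing` are hypothetical):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  pubsubname: pubsub
  topic: orders
  route: /orders
scopes:
- orderprocessing
```

Dapr then delivers messages published to the `orders` topic to the `/orders` endpoint of the scoped application, without any subscription code in the app itself.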
### Message delivery

In principle, Dapr considers a message successfully delivered when the subscriber responds with a non-error response after processing it. For more granular control, Dapr's publish/subscribe API also provides explicit statuses, defined in the response payload, which the subscriber can use to indicate specific handling instructions to Dapr (e.g. `RETRY` or `DROP`). For more information on message routing read the [Dapr publish/subscribe API documentation]({{< ref "pubsub_api.md#provide-routes-for-dapr-to-deliver-topic-events" >}}).
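For example, a subscriber endpoint could map its processing outcome to these response statuses (a minimal sketch in plain Python; `handle_topic_event` and `process` are hypothetical names, while `SUCCESS`, `RETRY`, and `DROP` are the statuses defined by the pub/sub API):

```python
def handle_topic_event(event: dict, process) -> dict:
    """Map the outcome of processing a topic event to a Dapr pub/sub status."""
    try:
        process(event.get("data"))
        return {"status": "SUCCESS"}  # processed; no redelivery needed
    except ValueError:
        return {"status": "DROP"}     # malformed message; warn and drop it
    except Exception:
        return {"status": "RETRY"}    # transient failure; ask Dapr to redeliver

def process(data):
    if not data:
        raise ValueError("empty payload")

print(handle_topic_event({"data": {"order-number": "345"}}, process))  # {'status': 'SUCCESS'}
```

The dictionary returned here would be serialized as the JSON body of the HTTP response to the Dapr sidecar.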
### At-least-once guarantee

Dapr guarantees "at-least-once" semantics for message delivery: when an application publishes a message to a topic using the publish/subscribe API, Dapr ensures that the message is delivered at least once to every subscriber.

### Consumer groups and competing consumers pattern

The burden of dealing with concepts like consumer groups and multiple application instances using a single consumer group is all handled automatically by Dapr. When multiple instances of the same application (running the same app-ID) subscribe to a topic, Dapr delivers each message to *only one instance of **that** application*. This is commonly known as the competing consumers pattern and is illustrated in the diagram below.

<img src="/images/pubsub-overview-pattern-competing-consumers.png" width=1000>
<br></br>

Similarly, if two different applications (different app-IDs) subscribe to the same topic, Dapr delivers each message to *only one instance of **each** application*.
### Topic scoping

By default, all topics backing the Dapr pub/sub component (e.g. Kafka, Redis Streams, RabbitMQ) are available to every application configured with that component. To limit which applications can publish or subscribe to topics, Dapr provides topic scoping. This enables you to say which topics an application is allowed to publish to and which topics it is allowed to subscribe to. For more information read [publish/subscribe topic scoping]({{< ref pubsub-scopes.md >}}).
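For instance, scopes are declared as metadata on the pub/sub component itself (a minimal sketch using a Redis component; the app IDs `app1` and `app2` and the topic names are hypothetical):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
  # app1 may only publish to topic1; app2 may only subscribe to topic1
  - name: publishingScopes
    value: "app1=topic1"
  - name: subscriptionScopes
    value: "app2=topic1"
```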
### Message Time-to-Live (TTL)

Dapr can set a timeout on a per-message basis, meaning that if the message is not read from the pub/sub component in time, then the message is discarded. This prevents a build-up of messages that are never read. A message that has been in the queue for longer than the configured TTL is said to be dead. For more information read [publish/subscribe message time-to-live]({{< ref pubsub-message-ttl.md >}}).

> Note: Message TTL can also be set for a given queue at the time of component creation. Look at the specific characteristics of the component that you are using.
### Communication with applications not using Dapr and CloudEvents

For scenarios where one application uses Dapr but another doesn't, CloudEvent wrapping can be disabled for a publisher or subscriber. This allows partial adoption of Dapr pub/sub in applications that cannot adopt Dapr all at once. For more information read [how to use pub/sub without CloudEvents]({{< ref pubsub-raw.md >}}).

### Publish/Subscribe API

The publish/subscribe API is located in the [API reference]({{< ref pubsub_api.md >}}).

## Next steps

* Follow these guides on:
    * [How-To: Publish a message and subscribe to a topic]({{< ref howto-publish-subscribe.md >}})
    * [How-To: Configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
* Try out the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
* Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
* Learn about [message time-to-live (TTL)]({{< ref pubsub-message-ttl.md >}})
* Learn about [pub/sub without CloudEvents]({{< ref pubsub-raw.md >}})
* List of [pub/sub components]({{< ref supported-pubsub.md >}})
* Read the [pub/sub API reference]({{< ref pubsub_api.md >}})
@ -0,0 +1,160 @@
---
type: docs
title: "Pub/Sub without CloudEvents"
linkTitle: "Pub/Sub without CloudEvents"
weight: 7000
description: "Use Pub/Sub without CloudEvents."
---
## Introduction

Dapr uses CloudEvents to provide additional context to the event payload, enabling features like:

* Tracing
* Deduplication by message Id
* Content-type for proper deserialization of the event's data

For more information about CloudEvents, read the [CloudEvents specification](https://github.com/cloudevents/spec).

When adding Dapr to your application, some services may still need to communicate via raw pub/sub messages not encapsulated in CloudEvents. This may be for compatibility reasons, or because some apps are not using Dapr. Dapr enables apps to publish and subscribe to raw events that are not wrapped in a CloudEvent.

{{% alert title="Warning" color="warning" %}}
Not using CloudEvents disables support for tracing, event deduplication per message Id, content-type metadata, and any other features built using the CloudEvent schema.
{{% /alert %}}
## Publishing raw messages

Dapr apps are able to publish raw events to pub/sub topics without CloudEvent encapsulation, for compatibility with non-Dapr apps.

<img src="/images/pubsub_publish_raw.png" alt="Diagram showing how to publish with Dapr when subscriber does not use Dapr or CloudEvent" width=1000>

To disable CloudEvent wrapping, set the `rawPayload` metadata to `true` as part of the publishing request. This allows subscribers to receive these messages without having to parse the CloudEvent schema.

{{< tabs curl "Python SDK" "PHP SDK">}}
{{% codetab %}}
```bash
curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.rawPayload=true -H "Content-Type: application/json" -d '{"order-number": "345"}'
```
{{% /codetab %}}
{{% codetab %}}
```python
import json

from dapr.clients import DaprClient

with DaprClient() as d:
    req_data = {
        'order-number': '345'
    }
    # Publish the message as a raw payload, without CloudEvent wrapping
    resp = d.publish_event(
        pubsub_name='pubsub',
        topic='TOPIC_A',
        data=json.dumps(req_data),
        metadata=(
            ('rawPayload', 'true'),
        )
    )
    # Print the request
    print(req_data, flush=True)
```
{{% /codetab %}}
{{% codetab %}}

```php
<?php

require_once __DIR__.'/vendor/autoload.php';

$app = \Dapr\App::create();
$app->run(function(\DI\FactoryInterface $factory) {
    $publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
    $publisher->topic('TOPIC_A')->publish('data', ['rawPayload' => 'true']);
});
```

{{% /codetab %}}

{{< /tabs >}}
## Subscribing to raw messages

Dapr apps are also able to subscribe to raw events coming from existing pub/sub topics that do not use CloudEvent encapsulation.

<img src="/images/pubsub_subscribe_raw.png" alt="Diagram showing how to subscribe with Dapr when publisher does not use Dapr or CloudEvent" width=1000>

### Programmatically subscribe to raw events

When subscribing programmatically, add the additional metadata entry for `rawPayload` so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.

{{< tabs "Python" "PHP SDK" >}}

{{% codetab %}}
```python
import flask
from flask import request, jsonify
from flask_cors import CORS
import json

app = flask.Flask(__name__)
CORS(app)

@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'deathStarStatus',
                      'route': 'dsstatus',
                      'metadata': {
                          'rawPayload': 'true',
                      }}]
    return jsonify(subscriptions)

@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
    print(request.json, flush=True)
    return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}

app.run()
```

{{% /codetab %}}
{{% codetab %}}

```php
<?php

require_once __DIR__.'/vendor/autoload.php';

$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
    new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', route: '/dsstatus', metadata: [ 'rawPayload' => 'true'] ),
]]));

$app->post('/dsstatus', function(
    #[\Dapr\Attributes\FromBody]
    \Dapr\PubSub\CloudEvent $cloudEvent,
    \Psr\Log\LoggerInterface $logger
) {
    $logger->alert('Received event: {event}', ['event' => $cloudEvent]);
    return ['status' => 'SUCCESS'];
});

$app->start();
```
{{% /codetab %}}

{{< /tabs >}}
## Declaratively subscribe to raw events

Subscription Custom Resource Definitions (CRDs) do not currently contain metadata attributes ([issue #3225](https://github.com/dapr/dapr/issues/3225)). At this time subscribing to raw events can only be done through programmatic subscriptions.

## Related links

- Learn more about [how to publish and subscribe]({{< ref howto-publish-subscribe.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})
@ -3,7 +3,7 @@ type: docs
title: "Scope Pub/Sub topic access"
linkTitle: "Scope topic access"
weight: 5000
description: "Use scopes to limit Pub/Sub topics to specific applications"
---

## Introduction
@ -32,9 +32,9 @@ To use this topic scoping three metadata properties can be set for a pub/sub com
- `spec.metadata.allowedTopics`
  - A comma-separated list of allowed topics for all applications.
  - If `allowedTopics` is not set (default behavior), all topics are valid. `subscriptionScopes` and `publishingScopes` still take effect if present.
  - `publishingScopes` or `subscriptionScopes` can be used in conjunction with `allowedTopics` to add granular limitations.

These metadata properties can be used for all pub/sub components. The following examples use Redis as the pub/sub component.

## Example 1: Scope topic access
@ -158,4 +158,11 @@ The table below shows which application is allowed to subscribe to the topics:
## Demo

<iframe width="560" height="315" src="https://www.youtube.com/embed/7VdWBBGcbHQ?start=513" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Related links

- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})
@ -1,7 +1,7 @@
---
type: docs
title: "Secrets management"
linkTitle: "Secrets management"
weight: 70
description: Securely access secrets from your application
---
@ -6,42 +6,66 @@ weight: 2000
description: "Use the secret store building block to securely retrieve a secret"
---

This article provides guidance on using Dapr's secrets API in your code to leverage the [secrets store building block]({{<ref secrets-overview>}}). The secrets API allows you to easily retrieve secrets in your application code from a configured secret store.

## Set up a secret store

Before retrieving secrets in your application's code, you must have a secret store component configured. For the purposes of this guide, as an example you will configure a local secret store which uses a local JSON file to store secrets.

>Note: The component used in this example is not secured and is not recommended for production deployments. You can find other alternatives [here]({{<ref supported-secret-stores >}}).

Create a file named `mysecrets.json` with the following contents:

```json
{
   "my-secret" : "I'm Batman"
}
```

Create a directory for your components file named `components` and inside it create a file named `localSecretStore.yaml` with the following contents:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-secrets-store
  namespace: default
spec:
  type: secretstores.local.file
  version: v1
  metadata:
  - name: secretsFile
    value: <PATH TO SECRETS FILE>/mysecrets.json
  - name: nestedSeparator
    value: ":"
```

Make sure to replace `<PATH TO SECRETS FILE>` with the path to the JSON file you just created.
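The `nestedSeparator` setting controls how nested JSON keys are flattened into a single secret name. The following sketch illustrates the behavior (not the Dapr implementation; the `connectionStrings` secret is hypothetical):

```python
def flatten(secrets: dict, separator: str = ":", prefix: str = "") -> dict:
    """Flatten nested JSON keys into single secret names joined by the separator."""
    flat = {}
    for key, value in secrets.items():
        name = f"{prefix}{separator}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, separator, name))
        else:
            flat[name] = value
    return flat

secrets = {"connectionStrings": {"sql": "your sql connection string"}}
# The nested secret is then requested as "connectionStrings:sql" via the secrets API
print(flatten(secrets))  # {'connectionStrings:sql': 'your sql connection string'}
```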
To configure a different kind of secret store, see the guidance on [how to configure a secret store]({{<ref setup-secret-store>}}) and review [supported secret stores]({{<ref supported-secret-stores >}}) to see the specific details required for different secret store solutions.

## Get a secret

Now run the Dapr sidecar (with no application):

```bash
dapr run --app-id my-app --dapr-http-port 3500 --components-path ./components
```

And now you can get the secret by calling the Dapr sidecar using the secrets API:

```bash
curl http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret
```

For a full API reference, go [here]({{< ref secrets_api.md >}}).

## Calling the secrets API from your code

Once you have a secret store set up, you can call Dapr to get the secrets from your application code. Here are a few examples in different programming languages:

{{< tabs "Go" "Javascript" "Python" "Rust" "C#" "PHP" >}}

{{% codetab %}}
```Go
import (
	"fmt"
@ -49,11 +73,11 @@ import (
)

func main() {
	url := "http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret"

	res, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
@ -61,13 +85,16 @@ func main() {
	fmt.Println(string(body))
}
```

{{% /codetab %}}

{{% codetab %}}

```javascript
require('isomorphic-fetch');
const secretsUrl = `http://localhost:3500/v1.0/secrets`;

fetch(`${secretsUrl}/my-secrets-store/my-secret`)
  .then((response) => {
    if (!response.ok) {
      throw "Could not get secret";
@ -78,16 +105,21 @@ fetch(`${secretsUrl}/kubernetes/my-secret`)
  });
```

{{% /codetab %}}

{{% codetab %}}

```python
import requests as req

resp = req.get("http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret")
print(resp.text)
```

{{% /codetab %}}

{{% codetab %}}
```rust
#![deny(warnings)]
@ -95,7 +127,7 @@ use std::{thread};

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let res = reqwest::get("http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret").await?;
    let body = res.text().await?;
    println!("Secret:{}", body);
@ -105,13 +137,43 @@ async fn main() -> Result<(), reqwest::Error> {
}
```

{{% /codetab %}}

{{% codetab %}}

```csharp
var client = new HttpClient();
var response = await client.GetAsync("http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret");
response.EnsureSuccessStatusCode();

string secret = await response.Content.ReadAsStringAsync();
Console.WriteLine(secret);
```
{{% /codetab %}}
{{% codetab %}}

```php
<?php

require_once __DIR__.'/vendor/autoload.php';

$app = \Dapr\App::create();
$app->run(function(\Dapr\SecretManager $secretManager, \Psr\Log\LoggerInterface $logger) {
    $secret = $secretManager->retrieve(secret_store: 'my-secrets-store', name: 'my-secret');
    $logger->alert('got secret: {secret}', ['secret' => $secret]);
});
```

{{% /codetab %}}

{{< /tabs >}}
## Related links

- [Dapr secrets overview]({{<ref secrets-overview>}})
- [Secrets API reference]({{<ref secrets_api>}})
- [Configure a secret store]({{<ref setup-secret-store>}})
- [Supported secret stores]({{<ref supported-secret-stores>}})
- [Using secrets in components]({{<ref component-secrets>}})
- [Secret stores quickstart](https://github.com/dapr/quickstarts/tree/master/secretstore)
@ -1,30 +1,25 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Secrets stores overview"
|
||||
linkTitle: "Secrets stores overview"
|
||||
title: "Secrets management overview"
|
||||
linkTitle: "Overview"
|
||||
weight: 1000
|
||||
description: "Overview of Dapr secrets management building block"
|
||||
description: "Overview of secrets management building block"
|
||||
---
|
||||
|
||||
Almost all non-trivial applications need to _securely_ store secret data like API keys, database passwords, and more. By nature, these secrets should not be checked into the version control system, but they also need to be accessible to code running in production. This is generally a hard problem, but it's critical to get it right. Otherwise, critical production systems can be compromised.
|
||||
It's common for applications to store sensitive information such as connection strings, keys and tokens that are used to authenticate with databases, services and external systems in secrets by using a dedicated secret store.
|
||||
|
||||
Dapr's solution to this problem is the secrets API and secrets stores.
|
||||
Usually this involves setting up a secret store such as Azure Key Vault, Hashicorp Vault and others and storing the application level secrets there. To access these secret stores, the application needs to import the secret store SDK, and use it to access the secrets. This may require a fair amount of boilerplate code that is not related to the actual business domain of the app, and so becomes an even greater challenge in multi-cloud scenarios where different vendor specific secret stores may be used.
|
||||
|
||||
Here's how it works:
|
||||
To make it easier for developers everywhere to consume application secrets, Dapr has a dedicated secrets building block API that allows developers to get secrets from a secret store.
|
||||
|
||||
- Dapr is set up to use a **secret store** - a place to securely store secret data
|
||||
- Application code uses the standard Dapr secrets API to retrieve secrets.
|
||||
Using Dapr's secret store building block typically involves the following:
|
||||
1. Setting up a component for a specific secret store solution.
|
||||
1. Retrieving secrets using the Dapr secrets API in the application code.
|
||||
1. Optionally, referencing secrets in Dapr component files.
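As an illustration of step 2, the sketch below (Python, with a hypothetical helper name and an example response value) shows the shape of the secrets API URL and of the JSON response it returns:

```python
import json

def secret_url(store: str, name: str, port: int = 3500) -> str:
    # Dapr secrets API: GET http://localhost:<daprPort>/v1.0/secrets/<store>/<secret>
    return f"http://localhost:{port}/v1.0/secrets/{store}/{name}"

# The API responds with a JSON map of secret names to values, for example:
sample_response = '{"my-secret": "example-value"}'
secret = json.loads(sample_response)["my-secret"]

print(secret_url("my-secrets-store", "my-secret"))
print(secret)
```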
|
||||
|
||||
Some examples of secret stores include `Kubernetes`, `HashiCorp Vault`, and `Azure Key Vault`. See [secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores) for the list of supported stores.
|
||||
|
||||
See [Setup secret stores](https://github.com/dapr/docs/tree/master/howto/setup-secret-store) for a HowTo guide for setting up and using secret stores.
|
||||
|
||||
## Referencing secret stores in Dapr components
|
||||
|
||||
Instead of including credentials directly within a Dapr component file, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.
|
||||
|
||||
For more information read [Referencing Secret Stores in Components]({{< ref component-secrets.md >}})
|
||||
## Setting up a secret store
|
||||
|
||||
See [Setup secret stores]({{< ref howto-secrets.md >}}) for guidance on how to setup a secret store with Dapr.
|
||||
|
||||
## Using secrets in your application
|
||||
|
||||
|
@ -35,21 +30,28 @@ For example, the diagram below shows an application requesting the secret called
|
|||
|
||||
<img src="/images/secrets-overview-cloud-stores.png" width=600>
|
||||
|
||||
Applications can use the secrets API to access secrets from a Kubernetes secret store. In the example below, the application retrieves the same secret "mysecret" from a Kubernetes secret store.
|
||||
|
||||
|
||||
<img src="/images/secrets-overview-kubernetes-store.png" width=600>
|
||||
|
||||
In Azure, Dapr can be configured to use managed identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application.
|
||||
|
||||
|
||||
<img src="/images/secrets-overview-azure-aks-keyvault.png" width=600>
|
||||
|
||||
Notice that in all of the examples above the application code did not have to change to get the same secret. Dapr did all the heavy lifting here via the secrets building block API and using the secret components.
|
||||
|
||||
See [Access Application Secrets using the Secrets API](https://github.com/dapr/docs/tree/master/howto/get-secrets) for a How To guide to use secrets in your application.
|
||||
|
||||
|
||||
For detailed API information read [Secrets API](https://github.com/dapr/docs/blob/master/reference/api/secrets_api.md).
|
||||
|
||||
|
||||
See [Access Application Secrets using the Secrets API]({{< ref howto-secrets.md >}}) for a How To guide to use secrets in your application.
|
||||
|
||||
For detailed API information read [Secrets API]({{< ref secrets_api.md >}}).
|
||||
|
||||
## Referencing secret stores in Dapr components
|
||||
|
||||
When configuring Dapr components such as state stores it is often necessary to include credentials in component files. Instead of doing that, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.
|
||||
|
||||
For more information read [referencing secret stores in components]({{< ref component-secrets.md >}})
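For example, a state store component can pull its Redis password from a configured secret store instead of embedding it in the file. Below is a minimal sketch, assuming a configured secret store named `my-secret-store` that holds a secret `redis-secret` with a `redis-password` key:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key: redis-password
auth:
  secretStore: my-secret-store
```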
|
||||
|
||||
## Limiting access to secrets
|
||||
|
||||
To provide more granular control on access to secrets, Dapr provides the ability to define scopes that restrict access permissions. Learn more about [using secret scoping]({{<ref secrets-scopes>}})
|
||||
|
||||
|
||||
|
|
|
@ -3,21 +3,24 @@ type: docs
|
|||
title: "How To: Use secret scoping"
|
||||
linkTitle: "How To: Use secret scoping"
|
||||
weight: 3000
|
||||
description: "Use scoping to limit the secrets that can be read from secret stores"
|
||||
description: "Use scoping to limit the secrets that can be read by your application from secret stores"
|
||||
type: docs
|
||||
---
|
||||
|
||||
Follow [these instructions]({{< ref setup-secret-store >}}) to configure secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
|
||||
You can read [guidance on setting up secret store components]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, by default *any* secret defined within that store is accessible from the Dapr application.
|
||||
|
||||
To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration CRD with restrictive permissions.
|
||||
To limit the secrets to which the Dapr application has access, you can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
|
||||
|
||||
Follow [these instructions]({{< ref configuration-concept.md >}}) to define a configuration CRD.
|
||||
The secret scoping policy applies to any [secret store]({{< ref supported-secret-stores.md >}}), whether that is a local secret store, a Kubernetes secret store, or a public cloud secret store. For details on how to set up a [secret store]({{< ref setup-secret-store.md >}}), read [How To: Retrieve a secret]({{< ref howto-secrets.md >}})
|
||||
|
||||
Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to use secret scoping with your application.
|
||||
<iframe width="688" height="430" src="https://www.youtube.com/embed/j99RN_nxExA?start=2272" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
## Scenario 1 : Deny access to all secrets for a secret store
|
||||
|
||||
In a Kubernetes cluster, the native Kubernetes secret store is added to the Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
|
||||
This example uses Kubernetes. The native Kubernetes secret store is added to your Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
|
||||
|
||||
Define the following `appconfig.yaml` and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
|
||||
Define the following `appconfig.yaml` configuration and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -31,17 +34,17 @@ spec:
|
|||
defaultAccess: deny
|
||||
```
|
||||
|
||||
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod.
|
||||
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod.
|
||||
|
||||
```yaml
|
||||
dapr.io/config: appconfig
|
||||
```
|
||||
|
||||
With this defined, the application no longer has access to Kubernetes secret store.
|
||||
With this defined, the application no longer has access to any secrets in the Kubernetes secret store.
|
||||
|
||||
## Scenario 2: Allow access to only certain secrets in a secret store
|
||||
|
||||
To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`:
|
||||
This example uses a secret store that is named `vault`; for example, this could be a HashiCorp Vault secret store component that has been set up for your application. To allow a Dapr application access to only the secrets `secret1` and `secret2` in the `vault` secret store, define the following `appconfig.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -56,9 +59,9 @@ spec:
|
|||
allowedSecrets: ["secret1", "secret2"]
|
||||
```
|
||||
|
||||
This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
|
||||
This example defines configuration for secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
|
||||
|
||||
|
||||
## Scenario 3: Deny access to certain sensitive secrets in a secret store
|
||||
|
||||
Define the following `config.yaml`:
|
||||
|
||||
|
@ -75,11 +78,11 @@ spec:
|
|||
deniedSecrets: ["secret1", "secret2"]
|
||||
```
|
||||
|
||||
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
|
||||
This example uses a secret store that is named `vault`. The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault` while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
|
||||
|
||||
## Permission priority
|
||||
|
||||
|
||||
The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess` policy.
|
||||
|
||||
Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
|
||||
---- | ------- | -----------| ----------| ------------
|
||||
|
@ -90,6 +93,8 @@ Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
|
|||
5 - Default deny with denied list | deny | empty | ["s1"] | deny
|
||||
6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed
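The precedence rules in the table above can be modeled with a small decision function. This is an illustrative sketch of the policy logic, not Dapr's actual implementation:

```python
def is_allowed(secret, default_access="allow", allowed=(), denied=()):
    # A non-empty allowedSecrets list takes priority: only its members are accessible
    if allowed:
        return secret in allowed
    # Otherwise deniedSecrets removes specific secrets from the default policy
    if secret in denied:
        return False
    return default_access == "allow"

# Scenario 6: default deny with both lists -> only "s1" can be accessed
print(is_allowed("s1", "deny", allowed=["s1"], denied=["s2"]))
print(is_allowed("s2", "deny", allowed=["s1"], denied=["s2"]))
```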
|
||||
|
||||
## Related links
|
||||
* List of [secret stores]({{< ref supported-secret-stores.md >}})
|
||||
* Overview of [secret stores]({{< ref setup-secret-store.md >}})
|
||||
|
||||
|
||||
|
||||
howto-secrets/
|
|
@ -1,8 +1,8 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Invoke and discover services"
|
||||
title: "How-To: Invoke services using HTTP"
|
||||
linkTitle: "How-To: Invoke services"
|
||||
description: "How-to guide on how to use Dapr service invocation in a distributed application"
|
||||
description: "Call between services using service invocation"
|
||||
weight: 2000
|
||||
---
|
||||
|
||||
|
@ -140,6 +140,6 @@ The example above showed you how to directly invoke a different service running
|
|||
For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article.
|
||||
|
||||
## Related Links
|
||||
|
||||
|
||||
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
|
||||
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
|
||||
|
|
|
@ -8,68 +8,54 @@ description: "Overview of the service invocation building block"
|
|||
|
||||
## Introduction
|
||||
|
||||
Using service invocation, your application can discover and reliably and securely communicate with other applications using the standard protocols of [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/).
|
||||
Using service invocation, your application can reliably and securely communicate with other applications using the standard [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/) protocols.
|
||||
|
||||
In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions:
|
||||
|
||||
* How do I discover and invoke methods on different services?
|
||||
* How do I call other services securely?
|
||||
* How do I handle retries and transient errors?
|
||||
* How do I use distributed tracing to see a call graph to diagnose issues in production?
|
||||
- How do I discover and invoke methods on different services?
|
||||
- How do I call other services securely with encryption and apply access control on the methods?
|
||||
- How do I handle retries and transient errors?
|
||||
- How do I use tracing to see a call graph with metrics to diagnose issues in production?
|
||||
|
||||
Dapr allows you to overcome these challenges by providing an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling and more.
|
||||
Dapr addresses these challenges by providing a service invocation API that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling, encryption and more.
|
||||
|
||||
Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each applications to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
|
||||
Dapr uses a sidecar architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
|
||||
|
||||
### Invoke logic
|
||||
### Service invocation
|
||||
|
||||
The diagram below is an overview of how Dapr's service invocation works.
|
||||
|
||||
<img src="/images/service-invocation-overview.png" width=800 alt="Diagram showing the steps of service invocation">
|
||||
|
||||
1. Service A makes an http/gRPC call targeting Service B. The call goes to the local Dapr sidecar.
|
||||
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) which is running on the given [hosting platform]({{< ref "hosting" >}}).
|
||||
3. Dapr forwards the message to Service B's Dapr sidecar
|
||||
|
||||
**Note**: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars can be either HTTP or gRPC
|
||||
|
||||
1. Service A makes an HTTP or gRPC call targeting Service B. The call goes to the local Dapr sidecar.
|
||||
2. Dapr discovers Service B's location using the [name resolution component]({{< ref supported-name-resolution >}}) which is running on the given [hosting platform]({{< ref "hosting" >}}).
|
||||
3. Dapr forwards the message to Service B's Dapr sidecar
|
||||
- **Note**: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars can be either HTTP or gRPC
|
||||
4. Service B's Dapr sidecar forwards the request to the specified endpoint (or method) on Service B. Service B then runs its business logic code.
|
||||
5. Service B sends a response to Service A. The response goes to Service B's sidecar.
|
||||
6. Dapr forwards the response to Service A's Dapr sidecar.
|
||||
7. Service A receives the response.
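The call in step 1 is just an HTTP request to the local sidecar. A sketch of how the invoke URL is formed (hypothetical helper name, assuming the default Dapr HTTP port 3500):

```python
def invoke_url(app_id, method, port=3500, namespace=None):
    # Service invocation API: /v1.0/invoke/<app-id>/method/<method-name>;
    # the app ID may carry a namespace suffix for cross-namespace calls
    target = f"{app_id}.{namespace}" if namespace else app_id
    return f"http://localhost:{port}/v1.0/invoke/{target}/method/{method}"

print(invoke_url("nodeapp", "neworder"))
print(invoke_url("nodeapp", "neworder", namespace="production"))
```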
|
||||
|
||||
## Features
|
||||
Service invocation provides several features to make it easy for you to call methods on remote applications.
|
||||
|
||||
### Service invocation API
|
||||
|
||||
The API for service invocation can be found in the [spec repo]({{< ref service_invocation_api.md >}}).
|
||||
Service invocation provides several features to make it easy for you to call methods between applications.
|
||||
|
||||
### Namespaces scoping
|
||||
|
||||
Service invocation supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace.
|
||||
|
||||
For example, the following string contains the app ID `nodeapp` in addition to the namespace the app runs in `production`.
|
||||
By default, users can invoke services within the same namespace by simply referencing the app ID (`nodeapp`):
|
||||
|
||||
```sh
|
||||
localhost:3500/v1.0/invoke/nodeapp/method/neworder
|
||||
```
|
||||
|
||||
Service invocation also supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace.
|
||||
|
||||
Users can specify both the app ID (`nodeapp`) in addition to the namespace the app runs in (`production`):
|
||||
|
||||
```sh
|
||||
localhost:3500/v1.0/invoke/nodeapp.production/method/neworder
|
||||
```
|
||||
|
||||
This is especially useful in cross-namespace calls in a Kubernetes cluster. Watch this video for a demo on how to use namespaces with service invocation.
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/LYYV_jouEuA?start=497" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
### Retries
|
||||
|
||||
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
|
||||
|
||||
Errors that cause retries are:
|
||||
|
||||
* Network errors including endpoint unavailability and refused connections
|
||||
* Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars
|
||||
|
||||
Per call retries are performed with a backoff interval of 1 second up to a threshold of 3 times.
|
||||
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
|
||||
|
||||
|
||||
### Service-to-service security
|
||||
|
||||
|
@ -77,30 +63,53 @@ All calls between Dapr applications can be made secure with mutual (mTLS) authen
|
|||
|
||||
For more information read the [service-to-service security]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}) article.
|
||||
|
||||
<img src="/images/security-mTLS-sentry-selfhosted.png" width=800>
|
||||
|
||||
### Service access security
|
||||
### Access control
|
||||
|
||||
Applications can control which other applications are allowed to call them and what they are authorized to do via access policies. This enables you to restrict sensitive applications that, say, hold personnel information from being accessed by unauthorized applications; combined with service-to-service secure communication, this provides for soft multi-tenancy deployments.
|
||||
|
||||
For more information read the [access control allow lists for service invocation]({{< ref invoke-allowlist.md >}}) article.
|
||||
|
||||
### Observability
|
||||
### Retries
|
||||
|
||||
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios.
|
||||
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
|
||||
|
||||
For more information read the [observability]({{< ref observability-concept.md >}}) article.
|
||||
Errors that cause retries are:
|
||||
|
||||
- Network errors including endpoint unavailability and refused connections.
|
||||
- Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars.
|
||||
|
||||
Per call retries are performed with a backoff interval of 1 second up to a threshold of 3 times.
|
||||
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
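The retry behaviour described above can be sketched as follows. This is an illustrative model of fixed-backoff retries (hypothetical helper names), not Dapr's actual implementation:

```python
import time

def call_with_retries(call, retries=3, backoff_s=1.0):
    # Up to `retries` attempts with a fixed backoff interval between them,
    # mirroring the 1 second / 3 attempts behaviour described in the text
    last_err = None
    for attempt in range(retries):
        try:
            return call()
        except ConnectionError as err:
            last_err = err
            if attempt < retries - 1:
                time.sleep(backoff_s)
    raise last_err

attempts = []
def flaky():
    # Fails twice with a transient error, then succeeds
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_retries(flaky, backoff_s=0)
print(result)
```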
|
||||
|
||||
### Pluggable service discovery
|
||||
|
||||
Dapr can run on any [hosting platform]({{< ref hosting >}}). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.
|
||||
Dapr can run on a variety of [hosting platforms]({{< ref hosting >}}). To enable service discovery and service invocation, Dapr uses pluggable [name resolution components]({{< ref supported-name-resolution >}}). For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. Self-hosted machines can use the mDNS name resolution component. The Consul name resolution component can be used in any hosting environment including Kubernetes or self-hosted.
|
||||
|
||||
### Round robin load balancing with mDNS
|
||||
|
||||
Dapr provides round robin load balancing of service invocation requests with the mDNS protocol, for example with a single machine or with multiple, networked, physical machines.
|
||||
|
||||
The diagram below shows an example of how this works. If you have 1 instance of an application with app ID `FrontEnd` and 3 instances of an application with app ID `Cart`, and you call from the `FrontEnd` app to the `Cart` app, Dapr round robins between the 3 instances. These instances can be on the same machine or on different machines.
|
||||
|
||||
<img src="/images/service-invocation-mdns-round-robin.png" width=600 alt="Diagram showing the steps of service invocation">
|
||||
|
||||
**Note**: The app ID is unique per app, not per app instance: regardless of how many instances of an app exist, all of those instances share the same app ID.
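The round robin behaviour can be sketched with a simple rotation over the resolved instances (illustrative addresses only):

```python
from itertools import cycle

# Three resolved instances of the "Cart" app (illustrative addresses)
cart_instances = cycle(["10.0.0.1:3500", "10.0.0.2:3500", "10.0.0.3:3500"])

# Four consecutive invocations: the fourth wraps back to the first instance
picks = [next(cart_instances) for _ in range(4)]
print(picks)
```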
|
||||
|
||||
### Tracing and metrics with observability
|
||||
|
||||
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios. This gives you call graphs and metrics on the calls between your services. For more information read about [observability]({{< ref observability-concept.md >}}).
|
||||
|
||||
### Service invocation API
|
||||
|
||||
The API for service invocation can be found in the [service invocation API reference]({{< ref service_invocation_api.md >}}) which describes how to invoke a method on another service.
|
||||
|
||||
## Example
|
||||
|
||||
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A" and the Node.js app would be "Service B".
|
||||
|
||||
The diagram below shows sequence 1-7 again on a local machine showing the API call:
|
||||
The diagram below shows sequence 1-7 again on a local machine showing the API calls:
|
||||
|
||||
<img src="/images/service-invocation-overview-example.png" width=800>
|
||||
<img src="/images/service-invocation-overview-example.png" width=800 />
|
||||
|
||||
1. The Node.js app has a Dapr app ID of `nodeapp`. The python app invokes the Node.js app's `neworder` method by POSTing `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar.
|
||||
2. Dapr discovers the Node.js app's location using name resolution component (in this case mDNS while self-hosted) which runs on your local machine.
|
||||
|
@ -108,13 +117,13 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
|
|||
4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, logging the incoming message and then persisting the order ID into Redis (not shown in the diagram)
|
||||
5. The Node.js app sends a response to the Python app through the Node.js sidecar.
|
||||
6. Dapr forwards the response to the Python Dapr sidecar
|
||||
|
||||
7. The Python app receives the response.
|
||||
|
||||
## Next steps
|
||||
|
||||
* Follow these guide on:
|
||||
* [How-to: Get started with HTTP service invocation]({{< ref howto-invoke-discover-services.md >}})
|
||||
* [How-to: Get started with Dapr and gRPC]({{< ref grpc >}})
|
||||
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs]({{< ref sdks >}})
|
||||
* Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
|
||||
* See the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers
|
||||
- Follow these guides on:
|
||||
- [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
|
||||
- [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
|
||||
- Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
|
||||
- Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
|
||||
- Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers
|
||||
|
|
|
@ -14,127 +14,192 @@ Dealing with different databases libraries, testing them, handling retries and f
|
|||
Dapr provides state management capabilities that include consistency and concurrency options.
|
||||
In this guide we'll start off with the basics: using the key/value state API to allow an application to save, get, and delete state.
|
||||
|
||||
## Pre-requisites
|
||||
|
||||
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
|
||||
- Initialized [Dapr environment]({{< ref install-dapr-selfhost.md >}})
|
||||
|
||||
## Step 1: Setup a state store
|
||||
|
||||
A state store component represents a resource that Dapr uses to communicate with a database.
|
||||
For the purpose of this how to we'll use a Redis state store, but any state store from the [supported list]({{< ref supported-state-stores >}}) will work.
|
||||
|
||||
For the purpose of this guide we'll use a Redis state store, but any state store from the [supported list]({{< ref supported-state-stores >}}) will work.
|
||||
|
||||
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
|
||||
|
||||
{{% codetab %}}
|
||||
When using `Dapr init` in Standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML in a `components` directory, which for Linux/MacOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`
|
||||
When using `dapr init` in Standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML in a `components` directory, which for Linux/MacOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`
|
||||
|
||||
To change the state store being used, replace the YAML under `/components` with the file of your choice.
|
||||
To optionally change the state store being used, replace the YAML file `statestore.yaml` under `/components` with the file of your choice.
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired statestore component]({{< ref supported-state-stores >}}) in the yaml below, save as `statestore.yaml`, and run `kubectl apply -f statestore.yaml`.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
version: v1
|
||||
metadata:
|
||||
- name: redisHost
|
||||
value: localhost:6379
|
||||
- name: redisPassword
|
||||
value: ""
|
||||
```
|
||||
See the instructions [here]({{< ref "setup-state-store" >}}) on how to setup different state stores on Kubernetes.
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Step 2: Save state
|
||||
|
||||
The following example shows how to save two key/value pairs in a single call using the state management API.
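The request body for this call is a JSON array of key/value items. The sketch below builds that body and, assuming Dapr's default `<app-id>||<key>` key scheme, shows how the stored keys end up prefixed with the app ID:

```python
import json

# The state API accepts a JSON array of key/value items in a single request
items = [
    {"key": "key1", "value": "value1"},
    {"key": "key2", "value": "value2"},
]
body = json.dumps(items)

# Assuming the default key scheme, stored keys are prefixed with the app ID
stored_keys = [f"myapp||{item['key']}" for item in items]
print(body)
print(stored_keys)
```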
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}}
|
||||
|
||||
{{% codetab %}}
|
||||
Begin by ensuring a Dapr sidecar is running:
|
||||
```bash
|
||||
dapr run --app-id myapp --dapr-http-port 3500
|
||||
```
|
||||
{{% alert title="Note" color="info" %}}
|
||||
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
Then in a separate terminal run:
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' http://localhost:3500/v1.0/state/statestore
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Begin by ensuring a Dapr sidecar is running:
|
||||
```bash
|
||||
dapr run --app-id myapp --dapr-http-port 3500
|
||||
```
|
||||
|
||||
{{% alert title="Note" color="info" %}}
|
||||
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
Then in a separate terminal run:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Make sure to install the Dapr Python SDK with `pip3 install dapr`. Then create a file named `state.py` with:
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
|
||||
with DaprClient() as d:
|
||||
d.save_states(store_name="statestore",
|
||||
states=[
|
||||
StateItem(key="key1", value="value1"),
|
||||
StateItem(key="key2", value="value2")
|
||||
])
|
||||
|
||||
```
|
||||
|
||||
Run with `dapr run --app-id myapp -- python state.py`
|
||||
|
||||
{{% alert title="Note" color="info" %}}
|
||||
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
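Whichever tab you use, the HTTP save request carries the same payload: a JSON array of key/value entries. The small Python sketch below builds that body; the `make_save_body` helper is purely illustrative and not part of any Dapr SDK.

```python
import json

def make_save_body(pairs):
    """Build the JSON array of {"key": ..., "value": ...} entries
    that POST /v1.0/state/<store-name> expects."""
    return json.dumps([{"key": k, "value": v} for k, v in pairs.items()])

body = make_save_body({"key1": "value1", "key2": "value2"})
print(body)
# The resulting string can be sent with any HTTP client, e.g. as the
# -d argument to curl in the examples above.
```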
|
||||
|
||||
## Step 2: Save and retrieve a single state
|
||||
|
||||
The following example shows how to save and retrieve a single key/value pair using the Dapr state building block.
|
||||
|
||||
{{% alert title="Note" color="warning" %}}
|
||||
It is important to set an app-id, as the state keys are prefixed with this value. If you don't set one, an app-id is generated for you at runtime; the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
|
||||
{{% /alert %}}
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
|
||||
|
||||
{{% codetab %}}
|
||||
Begin by launching a Dapr sidecar:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --dapr-http-port 3500
|
||||
```
|
||||
|
||||
Then in a separate terminal save a key/value pair into your statestore:
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1"}]' http://localhost:3500/v1.0/state/statestore
|
||||
```
|
||||
|
||||
Now get the state you just saved:
|
||||
```bash
|
||||
curl http://localhost:3500/v1.0/state/statestore/key1
|
||||
```
|
||||
|
||||
You can also restart your sidecar and try retrieving the state again to see that it persists separately from the app.
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Begin by launching a Dapr sidecar:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --dapr-http-port 3500
|
||||
```
|
||||
|
||||
Then in a separate terminal save a key/value pair into your statestore:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{"key": "key1", "value": "value1"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
|
||||
```
|
||||
|
||||
Now get the state you just saved:
|
||||
```powershell
|
||||
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/state/statestore/key1'
|
||||
```
|
||||
|
||||
You can also restart your sidecar and try retrieving the state again to see that it persists separately from the app.
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Save the following to a file named `pythonState.py`:
|
||||
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
with DaprClient() as d:
|
||||
d.save_state(store_name="statestore", key="myFirstKey", value="myFirstValue" )
|
||||
print("State has been stored")
|
||||
|
||||
data = d.get_state(store_name="statestore", key="myFirstKey").data
|
||||
print(f"Got value: {data}")
|
||||
|
||||
```
|
||||
|
||||
Once saved run the following command to launch a Dapr sidecar and run the Python application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- python pythonState.py
|
||||
```
|
||||
|
||||
You should get an output similar to the following, which will show both the Dapr and app logs:
|
||||
|
||||
```md
|
||||
== DAPR == time="2021-01-06T21:34:33.7970377-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:34:33.8040378-08:00" level=info msg="standalone mode configured" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:34:33.8040378-08:00" level=info msg="app id: Braidbald-Boot" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:34:33.9750400-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:34:33.9760387-08:00" level=info msg="API gRPC server is running on port 51656" app_id=Braidbald-Boot scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:34:33.9770372-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 172.9994ms" app_id=Braidbald-Boot scope=dapr.
|
||||
|
||||
Checking if Dapr sidecar is listening on GRPC port 51656
|
||||
Dapr sidecar is up and running.
|
||||
Updating metadata for app command: python pythonState.py
|
||||
You are up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == State has been stored
|
||||
== APP == Got value: b'myFirstValue'
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Save the following in `state-example.php`:
|
||||
|
||||
```php
|
||||
<?php
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
|
||||
$stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
|
||||
key: 'myFirstKey',
|
||||
value: 'myFirstValue'
|
||||
));
|
||||
$logger->alert('State has been stored');
|
||||
|
||||
$data = $stateManager->load_state(store_name: 'statestore', key: 'myFirstKey')->value;
|
||||
$logger->alert("Got value: {data}", ['data' => $data]);
|
||||
});
|
||||
```
|
||||
|
||||
Once saved run the following command to launch a Dapr sidecar and run the PHP application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- php state-example.php
|
||||
```
|
||||
|
||||
You should get an output similar to the following, which will show both the Dapr and app logs:
|
||||
|
||||
```md
|
||||
✅ You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == [2021-02-12T16:30:11.078777+01:00] APP.ALERT: State has been stored [] []
|
||||
|
||||
== APP == [2021-02-12T16:30:11.082620+01:00] APP.ALERT: Got value: myFirstValue {"data":"myFirstValue"} []
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Step 3: Delete state
|
||||
|
||||
The following example shows how to delete an item by using a key with the state management API:
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
|
||||
|
||||
{{% codetab %}}
|
||||
With the same Dapr instance running from above, run:
|
||||
|
@ -153,16 +218,374 @@ Try getting state again and note that no value is returned.
|
|||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Update `pythonState.py` with:
|
||||
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
with DaprClient() as d:
|
||||
d.save_state(store_name="statestore", key="key1", value="value1" )
|
||||
print("State has been stored")
|
||||
|
||||
data = d.get_state(store_name="statestore", key="key1").data
|
||||
print(f"Got value: {data}")
|
||||
|
||||
d.delete_state(store_name="statestore", key="key1")
|
||||
|
||||
data = d.get_state(store_name="statestore", key="key1").data
|
||||
print(f"Got value after delete: {data}")
|
||||
```
|
||||
|
||||
Now run your program with:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- python pythonState.py
|
||||
```
|
||||
|
||||
You should see an output similar to the following:
|
||||
|
||||
```md
|
||||
Starting Dapr with id Yakchocolate-Lord. HTTP Port: 59457. gRPC Port: 59458
|
||||
|
||||
== DAPR == time="2021-01-06T22:55:36.5570696-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:55:36.5690367-08:00" level=info msg="standalone mode configured" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:55:36.7220140-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:55:36.7230148-08:00" level=info msg="API gRPC server is running on port 59458" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:55:36.7240207-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 154.984ms" app_id=Yakchocolate-Lord scope=dapr.runtime type=log ver=0.11.3
|
||||
|
||||
Checking if Dapr sidecar is listening on GRPC port 59458
|
||||
Dapr sidecar is up and running.
|
||||
Updating metadata for app command: python pythonState.py
|
||||
You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == State has been stored
|
||||
== APP == Got value: b'value1'
|
||||
== APP == Got value after delete: b''
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Update `state-example.php` with the following contents:
|
||||
|
||||
```php
|
||||
<?php
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
|
||||
$stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
|
||||
key: 'myFirstKey',
|
||||
value: 'myFirstValue'
|
||||
));
|
||||
$logger->alert('State has been stored');
|
||||
|
||||
$data = $stateManager->load_state(store_name: 'statestore', key: 'myFirstKey')->value;
|
||||
$logger->alert("Got value: {data}", ['data' => $data]);
|
||||
|
||||
$stateManager->delete_keys(store_name: 'statestore', keys: ['myFirstKey']);
|
||||
$data = $stateManager->load_state(store_name: 'statestore', key: 'myFirstKey')->value;
|
||||
$logger->alert("Got value after delete: {data}", ['data' => $data]);
|
||||
});
|
||||
```
|
||||
|
||||
Now run it with:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- php state-example.php
|
||||
```
|
||||
|
||||
You should see output similar to the following:
|
||||
|
||||
```md
|
||||
✅ You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == [2021-02-12T16:38:00.839201+01:00] APP.ALERT: State has been stored [] []
|
||||
|
||||
== APP == [2021-02-12T16:38:00.841997+01:00] APP.ALERT: Got value: myFirstValue {"data":"myFirstValue"} []
|
||||
|
||||
== APP == [2021-02-12T16:38:00.845721+01:00] APP.ALERT: Got value after delete: {"data":null} []
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Step 4: Save and retrieve multiple states
|
||||
|
||||
Dapr also allows you to save and retrieve multiple states in the same call.
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
|
||||
|
||||
{{% codetab %}}
|
||||
With the same Dapr instance running from above, save two key/value pairs into your statestore:
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' http://localhost:3500/v1.0/state/statestore
|
||||
```
|
||||
|
||||
Now get the states you just saved:
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '{"keys":["key1", "key2"]}' http://localhost:3500/v1.0/state/statestore/bulk
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
With the same Dapr instance running from above, save two key/value pairs into your statestore:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
|
||||
```
|
||||
|
||||
Now get the states you just saved:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"keys":["key1", "key2"]}' -Uri 'http://localhost:3500/v1.0/state/statestore/bulk'
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
The `StateItem` object can be used to store multiple Dapr states with the `save_bulk_state` and `get_bulk_state` methods.
|
||||
|
||||
Update your `pythonState.py` file with the following code:
|
||||
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
|
||||
with DaprClient() as d:
|
||||
s1 = StateItem(key="key1", value="value1")
|
||||
s2 = StateItem(key="key2", value="value2")
|
||||
|
||||
d.save_bulk_state(store_name="statestore", states=[s1,s2])
|
||||
print("States have been stored")
|
||||
|
||||
items = d.get_bulk_state(store_name="statestore", keys=["key1", "key2"]).items
|
||||
print(f"Got items: {[i.data for i in items]}")
|
||||
```
|
||||
|
||||
Now run your program with:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- python pythonState.py
|
||||
```
|
||||
|
||||
You should see an output similar to the following:
|
||||
|
||||
```md
|
||||
== DAPR == time="2021-01-06T21:54:56.7262358-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:54:56.7401933-08:00" level=info msg="standalone mode configured" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:54:56.8754240-08:00" level=info msg="Initialized name resolution to standalone" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:54:56.8844248-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:54:56.8854273-08:00" level=info msg="API gRPC server is running on port 60614" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T21:54:56.8854273-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 145.234ms" app_id=Musesequoia-Sprite scope=dapr.runtime type=log ver=0.11.3
|
||||
|
||||
Checking if Dapr sidecar is listening on GRPC port 60614
|
||||
Dapr sidecar is up and running.
|
||||
Updating metadata for app command: python pythonState.py
|
||||
You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == States have been stored
|
||||
== APP == Got items: [b'value1', b'value2']
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
To batch load and save state with PHP, create a plain PHP object (POPO) and annotate it with the `StateStore` attribute.
|
||||
|
||||
Update the `state-example.php` file:
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\EventualLastWrite::class)]
|
||||
class MyState {
|
||||
public string $key1 = 'value1';
|
||||
public string $key2 = 'value2';
|
||||
}
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
|
||||
$obj = new MyState();
|
||||
$stateManager->save_object(item: $obj);
|
||||
$logger->alert('States have been stored');
|
||||
|
||||
$stateManager->load_object(into: $obj);
|
||||
$logger->alert("Got value: {data}", ['data' => $obj]);
|
||||
});
|
||||
```
|
||||
|
||||
Run the app:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- php state-example.php
|
||||
```
|
||||
|
||||
And see the following output:
|
||||
|
||||
```md
|
||||
✅ You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == [2021-02-12T16:55:02.913801+01:00] APP.ALERT: States have been stored [] []
|
||||
|
||||
== APP == [2021-02-12T16:55:02.917850+01:00] APP.ALERT: Got value: [object MyState] {"data":{"MyState":{"key1":"value1","key2":"value2"}}} []
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
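The `/bulk` endpoint used in the HTTP tabs above returns a JSON array of entries, each carrying `key` and `data` fields (along with an `etag`). The sketch below indexes such a response by key; the sample payload and helper name are illustrative.

```python
import json

# Illustrative sample of a bulk-state response body (etag values made up).
sample_response = '[{"key": "key1", "data": "value1", "etag": "1"}, {"key": "key2", "data": "value2", "etag": "1"}]'

def bulk_to_dict(raw):
    """Index a bulk-state response by key for convenient lookups."""
    return {item["key"]: item.get("data") for item in json.loads(raw)}

print(bulk_to_dict(sample_response))  # {'key1': 'value1', 'key2': 'value2'}
```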
|
||||
|
||||
## Step 5: Perform state transactions
|
||||
|
||||
{{% alert title="Note" color="warning" %}}
|
||||
State transactions require a state store that supports multi-item transactions. Visit the [supported state stores]({{< ref supported-state-stores >}}) page for a full list. Note that the default Redis container created in a self-hosted environment supports them.
|
||||
{{% /alert %}}
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
|
||||
|
||||
{{% codetab %}}
|
||||
With the same Dapr instance running from above, perform two state transactions:
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '{"operations": [{"operation":"upsert", "request": {"key": "key1", "value": "newValue1"}}, {"operation":"delete", "request": {"key": "key2"}}]}' http://localhost:3500/v1.0/state/statestore/transaction
|
||||
```
|
||||
|
||||
Now see the results of your state transactions:
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '{"keys":["key1", "key2"]}' http://localhost:3500/v1.0/state/statestore/bulk
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
With the same Dapr instance running from above, perform two state transactions:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"operations": [{"operation":"upsert", "request": {"key": "key1", "value": "newValue1"}}, {"operation":"delete", "request": {"key": "key2"}}]}' -Uri 'http://localhost:3500/v1.0/state/statestore/transaction'
|
||||
```
|
||||
|
||||
Now see the results of your state transactions:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"keys":["key1", "key2"]}' -Uri 'http://localhost:3500/v1.0/state/statestore/bulk'
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
The `TransactionalStateOperation` class can be used to perform a state transaction, provided your state store supports transactions.
|
||||
|
||||
Update your `pythonState.py` file with the following code:
|
||||
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType
|
||||
|
||||
with DaprClient() as d:
|
||||
s1 = StateItem(key="key1", value="value1")
|
||||
s2 = StateItem(key="key2", value="value2")
|
||||
|
||||
d.save_bulk_state(store_name="statestore", states=[s1,s2])
|
||||
print("States have been stored")
|
||||
|
||||
d.execute_state_transaction(
|
||||
store_name="statestore",
|
||||
operations=[
|
||||
TransactionalStateOperation(key="key1", data="newValue1", operation_type=TransactionOperationType.upsert),
|
||||
TransactionalStateOperation(key="key2", data="value2", operation_type=TransactionOperationType.delete)
|
||||
]
|
||||
)
|
||||
print("State transactions have been completed")
|
||||
|
||||
items = d.get_bulk_state(store_name="statestore", keys=["key1", "key2"]).items
|
||||
print(f"Got items: {[i.data for i in items]}")
|
||||
```
|
||||
|
||||
Now run your program with:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- python pythonState.py
|
||||
```
|
||||
|
||||
You should see an output similar to the following:
|
||||
|
||||
```md
|
||||
Starting Dapr with id Singerchecker-Player. HTTP Port: 59533. gRPC Port: 59534
|
||||
== DAPR == time="2021-01-06T22:18:14.1246721-08:00" level=info msg="starting Dapr Runtime -- version 0.11.3 -- commit a1a8e11" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:18:14.1346254-08:00" level=info msg="standalone mode configured" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:18:14.2747063-08:00" level=info msg="component loaded. name: statestore, type: state.redis" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:18:14.2757062-08:00" level=info msg="API gRPC server is running on port 59534" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
|
||||
== DAPR == time="2021-01-06T22:18:14.2767059-08:00" level=info msg="dapr initialized. Status: Running. Init Elapsed 142.0805ms" app_id=Singerchecker-Player scope=dapr.runtime type=log ver=0.11.3
|
||||
|
||||
Checking if Dapr sidecar is listening on GRPC port 59534
|
||||
Dapr sidecar is up and running.
|
||||
Updating metadata for app command: python pythonState.py
|
||||
You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == State transactions have been completed
|
||||
== APP == Got items: [b'value1', b'']
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Transactional state is supported by extending the `TransactionalState` base object, which hooks into your
|
||||
object via setters and getters to provide a transaction. Previously you created your own state object;
|
||||
now you'll ask the dependency injection framework to build a transactional one for you.
|
||||
|
||||
Modify the `state-example.php` file again:
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
#[\Dapr\State\Attributes\StateStore('statestore', \Dapr\consistency\EventualLastWrite::class)]
|
||||
class MyState extends \Dapr\State\TransactionalState {
|
||||
public string $key1 = 'value1';
|
||||
public string $key2 = 'value2';
|
||||
}
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(MyState $obj, \Psr\Log\LoggerInterface $logger, \Dapr\State\StateManager $stateManager) {
|
||||
$obj->begin();
|
||||
$obj->key1 = 'hello world';
|
||||
$obj->key2 = 'value3';
|
||||
$obj->commit();
|
||||
$logger->alert('Transaction committed!');
|
||||
|
||||
// begin a new transaction which reloads from the store
|
||||
$obj->begin();
|
||||
$logger->alert("Got value: {key1}, {key2}", ['key1' => $obj->key1, 'key2' => $obj->key2]);
|
||||
});
|
||||
```
|
||||
|
||||
Run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp -- php state-example.php
|
||||
```
|
||||
|
||||
Observe the following output:
|
||||
|
||||
```md
|
||||
✅ You're up and running! Both Dapr and your app logs will appear here.
|
||||
|
||||
== APP == [2021-02-12T17:10:06.837110+01:00] APP.ALERT: Transaction committed! [] []
|
||||
|
||||
== APP == [2021-02-12T17:10:06.840857+01:00] APP.ALERT: Got value: hello world, value3 {"key1":"hello world","key2":"value3"} []
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
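All of the tabs above ultimately send the same payload shape to `POST /v1.0/state/<store-name>/transaction`: a JSON object with an `operations` array, where each operation is an `upsert` or a `delete`. The sketch below composes that body in Python; the `upsert`/`delete` helpers are illustrative, not SDK functions.

```python
import json

def upsert(key, value):
    # An upsert operation wraps the key/value in a "request" object.
    return {"operation": "upsert", "request": {"key": key, "value": value}}

def delete(key):
    # A delete operation only needs the key.
    return {"operation": "delete", "request": {"key": key}}

body = json.dumps({"operations": [upsert("key1", "newValue1"), delete("key2")]})
print(body)
```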
|
||||
|
||||
## Next steps
|
||||
|
||||
- Read the full [State API reference]({{< ref state_api.md >}})
|
||||
- Try one of the [Dapr SDKs]({{< ref sdks >}})
|
||||
- Build a [stateful service]({{< ref howto-stateful-service.md >}})
|
||||
|
|
|
@ -0,0 +1,95 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Share state between applications"
|
||||
linkTitle: "How-To: Share state between applications"
|
||||
weight: 400
|
||||
description: "Choose different strategies for sharing state between different applications"
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Dapr offers developers different ways to share state between applications.
|
||||
|
||||
Different architectures might have different needs when it comes to sharing state. For example, in one scenario you may want to encapsulate all state within a given application and have Dapr manage the access for you. In a different scenario, you may need to have two applications working on the same state be able to get and save the same keys.
|
||||
|
||||
To enable state sharing, Dapr supports the following key prefix strategies:
|
||||
|
||||
* **`appid`** - This is the default strategy. The `appid` prefix allows state to be managed only by the app with the specified `appid`. All state keys are prefixed with the `appid` and are scoped to the application.
|
||||
|
||||
* **`name`** - This setting uses the name of the state store component as the prefix. Multiple applications can share the same state for a given state store.
|
||||
|
||||
* **`none`** - This setting uses no prefixing. Multiple applications share state across different state stores.
|
||||
|
||||
## Specifying a state prefix strategy
|
||||
|
||||
To specify a prefix strategy, add a metadata key named `keyPrefix` on a state component:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: production
|
||||
spec:
|
||||
type: state.redis
|
||||
version: v1
|
||||
metadata:
|
||||
- name: keyPrefix
|
||||
value: <key-prefix-strategy>
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
The following examples show what a saved state key looks like with each of the supported prefix strategies:
|
||||
|
||||
### `appid` (default)
|
||||
|
||||
A Dapr application with app id `myApp` is saving state into a state store named `redis`:
|
||||
|
||||
```shell
|
||||
curl -X POST http://localhost:3500/v1.0/state/redis \
|
||||
  -H "Content-Type: application/json" \
|
||||
-d '[
|
||||
{
|
||||
"key": "darth",
|
||||
"value": "nihilus"
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
The key will be saved as `myApp||darth`.
|
||||
|
||||
### `name`
|
||||
|
||||
A Dapr application with app id `myApp` is saving state into a state store named `redis`:
|
||||
|
||||
```shell
|
||||
curl -X POST http://localhost:3500/v1.0/state/redis \
|
||||
  -H "Content-Type: application/json" \
|
||||
-d '[
|
||||
{
|
||||
"key": "darth",
|
||||
"value": "nihilus"
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
The key will be saved as `redis||darth`.
|
||||
|
||||
### `none`
|
||||
|
||||
A Dapr application with app id `myApp` is saving state into a state store named `redis`:
|
||||
|
||||
```shell
|
||||
curl -X POST http://localhost:3500/v1.0/state/redis \
|
||||
  -H "Content-Type: application/json" \
|
||||
-d '[
|
||||
{
|
||||
"key": "darth",
|
||||
"value": "nihilus"
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
The key will be saved as `darth`.
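The three behaviors above can be summarized in a small sketch; the `compose_key` function is purely illustrative of the prefixing rules and not a Dapr API.

```python
def compose_key(key, strategy, app_id=None, store_name=None):
    """Return the key as it would be stored under each keyPrefix strategy."""
    if strategy == "appid":
        return f"{app_id}||{key}"
    if strategy == "name":
        return f"{store_name}||{key}"
    return key  # "none": the key is stored as-is

print(compose_key("darth", "appid", app_id="myApp"))     # myApp||darth
print(compose_key("darth", "name", store_name="redis"))  # redis||darth
print(compose_key("darth", "none"))                      # darth
```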
|
||||
|
|
@ -24,7 +24,7 @@ To change the state store being used, replace the YAML under `/components` with
|
|||
|
||||
### Kubernetes
|
||||
|
||||
See the instructions [here]({{<ref setup-state-store>}}) on how to setup different state stores on Kubernetes.
|
||||
|
||||
## Strong and Eventual consistency
|
||||
|
||||
|
|
|
@ -2,8 +2,8 @@
|
|||
type: docs
|
||||
title: "Work with backend state stores"
|
||||
linkTitle: "Backend stores"
|
||||
weight: 500
|
||||
description: "Guides for working with specific backend state stores"
|
||||
---
|
||||
|
||||
Explore the **Operations** section to see a list of [supported state stores]({{<ref supported-state-stores.md>}}) and how to setup [state store components]({{<ref setup-state-store.md>}}).
|
|
@ -6,7 +6,7 @@ weight: 2000
|
|||
description: "Use Redis as a backend state store"
|
||||
---
|
||||
|
||||
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{<ref state_api.md>}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
|
||||
|
||||
>**NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.
|
||||
|
||||
|
|
|
@ -19,17 +19,17 @@ The easiest way to connect to your SQL Server instance is to use the [Azure Data
|
|||
To get all state keys associated with application "myapp", use the query:
|
||||
|
||||
```sql
|
||||
SELECT * FROM states WHERE [Key] LIKE 'myapp||%'
|
||||
```
|
||||
|
||||
The above query returns all rows with id containing "myapp||", which is the prefix of the state keys.
|
||||
|
||||
## 3. Get specific state data
|
||||
|
||||
To get the state data by a key "balance" for the application "myapp", use the query:
|
||||
|
||||
```sql
|
||||
SELECT * FROM states WHERE [Key] = 'myapp||balance'
|
||||
```
|
||||
|
||||
Then, read the **Data** field of the returned row.
|
||||
|
@ -37,7 +37,7 @@ Then, read the **Data** field of the returned row.
|
|||
To get the state version/ETag, use the command:
|
||||
|
||||
```sql
|
||||
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp-balance'
|
||||
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp||balance'
|
||||
```
|
||||
|
||||
## 4. Get filtered state data
|
||||
|
@ -53,13 +53,13 @@ SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue'
|
|||
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
|
||||
|
||||
```sql
|
||||
SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%'
|
||||
SELECT * FROM states WHERE [Key] LIKE 'mypets||cat||leroy||%'
|
||||
```
|
||||
|
||||
And to get a specific actor state such as "food", use the command:
|
||||
|
||||
```sql
|
||||
SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food'
|
||||
SELECT * FROM states WHERE [Key] = 'mypets||cat||leroy||food'
|
||||
```
|
||||
|
||||
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.
|
||||
|
|
|
@ -8,65 +8,47 @@ description: "Overview of the state management building block"
|
|||
|
||||
## Introduction
|
||||
|
||||
Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores]({{< ref supported-state-stores.md >}}), without adding or learning a third party SDK.
|
||||
Using state management, your application can store data as key/value pairs in the [supported state stores]({{< ref supported-state-stores.md >}}).
|
||||
|
||||
When using state management your application can leverage several features that would otherwise be complicated and error-prone to build yourself such as:
|
||||
When using state management, your application can leverage features that would otherwise be complicated and error-prone to build yourself, such as:
|
||||
|
||||
- Distributed concurrency and data consistency
|
||||
- Retry policies
|
||||
- Bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations
|
||||
|
||||
See below for a diagram of state management's high level architecture.
|
||||
Your application can use Dapr's state management API to save and read key/value pairs using a state store component, as shown in the diagram below. For example, by using HTTP POST you can save key/value pairs and by using HTTP GET you can read a key and have its value returned.
|
||||
|
||||
<img src="/images/state-management-overview.png" width=900>
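As an illustrative sketch (not an official SDK sample), the save and read request shapes can be composed as follows. The sidecar HTTP port `3500` and the store name `statestore` are assumptions here; adjust both to match your setup.

```python
import json

# Assumptions: Dapr sidecar HTTP port 3500 and a state store component
# named "statestore" -- adjust both to match your configuration.
DAPR_PORT = 3500
STORE = "statestore"

def save_url():
    # Target of the HTTP POST that saves key/value pairs
    return f"http://localhost:{DAPR_PORT}/v1.0/state/{STORE}"

def get_url(key):
    # Target of the HTTP GET that reads a key's value back
    return f"http://localhost:{DAPR_PORT}/v1.0/state/{STORE}/{key}"

# The POST body is a JSON array of key/value pairs
body = json.dumps([{"key": "balance", "value": 42}])

print(save_url())          # http://localhost:3500/v1.0/state/statestore
print(get_url("balance"))  # http://localhost:3500/v1.0/state/statestore/balance
```

You would send `body` with an HTTP client of your choice; the sketch only shows the URL and payload shapes.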
|
||||
|
||||
|
||||
## Features
|
||||
|
||||
- [State management API](#state-management-api)
|
||||
- [State store behaviors](#state-store-behaviors)
|
||||
- [Concurrency](#concurrency)
|
||||
- [Consistency](#consistency)
|
||||
- [Retry policies](#retry-policies)
|
||||
- [Bulk operations](#bulk-operations)
|
||||
- [Querying state store directly](#querying-state-store-directly)
|
||||
### Pluggable state stores
|
||||
|
||||
### State management API
|
||||
Dapr data stores are modeled as components, which can be swapped out without any changes to your service code. See [supported state stores]({{< ref supported-state-stores >}}) to see the list.
|
||||
|
||||
Developers can use the state management API to retrieve, save and delete state values by providing keys.
|
||||
### Configurable state store behavior
|
||||
|
||||
Dapr data stores are components. Dapr ships with [Redis](https://redis.io) out-of-box for local development in self hosted mode. Dapr allows you to plug in other data stores as components such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/).
|
||||
Dapr allows developers to attach additional metadata to a state operation request that describes how the request is expected to be handled. You can attach:
|
||||
- Concurrency requirements
|
||||
- Consistency requirements
|
||||
|
||||
Visit [State API]({{< ref state_api.md >}}) for more information.
|
||||
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern.
|
||||
|
||||
> **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance. This allows multiple Dapr instances to share the same state store.
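A minimal sketch of this key composition, using the `<app-id>||<key>` format seen in the query examples in these docs (the helper names are illustrative):

```python
def to_store_key(app_id: str, key: str) -> str:
    # Compose the key actually written to the store: "<app-id>||<key>"
    return f"{app_id}||{key}"

def from_store_key(store_key: str):
    # Split a stored key back into (app_id, key); splits at the first "||"
    app_id, _, key = store_key.partition("||")
    return app_id, key

print(to_store_key("myapp", "balance"))  # myapp||balance
print(from_store_key("myapp||balance"))  # ('myapp', 'balance')
```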
|
||||
|
||||
### State store behaviors
|
||||
|
||||
Dapr allows developers to attach to a state operation request additional metadata that describes how the request is expected to be handled. For example, you can attach concurrency requirements, consistency requirements, and retry policy to any state operation requests.
|
||||
|
||||
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfill the requests.
|
||||
|
||||
Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilities.
|
||||
|
||||
The following table summarizes the capabilities of existing data store implementations.
|
||||
|
||||
| Store | Strong consistent write | Strong consistent read | ETag |
|
||||
|-------------------|-------------------------|------------------------|------|
|
||||
| Cosmos DB | Yes | Yes | Yes |
|
||||
| PostgreSQL | Yes | Yes | Yes |
|
||||
| Redis | Yes | Yes | Yes |
|
||||
| Redis (clustered) | Yes | No | Yes |
|
||||
| SQL Server | Yes | Yes | Yes |
|
||||
[Not all stores are created equal]({{< ref supported-state-stores.md >}}). To ensure portability of your application you can query the capabilities of the store and make your code adaptive to different store capabilities.
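For example, a sketch of capability-adaptive code; the store names and capability flags below are illustrative, not a Dapr API:

```python
# Hypothetical capability table -- entries are illustrative only.
CAPABILITIES = {
    "redis":           {"strong_read": True,  "etag": True},
    "redis-clustered": {"strong_read": False, "etag": True},
}

def choose_concurrency(store: str) -> str:
    # Prefer first-write-wins (ETags) when the store supports them;
    # fall back to last-write-wins otherwise.
    caps = CAPABILITIES.get(store, {})
    return "first-write-wins" if caps.get("etag") else "last-write-wins"

print(choose_concurrency("redis"))    # first-write-wins
print(choose_concurrency("unknown"))  # last-write-wins
```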
|
||||
|
||||
### Concurrency
|
||||
|
||||
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. And when the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
|
||||
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an ETag property to the returned state. When the user code tries to update or delete a state, it’s expected to attach the ETag either through the request body for updates or the `If-Match` header for deletes. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
|
||||
|
||||
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#Retry-Policies) to compensate for such conflicts when using ETags.
|
||||
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a retry policy to compensate for such conflicts when using ETags.
|
||||
|
||||
If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
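The two patterns can be sketched with a toy in-memory store (illustrative only; real state stores enforce the ETag check server-side):

```python
import uuid

class Store:
    """Toy sketch of ETag-based optimistic concurrency control."""
    def __init__(self):
        self._data = {}  # key -> (value, etag)

    def get(self, key):
        return self._data.get(key)  # (value, etag) or None

    def set(self, key, value, etag=None):
        current = self._data.get(key)
        # With an ETag: first-write-wins -- reject stale writers.
        if etag is not None and current is not None and current[1] != etag:
            raise ValueError("ETag mismatch")
        # Without an ETag: last-write-wins -- always overwrite.
        self._data[key] = (value, uuid.uuid4().hex)
        return self._data[key][1]

store = Store()
e1 = store.set("balance", 10)        # initial write, returns an ETag
store.set("balance", 20, etag=e1)    # succeeds: ETag matches
try:
    store.set("balance", 30, etag=e1)  # fails: e1 is now stale
except ValueError:
    print("stale write rejected")
```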
|
||||
|
||||
> **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
|
||||
{{% alert title="Note on ETags" color="primary" %}}
|
||||
For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
|
||||
{{% /alert %}}
|
||||
|
||||
Read the [API reference]({{< ref state_api.md >}}) to learn how to set concurrency options.
|
||||
|
||||
### Consistency
|
||||
|
||||
|
@ -74,15 +56,18 @@ Dapr supports both **strong consistency** and **eventual consistency**, with eve
|
|||
|
||||
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
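As a rough sketch of the difference (the majority-quorum size shown is an assumption; the actual behavior depends on the underlying store):

```python
def acks_required(replicas: int, consistency: str) -> int:
    # How many replica acknowledgements a write waits for (sketch only).
    if consistency == "strong":
        return replicas // 2 + 1  # a majority quorum
    return 1                      # eventual: first replica that accepts

print(acks_required(3, "strong"))    # 2
print(acks_required(3, "eventual"))  # 1
```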
|
||||
|
||||
### Retry policies
|
||||
|
||||
Dapr allows you to attach a retry policy to any write request. A policy is described by an **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
|
||||
Read the [API reference]({{< ref state_api.md >}}) to learn how to set consistency options.
|
||||
|
||||
### Bulk operations
|
||||
|
||||
Dapr supports two types of bulk operations - **bulk** or **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
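The distinction can be sketched with a toy store (illustrative Python, not the Dapr API): a bulk applies each request independently, while a multi stages everything and commits only if every request succeeds.

```python
def apply_bulk(store, ops):
    # Apply each op independently; earlier successes survive later failures.
    for op in ops:
        op(store)

def apply_multi(store, ops):
    # Apply all ops atomically: work on a copy, commit only if all succeed.
    staged = dict(store)
    for op in ops:
        op(staged)  # any exception aborts before commit
    store.clear()
    store.update(staged)

store = {}
apply_multi(store, [
    lambda s: s.__setitem__("a", 1),
    lambda s: s.__setitem__("b", 2),
])
print(store)  # {'a': 1, 'b': 2}
```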
|
||||
|
||||
### Querying state store directly
|
||||
Read the [API reference]({{< ref state_api.md >}}) to learn how use bulk and multi options.
|
||||
|
||||
### Actor state
|
||||
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the `actorStateStore` property to `true` in the metadata section of the state store component. Actor state is stored with a specific scheme in transactional state stores, which allows for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [API reference]({{< ref state_api.md >}}) to learn more about state stores for actors and the [actors API reference]({{< ref actors_api.md >}}).
|
||||
|
||||
### Query state store directly
|
||||
|
||||
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the [underlying state store]({{< ref query-state-store >}}).
|
||||
|
||||
|
@ -92,8 +77,6 @@ For example, to get all state keys associated with an application ID "myApp" in
|
|||
KEYS "myApp*"
|
||||
```
|
||||
|
||||
> **NOTE:** See [How to query Redis store]({{< ref query-redis-store.md >}} ) for details on how to query a Redis store.
|
||||
|
||||
#### Querying actor state
|
||||
|
||||
If the data store supports SQL queries, you can query an actor's state using SQL queries. For example use:
|
||||
|
@ -108,10 +91,20 @@ You can also perform aggregate queries across actor instances, avoiding the comm
|
|||
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
|
||||
```
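The same aggregation can be sketched in code against the `<app-id>||<actor-type>||<actor-id>||<key>` scheme (the sample data below is made up):

```python
from statistics import mean

# Illustrative stored actor state, keyed as <app-id>||<actor-type>||<actor-id>||<key>
table = {
    "myapp||thermometer||1||temperature": 20.0,
    "myapp||thermometer||2||temperature": 24.0,
    "myapp||cat||leroy||food": "tuna",
}

def avg_across_instances(table, app_id, actor_type, state_key):
    # Equivalent of the SQL AVG query: average one state key over all instances.
    vals = [v for k, v in table.items()
            if k.split("||")[:2] == [app_id, actor_type]
            and k.split("||")[-1] == state_key]
    return mean(vals)

print(avg_across_instances(table, "myapp", "thermometer", "temperature"))  # 22.0
```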
|
||||
|
||||
> **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the actor instances.
|
||||
{{% alert title="Note on direct queries" color="primary" %}}
|
||||
Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the Dapr state management or actors APIs.
|
||||
{{% /alert %}}
|
||||
|
||||
### State management API
|
||||
|
||||
The API for state management can be found in the [state management API reference]({{< ref state_api.md >}}) which describes how to retrieve, save and delete state values by providing keys.
|
||||
|
||||
## Next steps
|
||||
|
||||
* Follow the [state store setup guides]({{< ref setup-state-store >}})
|
||||
* Read the [state management API specification]({{< ref state_api.md >}})
|
||||
* Read the [actors API specification]({{< ref actors_api.md >}})
|
||||
* Follow these guides on:
|
||||
* [How-To: Save and get state]({{< ref howto-get-save-state.md >}})
|
||||
* [How-To: Build a stateful service]({{< ref howto-stateful-service.md >}})
|
||||
* [How-To: Share state between applications]({{< ref howto-share-state.md >}})
|
||||
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
|
||||
* List of [state store components]({{< ref supported-state-stores.md >}})
|
||||
* Read the [state management API reference]({{< ref state_api.md >}})
|
||||
* Read the [actors API reference]({{< ref actors_api.md >}})
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Debugging Dapr applications and the Dapr control plane"
|
||||
linkTitle: "Debugging"
|
||||
weight: 60
|
||||
description: "Guides on how to debug Dapr applications and the Dapr control plane"
|
||||
---
|
|
@ -0,0 +1,26 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Bridge to Kubernetes support for Dapr services"
|
||||
linkTitle: "Bridge to Kubernetes"
|
||||
weight: 300
|
||||
description: "Debug Dapr apps locally while still connected to your Kubernetes cluster"
|
||||
---
|
||||
|
||||
Bridge to Kubernetes allows you to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. This type of debugging is often called *local tunnel debugging*.
|
||||
|
||||
{{< button text="Learn more about Bridge to Kubernetes" link="https://aka.ms/bridge-vscode-dapr" >}}
|
||||
|
||||
## Debug Dapr apps
|
||||
|
||||
Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/distributed-calculator):
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/rxwg-__otso" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
{{% alert title="Isolation mode" color="warning" %}}
|
||||
[Isolation mode](https://aka.ms/bridge-isolation-vscode-dapr) is currently not supported with Dapr apps. Make sure to launch Bridge to Kubernetes mode without isolation.
|
||||
{{% /alert %}}
|
||||
|
||||
## Further reading
|
||||
|
||||
- [Bridge to Kubernetes documentation](https://code.visualstudio.com/docs/containers/bridge-to-kubernetes)
|
||||
- [VSCode integration]({{< ref vscode >}})
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Debug Dapr in Kubernetes mode"
|
||||
linkTitle: "Kubernetes"
|
||||
weight: 200
|
||||
description: "How to debug Dapr on your Kubernetes cluster"
|
||||
---
|
|
@ -0,0 +1,114 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Debug Dapr control plane on Kubernetes"
|
||||
linkTitle: "Dapr control plane"
|
||||
weight: 1000
|
||||
description: "How to debug Dapr control plane on your Kubernetes cluster"
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Sometimes it is necessary to understand what's going on in the Dapr control plane (that is, the Kubernetes services), including `dapr-sidecar-injector`, `dapr-operator`, `dapr-placement`, and `dapr-sentry`, especially when you are diagnosing your Dapr application and wonder if there's something wrong in Dapr itself. Additionally, you may be developing a new feature for Dapr on Kubernetes and want to debug your code.
|
||||
|
||||
This guide will cover how to use Dapr debugging binaries to debug the Dapr services on your Kubernetes cluster.
|
||||
|
||||
## Debugging Dapr Kubernetes services
|
||||
|
||||
### Pre-requisites
|
||||
|
||||
- Familiarize yourself with [this guide]({{< ref kubernetes-deploy.md >}}) to learn how to deploy Dapr to your Kubernetes cluster.
|
||||
- Setup your [dev environment](https://github.com/dapr/dapr/blob/master/docs/development/developing-dapr.md)
|
||||
- [Helm](https://github.com/helm/helm/releases)
|
||||
|
||||
### 1. Build Dapr debugging binaries
|
||||
|
||||
In order to debug Dapr Kubernetes services, it's required to rebuild all Dapr binaries and Docker images to disable compiler optimization. To do this, execute the following commands:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/dapr/dapr.git
|
||||
cd dapr
|
||||
make release GOOS=linux GOARCH=amd64 DEBUG=1
|
||||
```
|
||||
|
||||
>On Windows, download [MinGW](https://sourceforge.net/projects/mingw/files/MinGW/Extension/make/mingw32-make-3.80-3/) and use `mingw32-make.exe` instead of `make`.
|
||||
|
||||
In the above command, `DEBUG` is set to `1` to disable compiler optimization. `GOOS=linux` and `GOARCH=amd64` are also necessary since the binaries will be packaged into a Linux-based Docker image in the next step.
|
||||
|
||||
The binaries can be found in the 'dist/linux_amd64/debug' sub-directory under the 'dapr' directory.
|
||||
|
||||
### 2. Build Dapr debugging Docker images
|
||||
|
||||
Use the following commands to package the debugging binaries into Docker images. Before this, you need to log in to your docker.io account; if you don't have one yet, you can register at "https://hub.docker.com/".
|
||||
|
||||
```bash
|
||||
export DAPR_TAG=dev
|
||||
export DAPR_REGISTRY=<your docker.io id>
|
||||
docker login
|
||||
make docker-push DEBUG=1
|
||||
```
|
||||
|
||||
Once the Dapr Docker images are built and pushed to Docker Hub, you are ready to re-install Dapr in your Kubernetes cluster.
|
||||
|
||||
### 3. Install Dapr debugging binaries
|
||||
|
||||
If Dapr has already been installed in your Kubernetes cluster, uninstall it first:
|
||||
|
||||
```bash
|
||||
dapr uninstall -k
|
||||
```
|
||||
|
||||
We will use 'helm' to install the Dapr debugging binaries. In the following sections, we will use the Dapr operator as an example to demonstrate how to configure, install, and debug Dapr services in a Kubernetes environment.
|
||||
|
||||
First configure a values file with these options:
|
||||
|
||||
```yaml
|
||||
global:
|
||||
registry: docker.io/<your docker.io id>
|
||||
tag: "dev-linux-amd64"
|
||||
dapr_operator:
|
||||
debug:
|
||||
enabled: true
|
||||
initialDelaySeconds: 3000
|
||||
```
|
||||
|
||||
{{% alert title="Notice" color="primary" %}}
|
||||
If you need to debug the startup time of Dapr services, configure `initialDelaySeconds` to a very large value, e.g. "3000" seconds. Otherwise, configure it to a short value, e.g. "3" seconds.
|
||||
{{% /alert %}}
|
||||
|
||||
Then, if you haven't already, change into the 'dapr' directory cloned from GitHub at the beginning of this guide, and execute the following command:
|
||||
|
||||
```bash
|
||||
helm install dapr charts/dapr --namespace dapr-system --values values.yml --wait
|
||||
```
|
||||
|
||||
### 4. Forward debugging port
|
||||
|
||||
To debug the target Dapr service (the Dapr operator in this case), its pre-configured debug port needs to be visible to your IDE. To achieve this, first find the target Dapr service's pod:
|
||||
|
||||
```bash
|
||||
$ kubectl get pods -n dapr-system -o wide
|
||||
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
dapr-dashboard-64b46f98b6-dl2n9 1/1 Running 0 61s 172.17.0.9 minikube <none> <none>
|
||||
dapr-operator-7878f94fcd-6bfx9 1/1 Running 1 61s 172.17.0.7 minikube <none> <none>
|
||||
dapr-placement-server-0 1/1 Running 1 61s 172.17.0.8 minikube <none> <none>
|
||||
dapr-sentry-68c7d4c7df-sc47x 1/1 Running 0 61s 172.17.0.6 minikube <none> <none>
|
||||
dapr-sidecar-injector-56c8f489bb-t2st9 1/1 Running 0 61s 172.17.0.10 minikube <none> <none>
|
||||
```
|
||||
|
||||
Then use kubectl's `port-forward` command to expose the internal debug port to the external IDE:
|
||||
|
||||
```bash
|
||||
$ kubectl port-forward dapr-operator-7878f94fcd-6bfx9 40000:40000 -n dapr-system
|
||||
|
||||
Forwarding from 127.0.0.1:40000 -> 40000
|
||||
Forwarding from [::1]:40000 -> 40000
|
||||
```
|
||||
|
||||
All done. Now you can point to port 40000 and start a remote debug session from your favorite IDE.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Overview of Dapr on Kubernetes]({{< ref kubernetes-overview >}})
|
||||
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
|
||||
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)
|
|
@ -0,0 +1,95 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Debug daprd on Kubernetes"
|
||||
linkTitle: "Dapr sidecar"
|
||||
weight: 2000
|
||||
description: "How to debug the Dapr sidecar (daprd) on your Kubernetes cluster"
|
||||
---
|
||||
|
||||
|
||||
## Overview
|
||||
|
||||
Sometimes it is necessary to understand what's going on in the Dapr sidecar (daprd), which runs as a sidecar next to your application, especially when you diagnose your Dapr application and wonder if there's something wrong in Dapr itself. Additionally, you may be developing a new feature for Dapr on Kubernetes and want to debug your code.
|
||||
|
||||
This guide will cover how to use built-in Dapr debugging to debug the Dapr sidecar in your Kubernetes pods.
|
||||
|
||||
## Pre-requisites
|
||||
|
||||
- Refer to [this guide]({{< ref kubernetes-deploy.md >}}) to learn how to deploy Dapr to your Kubernetes cluster.
|
||||
- Follow [this guide]({{< ref "debug-dapr-services.md">}}) to build the Dapr debugging binaries you will be deploying in the next step.
|
||||
|
||||
|
||||
## Initialize Dapr in debug mode
|
||||
|
||||
If Dapr has already been installed in your Kubernetes cluster, uninstall it first:
|
||||
|
||||
```bash
|
||||
dapr uninstall -k
|
||||
```
|
||||
We will use 'helm' to install the Dapr debugging binaries. For more information, refer to [Install with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
|
||||
|
||||
First configure a values file named `values.yml` with these options:
|
||||
|
||||
```yaml
|
||||
global:
|
||||
registry: docker.io/<your docker.io id>
|
||||
tag: "dev-linux-amd64"
|
||||
```
|
||||
|
||||
Then change into the 'dapr' directory of your cloned [dapr/dapr repository](https://github.com/dapr/dapr) and execute the following command:
|
||||
|
||||
```bash
|
||||
helm install dapr charts/dapr --namespace dapr-system --values values.yml --wait
|
||||
```
|
||||
|
||||
To enable debug mode for daprd, you need to put an extra annotation `dapr.io/enable-debug` in your application's deployment file. Let's use [quickstarts/hello-kubernetes](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) as an example. Modify 'deploy/node.yaml' like below:
|
||||
|
||||
```diff
|
||||
diff --git a/hello-kubernetes/deploy/node.yaml b/hello-kubernetes/deploy/node.yaml
|
||||
index 23185a6..6cdb0ae 100644
|
||||
--- a/hello-kubernetes/deploy/node.yaml
|
||||
+++ b/hello-kubernetes/deploy/node.yaml
|
||||
@@ -33,6 +33,7 @@ spec:
|
||||
dapr.io/enabled: "true"
|
||||
dapr.io/app-id: "nodeapp"
|
||||
dapr.io/app-port: "3000"
|
||||
+ dapr.io/enable-debug: "true"
|
||||
spec:
|
||||
containers:
|
||||
- name: node
|
||||
```
|
||||
|
||||
The annotation `dapr.io/enable-debug` hints the Dapr injector to inject the Dapr sidecar in debug mode. You can also specify the debug port with the annotation `dapr.io/debug-port`; otherwise, the default port is "40000".
|
||||
|
||||
Deploy the application with the following command. For the complete guide refer to the [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes):
|
||||
|
||||
```bash
|
||||
kubectl apply -f ./deploy/node.yaml
|
||||
```
|
||||
|
||||
Figure out the target application's pod name with the following command:
|
||||
|
||||
```bash
|
||||
$ kubectl get pods
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nodeapp-78866448f5-pqdtr 1/2 Running 0 14s
|
||||
```
|
||||
|
||||
Then use kubectl's `port-forward` command to expose the internal debug port to the external IDE:
|
||||
|
||||
```bash
|
||||
$ kubectl port-forward nodeapp-78866448f5-pqdtr 40000:40000
|
||||
|
||||
Forwarding from 127.0.0.1:40000 -> 40000
|
||||
Forwarding from [::1]:40000 -> 40000
|
||||
```
|
||||
|
||||
All done. Now you can point to port 40000 and start a remote debug session to daprd from your favorite IDE.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Overview of Dapr on Kubernetes]({{< ref kubernetes-overview >}})
|
||||
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
|
||||
- [Debug Dapr services on Kubernetes]({{< ref debug-dapr-services >}})
|
||||
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "IntelliJ"
|
||||
linkTitle: "IntelliJ"
|
||||
weight: 1000
|
||||
weight: 2000
|
||||
description: "Configuring IntelliJ community edition for debugging with Dapr"
|
||||
---
|
||||
|
||||
|
@ -23,9 +23,44 @@ Let's get started!
|
|||
|
||||
## Add Dapr as an 'External Tool'
|
||||
|
||||
First, quit IntelliJ.
|
||||
First, quit IntelliJ before modifying the configurations file directly.
|
||||
|
||||
Create or edit the file in `$HOME/.IdeaIC2019.3/config/tools/External\ Tools.xml` (change IntelliJ version in path if needed) to add a new `<tool></tool>` entry:
|
||||
### IntelliJ configuration file location
|
||||
For versions [2020.1](https://www.jetbrains.com/help/idea/2020.1/tuning-the-ide.html#config-directory) and above the configuration files for tools should be located in:
|
||||
|
||||
{{< tabs Windows Linux MacOS >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```powershell
|
||||
%USERPROFILE%\AppData\Roaming\JetBrains\IntelliJIdea2020.1\tools\
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
```shell
|
||||
$HOME/.config/JetBrains/IntelliJIdea2020.1/tools/
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
```shell
|
||||
~/Library/Application Support/JetBrains/IntelliJIdea2020.1/tools/
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
> The configuration file location is different for version 2019.3 or prior. See [here](https://www.jetbrains.com/help/idea/2019.3/tuning-the-ide.html#config-directory) for more details.
|
||||
|
||||
Change the version of IntelliJ in the path if needed.
|
||||
|
||||
Create or edit the file in `<CONFIG PATH>/tools/External\ Tools.xml`. The `<CONFIG PATH>` is OS dependent, as seen above.
|
||||
|
||||
Add a new `<tool></tool>` entry:
|
||||
|
||||
```xml
|
||||
<toolSet name="External Tools">
|
||||
|
@ -33,10 +68,10 @@ Create or edit the file in `$HOME/.IdeaIC2019.3/config/tools/External\ Tools.xml
|
|||
<!-- 1. Each tool has its own app-id, so create one per application to be debugged -->
|
||||
<tool name="dapr for DemoService in examples" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
|
||||
<exec>
|
||||
<!-- 2. For Linux or MacOS use: /usr/local/bin/daprd -->
|
||||
<option name="COMMAND" value="C:\dapr\daprd.exe" />
|
||||
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
|
||||
<option name="COMMAND" value="C:\dapr\dapr.exe" />
|
||||
<!-- 3. Choose app, http and grpc ports that do not conflict with other daprd command entries (placement address should not change). -->
|
||||
<option name="PARAMETERS" value="-app-id demoservice -app-port 3000 -dapr-http-port 3005 -dapr-grpc-port 52000 -placement-host-address localhost:50005" />
|
||||
<option name="PARAMETERS" value="run -app-id demoservice -app-port 3000 -dapr-http-port 3005 -dapr-grpc-port 52000" />
|
||||
<!-- 4. Use the folder where the `components` folder is located -->
|
||||
<option name="WORKING_DIRECTORY" value="C:/Code/dapr/java-sdk/examples" />
|
||||
</exec>
|
||||
|
@ -45,7 +80,7 @@ Create or edit the file in `$HOME/.IdeaIC2019.3/config/tools/External\ Tools.xml
|
|||
</toolSet>
|
||||
```
|
||||
|
||||
Optionally, you may also create a new entry for a sidecar tool that can be reused accross many projects:
|
||||
Optionally, you may also create a new entry for a sidecar tool that can be reused across many projects:
|
||||
|
||||
```xml
|
||||
<toolSet name="External Tools">
|
||||
|
@ -53,7 +88,7 @@ Optionally, you may also create a new entry for a sidecar tool that can be reuse
|
|||
<!-- 1. Reusable entry for apps with app port. -->
|
||||
<tool name="dapr with app-port" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
|
||||
<exec>
|
||||
<!-- 2. For Linux or MacOS use: /usr/bin/dapr -->
|
||||
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
|
||||
<option name="COMMAND" value="c:\dapr\dapr.exe" />
|
||||
<!-- 3. Prompts user 4 times (in order): app id, app port, Dapr's http port, Dapr's grpc port. -->
|
||||
<option name="PARAMETERS" value="run --app-id $Prompt$ --app-port $Prompt$ --dapr-http-port $Prompt$ --dapr-grpc-port $Prompt$" />
|
||||
|
@ -64,7 +99,7 @@ Optionally, you may also create a new entry for a sidecar tool that can be reuse
|
|||
<!-- 1. Reusable entry for apps without app port. -->
|
||||
<tool name="dapr without app-port" description="Dapr sidecar" showInMainMenu="false" showInEditor="false" showInProject="false" showInSearchPopup="false" disabled="false" useConsole="true" showConsoleOnStdOut="true" showConsoleOnStdErr="true" synchronizeAfterRun="true">
|
||||
<exec>
|
||||
<!-- 2. For Linux or MacOS use: /usr/bin/dapr -->
|
||||
<!-- 2. For Linux or MacOS use: /usr/local/bin/dapr -->
|
||||
<option name="COMMAND" value="c:\dapr\dapr.exe" />
|
||||
<!-- 3. Prompts user 3 times (in order): app id, Dapr's http port, Dapr's grpc port. -->
|
||||
<option name="PARAMETERS" value="run --app-id $Prompt$ --dapr-http-port $Prompt$ --dapr-grpc-port $Prompt$" />
|
||||
|
@ -108,3 +143,11 @@ After debugging, make sure you stop both `dapr` and your app in IntelliJ.
>Note: Since you launched the service(s) using the **dapr** ***run*** CLI command, the **dapr** ***list*** command will show runs from IntelliJ in the list of apps that are currently running with Dapr.

Happy debugging!

## Related links

<!-- IGNORE_LINKS -->

- [Change in IntelliJ configuration directory location](https://intellij-support.jetbrains.com/hc/en-us/articles/206544519-Directories-used-by-the-IDE-to-store-settings-caches-plugins-and-logs)

<!-- END_IGNORE -->
@ -1,24 +0,0 @@
---
type: docs
title: "VS Code remote containers"
linkTitle: "VS Code remote containers"
weight: 3000
description: "Application development and debugging with Visual Studio Code remote containers"
---

## Using remote containers for your application development

The Visual Studio Code Remote - Containers extension lets you use a Docker container as a full-featured development environment, enabling you to [develop inside a container](https://code.visualstudio.com/docs/remote/containers).

Dapr has pre-built Docker remote containers for each of the language SDKs. You can pick the one of your choice for a ready-made environment. Note that these pre-built containers automatically update to the latest Dapr release.

Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application.

These are the steps to use Dapr Remote Containers:
1. Open your application workspace in VS Code
2. In the Command Palette, select the `Remote-Containers: Add Development Container Configuration Files...` command
3. Type `dapr` to filter the list to available Dapr remote containers and choose the language container that matches your application. See the screenshot below.
4. Follow the prompts to rebuild your application in the container.

<img src="../../../../static/images/vscode_remote_containers.png" width=800>
@ -0,0 +1,7 @@
---
type: docs
title: "Visual Studio Code integration with Dapr"
linkTitle: "Visual Studio Code"
weight: 1000
description: "How to develop and run Dapr applications in Visual Studio Code"
---
@ -0,0 +1,66 @@
|
---
type: docs
title: "Dapr Visual Studio Code extension overview"
linkTitle: "Dapr extension"
weight: 10000
description: "How to develop and run Dapr applications with the Dapr extension"
---

Dapr offers a *preview* [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) for local development that provides a variety of features for managing and debugging your Dapr applications in all supported Dapr languages: .NET, Go, PHP, Python, and Java.

<a href="vscode:extension/ms-azuretools.vscode-dapr" class="btn btn-primary" role="button">Open in VSCode</a>

## Features

### Scaffold Dapr debugging tasks

The Dapr extension helps you debug your applications with Dapr using Visual Studio Code's [built-in debugging capability](https://code.visualstudio.com/Docs/editor/debugging).

Using the `Dapr: Scaffold Dapr Tasks` [Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette) operation, you can update your existing `tasks.json` and `launch.json` files to launch and configure the Dapr sidecar when you begin debugging.

1. Make sure you have a launch configuration set for your app. ([Learn more](https://code.visualstudio.com/Docs/editor/debugging))
2. Open the Command Palette with `Ctrl+Shift+P`
3. Select `Dapr: Scaffold Dapr Tasks`
4. Run your app and the Dapr sidecar with `F5` or via the Run view.
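For orientation, a scaffolded configuration looks roughly like the sketch below — a sidecar task in `tasks.json` wired to your debug configuration. Treat the task type and property names here as assumptions; the extension generates the exact schema for you, based on your existing launch configuration:

```json
// tasks.json fragment (illustrative sketch only; generated values will differ)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "daprd-debug",
      "type": "daprd",
      "appId": "myapp",
      "appPort": 3000
    }
  ]
}
```

The corresponding `launch.json` configuration then gains a `"preLaunchTask": "daprd-debug"` entry, so the sidecar is started before the debugger attaches.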

### Scaffold Dapr components

When adding Dapr to your application, you may want to have a dedicated components directory, separate from the default components initialized as part of `dapr init`.

To create a dedicated components folder with the default `statestore`, `pubsub`, and `zipkin` components, use the `Dapr: Scaffold Dapr Components` [Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette) operation.

1. Open your application directory in Visual Studio Code
2. Open the Command Palette with `Ctrl+Shift+P`
3. Select `Dapr: Scaffold Dapr Components`
4. Run your application with `dapr run --components-path ./components -- ...`
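For reference, the scaffolded `statestore.yaml` matches the default Redis state store that `dapr init` creates — shown here with the local defaults, which you can adapt to your environment:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379   # local Redis started by `dapr init`
  - name: redisPassword
    value: ""
```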

### View running Dapr applications

The Applications view shows Dapr applications running locally on your machine.

<br /><img src="/images/vscode-extension-view.png" alt="Screenshot of the Dapr VSCode extension view running applications option" width="800">

### Invoke Dapr applications

Within the Applications view, users can right-click and invoke Dapr apps via GET or POST methods, optionally specifying a payload.

<br /><img src="/images/vscode-extension-invoke.png" alt="Screenshot of the Dapr VSCode extension invoke option" width="800">

### Publish events to Dapr applications

Within the Applications view, users can right-click and publish messages to a running Dapr application, specifying the topic and payload.

Users can also publish messages to all running applications.

<br /><img src="/images/vscode-extension-publish.png" alt="Screenshot of the Dapr VSCode extension publish option" width="800">

## Additional resources

### Debugging multiple Dapr applications at the same time

Using the VS Code extension, you can debug multiple Dapr applications at the same time with [Multi-target debugging](https://code.visualstudio.com/docs/editor/debugging#_multitarget-debugging).

### Community call demo

Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=85) on how to use the Dapr VS Code extension:

<iframe width="560" height="315" src="https://www.youtube.com/embed/OtbYCBt9C34?start=85" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
@ -1,42 +1,28 @@
---
type: docs
title: "Visual Studio Code manual debugging configuration"
linkTitle: "Manual debugging"
weight: 30000
description: "How to manually set up Visual Studio Code debugging"
---

The [Dapr VSCode extension]({{< ref vscode-dapr-extension.md >}}) automates the setup of [VSCode debugging](https://code.visualstudio.com/Docs/editor/debugging).

If you instead wish to manually configure the [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) and [launch.json](https://code.visualstudio.com/Docs/editor/debugging) files to use Dapr, these are the steps.

When developing Dapr applications, you typically use the Dapr CLI to start your daprized service, similar to this:

```bash
dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
```

This generates the component YAML files (if they don't exist) so that your service can interact with the local Redis container. This is great when you are just getting started, but what if you want to attach a debugger to your service and step through the code? This is where you can use the Dapr runtime (daprd) to help facilitate this.

>Note: The Dapr runtime (daprd) will not automatically generate the component YAML files for Redis. These need to be created manually, or you need to run the Dapr CLI (`dapr`) once in order to have them created automatically.

One approach to attaching the debugger to your service is to first run daprd with the correct arguments from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps and some instruction to developers who might want to clone your repo and hit the "play" button to begin debugging.

Using the [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) and [launch.json](https://code.visualstudio.com/Docs/editor/debugging) files in Visual Studio Code, you can simplify the process and request that VS Code kick off the daprd process prior to launching the debugger.

### Modifying launch.json configurations to include a preLaunchTask

In your [launch.json](https://code.visualstudio.com/Docs/editor/debugging) file, add a [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) for each configuration that you want daprd launched. The [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) references tasks that you define in your tasks.json file. Here is an example for both Node and .NET Core. Notice the [preLaunchTasks](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) referenced: daprd-web and daprd-leaderboard.

```json
{
@ -76,19 +62,19 @@ In your [launch.json](https://code.visualstudio.com/Docs/editor/debugging) file
}
```

### Adding daprd tasks to tasks.json

You need to define a task and problem matcher for daprd in your [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) file. Here are two examples (both referenced via the [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) members above). Notice that in the case of the .NET Core daprd task (daprd-leaderboard) there is also a [dependsOn](https://code.visualstudio.com/Docs/editor/tasks#_compound-tasks) member that references the build task to ensure the latest code is being run/debugged. The [problemMatcher](https://code.visualstudio.com/Docs/editor/tasks#_defining-a-problem-matcher) is used so that VSCode can understand when the daprd process is up and running.

Let's take a quick look at the args that are being passed to the daprd command.

* -app-id -- the id (how you locate it via service invocation) of your microservice
* -app-port -- the port number that your application code is listening on
* -dapr-http-port -- the http port for the Dapr API
* -dapr-grpc-port -- the grpc port for the Dapr API
* -placement-host-address -- the location of the placement service (this should be running in Docker, as it was created when you installed Dapr and ran `dapr init`)

>Note: You need to ensure that you specify different http/grpc ports (-dapr-http-port and -dapr-grpc-port) for each daprd task that you create, otherwise you will run into port conflicts when you attempt to launch the second configuration.

```json
{
@ -173,9 +159,10 @@ Let's take a quick look at the args that are being passed to the daprd command.
}
```

### Wrapping up

Once you have made the required changes, you should be able to switch to the [debug](https://code.visualstudio.com/Docs/editor/debugging) view in VSCode and launch your daprized configurations by clicking the "play" button. If everything is configured correctly, you should see daprd launch in the VSCode terminal window, and the [debugger](https://code.visualstudio.com/Docs/editor/debugging) should attach to your application (you should see its output in the debug window).

{{% alert title="Note" color="primary" %}}
Since you didn't launch the service(s) using the **dapr** ***run*** CLI command, but instead by running **daprd**, the **dapr** ***list*** command will not show a list of apps that are currently running.
{{% /alert %}}
@ -0,0 +1,31 @@
---
type: docs
title: "Developing Dapr applications with remote dev containers"
linkTitle: "Remote dev containers"
weight: 20000
description: "How to set up a remote dev container environment with Dapr"
---

The Visual Studio Code [Remote Containers extension](https://code.visualstudio.com/docs/remote/containers) lets you use a Docker container as a full-featured development environment without installing any additional frameworks or packages to your local filesystem.

Dapr has pre-built Docker remote containers for NodeJS and C#. You can pick the one of your choice for a ready-made environment. Note that these pre-built containers automatically update to the latest Dapr release.

### Set up a remote dev container

#### Prerequisites
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
- [Visual Studio Code](https://code.visualstudio.com/)
- [VSCode Remote Development extension pack](https://aka.ms/vscode-remote/download/extension)

#### Create remote Dapr container
1. Open your application workspace in VS Code
2. In the Command Palette (`CTRL+SHIFT+P`) type and select `Remote-Containers: Add Development Container Configuration Files...`
   <br /><img src="/images/vscode-remotecontainers-addcontainer.png" alt="Screenshot of adding a remote container" width="700">
3. Type `dapr` to filter the list to available Dapr remote containers and choose the language container that matches your application. Note you may need to select `Show All Definitions...`
   <br /><img src="/images/vscode-remotecontainers-daprcontainers.png" alt="Screenshot of adding a Dapr container" width="700">
4. Follow the prompts to rebuild your application in the container.
   <br /><img src="/images/vscode-remotecontainers-reopen.png" alt="Screenshot of reopening an application in the dev container" width="700">
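Step 2 writes a `.devcontainer/devcontainer.json` into your workspace. The sketch below shows the general shape using standard devcontainer fields; the image name, extension list, and post-create command are placeholders, not the values the Dapr container definition actually pins:

```json
// .devcontainer/devcontainer.json (illustrative sketch; the generated file differs)
{
  "name": "Dapr with Node.js (example)",
  "image": "example.io/dapr-devcontainer:latest",
  "extensions": [
    "ms-azuretools.vscode-dapr"
  ],
  "forwardPorts": [3000],
  "postCreateCommand": "dapr init"
}
```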

#### Example
Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application.

<iframe width="560" height="315" src="https://www.youtube.com/embed/D2dO4aGpHcg?start=120" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
@ -1,15 +1,16 @@
---
type: docs
title: "Autoscaling a Dapr app with KEDA"
linkTitle: "Autoscale with KEDA"
description: "How to configure your Dapr application to autoscale using KEDA"
weight: 2000
---

Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge), the autoscaling of Dapr applications is managed by the hosting layer.

For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda), so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.

This how-to walks through the configuration of a scalable Dapr application, along with the back pressure on a Kafka topic; however, you can apply this approach to any of the [pub/sub components]({{< ref pubsub >}}) offered by Dapr.

## Install KEDA
@ -59,7 +60,7 @@ kubectl -n kafka exec -it kafka-client -- kafka-topics \
  --if-not-exists
```

## Deploy a Dapr Pub/Sub component

Next, we'll deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named `kafka-pubsub.yaml`:
@ -80,9 +81,9 @@ spec:
    value: autoscaling-subscriber
```

The above YAML defines the pub/sub component that your application subscribes to: the `demo-topic` we created above. If you used the Kafka Helm install instructions above, you can leave the `brokers` value as is. Otherwise, change this to the connection string to your Kafka brokers.

Also notice the `autoscaling-subscriber` value set for `consumerID`, which is used later to make sure that KEDA and your deployment use the same [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.).

Now, deploy the component to the cluster:
@ -92,7 +93,7 @@ kubectl apply -f kafka-pubsub.yaml

## Deploy KEDA autoscaler for Kafka

Next, we will deploy the KEDA scaling object that monitors the lag on the specified Kafka topic and configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out.

Paste the following into a file named `kafka_scaler.yaml`, and configure your Dapr deployment in the required place:
@ -126,7 +127,7 @@ A few things to review here in the above file:
* Similarly the `bootstrapServers` should be set to the same broker connection string used in the `kafka-pubsub.yaml` file
* The `consumerGroup` should be set to the same value as the `consumerID` in the `kafka-pubsub.yaml` file

> Note: setting the connection string, topic, and consumer group to the *same* values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly.

Next, deploy the KEDA scaler to Kubernetes:
@ -138,4 +139,4 @@ All done!

Now that the `ScaledObject` KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. More information on configuring KEDA for Kafka topics is available [here](https://keda.sh/docs/2.0/scalers/apache-kafka/).

You can now start publishing messages to your Kafka topic `demo-topic` and watch the pods autoscale when the topic lag rises above `5` messages, the threshold we defined in the KEDA scaler manifest. You can publish messages to the Kafka Dapr component by using the Dapr [Publish]({{< ref dapr-publish >}}) CLI command.
@ -0,0 +1,11 @@
---
type: docs
title: "Dapr integration policies for Azure API Management"
linkTitle: "Azure API Management"
description: "Publish APIs for Dapr services and components through Azure API Management policies"
weight: 6000
---

Azure API Management (APIM) is a way to create consistent and modern API gateways for back-end services, including those built with Dapr. Dapr support can be enabled in self-hosted API Management gateways to allow them to forward requests to Dapr services, send messages to Dapr Pub/Sub topics, or trigger Dapr output bindings. For more information, read the guide on [API Management Dapr Integration policies](https://docs.microsoft.com/en-us/azure/api-management/api-management-dapr-policies) and try out the [Dapr & Azure API Management Integration Demo](https://github.com/dapr/samples/tree/master/dapr-apim-integration).

{{< button text="Learn more" link="https://docs.microsoft.com/en-us/azure/api-management/api-management-dapr-policies" >}}
@ -0,0 +1,10 @@
---
type: docs
title: "Dapr extension for Azure Functions runtime"
linkTitle: "Azure Functions"
description: "Access Dapr capabilities from your Functions runtime application"
weight: 3000
---

Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information, read [Azure Functions extension for Dapr](https://cloudblogs.microsoft.com/opensource/2020/07/01/announcing-azure-functions-extension-for-dapr/) and visit the [Azure Functions extension](https://github.com/dapr/azure-functions-extension) repo to try out the samples.
@ -0,0 +1,7 @@
---
type: docs
title: "Integrations with cloud providers"
linkTitle: "Cloud providers"
weight: 5000
description: "Information about authentication and configuration for various cloud providers"
---
@ -0,0 +1,64 @@
---
type: docs
title: "Authenticating to AWS"
linkTitle: "Authenticating to AWS"
weight: 10
description: "Information about authentication and configuration options for AWS"
aliases:
  - /developing-applications/integrations/authenticating/authenticating-aws/
---

All Dapr components using various AWS services (DynamoDB, SQS, S3, etc.) use a standardized set of attributes for configuration; these are described below.

[This article](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials) provides a good overview of how the AWS SDK (which Dapr uses) handles credentials.

None of the following attributes are required, since the AWS SDK may be configured using the default provider chain described in the link above. It's important to test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.

- `region`: Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example) this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec.
- `endpoint`: The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
- `accessKey`: AWS Access key id.
- `secretKey`: AWS Secret access key. Use together with `accessKey` to explicitly specify credentials.
- `sessionToken`: AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required.
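As an illustrative sketch, these attributes go in the `metadata` section of a component spec. The component name, table name, and credential values below are placeholders, and in production you would normally reference a secret store rather than embed plain-text keys:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mystate                 # placeholder component name
spec:
  type: state.aws.dynamodb      # AWS-backed components share these attributes
  version: v1
  metadata:
  - name: region
    value: us-east-1
  - name: accessKey
    value: "<access-key-id>"    # placeholder; prefer a secret store reference
  - name: secretKey
    value: "<secret-access-key>"
  - name: table
    value: mytable              # component-specific attribute (assumed)
```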

## Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as [Kiam](https://github.com/uswitch/kiam) or [Kube2iam](https://github.com/jtblin/kube2iam). If running on AWS EKS, you can [link an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html), which your pod can use.

All of these solutions solve the same problem: they allow the Dapr runtime process (or sidecar) to retrieve credentials dynamically, so that explicit credentials aren't needed. This provides several benefits, such as automated key rotation and avoiding having to manage secrets.

Both Kiam and Kube2IAM work by intercepting calls to the [instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).

## Using an instance role/profile when running in stand-alone mode on AWS EC2
If running Dapr directly on an AWS EC2 instance in stand-alone mode, instance profiles can be used. Simply configure an IAM role and [attach it to the instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) for the EC2 instance, and Dapr should be able to authenticate to AWS without specifying credentials in the Dapr component manifest.

## Authenticating to AWS when running Dapr locally in stand-alone mode
When running Dapr (or the Dapr runtime directly) in stand-alone mode, you have the option of injecting environment variables into the process, like this (on Linux/MacOS):
```bash
FOO=bar daprd --app-id myapp
```
If you have [configured named AWS profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) locally, you can tell Dapr (or the Dapr runtime) which profile to use by specifying the `AWS_PROFILE` environment variable:

```bash
AWS_PROFILE=myprofile dapr run...
```
or
```bash
AWS_PROFILE=myprofile daprd...
```
You can use any of the [supported environment variables](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list) to configure Dapr in this manner.

On Windows, the environment variable needs to be set before starting the `dapr` or `daprd` command; setting it inline as shown above is not supported.

## Authenticating to AWS if using AWS SSO based profiles
If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials and "legacy" credentials, such as [AwsHelper](https://pypi.org/project/awshelper/) or [aws-sso-util](https://github.com/benkehoe/aws-sso-util).

If using AwsHelper, start Dapr like this:
```bash
AWS_PROFILE=myprofile awshelper dapr run...
```
or
```bash
AWS_PROFILE=myprofile awshelper daprd...
```

On Windows, the environment variable needs to be set before starting the `awshelper` command; setting it inline as shown above is not supported.
@ -1,7 +1,7 @@
---
type: docs
title: "Dapr's gRPC Interface"
linkTitle: "gRPC interface"
weight: 1000
description: "Use the Dapr gRPC API in your application"
@ -82,11 +82,11 @@ import (
// just for this demo
ctx := context.Background()
data := []byte("ping")

// create the client
client, err := dapr.NewClient()
if err != nil {
	log.Panic(err)
}
defer client.Close()
```
@ -95,11 +95,11 @@ defer client.Close()
```go
// save state with the key key1
err = client.SaveState(ctx, "statestore", "key1", data)
if err != nil {
  log.Panic(err)
}
log.Println("data saved")
```

Hooray!
@ -135,6 +135,7 @@ import (
```go
// server is our user app
type server struct {
  pb.UnimplementedAppCallbackServer
}

// EchoMethod is a simple demo method to invoke
@ -176,16 +177,16 @@ func (s *server) ListInputBindings(ctx context.Context, in *empty.Empty) (*pb.Li
}, nil
}

// This method gets invoked every time a new event is fired from a registered binding. The message carries the binding name, a payload and optional metadata
func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest) (*pb.BindingEventResponse, error) {
  fmt.Println("Invoked from binding")
  return &pb.BindingEventResponse{}, nil
}

// This method is fired whenever a message has been published to a topic that has been subscribed. Dapr sends published messages in a CloudEvents 0.3 envelope.
func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*pb.TopicEventResponse, error) {
  fmt.Println("Topic message arrived")
  return &pb.TopicEventResponse{}, nil
}
```
@ -0,0 +1,29 @@
---
type: docs
title: "Running Dapr and Open Service Mesh together"
linkTitle: "Open Service Mesh"
weight: 4000
description: "Learn how to run both Open Service Mesh and Dapr on the same Kubernetes cluster"
---

## Overview

[Open Service Mesh (OSM)](https://openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.

{{< button text="Learn more" link="https://openservicemesh.io/" >}}

## Dapr integration

Users are able to leverage both OSM SMI traffic policies and Dapr capabilities on the same Kubernetes cluster. Visit [this guide](https://docs.openservicemesh.io/docs/integrations/demo_dapr/) to get started.

{{< button text="Deploy OSM and Dapr" link="https://docs.openservicemesh.io/docs/integrations/demo_dapr/" >}}

## Example

Watch the OSM team present the OSM and Dapr integration in the 05/18/2021 community call:

<iframe width="560" height="315" src="https://www.youtube.com/embed/LSYyTL0nS8Y?start=1916" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Additional resources

- [Dapr and service meshes]({{< ref service-mesh.md >}})
@ -0,0 +1,229 @@
---
type: docs
title: "Build workflows with Logic Apps"
linkTitle: "Workflows"
description: "Learn how to build workflows using Dapr Workflows and Logic Apps"
weight: 4000
---

Dapr Workflows is a lightweight host that allows developers to run cloud-native workflows locally, on-premises or in any cloud environment using the [Azure Logic Apps](https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-overview) workflow engine and Dapr.

## Benefits

By using a workflow engine, business logic can be defined in a declarative, no-code fashion, so application code doesn't need to change when a workflow changes. Dapr Workflows allows you to use workflows in a distributed application along with these added benefits:

- **Run workflows anywhere**: on your local machine, on-premises, on Kubernetes or in the cloud
- **Built-in observability**: tracing, metrics and mTLS through Dapr
- **gRPC and HTTP endpoints** for your workflows
- Kick off workflows based on **Dapr bindings** events
- Orchestrate complex workflows by **calling back to Dapr** to save state, publish a message and more

<img src="/images/workflows-diagram.png" width=500 alt="Diagram of Dapr Workflows">
## How it works

Dapr Workflows hosts a gRPC server that implements the Dapr Client API.

This allows users to start workflows using gRPC and HTTP endpoints through Dapr, or start a workflow asynchronously using Dapr bindings.
Once a workflow request comes in, Dapr Workflows uses the Logic Apps SDK to execute the workflow.

## Supported workflow features

### Supported actions and triggers

- [HTTP](https://docs.microsoft.com/en-us/azure/connectors/connectors-native-http)
- [Schedule](https://docs.microsoft.com/en-us/azure/logic-apps/concepts-schedule-automated-recurring-tasks-workflows)
- [Request / Response](https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres)

### Supported control workflows

- [All control workflows](https://docs.microsoft.com/en-us/azure/connectors/apis-list#control-workflow)

### Supported data manipulation

- [All data operations](https://docs.microsoft.com/en-us/azure/connectors/apis-list#manage-or-manipulate-data)

### Not supported

- [Managed connectors](https://docs.microsoft.com/en-us/azure/connectors/apis-list#managed-connectors)
## Example

Dapr Workflows can be used as the orchestrator for many otherwise complex activities. For example, invoking an external endpoint, saving the data to a state store, publishing the result to a different app or invoking a binding can all be done by calling back into Dapr from the workflow itself.

This is because Dapr runs as a sidecar next to the workflow host, just as if it were any other app.

Examine [workflow2.json](/code/workflow.json) as an example of a workflow that does the following:

1. Calls into Azure Functions to get a JSON response
2. Saves the result to a Dapr state store
3. Sends the result to a Dapr binding
4. Returns the result to the caller

Since Dapr supports many pluggable state stores and bindings, the workflow becomes portable between different environments (cloud, edge or on-premises) without the user changing the code - *because there is no code involved*.
## Get started

Prerequisites:

1. Install the [Dapr CLI]({{< ref install-dapr-cli.md >}})
2. Create an [Azure blob storage account](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-create-account-block-blob?tabs=azure-portal)

### Self-hosted

1. Make sure you have the Dapr runtime initialized:

```bash
dapr init
```

1. Set up the environment variables containing the Azure Storage Account credentials:

{{< tabs Windows "macOS/Linux" >}}

{{% codetab %}}
```bash
set STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
set STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}

{{% codetab %}}
```bash
export STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
export STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}

{{< /tabs >}}

1. Move to the workflows directory and run the sample runtime:

```bash
cd src/Dapr.Workflows

dapr run --app-id workflows --protocol grpc --port 3500 --app-port 50003 -- dotnet run --workflows-path ../../samples
```

1. Invoke a workflow:

```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1

{"value":"Hello from Logic App workflow running with Dapr!"}
```
### Kubernetes

1. Make sure you have a running Kubernetes cluster and `kubectl` in your path.

1. Once you have the Dapr CLI installed, run:

```bash
dapr init --kubernetes
```

1. Wait until the Dapr pods have the status `Running`.

1. Create a Config Map for the workflow:

```bash
kubectl create configmap workflows --from-file ./samples/workflow1.json
```

1. Create a secret containing the Azure Storage Account credentials. Replace the account name and key values below with the actual credentials:

```bash
kubectl create secret generic dapr-workflows --from-literal=accountName=<YOUR-STORAGE-ACCOUNT-NAME> --from-literal=accountKey=<YOUR-STORAGE-ACCOUNT-KEY>
```

1. Deploy Dapr Workflows:

```bash
kubectl apply -f deploy/deploy.yaml
```

1. Create a port-forward to the dapr workflows container:

```bash
kubectl port-forward deploy/dapr-workflows-host 3500:3500
```

1. Invoke logic apps through Dapr:

```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1

{"value":"Hello from Logic App workflow running with Dapr!"}
```
## Invoking workflows using Dapr bindings

1. First, create any [Dapr binding]({{< ref components-reference >}}) of your choice. See [this]({{< ref howto-triggers >}}) How-To tutorial.

In order for Dapr Workflows to be able to start a workflow from a Dapr binding event, simply name the binding with the name of the workflow you want it to trigger.

Here's an example of a Kafka binding that will trigger a workflow named `workflow1`:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflow1
spec:
  type: bindings.kafka
  metadata:
  - name: topics
    value: topic1
  - name: brokers
    value: localhost:9092
  - name: consumerGroup
    value: group1
  - name: authRequired
    value: "false"
```

1. Next, apply the Dapr component:

{{< tabs Self-hosted Kubernetes >}}

{{% codetab %}}
Place the binding yaml file above in a `components` directory at the root of your application.
{{% /codetab %}}

{{% codetab %}}
```bash
kubectl apply -f my_binding.yaml
```
{{% /codetab %}}

{{< /tabs >}}

1. Once an event is sent to the bindings component, check the logs of Dapr Workflows to see the output.

{{< tabs Self-hosted Kubernetes >}}

{{% codetab %}}
In standalone mode, the output will be printed to the local terminal.
{{% /codetab %}}

{{% codetab %}}
On Kubernetes, run the following command:

```bash
kubectl logs -l app=dapr-workflows-host -c host
```
{{% /codetab %}}

{{< /tabs >}}

## Example

Watch an example from the Dapr community call:

<iframe width="560" height="315" src="https://www.youtube.com/embed/7fP-0Ixmi-w?start=116" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Additional resources

- [Blog announcement](https://cloudblogs.microsoft.com/opensource/2020/05/26/announcing-cloud-native-workflows-dapr-logic-apps/)
- [Repo](https://github.com/dapr/workflows)
@ -0,0 +1,77 @@
---
type: docs
title: "Middleware"
linkTitle: "Middleware"
weight: 50
description: "Customize processing pipelines by adding middleware components"
aliases:
- /developing-applications/middleware/middleware-overview/
- /concepts/middleware-concept/
---

Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. A request goes through all defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram.

<img src="/images/middleware.png" width=800>

## Configuring middleware pipelines

When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware]({{< ref tracing-overview.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.

The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware]({{< ref middleware-oauth2.md >}}) and an [uppercase middleware component]({{< ref middleware-uppercase.md >}}). In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase
```
As with other building block components, middleware components are extensible and can be found in the [supported middleware reference]({{< ref supported-middleware >}}) and in the [components-contrib repo](https://github.com/dapr/components-contrib/tree/master/middleware/http).

{{< button page="supported-middleware" text="See all middleware components">}}

## Writing a custom middleware

Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a **fasthttp.RequestHandler** and an **error**:

```go
type Middleware interface {
  GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```

Your handler implementation can include any inbound logic, outbound logic, or both:

```go
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
  var err error
  return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
    return func(ctx *fasthttp.RequestCtx) {
      // inbound logic
      h(ctx) // call the downstream handler
      // outbound logic
    }
  }, err
}
```
## Adding new middleware components

Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).

After the components-contrib change has been accepted, submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **[runtime.WithHTTPMiddleware](https://github.com/dapr/dapr/blob/f4d50b1369e416a8f7b93e3e226c4360307d1313/cmd/daprd/main.go#L394-L424)** method in [cmd/daprd/main.go](https://github.com/dapr/dapr/blob/master/cmd/daprd/main.go) to register your middleware with Dapr's runtime.

## Related links

* [Component schema]({{< ref component-schema.md >}})
* [Configuration overview]({{< ref configuration-overview.md >}})
* [Middleware quickstart](https://github.com/dapr/quickstarts/tree/master/middleware)
@ -1,7 +0,0 @@
---
type: docs
title: "Middleware"
linkTitle: "Middleware"
weight: 50
description: "Customize Dapr processing pipelines by adding middleware components"
---
@ -1,22 +1,45 @@
---
type: docs
title: "Dapr Software Development Kits (SDKs)"
linkTitle: "SDKs"
weight: 20
description: "Use your favorite languages with Dapr"
no_list: true
---

The Dapr SDKs are the easiest way for you to get Dapr into your application. Choose your favorite language and get up and running with Dapr in minutes.

## SDK packages

- **Client SDK**: The Dapr client allows you to invoke Dapr building block APIs and perform actions such as:
  - [Invoke]({{< ref service-invocation >}}) methods on other services
  - Store and get [state]({{< ref state-management >}})
  - [Publish and subscribe]({{< ref pubsub >}}) to message topics
  - Interact with external resources through input and output [bindings]({{< ref bindings >}})
  - Get [secrets]({{< ref secrets >}}) from secret stores
  - Interact with [virtual actors]({{< ref actors >}})
- **Server extensions**: The Dapr service extensions allow you to create services that can:
  - Be [invoked]({{< ref service-invocation >}}) by other services
  - [Subscribe]({{< ref pubsub >}}) to topics
- **Actor SDK**: The Dapr Actor SDK allows you to build virtual actors with:
  - Methods that can be [invoked]({{< ref "howto-actors.md#actor-method-invocation" >}}) by other services
  - [State]({{< ref "howto-actors.md#actor-state-management" >}}) that can be stored and retrieved
  - [Timers]({{< ref "howto-actors.md#actor-timers" >}}) with callbacks
  - Persistent [reminders]({{< ref "howto-actors.md#actor-reminders" >}})
## SDK languages

| Language | Status | Client SDK | Server extensions | Actor SDK |
|----------|:------|:----------:|:-----------------:|:---------:|
| [.NET]({{< ref dotnet >}}) | Stable | ✔ | [ASP.NET Core]({{< ref dotnet-aspnet >}}) | ✔ |
| [Python]({{< ref python >}}) | Stable | ✔ | [gRPC]({{< ref python-grpc.md >}}) | [FastAPI]({{< ref python-fastapi.md >}})<br />[Flask]({{< ref python-flask.md >}}) |
| [Java](https://github.com/dapr/java-sdk) | Stable | ✔ | Spring Boot | ✔ |
| [Go]({{< ref go >}}) | Stable | ✔ | ✔ | |
| [PHP]({{< ref php >}}) | Stable | ✔ | ✔ | ✔ |
| [C++](https://github.com/dapr/cpp-sdk) | In development | ✔ | | |
| [Rust](https://github.com/dapr/rust-sdk) | In development | ✔ | | |
| [Javascript](https://github.com/dapr/js-sdk) | In development | ✔ | | |

## Further reading

- [Serialization in the Dapr SDKs]({{< ref sdk-serialization.md >}})