Recurring merge from master to rc.1 branch

This commit is contained in:
Ori Zohar 2020-12-03 10:00:41 -08:00
commit b0bbee79ce
63 changed files with 636 additions and 190 deletions

View File

@ -1,31 +0,0 @@
---
name: Bug report
about: Report a bug in Dapr docs
title: ''
labels: kind/bug
assignees: ''
---
## Expected Behavior
<!-- Briefly describe what you expect to happen -->
## Actual Behavior
<!-- Briefly describe what is actually happening -->
## Steps to Reproduce the Problem
<!-- How can a maintainer reproduce this issue (be detailed) -->
## Release Note
<!-- How should the fix for this issue be communicated in our release notes? It can be populated later. -->
<!-- Keep it as a single line. Examples: -->
<!-- RELEASE NOTE: **ADD** New feature in Dapr. -->
<!-- RELEASE NOTE: **FIX** Bug in runtime. -->
<!-- RELEASE NOTE: **UPDATE** Runtime dependency. -->
RELEASE NOTE:

View File

@ -1,8 +0,0 @@
---
name: Feature Request
about: Start a discussion for Dapr docs
title: ''
labels: kind/discussion
assignees: ''
---

View File

@ -1,19 +0,0 @@
---
name: Feature Request
about: Create a Feature Request for Dapr docs
title: ''
labels: kind/enhancement
assignees: ''
---
## Describe the feature
## Release Note
<!-- How should this new feature be announced in our release notes? It can be populated later. -->
<!-- Keep it as a single line. Examples: -->
<!-- RELEASE NOTE: **ADD** New feature in Dapr. -->
<!-- RELEASE NOTE: **FIX** Bug in runtime. -->
<!-- RELEASE NOTE: **UPDATE** Runtime dependency. -->
RELEASE NOTE:

View File

@ -0,0 +1,20 @@
---
name: New Content Needed
about: Topic is missing and needs to be written
title: "[CONTENT]"
labels: content/missing-information
assignees: ''
---
**What content needs to be created or modified?**
A clear and concise description of what the problem is. Ex. There should be docs on how pub/sub works...
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Where should the new material be placed?**
Please suggest where in the docs structure the new content should be created.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@ -1,9 +0,0 @@
---
name: Proposal
about: Create a proposal for Dapr docs
title: ''
labels: kind/proposal
assignees: ''
---
## Describe the proposal

View File

@ -1,9 +0,0 @@
---
name: Question
about: Ask a question about Dapr docs
title: ''
labels: kind/question
assignees: ''
---
## Ask your question here

.github/ISSUE_TEMPLATE/typo.md vendored Normal file
View File

@ -0,0 +1,23 @@
---
name: Typo
about: Report incorrect language/small updates to fix readability
title: "[TYPO]"
labels: content/typo
assignees: ''
---
**URL of the docs page**
The URL(s) on docs.dapr.io where the typo occurs
**How is it currently worded?**
Please copy and paste the sentence where the typo occurs.
**How should it be worded?**
Please correct the sentence
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/website-issue.md vendored Normal file
View File

@ -0,0 +1,38 @@
---
name: Website Issue
about: The website is broken or not working correctly.
title: "[WEBSITE]"
labels: website/functionality
assignees: AaronCrawfis
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

View File

@ -0,0 +1,23 @@
---
name: Wrong Information/Code/Steps
about: Something in the docs is incorrect
title: "[CONTENT]"
labels: P1, content/incorrect-information
assignees: ''
---
**Describe the issue**
A clear and concise description of what the bug is.
**URL of the docs**
Paste the URL (docs.dapr.io/concepts/......) of the page
**Expected content**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

View File

@ -29,6 +29,7 @@ jobs:
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
# For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
@ -48,4 +49,5 @@ jobs:
uses: Azure/static-web-apps-deploy@v0.0.1-preview
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
skip_deploy_on_missing_secrets: true
action: "close"

View File

@ -30,17 +30,13 @@ git clone https://github.com/dapr/docs.git
```
3. Change to daprdocs directory:
```sh
cd daprdocs
cd ./docs/daprdocs
```
4. Add Docsy submodule:
```sh
git submodule add https://github.com/google/docsy.git themes/docsy
```
5. Update submodules:
4. Update submodules:
```sh
git submodule update --init --recursive
```
6. Install npm packages:
5. Install npm packages:
```sh
npm install
```

View File

@ -40,7 +40,7 @@ id = "UA-149338238-3"
[[menu.main]]
name = "Blog"
weight = 70
url = "https://blog.dapr.io/blog"
url = "https://blog.dapr.io/posts"
[[menu.main]]
name = "Community"
weight = 80
@ -111,12 +111,12 @@ sidebar_search_disable = true
icon = "fab fa-github"
desc = "Development takes place here!"
[[params.links.developer]]
name = "Gitter"
url = "https://gitter.im/Dapr/community"
icon = "fab fa-gitter"
name = "Discord"
url = "https://aka.ms/dapr-discord"
icon = "fab fa-discord"
desc = "Conversations happen here!"
[[params.links.developer]]
name = "Zoom"
url = "https://aka.ms/dapr-community-call"
icon = "fas fa-video"
desc = "Meetings happen here!"
desc = "Meetings happen here!"

View File

@ -6,7 +6,7 @@ weight: 200
description: "Modular best practices accessible over standard HTTP or gRPC APIs"
---
A [building block]({{< ref building-blocks >}}) is as an HTTP or gRPC API that can be called from your code and uses one or more Dapr components.
A [building block]({{< ref building-blocks >}}) is an HTTP or gRPC API that can be called from your code and uses one or more Dapr components.
Building blocks address common challenges in building resilient, microservices applications and codify best practices and patterns. Dapr consists of a set of building blocks, with extensibility to add new building blocks.

View File

@ -58,7 +58,7 @@ func GetHandler(metadata Metadata) fasthttp.RequestHandler {
```
## Adding new middleware components
Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).
Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware).
Then submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **Load()** method in [registry.go]( https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go) to register your middleware using the **Register** method.

View File

@ -126,7 +126,7 @@ This HTML will display the `dapr-overview.png` image on the `overview.md` page:
<img src="/images/overview-dapr-overview.png" width=1000 alt="Overview diagram of Dapr and its building blocks">
```
### Tabbed Content
### Tabbed content
Tabs are made possible through [Hugo shortcodes](https://gohugo.io/content-management/shortcodes/).
The overall format is:
@ -195,7 +195,7 @@ brew install dapr/tap/dapr-cli
{{< /tabs >}}
### YouTube Videos
### YouTube videos
Hugo can automatically embed YouTube videos using a shortcode:
```
{{</* youtube [VIDEO ID] */>}}

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Authenticating to services"
linkTitle: "Authenticating to services"
weight: 3000
description: "Information about authentication and configuration for various cloud providers"
---

View File

@ -0,0 +1,62 @@
---
type: docs
title: "Authenticating to AWS"
linkTitle: "Authenticating to AWS"
weight: 10
description: "Information about authentication and configuration options for AWS"
---
All Dapr components using various AWS services (DynamoDB, SQS, S3, etc.) use a standardized set of attributes for configuration; these are described below.
[This article](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials) provides a good overview of how the AWS SDK (which Dapr uses) handles credentials.
None of the following attributes are required, since the AWS SDK may be configured using the default provider chain described in the link above. It's important to test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.
`region`: Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example) this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec.
`endpoint`: The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
`accessKey`: AWS Access key id.
`secretKey`: AWS Secret access key. Use together with `accessKey` to explicitly specify credentials.
`sessionToken`: AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required.
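For reference, here is a minimal sketch of how these attributes can appear in a component manifest. The component name and type (an S3 output binding) are only illustrative, and the placeholder values need to be replaced with your own:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mybinding
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: region
    value: us-east-1   # set explicitly, especially when using sidecar injection
  - name: accessKey
    value: "<AWS access key id>"
  - name: secretKey
    value: "<AWS secret access key>"
  - name: sessionToken
    value: "<session token, only needed with temporary credentials>"
```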
## Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as [Kiam](https://github.com/uswitch/kiam) or [Kube2iam](https://github.com/jtblin/kube2iam). If running on AWS EKS, you can [link an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html), which your pod can use.
All of these solutions solve the same problem: they allow the Dapr runtime process (or sidecar) to retrieve credentials dynamically, so that explicit credentials aren't needed. This provides several benefits, such as automated key rotation, and avoiding having to manage secrets.
Both Kiam and Kube2IAM work by intercepting calls to the [instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).
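For example, with Kiam or Kube2IAM the IAM role is typically assigned to the application pod through an annotation similar to the following sketch; the role ARN below is a placeholder, and the respective project documentation remains the authoritative reference:
```yaml
annotations:
  iam.amazonaws.com/role: arn:aws:iam::123456789012:role/my-dapr-app-role
```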
## Using instance role/profile when running in stand-alone mode on AWS EC2
If running Dapr directly on an AWS EC2 instance in stand-alone mode, instance profiles can be used. Simply configure an IAM role and [attach it to the instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) for the EC2 instance, and Dapr should be able to authenticate to AWS without specifying credentials in the Dapr component manifest.
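As an illustration, an existing instance profile can be associated with a running EC2 instance using the AWS CLI; the instance ID and profile name below are placeholders:
```bash
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MyDaprInstanceProfile
```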
## Authenticating to AWS when running Dapr locally in stand-alone mode
When running Dapr (or the Dapr runtime directly) in stand-alone mode, you have the option of injecting environment variables into the process like this (on Linux/macOS):
```bash
FOO=bar daprd --app-id myapp
```
If you have [configured named AWS profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) locally, you can tell Dapr (or the Dapr runtime) which profile to use by specifying the `AWS_PROFILE` environment variable:
```bash
AWS_PROFILE=myprofile dapr run...
```
or
```bash
AWS_PROFILE=myprofile daprd...
```
You can use any of the [supported environment variables](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html#envvars-list) to configure Dapr in this manner.
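For example, explicit credentials and a region could be supplied this way (a sketch for self-hosted mode only; the values are placeholders):
```bash
AWS_REGION=us-east-1 AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> daprd --app-id myapp
```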
On Windows, the environment variable needs to be set before starting the `dapr` or `daprd` command; setting it inline as shown above is not supported.
## Authenticating to AWS if using AWS SSO based profiles
If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials, and "legacy" credentials, such as [AwsHelper](https://pypi.org/project/awshelper/) or [aws-sso-util](https://github.com/benkehoe/aws-sso-util).
If using AwsHelper, start Dapr like this:
```bash
AWS_PROFILE=myprofile awshelper dapr run...
```
or
```bash
AWS_PROFILE=myprofile awshelper daprd...
```
On Windows, the environment variable needs to be set before starting the `awshelper` command; setting it inline as shown above is not supported.

View File

@ -238,6 +238,7 @@ Once the chart installation is complete, verify the dapr-operator, dapr-placemen
$ kubectl get pods -n dapr-system -w
NAME READY STATUS RESTARTS AGE
dapr-dashboard-7bd6cbf5bf-xglsr 1/1 Running 0 40s
dapr-operator-7bd6cbf5bf-xglsr 1/1 Running 0 40s
dapr-placement-7f8f76778f-6vhl2 1/1 Running 0 40s
dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s

View File

@ -6,6 +6,7 @@ description: "Detailed documentation on the AWS DynamoDB binding component"
---
## Setup Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -17,21 +18,22 @@ spec:
type: bindings.aws.dynamodb
version: v1
metadata:
- name: table
value: items
- name: region
value: us-west-2
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: table
value: items
```
- name: sessionToken
value: *****************
- `region` is the AWS region.
- `accessKey` is the AWS access key.
- `secretKey` is the AWS secret key.
```
- `table` is the DynamoDB table name.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
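A sketch of the recommended approach, pulling `accessKey` and `secretKey` from a secret store instead of inlining them; the secret store and secret names here are hypothetical, and the linked guide is the authoritative reference for the syntax:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mydynamodb
spec:
  type: bindings.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: items
  - name: region
    value: us-west-2
  - name: accessKey
    secretKeyRef:
      name: aws-credentials   # hypothetical Kubernetes secret
      key: access-key
  - name: secretKey
    secretKeyRef:
      name: aws-credentials
      key: secret-key
auth:
  secretStore: kubernetes
```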
@ -44,4 +46,5 @@ The above example uses secrets as plain strings. It is recommended to use a secr
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -8,6 +8,7 @@ description: "Detailed documentation on the AWS Kinesis binding component"
See [this](https://aws.amazon.com/kinesis/data-streams/getting-started/) for instructions on how to set up an AWS Kinesis data stream
## Setup Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -19,23 +20,22 @@ spec:
type: bindings.aws.kinesis
version: v1
metadata:
- name: region
value: AWS_REGION #replace
- name: accessKey
value: AWS_ACCESS_KEY # replace
- name: secretKey
value: AWS_SECRET_KEY #replace
- name: streamName
value: KINESIS_STREAM_NAME # Kinesis stream name
- name: consumerName
value: KINESIS_CONSUMER_NAME # Kinesis consumer name
- name: mode
value: shared # shared - Shared throughput or extended - Extended/Enhanced fanout
```
- name: region
value: AWS_REGION #replace
- name: accessKey
value: AWS_ACCESS_KEY # replace
- name: secretKey
value: AWS_SECRET_KEY #replace
- name: sessionToken
value: *****************
- `region` is the AWS region.
- `accessKey` is the AWS access key.
- `secretKey` is the AWS secret key.
```
- `mode` Accepted values: shared, extended. shared - Shared throughput, extended - Extended/Enhanced fanout methods. More details are [here](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html)
- `streamName` is the AWS Kinesis Stream Name.
- `consumerName` is the AWS Kinesis Consumer Name.
@ -53,4 +53,5 @@ The above example uses secrets as plain strings. It is recommended to use a secr
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -6,6 +6,7 @@ description: "Detailed documentation on the AWS S3 binding component"
---
## Setup Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -17,6 +18,8 @@ spec:
type: bindings.aws.s3
version: v1
metadata:
- name: bucket
value: mybucket
- name: region
value: us-west-2
- name: accessKey
@ -27,10 +30,7 @@ spec:
value: mybucket
```
- `region` is the AWS region.
- `accessKey` is the AWS access key.
- `secretKey` is the AWS secret key.
- `table` is the name of the S3 bucket to write to.
- `bucket` is the name of the S3 bucket to write to.
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -44,4 +44,5 @@ The above example uses secrets as plain strings. It is recommended to use a secr
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -6,6 +6,7 @@ description: "Detailed documentation on the AWS SNS binding component"
---
## Setup Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -17,19 +18,19 @@ spec:
type: bindings.aws.sns
version: v1
metadata:
- name: topicArn
value: mytopic
- name: region
value: us-west-2
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: topicArn
value: mytopic
- name: sessionToken
value: *****************
```
- `region` is the AWS region.
- `accessKey` is the AWS access key.
- `secretKey` is the AWS secret key.
- `topicArn` is the SNS topic name.
{{% alert title="Warning" color="warning" %}}
@ -44,4 +45,5 @@ The above example uses secrets as plain strings. It is recommended to use a secr
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -6,6 +6,7 @@ description: "Detailed documentation on the AWS SQS binding component"
---
## Setup Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -17,19 +18,19 @@ spec:
type: bindings.aws.sqs
version: v1
metadata:
- name: queueName
value: items
- name: region
value: us-west-2
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: queueName
value: items
- name: sessionToken
value: *****************
```
- `region` is the AWS region.
- `accessKey` is the AWS access key.
- `secretKey` is the AWS secret key.
- `queueName` is the SQS queue name.
{{% alert title="Warning" color="warning" %}}
@ -45,4 +46,5 @@ The above example uses secrets as plain strings. It is recommended to use a secr
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -14,8 +14,10 @@ This article describes configuring Dapr to use AWS SNS/SQS for pub/sub on local
{{% codetab %}}
For local development the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS SNS/SQS. Follow the instructions [here](https://github.com/localstack/localstack#installing) to install the localstack CLI.
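One common way to run localstack locally is the official Docker image, which exposes the edge port 4566 used in the example below (a sketch; the linked installation instructions remain the authoritative steps):
```bash
docker run --rm -it -p 4566:4566 localstack/localstack
```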
In order to use localstack with your pubsub binding, you need to provide the `awsEndpoint` configuration
in the component metadata. The `awsEndpoint` is unncessary when running against production AWS.
In order to use localstack with your pubsub binding, you need to provide the `endpoint` configuration
in the component metadata. The `endpoint` is unnecessary when running against production AWS.
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -26,7 +28,7 @@ spec:
type: pubsub.snssqs
version: v1
metadata:
- name: awsEndpoint
- name: endpoint
value: http://localhost:4566
# Use us-east-1 for localstack
- name: awsRegion
@ -37,7 +39,7 @@ spec:
{{% codetab %}}
To run localstack on Kubernetes, you can apply the configuration below. Localstack is then
reachable at the DNS name `http://localstack.default.svc.cluster.local:4566`
(assuming this was applied to the default namespace) and this should be used as the `awsEndpoint`
(assuming this was applied to the default namespace) and this should be used as the `endpoint`
```yaml
apiVersion: apps/v1
kind: Deployment
@ -105,15 +107,15 @@ spec:
version: v1
metadata:
# ID of the AWS account with appropriate permissions to SNS and SQS
- name: awsAccountID
value: <AWS account ID>
- name: accessKey
value: **********
# Secret for the AWS user
- name: awsSecret
value: <AWS secret>
- name: secretKey
value: **********
# The AWS region you want to operate in.
# See this page for valid regions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
# Make sure that SNS and SQS are available in that region.
- name: awsRegion
- name: region
value: us-east-1
```
@ -130,3 +132,4 @@ Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >
- [AWS SQS as subscriber to SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html)
- [AWS SNS API reference](https://docs.aws.amazon.com/sns/latest/api/Welcome.html)
- [AWS SQS API reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Welcome.html)
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -10,6 +10,7 @@ description: Detailed information on the secret store component
Setup AWS Secrets Manager using the AWS documentation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html.
## Create the Dapr component
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
```yaml
apiVersion: dapr.io/v1alpha1
@ -22,12 +23,12 @@ spec:
version: v1
metadata:
- name: region
value: [aws_region] # Required.
- name: accessKey # Required.
value: "[aws_region]"
- name: accessKey
value: "[aws_access_key]"
- name: secretKey # Required.
- name: secretKey
value: "[aws_secret_key]"
- name: sessionToken # Required.
- name: sessionToken
value: "[aws_session_token]"
```
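Once the component is loaded, secrets can be read through the Dapr secrets API. A sketch, assuming the component's `metadata.name` is `awssecretmanager` and a secret called `mysecret` exists in AWS Secrets Manager:
```bash
curl http://localhost:3500/v1.0/secrets/awssecretmanager/mysecret
```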
@ -68,4 +69,5 @@ The above example uses secrets as plain strings. It is recommended to use a loca
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})
- [Authenticating to AWS]({{< ref authenticating-aws.md >}})

View File

@ -59,10 +59,10 @@ You can find information [here]({{< ref "install-dapr.md#using-helm-advanced" >}
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available configuration of the control plane:
```bash
helm install dapr dapr/dapr --namespace dapr-system --set global.ha.enabled=true
helm install dapr dapr/dapr --version=<Dapr chart version> --namespace dapr-system --set global.ha.enabled=true
```
This command will run 3 replicas of each control plane pod with the exception of the Placement pod in the dapr-system namespace.
This command will run 3 replicas of each control plane pod in the dapr-system namespace.
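To verify the highly available configuration, list the control plane pods and confirm that three replicas of each service are running (a sketch):
```bash
kubectl get pods -n dapr-system
```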
*Note: The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref "kubernetes-hybrid-clusters.md" >}})*
@ -77,7 +77,7 @@ Dapr supports zero downtime upgrades. The upgrade path includes the following st
### Upgrading the CLI
To upgrade the Dapr CLI, [download a release version](https://github.com/dapr/cli/releases) of the CLI that matches the Dapr runtime version.
For example, if upgrading to Dapr 0.9.0, download a CLI version of 0.9.x.
For example, if upgrading to Dapr 1.0.0-rc.x, download a CLI version of 1.0.0-rc.x.
After you download the binary, it's recommended to put the CLI binary in your path.
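For example, on Linux/macOS the downloaded binary can be made executable and moved onto the path like this (a sketch; the target directory is a common choice, not a requirement):
```bash
chmod +x ./dapr
sudo mv ./dapr /usr/local/bin/dapr
dapr --version   # confirm the CLI version matches the runtime you are upgrading to
```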
@ -122,6 +122,8 @@ You should have the following files containing the base64 decoded text from the
#### Updating the control plane pods
> Note: To upgrade Dapr from 0.11.x to 1.0.0 version, please refer to [this section](#upgrade-from-dapr-011x-to-100).
Next, you need to find a Helm chart version that installs the new desired version of Dapr and perform a `helm upgrade` operation.
First, update the Helm Chart repos:
@ -133,11 +135,10 @@ helm repo update
List all charts in the Dapr repo:
```bash
helm search repo dapr
helm search repo dapr --devel
NAME CHART VERSION APP VERSION DESCRIPTION
dapr/dapr 0.4.3 0.9.0 A Helm chart for Dapr on Kubernetes
dapr/dapr 0.4.2 0.8.0 A Helm chart for Dapr on Kubernetes
dapr/dapr 1.0.0-rc.1 1.0.0-rc.1 A Helm chart for Dapr on Kubernetes
```
The APP VERSION column tells us which Dapr runtime version is installed by the chart.
@ -145,7 +146,7 @@ The APP VERSION column tells us which Dapr runtime version is installed by the c
Use the following command to upgrade Dapr to your desired runtime version providing a path to the certificate files you saved:
```bash
helm upgrade dapr dapr/dapr --version <CHART-VERSION> --namespace dapr-system --reset-values --set-file dapr_sentry.tls.root.certPEM=ca.crt --set-file dapr_sentry.tls.issuer.certPEM=issuer.crt --set-file dapr_sentry.tls.issuer.keyPEM=issuer.key
helm upgrade dapr dapr/dapr --version <Dapr chart version> --namespace dapr-system --reset-values --set-file dapr_sentry.tls.root.certPEM=ca.crt --set-file dapr_sentry.tls.issuer.certPEM=issuer.crt --set-file dapr_sentry.tls.issuer.keyPEM=issuer.key
```
Kubernetes now performs a rolling update. Wait until all the new pods appear as running:
@ -154,11 +155,11 @@ Kubernetes now performs a rolling update. Wait until all the new pods appear as
kubectl get po -n dapr-system -w
NAME READY STATUS RESTARTS AGE
dapr-dashboard-dcd7cc8fb-ml2p8 1/1 Running 0 12s
dapr-operator-7b57d884cd-6spcq 1/1 Running 0 12s
dapr-placement-86b76f6545-vkc6f 1/1 Running 0 12s
dapr-sentry-7c7bf75d8b-dcnhm 1/1 Running 0 12s
dapr-sidecar-injector-7b847db96f-kwst9 1/1 Running 0 12s
dapr-dashboard-86b94bb768-w4wmj 1/1 Running 0 39s
dapr-operator-67d7d7bb6c-qqkk7 1/1 Running 0 39s
dapr-placement-server-0 1/1 Running 0 39s
dapr-sentry-647759cd46-nwzkw 1/1 Running 0 39s
dapr-sidecar-injector-74648c9dcb-px2m5 1/1 Running 0 39s
```
You can verify the health and version of the control plane using the Dapr CLI:
@ -166,11 +167,12 @@ You can verify the health and version of the control plane using the Dapr CLI:
```bash
dapr status -k
NAME NAMESPACE HEALTHY STATUS VERSION AGE CREATED
dapr-placement dapr-system True Running 0.9.0 12s 2020-07-24 15:38.15
dapr-operator dapr-system True Running 0.9.0 12s 2020-07-24 15:38.15
dapr-sidecar-injector dapr-system True Running 0.9.0 12s 2020-07-24 15:38.16
dapr-sentry dapr-system True Running 0.9.0 12s 2020-07-24 15:38.15
NAME NAMESPACE HEALTHY STATUS REPLICAS VERSION AGE CREATED
dapr-sidecar-injector dapr-system True Running 1 1.0.0-rc.1 1m 2020-11-16 14:42.19
dapr-sentry dapr-system True Running 1 1.0.0-rc.1 1m 2020-11-16 14:42.19
dapr-dashboard dapr-system True Running 1 0.3.0 1m 2020-11-16 14:42.19
dapr-operator dapr-system True Running 1 1.0.0-rc.1 1m 2020-11-16 14:42.19
dapr-placement-server dapr-system True Running 1 1.0.0-rc.1 1m 2020-11-16 14:42.19
```
*Note: If new fields have been added to the target Helm Chart being upgraded to, the `helm upgrade` command will fail. If that happens, you need to find which new fields have been added in the new chart and add them as parameters to the upgrade command, for example: `--set <field-name>=<value>`.*
@ -181,7 +183,7 @@ The last step is to update pods that are running Dapr to pick up the new version
To do that, simply issue a rollout restart command for any deployment that has the `dapr.io/enabled` annotation:
```
kubectl rollout restart deploy/<DEPLOYMENT-NAME>
kubectl rollout restart deploy/<Application deployment name>
```
To see a list of all your Dapr enabled deployments, you can either use the [Dapr Dashboard](https://github.com/dapr/dashboard) or run the following command using the Dapr CLI:
@ -239,7 +241,7 @@ dapr-sidecar-injector-68f868668f-ltxq4 1/1 Running 0 36s
Update pods that are running Dapr to pick up the new version of the Dapr runtime.
```sh
kubectl rollout restart deploy/<DEPLOYMENT-NAME>
kubectl rollout restart deploy/<Application deployment name>
```
Once the deployment is completed, delete the 0.11.x dapr-placement service with the following commands:

View File

@ -40,11 +40,14 @@ By default for Linux/MacOS the `placement` binary is installed in `/$HOME/.dapr/
```bash
$ $HOME/.dapr/bin/placement
INFO[0000] starting Dapr Placement Service -- version 0.8.0 -- commit 74db927 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] log level set to: info instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] metrics server started on :9090/ instance=host.localhost.name scope=dapr.metrics type=log ver=0.8.0
INFO[0000] placement Service started on port 50005 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] Healthz server is listening on :8080 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] starting Dapr Placement Service -- version 1.0.0-rc.1 -- commit 13ae49d instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0000] log level set to: info instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0000] metrics server started on :9090/ instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.metrics type=log ver=1.0.0-rc.1
INFO[0000] Raft server is starting on 127.0.0.1:8201... instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement.raft type=log ver=1.0.0-rc.1
INFO[0000] placement service started on port 50005 instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0000] Healthz server is listening on :8080 instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0001] cluster leadership acquired instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
INFO[0001] leader is established. instance=Nicoletaz-L10.redmond.corp.microsoft.com scope=dapr.placement type=log ver=1.0.0-rc.1
```
@ -57,12 +60,6 @@ Update the state store configuration files to have the Redis host and password m
value: "true"
```
The logs of the placement service are updated whenever a host that uses actors is added or removed similar to the following output:
```
INFO[0446] host added: 192.168.1.6 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0450] host removed: 192.168.1.6 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
```
## Cleanup

View File

@ -15,7 +15,7 @@ description: "Enable Dapr metrics and logs with Azure Monitor for Azure Kubernet
## Enable Prometheus metric scrape using config map
1. Make sure that omsagnets are running
1. Make sure that omsagents are running
```bash
$ kubectl get pods -n kube-system

View File

@ -16,20 +16,28 @@ description: "How to view Dapr metrics in a Grafana dashboard."
1. Install Grafana
Add the Grafana Helm repo:
```bash
helm install grafana stable/grafana -n dapr-monitoring
helm repo add grafana https://grafana.github.io/helm-charts
```
Install the chart:
```bash
helm install grafana grafana/grafana -n dapr-monitoring
```
If you are a Minikube user or want to disable the persistent volume for development purposes, you can disable it by using the following command:
If you are a Minikube user or want to disable the persistent volume for development purposes, you can disable it by using the following command:
```bash
helm install grafana stable/grafana -n dapr-monitoring --set persistence.enabled=false
helm install grafana grafana/grafana -n dapr-monitoring --set persistence.enabled=false
```
2. Retrieve the admin password for Grafana login
```bash
kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{. data.admin-password}" | base64 --decode
kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%
```
@ -131,9 +139,8 @@ You can find `grafana-actor-dashboard.json`, `grafana-sidecar-dashboard.json` an
## References
* [Set up Prometheus and Grafana]({{< ref grafana.md >}})
* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus)
* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
## Example

View File

@ -0,0 +1,159 @@
---
type: docs
title: "How-To: Set up Jaeger for distributed tracing"
linkTitle: "Jaeger"
weight: 3000
description: "Set up Jaeger for distributed tracing"
---
Dapr currently supports two kinds of tracing protocols: OpenCensus and
Zipkin. Since Jaeger is compatible with Zipkin, the Zipkin
protocol can be used to talk to Jaeger.
## Configure self hosted mode
### Setup
The simplest way to start Jaeger is to use the pre-built all-in-one
Jaeger image published to DockerHub:
```bash
docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HTTP_PORT=9412 \
-p 16686:16686 \
-p 9412:9412 \
jaegertracing/all-in-one:1.21
```
Next, create the following YAML files locally:
* **jaeger.yaml**: Note that because we are using the Zipkin protocol to talk to Jaeger,
the type of the exporter in the YAML file below is `exporters.zipkin`,
while the `exporterAddress` is the address of the Jaeger instance.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: zipkin
spec:
type: exporters.zipkin
metadata:
- name: enabled
value: "true"
- name: exporterAddress
value: "http://localhost:9412/api/v2/spans"
```
To launch the application with the new YAML file, you can use the
`--components-path` flag. Assuming that the **jaeger.yaml** file is in the
current directory, you can use:
```bash
dapr run --app-id mynode --app-port 3000 node app.js --components-path .
```
### Viewing Traces
To view traces, in your browser go to http://localhost:16686 and you
will see the Zipkin UI.
## Configure Kubernetes
The following steps show you how to configure Dapr to send distributed tracing data to Jaeger running as a container in your Kubernetes cluster, and how to view the traces.
### Setup
First create the following YAML file to install Jaeger
* jaeger-operator.yaml
```yaml
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
name: "jaeger"
spec:
strategy: allInOne
ingress:
enabled: false
allInOne:
image: jaegertracing/all-in-one:1.13
options:
query:
base-path: /jaeger
```
Now, use the above YAML file to install Jaeger
```bash
# Install Jaeger
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm install jaeger-operator jaegertracing/jaeger-operator
kubectl apply -f jaeger-operator.yaml
# Wait for Jaeger to be up and running
kubectl wait deploy --selector app.kubernetes.io/name=jaeger --for=condition=available
```
Next, create the following YAML files locally:
* **jaeger.yaml**: Note that because we are using the Zipkin protocol to talk to Jaeger,
the type of the exporter in the YAML file below is `exporters.zipkin`,
while the `exporterAddress` is the address of the Jaeger instance.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: zipkin
spec:
type: exporters.zipkin
metadata:
- name: enabled
value: "true"
- name: exporterAddress
value: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans"
```
* **tracing.yaml**
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
Finally, deploy the Dapr component and configuration files:
```bash
kubectl apply -f tracing.yaml
kubectl apply -f jaeger.yaml
```
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:
```yml
annotations:
dapr.io/config: "tracing"
```
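For context, a minimal sketch of where this annotation sits in a Kubernetes deployment; the app name, port, and image are placeholders:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "3000"
        dapr.io/config: "tracing"
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest
```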
That's it! Your sidecar is now configured for use with Jaeger.
### Viewing Tracing Data
To view traces, connect to the Jaeger Service and open the UI:
```bash
kubectl port-forward svc/jaeger-query 16686
```
In your browser, go to `http://localhost:16686` and you will see the Jaeger UI.
![jaeger](/images/jaeger_ui.png)
## References
- [Jaeger Getting Started](https://www.jaegertracing.io/docs/1.21/getting-started/#all-in-one)
- [W3C distributed tracing]({{< ref w3c-tracing >}})

View File

@ -0,0 +1,117 @@
---
type: docs
title: "How-To: Set-up New Relic for Dapr observability"
linkTitle: "New Relic"
weight: 2000
description: "Set-up New Relic for Dapr observability"
---
## Prerequisites
- Perpetually [free New Relic account](https://newrelic.com/signup), 100 GB/month of free data ingest, 1 free full access user, unlimited free basic users
## Configure Zipkin Exporter
Dapr natively captures metrics and traces that can be sent directly to New Relic. The easiest way to export these is by providing a Zipkin exporter configured to send the traces to [New Relic's Trace API](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/trace-api/report-zipkin-format-traces-trace-api#existing-zipkin).
In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: zipkin
namespace: default
spec:
type: exporters.zipkin
metadata:
- name: enabled
value: "true"
- name: exporterAddress
value: "https://trace-api.newrelic.com/trace/v1?Api-Key=<NR-INSIGHTS-INSERT-API-KEY>&Data-Format=zipkin&Data-Format-Version=2"
```
### Viewing Traces
New Relic Distributed Tracing overview
![New Relic Kubernetes Cluster Explorer App](/images/nr-distributed-tracing-overview.png)
New Relic Distributed Tracing details
![New Relic Kubernetes Cluster Explorer App](/images/nr-distributed-tracing-detail.png)
## (optional) New Relic Instrumentation
In order for the integrations to send data to New Relic Telemetry Data Platform, you either need a [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key) or [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key).
### OpenTelemetry instrumentation
Leverage the different language specific OpenTelemetry implementations, for example [New Relic Telemetry SDK and OpenTelemetry support for .NET](https://github.com/newrelic/newrelic-telemetry-sdk-dotnet). In this case, use the [OpenTelemetry Trace Exporter](https://github.com/newrelic/newrelic-telemetry-sdk-dotnet/tree/main/src/NewRelic.OpenTelemetry). See example [here](https://github.com/harrykimpel/quickstarts/blob/master/distributed-calculator/csharp-otel/Startup.cs).
### New Relic Language agent
Similarly to the OpenTelemetry instrumentation, you can also leverage a New Relic language agent. As an example, the [New Relic agent instrumentation for .NET Core](https://docs.newrelic.com/docs/agents/net-agent/installation/install-docker-container) is part of the Dockerfile. See example [here](https://github.com/harrykimpel/quickstarts/blob/master/distributed-calculator/csharp/Dockerfile).
## (optional) Enable New Relic Kubernetes integration
In case Dapr and your applications run in the context of a Kubernetes environment, you can enable additional metrics and logs.
The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/new-relic-prometheus-openmetrics-integration-kubernetes), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs).
### New Relic Kubernetes Cluster Explorer
The [New Relic Kubernetes Cluster Explorer](https://docs.newrelic.com/docs/integrations/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer) provides a unique visualization of the entire data and deployments of the data collected by the Kubernetes integration.
It is a good starting point to observe all your data and dig deeper into any performance issues or incidents happening inside of the application or microservices.
![New Relic Kubernetes Cluster Explorer App](/images/nr-k8s-cluster-explorer-app.png)
Automated correlation is part of the visualization capabilities of New Relic.
### Pod-level details
![New Relic K8s Pod Level Details](/images/nr-k8s-pod-level-details.png)
### Logs in Context
![New Relic K8s Logs In Context](/images/nr-k8s-logs-in-context.png)
## New Relic Dashboards
### Kubernetes Overview
![New Relic Dashboard Kubernetes Overview](/images/nr-dashboard-k8s-overview.png)
### Dapr System Services
![New Relic Dashboard Dapr System Services](/images/nr-dashboard-dapr-system-services.png)
### Dapr Metrics
![New Relic Dashboard Dapr Metrics 1](/images/nr-dashboard-dapr-metrics-1.png)
## New Relic Grafana integration
New Relic teamed up with [Grafana Labs](https://grafana.com/) so you can use the [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform) as a data source for Prometheus metrics and see them in your existing dashboards, seamlessly tapping into the reliability, scale, and security provided by New Relic.
[Grafana dashboard templates](https://github.com/dapr/dapr/blob/227028e7b76b7256618cd3236d70c1d4a4392c9a/grafana/README.md) to monitor Dapr system services and sidecars can easily be used without any changes. New Relic provides a [native endpoint for Prometheus metrics](https://docs.newrelic.com/docs/integrations/grafana-integrations/set-configure/configure-new-relic-prometheus-data-source-grafana) into Grafana. A data source can easily be set up:
![New Relic Grafana Data Source](/images/nr-grafana-datasource.png)
And the exact same dashboard templates from Dapr can be imported to visualize Dapr system services and sidecars.
![New Relic Grafana Dashboard](/images/nr-grafana-dashboard.png)
## New Relic Alerts
All the data that is collected from Dapr, Kubernetes, or any services that run on top of them can be used to set up alerts and notifications in the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence).
## Related Links/References
* [New Relic Account Signup](https://newrelic.com/signup)
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [Distributed Tracing](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/get-started/introduction-distributed-tracing)
* [New Relic Trace API](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/trace-api)
* [New Relic Metric API](https://docs.newrelic.com/docs/telemetry-data-platform/get-data/apis/introduction-metric-api)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys)
* [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)

View File

@ -81,15 +81,16 @@ kubectl create namespace dapr-monitoring
2. Install Prometheus
```bash
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install dapr-prom stable/prometheus -n dapr-monitoring
helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring
```
If you are a Minikube user or want to disable the persistent volume for development purposes, you can disable it by using the following command.
```bash
helm install dapr-prom stable/prometheus -n dapr-monitoring --set alertmanager.persistentVolume.enable=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring \
--set alertmanager.persistentVolume.enable=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
```
3. Validation
@ -114,7 +115,5 @@ dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0
## References
* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus)
* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)

View File

@ -1,6 +1,6 @@
---
type: docs
title: "Configure API authention with OAUTH"
title: "Configure API authorization with OAuth"
linkTitle: "OAuth"
weight: 2000
description: "Enable OAUTH authorization on Dapr endpoints for your web APIs"

View File

@ -101,3 +101,9 @@ curl http://localhost:3500/v1.0/secrets/vault/db-secret \
```shell
curl http://localhost:3500/v1.0/secrets/vault/db-secret?metadata.version_id=15&metadata.version_stage=AAA \
```
> Note: when deploying into a namespace other than `default`, the above query will also have to include the namespace metadata (e.g. `production` below)
```shell
curl http://localhost:3500/v1.0/secrets/vault/db-secret?metadata.version_id=15&metadata.namespace=production
```

View File

@ -217,7 +217,7 @@ curl http://localhost:3500/v1.0/state/myRedisStore/bulk \
-H "Content-Type: application/json" \
-d '{
"keys": [ "key1", "key2" ],
"parallelism": 10,
"parallelism": 10
}'
```
@ -234,7 +234,7 @@ curl http://localhost:3500/v1.0/state/myRedisStore/bulk \
"key": "key2",
"data": "value2",
"etag": "1"
},
}
]
```
To pass metadata as a query parameter:
@ -448,7 +448,7 @@ curl -X POST http://localhost:3500/v1.0/state/starwars \
"etag": "xxxxx",
"options": {
"concurrency": "first-write",
"consistency": "strong",
"consistency": "strong"
}
}
]'

View File

@ -0,0 +1,24 @@
---
type: docs
title: "invokeGet CLI command reference"
linkTitle: "invokeGet"
description: "Detailed information on the invokeGet CLI command"
---
## Description
Issue HTTP GET to Dapr app
## Usage
```bash
dapr invokeGet [flags]
```
## Flags
| Name | Environment Variable | Default | Description
| --- | --- | --- | --- |
| `--app-id`, `-a` | | | The app ID to invoke |
| `--help`, `-h` | | | Help for invokeGet |
| `--method`, `-m` | | | The method to invoke |
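For illustration, invoking a GET method on an app might look like this (the app ID and method are placeholders):
```bash
dapr invokeGet --app-id nodeapp --method neworder
```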

View File

@ -0,0 +1,25 @@
---
type: docs
title: "invokePost CLI command reference"
linkTitle: "invokePost"
description: "Detailed information on the invokePost CLI command"
---
## Description
Issue HTTP POST to Dapr app with an optional payload
## Usage
```bash
dapr invokePost [flags]
```
## Flags
| Name | Environment Variable | Default | Description
| --- | --- | --- | --- |
| `--app-id`, `-a` | | | The app ID to invoke |
| `--help`, `-h` | | | Help for invokePost |
| `--method`, `-m` | | | The method to invoke |
| `--payload`, `-p` | | | (optional) a json payload |
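For illustration, invoking a POST method with a JSON payload might look like this (the app ID, method, and payload are placeholders):
```bash
dapr invokePost --app-id nodeapp --method neworder --payload '{"id":"1234"}'
```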

View File

@ -22,4 +22,4 @@ dapr publish [flags]
| `--data`, `-d` | | | The JSON serialized string (optional) |
| `--help`, `-h` | | | Print this help message |
| `--pubsub` | | | The name of the pub/sub component
| `--topic`, `-t` | | | The topic to be published to |
| `--topic`, `-t` | | | The topic to be published to |

View File

@ -22,4 +22,4 @@ dapr status -k
| Name | Environment Variable | Default | Description
| --- | --- | --- | --- |
| `--help`, `-h` | | | Print this help message |
| `--kubernetes`, `-k` | | `false` | Show the health status of Dapr services on Kubernetes cluster |
| `--kubernetes`, `-k` | | `false` | Show the health status of Dapr services on Kubernetes cluster |

24 binary image files added (not shown); sizes range from 46 KiB to 427 KiB.