Merge branch 'issue_3868' of https://github.com/hhunter-ms/docs into issue_3868

This commit is contained in:
Hannah Hunter 2024-02-01 14:44:55 -05:00
commit 4fefea73f3
29 changed files with 587 additions and 47 deletions

View File

@ -0,0 +1,36 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/ubuntu
{
"name": "Ubuntu",
// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
"image": "mcr.microsoft.com/devcontainers/base:jammy",
"features": {
"ghcr.io/devcontainers/features/go:1": {
"version": "latest"
},
"ghcr.io/devcontainers/features/hugo:1": {
"extended": true,
"version": "latest"
},
"ghcr.io/devcontainers/features/node:1": {
"nodeGypDependencies": true,
"version": "lts",
"nvmVersion": "latest"
}
},
"customizations": {
"vscode": {
"extensions": [
"streetsidesoftware.code-spell-checker",
"tamasfe.even-better-toml",
"davidanson.vscode-markdownlint",
"budparr.language-hugo-vscode"
],
"settings": {
"git.alwaysSignOff": true
}
}
},
"forwardPorts": [1313],
"postAttachCommand": "bash scripts/init-container.sh"
}

.gitattributes
View File

@ -0,0 +1,3 @@
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf

View File

@ -29,21 +29,44 @@ The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https:
The [daprdocs](./daprdocs) directory contains the hugo project, markdown files, and theme configurations.
## Pre-requisites
## Setup with a devcontainer
This repository comes with a [devcontainer](/.devcontainer/devcontainer.json) configuration that automatically installs all the required dependencies and VSCode extensions to build and run the docs.
This devcontainer can be used to develop locally with VSCode or via GitHub Codespaces completely in the browser. Other IDEs that support [devcontainers](https://containers.dev/) can be used but won't have the extensions preconfigured and will likely have different performance characteristics.
### Pre-requisites
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
- [VSCode](https://code.visualstudio.com/download)
### Environment setup
1. [Fork](https://github.com/dapr/docs/fork) and clone this repository.
1. Open the forked repository in VS Code
```sh
code .
```
1. When prompted, click "Reopen in Container" to open the repository in the devcontainer.
Continue with the [Run local server](#run-local-server) steps.
## Setup without a devcontainer
### Pre-requisites
- [Hugo extended version](https://gohugo.io/getting-started/installing)
- [Node.js](https://nodejs.org/en/)
## Environment setup
### Environment setup
1. Ensure pre-requisites are installed
2. Clone this repository
1. Ensure pre-requisites are installed.
1. [Fork](https://github.com/dapr/docs/fork) and clone this repository.
```sh
git clone https://github.com/dapr/docs.git
```
3. Change to daprdocs directory:
1. Change to daprdocs directory:
```sh
cd ./daprdocs
@ -63,7 +86,7 @@ npm install
## Run local server
1. Make sure you're still in the `daprdocs` directory
1. Make sure you're in the `daprdocs` directory
2. Run
```sh
@ -72,14 +95,13 @@ hugo server
3. Navigate to `http://localhost:1313/`
## Update docs
1. Fork repo into your account
1. Create new branch
1. Commit and push changes to forked branch
1. Submit pull request from downstream branch to the upstream branch for the correct version you are targeting
1. Staging site will automatically get created and linked to PR to review and test
1. Ensure you are working in your forked repo
2. Create a new branch
3. Commit and push changes to your forked branch (see the example commands below)
4. Submit a pull request from the downstream branch to the upstream branch for the version you are targeting
5. A staging site is automatically created and linked to the PR for review and testing
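A minimal sketch of steps 2-4 (the branch name, file path, and commit message are placeholders; the `-s` sign-off matches the `git.alwaysSignOff` setting configured by the devcontainer):

```sh
# Create a working branch in your fork (name is illustrative)
git checkout -b fix/docs-typo

# Stage and commit with a sign-off, then push the branch to your fork
git add daprdocs/content/en/some-page.md
git commit -s -m "Fix typo in some-page"
git push origin fix/docs-typo
# Finally, open a pull request against the upstream branch for the version you are targeting
```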
## Code of Conduct

View File

@ -30,6 +30,7 @@ If you haven't already forked the repo, creating the Codespace will also create
- [dapr/dapr](https://github.com/dapr/dapr)
- [dapr/components-contrib](https://github.com/dapr/components-contrib)
- [dapr/cli](https://github.com/dapr/cli)
- [dapr/docs](https://github.com/dapr/docs)
- [dapr/python-sdk](https://github.com/dapr/python-sdk)
## Developing Dapr Components in a Codespace

View File

@ -8,6 +8,8 @@ description: Get started with contributing to the Dapr docs
In this guide, you'll learn how to contribute to the [Dapr docs repository](https://github.com/dapr/docs). Since Dapr docs are published to [docs.dapr.io](https://docs.dapr.io), you must make sure your contributions compile and publish correctly.
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/uPYuXcaEs-c" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
## Prerequisites
Before contributing to the Dapr docs:

View File

@ -85,4 +85,5 @@ Watch this [demo video of the Cryptography API from the Dapr Community Call #83]
## Related links
- [Cryptography overview]({{< ref cryptography-overview.md >}})
- [Cryptography component specs]({{< ref supported-cryptography >}})
- [Cryptography component specs]({{< ref supported-cryptography >}})
- [Cryptography API reference doc]({{< ref cryptography_api >}})

View File

@ -28,9 +28,14 @@ The `filter` specifies the query conditions in the form of a tree, where each no
The following operations are supported:
| Operator | Operands | Description |
|----------|-------------|--------------|
| `EQ` | key:value | key == value |
| Operator | Operands | Description |
|----------|-------------|--------------------------------------------------------------|
| `EQ` | key:value | key == value |
| `NEQ` | key:value | key != value |
| `GT` | key:value | key > value |
| `GTE` | key:value | key >= value |
| `LT` | key:value | key < value |
| `LTE` | key:value | key <= value |
| `IN` | key:[]value | key == value[0] OR key == value[1] OR ... OR key == value[n] |
| `AND` | []operation | operation[0] AND operation[1] AND ... AND operation[n] |
| `OR` | []operation | operation[0] OR operation[1] OR ... OR operation[n] |
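For illustration, a query combining several of these operators against the alpha state query endpoint might look like the following sketch (the store name, keys, and values are made up):

```bash
curl -X POST http://localhost:3500/v1.0-alpha1/state/statestore/query \
  -H "Content-Type: application/json" \
  -d '{
        "filter": {
          "AND": [
            { "EQ": { "person.org": "Dev Ops" } },
            { "IN": { "state": ["CA", "WA"] } }
          ]
        }
      }'
```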

View File

@ -162,7 +162,7 @@ APIs that generate random numbers, random UUIDs, or the current date are _non-de
For example, instead of this:
{{< tabs ".NET" Java Go >}}
{{< tabs ".NET" Java JavaScript Go >}}
{{% codetab %}}
@ -188,8 +188,20 @@ string randomString = GetRandomString();
{{% codetab %}}
```javascript
// DON'T DO THIS!
const currentTime = new Date();
const newIdentifier = uuidv4();
const randomString = getRandomString();
```
{{% /codetab %}}
{{% codetab %}}
```go
// DON'T DO THIS!
```
{{% /codetab %}}
@ -198,7 +210,7 @@ string randomString = GetRandomString();
Do this:
{{< tabs ".NET" Java Go >}}
{{< tabs ".NET" Java JavaScript Go >}}
{{% codetab %}}
@ -224,12 +236,23 @@ String randomString = context.callActivity(GetRandomString.class.getName(), Stri
{{% codetab %}}
```go
```javascript
// Do this!!
const currentTime = context.getCurrentUtcDateTime();
const randomString = yield context.callActivity(getRandomString);
```
{{% /codetab %}}
{{% codetab %}}
```go
// Do this!!
```
{{% /codetab %}}
{{< /tabs >}}
@ -240,7 +263,7 @@ Instead, workflows should interact with external state _indirectly_ using workfl
For example, instead of this:
{{< tabs ".NET" Java Go >}}
{{< tabs ".NET" Java JavaScript Go >}}
{{% codetab %}}
@ -265,6 +288,25 @@ HttpResponse<String> response = HttpClient.newBuilder().build().send(request, Ht
{{% codetab %}}
```javascript
// DON'T DO THIS!
// Accessing an Environment Variable (Node.js)
const configuration = process.env.MY_CONFIGURATION;
fetch('https://postman-echo.com/get')
.then(response => response.text())
.then(data => {
console.log(data);
})
.catch(error => {
console.error('Error:', error);
});
```
{{% /codetab %}}
{{% codetab %}}
```go
// DON'T DO THIS!
```
@ -275,7 +317,7 @@ HttpResponse<String> response = HttpClient.newBuilder().build().send(request, Ht
Do this:
{{< tabs ".NET" Java Go >}}
{{< tabs ".NET" Java JavaScript Go >}}
{{% codetab %}}
@ -297,6 +339,17 @@ String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data
{{% /codetab %}}
{{% codetab %}}
```javascript
// Do this!!
const configuration = workflowInput.getConfiguration(); // imaginary workflow input argument
const data = yield ctx.callActivity(makeHttpCall, "https://example.com/api/data");
```
{{% /codetab %}}
{{% codetab %}}
```go
@ -304,7 +357,6 @@ String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data
```
{{% /codetab %}}
{{< /tabs >}}

View File

@ -162,9 +162,9 @@ apps:
The following rules apply for all the paths present in the template file:
- If the path is absolute, it is used as is.
- All relative paths under command section should be provided relative to the template file path.
- All relative paths under common section should be provided relative to the template file path.
- `appDirPath` under apps section should be provided relative to the template file path.
- All relative paths under app section should be provided relative to the `appDirPath`.
- All other relative paths under the apps section should be provided relative to the `appDirPath` (see the example layout below).
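As a sketch of how these rules combine (the layout, app ID, and fields shown are illustrative and not the full template schema):

```yaml
# ./dapr.yaml - the template file
version: 1
common:
  resourcesPath: ./resources        # resolved relative to this template file
apps:
  - appID: checkout
    appDirPath: ./checkout/         # resolved relative to this template file
    resourcesPath: ./resources      # resolved relative to appDirPath, i.e. ./checkout/resources
    command: ["python3", "app.py"]
```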
{{% /codetab %}}

View File

@ -23,9 +23,9 @@ Dapr initialization includes:
1. Running a **Dapr placement service container instance** for local actor support.
{{% alert title="Docker" color="primary" %}}
The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{<ref self-hosted-no-docker.md>}})), the next steps in this guide assume the recommended Docker development environment.
The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{< ref self-hosted-no-docker.md >}}), the next steps in this guide assume the recommended Docker development environment.
You can also install [Podman](https://podman.io/) in place of Docker. Read more about [initializing Dapr using Podman]({{<ref dapr-init.md>}}).
You can also install [Podman](https://podman.io/) in place of Docker. Read more about [initializing Dapr using Podman]({{< ref dapr-init.md >}}).
{{% /alert %}}
### Step 1: Open an elevated terminal
@ -54,12 +54,35 @@ Run Windows Terminal or command prompt as administrator.
### Step 2: Run the init CLI command
{{< tabs "Linux/MacOS" "Windows">}}
{{% codetab %}}
Install the latest Dapr runtime binaries:
```bash
dapr init
```
**If you are installing on macOS with Apple silicon and Docker,** you may need to perform the following workaround to enable `dapr init` to talk to Docker without using Kubernetes.
1. Navigate to **Docker Desktop** > **Settings** > **Advanced**.
1. Select the **Enable default Docker socket** checkbox.
{{% /codetab %}}
{{% codetab %}}
Install the latest Dapr runtime binaries:
```bash
dapr init
```
{{% /codetab %}}
{{< /tabs >}}
### Step 3: Verify Dapr version
```bash

View File

@ -51,6 +51,20 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory:
cd pub_sub/python/sdk
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./checkout
pip3 install -r requirements.txt
cd ..
cd ./order-processor
pip3 install -r requirements.txt
cd ..
cd ./order-processor-fastapi
pip3 install -r requirements.txt
cd ..
```
### Step 3: Run the publisher and subscriber
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -215,6 +229,17 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory:
cd pub_sub/javascript/sdk
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
npm install
cd ..
cd ./checkout
npm install
cd ..
```
### Step 3: Run the publisher and subscriber
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -352,6 +377,18 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory:
cd pub_sub/csharp/sdk
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
dotnet restore
dotnet build
cd ../checkout
dotnet restore
dotnet build
cd ..
```
### Step 3: Run the publisher and subscriber
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -497,6 +534,17 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory:
cd pub_sub/java/sdk
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
mvn clean install
cd ..
cd ./checkout
mvn clean install
cd ..
```
### Step 3: Run the publisher and subscriber
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -647,6 +695,16 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory:
cd pub_sub/go/sdk
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
go build .
cd ../checkout
go build .
cd ..
```
### Step 3: Run the publisher and subscriber
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -878,7 +936,7 @@ with DaprClient() as client:
### Step 5: View the Pub/sub outputs
Notice, as specified in the code above, the publisher pushes a random number to the Dapr sidecar while the subscriber receives it.
The publisher sends orders to the Dapr sidecar while the subscriber receives them.
Publisher output:

View File

@ -48,6 +48,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire
cd service_invocation/python/http
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
pip3 install -r requirements.txt
cd ../checkout
pip3 install -r requirements.txt
cd ..
```
### Step 3: Run the `order-processor` and `checkout` services
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -184,6 +194,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire
cd service_invocation/javascript/http
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
npm install
cd ../checkout
npm install
cd ..
```
### Step 3: Run the `order-processor` and `checkout` services
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -314,6 +334,18 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire
cd service_invocation/csharp/http
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
dotnet restore
dotnet build
cd ../checkout
dotnet restore
dotnet build
cd ..
```
### Step 3: Run the `order-processor` and `checkout` services
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -448,6 +480,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire
cd service_invocation/java/http
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
mvn clean install
cd ../checkout
mvn clean install
cd ..
```
### Step 3: Run the `order-processor` and `checkout` services
With the following command, simultaneously run the following services alongside their own Dapr sidecars:
@ -577,6 +619,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire
cd service_invocation/go/http
```
Install the dependencies for the `order-processor` and `checkout` apps:
```bash
cd ./order-processor
go build .
cd ../checkout
go build .
cd ..
```
### Step 3: Run the `order-processor` and `checkout` services
With the following command, simultaneously run the following services alongside their own Dapr sidecars:

View File

@ -48,6 +48,12 @@ In a terminal window, navigate to the `order-processor` directory.
cd state_management/python/sdk/order-processor
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
Run the `order-processor` service alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}).
```bash
@ -163,6 +169,14 @@ Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quic
git clone https://github.com/dapr/quickstarts.git
```
Install the dependencies for the `order-processor` app:
```bash
cd ./order-processor
npm install
cd ..
```
### Step 2: Manipulate service state
In a terminal window, navigate to the `order-processor` directory.
@ -171,6 +185,12 @@ In a terminal window, navigate to the `order-processor` directory.
cd state_management/javascript/sdk/order-processor
```
Install the dependencies:
```bash
npm install
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
@ -297,6 +317,13 @@ In a terminal window, navigate to the `order-processor` directory.
cd state_management/csharp/sdk/order-processor
```
Install the dependencies:
```bash
dotnet restore
dotnet build
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
@ -557,6 +584,12 @@ In a terminal window, navigate to the `order-processor` directory.
cd state_management/go/sdk/order-processor
```
Install the dependencies:
```bash
go build .
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash

View File

@ -113,6 +113,43 @@ If you are using Docker Desktop, verify that you have [the recommended settings]
1. Navigate to `http://localhost:9999` to validate a successful setup.
## Install metrics-server on the Kind Kubernetes Cluster
1. Get the metrics-server manifest:
```bash
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```
1. Add the `--kubelet-insecure-tls` parameter to the components.yaml file, as shown below:
```yaml
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --kubelet-insecure-tls <==== Add this
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
```
1. Apply the modified manifest:
```bash
kubectl apply -f components.yaml
```
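To confirm the deployment is serving metrics, a quick check with standard kubectl commands (not part of the original steps) is:

```bash
kubectl -n kube-system rollout status deployment metrics-server
kubectl top nodes
```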
## Related links
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})
- Learn how to [deploy Dapr on your cluster]({{< ref kubernetes-deploy.md >}})

View File

@ -251,7 +251,7 @@ To use Mariner-based images for Dapr, you need to add `-mariner` to your Docker
In the Dapr CLI, you can switch to using Mariner-based images with the `--image-variant` flag.
```sh
dapr init --image-variant mariner
dapr init -k --image-variant mariner
```
{{% /codetab %}}

View File

@ -45,6 +45,8 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
| January 17th 2024 | 1.12.4</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported (current) | [v1.12.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.4) |
| January 2nd 2024 | 1.12.3</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported (current) | [v1.12.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.3) |
| November 18th 2023 | 1.12.2</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported (current) | [v1.12.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.2) |
| November 16th 2023 | 1.12.1</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported | [v1.12.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.1) |
| October 11th 2023 | 1.12.0</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.0 </br>PHP 1.1.0 </br>Python 1.11.0 </br>.NET 1.12.0 </br>JS 3.1.2 | 0.14.0 | Supported | [v1.12.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.0) |
@ -120,7 +122,8 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| 1.9.0 | N/A | 1.9.6 |
| 1.10.0 | N/A | 1.10.8 |
| 1.11.0 | N/A | 1.11.4 |
| 1.12.0 | N/A | 1.12.0 |
| 1.12.0 | N/A | 1.12.4 |
## Upgrade on Hosting platforms

View File

@ -0,0 +1,131 @@
---
type: docs
title: "Cryptography API reference"
linkTitle: "Cryptography API"
description: "Detailed documentation on the cryptography API"
weight: 1300
---
Dapr provides cross-platform and cross-language support for encryption and decryption via the
cryptography building block. Besides the [language-specific SDKs]({{< ref sdks >}}), a developer can invoke these capabilities using
the HTTP API endpoints below.
> The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
> recommended as they implement the gRPC APIs, providing higher performance and capabilities than the HTTP APIs.
## Encrypt Payload
This endpoint lets you encrypt a value provided as a byte array using a specified key and crypto component.
### HTTP Request
```
PUT http://localhost:<daprPort>/v1.0/crypto/<crypto-store-name>/encrypt
```
#### URL Parameters
| Parameter | Description |
|-------------------|-------------------------------------------------------------|
| daprPort | The Dapr port |
| crypto-store-name | The name of the crypto store to get the encryption key from |
> Note, all URL parameters are case-sensitive.
#### Headers
Additional encryption parameters are configured by setting headers with the appropriate
values. The following table details the required and optional headers to set with every
encryption request.
| Header Key | Description | Allowed Values | Required |
|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|----------------------------------------------------------|
| dapr-key-name | The name of the key to use for the encryption operation | | Yes |
| dapr-key-wrap-algorithm | The key wrap algorithm to use | `A256KW`, `A128CBC`, `A192CBC`, `RSA-OAEP-256` | Yes |
| dapr-omit-decryption-key-name | If true, omits the decryption key name (header `dapr-decryption-key-name`) from the output. If false, includes the decryption key name specified in the `dapr-decryption-key-name` header. | The following values are accepted as true: `y`, `yes`, `true`, `t`, `on`, `1` | No |
| dapr-decryption-key-name | If `dapr-omit-decryption-key-name` is true, this contains the name of the intended decryption key to include in the output. | | Required only if `dapr-omit-decryption-key-name` is true |
| dapr-data-encryption-cipher | The cipher to use for the encryption operation | `aes-gcm` or `chacha20-poly1305` | No |
### HTTP Response
#### Response Body
The response to an encryption request will have its content type header set to `application/octet-stream` as it
returns an array of bytes with the encrypted payload.
#### Response Codes
| Code | Description |
|------|-------------------------------------------------------------------------|
| 200 | OK |
| 400 | Crypto provider not found |
| 500  | Request formatted correctly, error in Dapr code or underlying component |
### Examples
```shell
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "dapr-key-wrap-algorithm: aes-gcm" \
-H "Content-Type: application/octet-string" \
--data-binary "\x68\x65\x6c\x6c\x6f\x20\x77\x6f\x72\x6c\x64"
```
> The above command sends an array of UTF-8 encoded bytes representing "hello world" and would return
> a stream of 8-bit values in the response similar to the following containing the encrypted payload:
```bash
gAAAAABhZfZ0Ywz4dQX8y9J0Zl5v7w6Z7xq4jV3cW9o2l4pQ0YD1LdR0Zk7zIYi4n2Ll7t6f0Z4X7r8x9o6a8GyL0X1m9Q0Z0A==
```
## Decrypt Payload
This endpoint lets you decrypt a value provided as a byte array using a specified key and crypto component.
### HTTP Request
```
PUT http://localhost:<daprPort>/v1.0/crypto/<crypto-store-name>/decrypt
```
#### URL Parameters
| Parameter | Description |
|-------------------|-------------------------------------------------------------|
| daprPort | The Dapr port |
| crypto-store-name | The name of the crypto store to get the decryption key from |
> Note, all URL parameters are case-sensitive.
#### Headers
Additional decryption parameters are configured by setting headers with the appropriate values. The following table
details the required and optional headers to set with every decryption request.
| Header Key | Description | Required |
|---------------|----------------------------------------------------------|----------|
| dapr-key-name | The name of the key to use for the decryption operation. | Yes |
### HTTP Response
#### Response Body
The response to a decryption request will have its content type header set to `application/octet-stream` as it
returns an array of bytes representing the decrypted payload.
#### Response Codes
| Code | Description |
|------|-------------------------------------------------------------------------|
| 200 | OK |
| 400 | Crypto provider not found |
| 500  | Request formatted correctly, error in Dapr code or underlying component |
### Examples
```bash
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "Content-Type: application/octet-stream" \
--data-binary "gAAAAABhZfZ0Ywz4dQX8y9J0Zl5v7w6Z7xq4jV3cW9o2l4pQ0YD1LdR0Zk7zIYi4n2Ll7t6f0Z4X7r8x9o6a8GyL0X1m9Q0Z0A=="
```
> The above command sends a base-64 encoded string of the encrypted message payload and returns a response with
> the content type header set to `application/octet-stream` and the response body `hello world`.
```bash
hello world
```
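Putting the two endpoints together, a rough round-trip sketch (the store name, key name, and file name are illustrative) encrypts a payload to a file and then feeds that file to the decrypt endpoint:

```bash
# Encrypt "hello world" and save the returned ciphertext stream to a file
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
  -X PUT \
  -H "dapr-key-name: myCryptoKey" \
  -H "dapr-key-wrap-algorithm: A256KW" \
  -H "Content-Type: application/octet-stream" \
  --data-binary "hello world" \
  --output encrypted.bin

# Decrypt the saved ciphertext back to plaintext
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
  -X PUT \
  -H "dapr-key-name: myCryptoKey" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @encrypted.bin
```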

View File

@ -3,7 +3,7 @@ type: docs
title: "Error codes returned by APIs"
linkTitle: "Error codes"
description: "Detailed reference of the Dapr API error codes"
weight: 1300
weight: 1400
---
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, for example:
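The error code and message below are illustrative of the shape only:

```json
{
  "errorCode": "ERR_STATE_STORE_NOT_FOUND",
  "message": "state store statestore is not found"
}
```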

View File

@ -151,12 +151,12 @@ To use the S3 component, you need to use an existing bucket. Follow the [AWS doc
This component supports **output binding** with the following operations:
- `create` : [Create file](#create-file)
- `get` : [Get file](#get-file)
- `delete` : [Delete file](#delete-file)
- `list`: [List file](#list-files)
- `create` : [Create object](#create-object)
- `get` : [Get object](#get-object)
- `delete` : [Delete object](#delete-object)
- `list`: [List objects](#list-objects)
### Create file
### Create object
To perform a create operation, invoke the AWS S3 binding with a `POST` method and the following JSON body:

View File

@ -69,8 +69,8 @@ spec:
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| brokers | Y | A comma-separated list of Kafka brokers. | `"localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"`
| consumerGroup | N | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. | `"group1"`
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | `"channel1"`
| consumerGroup | N | A Kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored; a combination of the consumer group and a random unique identifier is set for the `consumerID` instead. | `"group1"`
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored; a combination of the consumer group and a random unique identifier is set for the `consumerID` instead. | `"channel1"`
| clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to `"namespace.appID"` for Kubernetes mode or `"appID"` for Self-Hosted mode. | `"my-namespace.my-dapr-app"`, `"my-dapr-app"`
| authRequired | N | *Deprecated* Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"`
| authType | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, or `oidc` | `"password"`, `"none"`
@ -315,6 +315,44 @@ auth:
secretStore: <SECRET_STORE_NAME>
```
## Consuming from multiple topics
When consuming from multiple topics using a single pub/sub component, there is no guarantee about how the consumers in your consumer group are balanced across the topic partitions.
For instance, let's say you are subscribing to two topics with 10 partitions per topic and you have 20 replicas of your service consuming from the two topics. There is no guarantee that 10 will be assigned to the first topic and 10 to the second topic. Instead, the partitions could be divided unequally, with more than 10 assigned to the first topic and the rest assigned to the second topic.
This can result in idle consumers listening to the first topic and over-extended consumers on the second topic, or vice versa. This same behavior can be observed when using auto-scalers such as HPA or KEDA.
If you run into this particular issue, it is recommended that you configure a single pub/sub component per topic with uniquely defined consumer groups per component. This guarantees that all replicas of your service are fully allocated to the unique consumer group, where each consumer group targets one specific topic.
For example, you may define two Dapr components with the following configuration:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-topic-one
spec:
type: pubsub.kafka
version: v1
metadata:
- name: consumerGroup
value: "{appID}-topic-one"
```
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-topic-two
spec:
type: pubsub.kafka
version: v1
metadata:
- name: consumerGroup
value: "{appID}-topic-two"
```
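Each topic is then addressed through its own component. For example (the topic name and payload are illustrative), an app could publish through the first component over the Dapr HTTP API:

```bash
curl -X POST http://localhost:3500/v1.0/publish/kafka-pubsub-topic-one/topic-one \
  -H "Content-Type: application/json" \
  -d '{"orderId": 42}'
```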
## Sending and receiving multiple messages
The Apache Kafka component supports sending and receiving multiple messages in a single operation using the bulk Pub/sub API.

View File

@ -30,6 +30,8 @@ spec:
value: "[aws_secret_key]"
- name: sessionToken
value: "[aws_session_token]"
- name: prefix
value: "[secret_name]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage.
@ -43,6 +45,7 @@ The above example uses secrets as plain strings. It is recommended to use a loca
| accessKey | Y | The AWS Access Key to access this resource | `"key"` |
| secretKey | Y | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| sessionToken | N | The AWS session token to use | `"sessionToken"` |
| prefix | N | Allows you to specify more than one SSM parameter store secret store component. | `"prefix"` |
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.

View File

@ -172,6 +172,42 @@ az cosmosdb sql role assignment create \
--role-definition-id "$ROLE_ID"
```
## Optimizing Cosmos DB for bulk operation write performance
If you are building a system that only ever reads data from Cosmos DB via key (`id`), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems.
For example, the default Terraform definition for a Cosmos SQL container indexing reads as follows:
```tf
indexing_policy {
indexing_mode = "consistent"
included_path {
path = "/*"
}
}
```
It is possible to force Cosmos DB to only index the `id` and `partitionKey` fields by excluding all other fields from indexing. This can be done by updating the above to read as follows:
```tf
indexing_policy {
# This could also be set to "none" if you are using the container purely as a key-value store. This may be applicable if your container is only going to be used as a distributed cache.
indexing_mode = "consistent"
# Note that included_path has been replaced with excluded_path
excluded_path {
path = "/*"
}
}
```
{{% alert title="Note" color="primary" %}}
This optimization comes at the cost of queries against fields inside of documents within the state store. This would likely impact any stored procedures or SQL queries defined and executed. It is recommended that this optimization only be applied if you are using the Dapr State Management API or Dapr Actors to interact with Cosmos DB.
{{% /alert %}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@ -1 +1 @@
{{- if .Get "short" }}1.12{{ else if .Get "long" }}1.12.0{{ else if .Get "cli" }}1.12.0{{ else }}1.12.0{{ end -}}
{{- if .Get "short" }}1.12{{ else if .Get "long" }}1.12.4{{ else if .Get "cli" }}1.12.0{{ else }}1.12.4{{ end -}}

View File

@ -0,0 +1,4 @@
# Mark all directories as safe for Git to avoid "dubious ownership" errors inside the container
git config --global --add safe.directory '*'

# Initialize the site's submodules (such as the theme) and install the npm dependencies
cd ./daprdocs
git submodule update --init --recursive
npm install

@ -1 +1 @@
Subproject commit 10ef81873b3448fb136c73ad26a9fd2768954c2f
Subproject commit d023a43ba4fd4cddb7aa2c0962cf786f01f58c24

@ -1 +1 @@
Subproject commit 04f7b595b6d19bbf1c42a3364992016c3ae3e40e
Subproject commit a65eddaa4e9217ed5cdf436b3438d2ffd837ba55

@ -1 +1 @@
Subproject commit 6759f19f8374c7c550c709b1fe8118ce738280a8
Subproject commit a9a09ba2acc39bc7e54a5a7092e1c5820818e23c

@ -1 +1 @@
Subproject commit 6e89215f5ca26f8f4d109424e2cad7792b9d8a28
Subproject commit 5c2b40ac94b50f6a5bdb32008f6a47da69946d95

@ -1 +1 @@
Subproject commit c08e71494a644f9ff875941c669c6a1e1f3a3340
Subproject commit ef732090e8e04629ca573d127c5ee187a505aba4