mirror of https://github.com/dapr/docs.git
Merge branch 'v1.13' into issue_3704
commit c1a840ae29
|
@ -1,3 +1,3 @@
|
||||||
# Contributing to Dapr docs
|
# Contributing to Dapr docs
|
||||||
|
|
||||||
Please see [this docs section](https://docs.dapr.io/contributing/) for general guidance on contributions to the Dapr project as well as specific guidelines on contributions to the docs repo.
|
Please see [this docs section](https://docs.dapr.io/contributing/) for general guidance on contributions to the Dapr project as well as specific guidelines on contributions to the docs repo. Learn more about [Dapr bot commands and labels](https://docs.dapr.io/contributing/daprbot/) to improve your docs contributing experience.
|
|
@ -12,7 +12,7 @@ Dapr bot is triggered by a list of commands that helps with common tasks in the
|
||||||
|
|
||||||
| Command | Target | Description | Who can use | Repository |
|
| Command | Target | Description | Who can use | Repository |
|
||||||
| ---------------- | --------------------- | -------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------- |
|
| ---------------- | --------------------- | -------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------- |
|
||||||
| `/assign` | Issue | Assigns an issue to a user or group of users | Anyone | `dapr`, `components-contrib`, `go-sdk` |
|
| `/assign` | Issue | Assigns an issue to a user or group of users | Anyone | `dapr`, `docs`, `quickstarts`, `cli`, `components-contrib`, `go-sdk`, `js-sdk`, `java-sdk`, `python-sdk`, `dotnet-sdk` |
|
||||||
| `/ok-to-test` | Pull request | `dapr`: trigger end to end tests <br/> `components-contrib`: trigger conformance and certification tests | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib` |
|
| `/ok-to-test` | Pull request | `dapr`: trigger end to end tests <br/> `components-contrib`: trigger conformance and certification tests | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib` |
|
||||||
| `/ok-to-perf` | Pull request | Trigger performance tests. | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr` |
|
| `/ok-to-perf` | Pull request | Trigger performance tests. | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr` |
|
||||||
| `/make-me-laugh` | Issue or pull request | Posts a random joke | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib` |
|
| `/make-me-laugh` | Issue or pull request | Posts a random joke | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib` |
|
||||||
|
|
|
@ -14,7 +14,9 @@ Now that you've learned about the [actor building block]({{< ref "actors-overvie
|
||||||
|
|
||||||
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
|
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
|
||||||
|
|
||||||
Invocation of actor methods and reminders reset the idle time, e.g. reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive, if fired for inactive actor, it will activate the actor first. Actor timers do not reset the idle time, so timer firing will not keep the actor active. Timers only fire while the actor is active.
|
Invocation of actor methods, timers, and reminders resets the actor's idle time. For example, a reminder firing keeps the actor active.
|
||||||
|
- Actor reminders fire whether an actor is active or inactive. If a reminder fires for an inactive actor, the actor is activated first.
|
||||||
|
- When an actor timer fires, it resets the idle time; however, timers only fire while the actor is active.
|
||||||
|
|
||||||
The idle timeout and scan interval the Dapr runtime uses to determine whether an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
|
The idle timeout and scan interval the Dapr runtime uses to determine whether an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
|
||||||
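For example, with the Dapr .NET SDK these values can be set through the actor runtime options when the actor service registers its actor types. The following is only a minimal sketch (the `MyActor` type is a stand-in for your own actors), assuming the `Dapr.Actors.AspNetCore` package; your registration code may differ:

```csharp
using Dapr.Actors;
using Dapr.Actors.Runtime;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddActors(options =>
{
    // Register the actor types hosted by this service.
    options.Actors.RegisterActor<MyActor>();

    // How long an idle actor stays in memory before it is deactivated,
    // and how often the runtime scans for idle actors.
    options.ActorIdleTimeout = TimeSpan.FromHours(1);
    options.ActorScanInterval = TimeSpan.FromSeconds(30);
});

var app = builder.Build();

// Expose the configuration and actor invocation endpoints used by the Dapr runtime.
app.MapActorsHandlers();

app.Run();

// A trivial actor type used only to make this sketch compile.
public interface IMyActor : IActor { }

public class MyActor : Actor, IMyActor
{
    public MyActor(ActorHost host) : base(host) { }
}
```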
|
|
||||||
|
|
|
@ -6,16 +6,16 @@ weight: 2000
|
||||||
description: "Learn how to encrypt and decrypt files"
|
description: "Learn how to encrypt and decrypt files"
|
||||||
---
|
---
|
||||||
|
|
||||||
Now that you've read about [Cryptography as a Dapr building block]({{< ref cryptography-overview.md >}}), let's walk through using the cryptography APIs with the SDKs.
|
Now that you've read about [Cryptography as a Dapr building block]({{< ref cryptography-overview.md >}}), let's walk through using the cryptography APIs with the SDKs.
|
||||||
|
|
||||||
{{% alert title="Note" color="primary" %}}
|
{{% alert title="Note" color="primary" %}}
|
||||||
Dapr cryptography is currently in alpha.
|
Dapr cryptography is currently in alpha.
|
||||||
|
|
||||||
{{% /alert %}}
|
{{% /alert %}}
|
||||||
|
|
||||||
## Encrypt
|
## Encrypt
|
||||||
|
|
||||||
{{< tabs "JavaScript" "Go" >}}
|
{{< tabs "JavaScript" "Go" ".NET" >}}
|
||||||
|
|
||||||
{{% codetab %}}
|
{{% codetab %}}
|
||||||
|
|
||||||
|
@ -136,12 +136,32 @@ if err != nil {
|
||||||
|
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
|
{{% codetab %}}
|
||||||
|
|
||||||
|
<!-- .NET -->
|
||||||
|
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt data in a string or a byte array:
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
using var client = new DaprClientBuilder().Build();
|
||||||
|
|
||||||
|
const string componentName = "azurekeyvault"; //Change this to match your cryptography component
|
||||||
|
const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store
|
||||||
|
|
||||||
|
const string plainText = "This is the value we're going to encrypt today";
|
||||||
|
|
||||||
|
//Encode the string to a UTF-8 byte array and encrypt it
|
||||||
|
var plainTextBytes = Encoding.UTF8.GetBytes(plainText);
|
||||||
|
var encryptedBytesResult = await client.EncryptAsync(componentName, plainTextBytes, keyName, new EncryptionOptions(KeyWrapAlgorithm.Rsa));
|
||||||
|
```
|
||||||
|
|
||||||
|
{{% /codetab %}}
|
||||||
|
|
||||||
{{< /tabs >}}
|
{{< /tabs >}}
|
||||||
|
|
||||||
|
|
||||||
## Decrypt
|
## Decrypt
|
||||||
|
|
||||||
{{< tabs "JavaScript" "Go" >}}
|
{{< tabs "JavaScript" "Go" ".NET" >}}
|
||||||
|
|
||||||
{{% codetab %}}
|
{{% codetab %}}
|
||||||
|
|
||||||
|
@ -186,6 +206,29 @@ out, err := sdkClient.Decrypt(context.Background(), rf, dapr.EncryptOptions{
|
||||||
|
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
|
{{% codetab %}}
|
||||||
|
|
||||||
|
<!-- .NET -->
|
||||||
|
To decrypt a string, use the `DecryptAsync` gRPC API in your project.
|
||||||
|
|
||||||
|
In the following example, we'll take a byte array (such as from the example above) and decrypt it to a UTF-8 encoded string.
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
public async Task<string> DecryptBytesAsync(byte[] encryptedBytes)
|
||||||
|
{
|
||||||
|
using var client = new DaprClientBuilder().Build();
|
||||||
|
|
||||||
|
const string componentName = "azurekeyvault"; //Change this to match your cryptography component
|
||||||
|
const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store
|
||||||
|
|
||||||
|
var decryptedBytes = await client.DecryptAsync(componentName, encryptedBytes, keyName);
|
||||||
|
var decryptedString = Encoding.UTF8.GetString(decryptedBytes.ToArray());
|
||||||
|
return decryptedString;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
{{% /codetab %}}
|
||||||
|
|
||||||
{{< /tabs >}}
|
{{< /tabs >}}
|
||||||
|
|
||||||
## Next steps
|
## Next steps
|
||||||
|
|
|
@ -146,6 +146,8 @@ Different state store implementations may implicitly put restrictions on the typ
|
||||||
|
|
||||||
Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.
|
Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.
|
||||||
|
|
||||||
|
Workflow state can be purged from a state store, including all its history. Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances.
|
||||||
|
|
||||||
## Workflow scalability
|
## Workflow scalability
|
||||||
|
|
||||||
Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors. The placement service:
|
Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors. The placement service:
|
||||||
|
|
|
@ -63,6 +63,8 @@ You can use the following two techniques to write workflows that may need to sch
|
||||||
|
|
||||||
1. **Use the _continue-as-new_ API**:
|
1. **Use the _continue-as-new_ API**:
|
||||||
Each workflow SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows", like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small.
|
Each workflow SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows", like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small.
|
||||||
|
|
||||||
|
> The _continue-as-new_ API truncates the existing history, replacing it with a new history.
|
||||||
|
|
||||||
1. **Use child workflows**:
|
1. **Use child workflows**:
|
||||||
Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it's scheduled by a parent workflow. Child workflows have:
|
Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it's scheduled by a parent workflow. Child workflows have:
|
||||||
|
@ -159,6 +161,12 @@ The backend implementation is largely decoupled from the workflow core engine or
|
||||||
|
|
||||||
In that sense, it's similar to Dapr's state store abstraction, except designed specifically for workflow. All APIs and programming model features are the same, regardless of which backend is used.
|
In that sense, it's similar to Dapr's state store abstraction, except designed specifically for workflow. All APIs and programming model features are the same, regardless of which backend is used.
|
||||||
|
|
||||||
|
## Purging
|
||||||
|
|
||||||
|
Workflow state can be purged from a state store, removing all of its history and all metadata related to a specific workflow instance. The purge capability is used for workflows that have run to a `COMPLETED`, `FAILED`, or `TERMINATED` state.
|
||||||
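For example, with the Dapr .NET workflow SDK, a terminal-state instance can be purged through the workflow client. This is only a sketch, assuming a running sidecar; the instance ID `my-workflow-instance-123` is a placeholder, and the other SDKs and the workflow HTTP API expose the equivalent capability:

```csharp
using Dapr.Workflow;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Workflows and activities would normally be registered inside this callback;
// here we only need the client that the registration wires up.
services.AddDaprWorkflow(options => { });

await using var provider = services.BuildServiceProvider();
var workflowClient = provider.GetRequiredService<DaprWorkflowClient>();

// Remove the stored history and metadata of an instance that has already
// reached a COMPLETED, FAILED, or TERMINATED state.
await workflowClient.PurgeInstanceAsync("my-workflow-instance-123");
```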
|
|
||||||
|
Learn more in [the workflow API reference guide]({{< ref workflow_api.md >}}).
|
||||||
|
|
||||||
## Limitations
|
## Limitations
|
||||||
|
|
||||||
### Workflow determinism and code restraints
|
### Workflow determinism and code restraints
|
||||||
|
|
|
@ -823,9 +823,8 @@ public class MonitorWorkflow extends Workflow {
|
||||||
}
|
}
|
||||||
|
|
||||||
// Put the workflow to sleep until the determined time
|
// Put the workflow to sleep until the determined time
|
||||||
// Note: ctx.createTimer() method is not supported in the Java SDK yet
|
|
||||||
try {
|
try {
|
||||||
TimeUnit.SECONDS.sleep(nextSleepInterval.getSeconds());
|
ctx.createTimer(nextSleepInterval);
|
||||||
} catch (InterruptedException e) {
|
} catch (InterruptedException e) {
|
||||||
throw new RuntimeException(e);
|
throw new RuntimeException(e);
|
||||||
}
|
}
|
||||||
|
|
|
@ -3,10 +3,11 @@ type: docs
|
||||||
title: "Use the Dapr API"
|
title: "Use the Dapr API"
|
||||||
linkTitle: "Use the Dapr API"
|
linkTitle: "Use the Dapr API"
|
||||||
weight: 30
|
weight: 30
|
||||||
description: "Run a Dapr sidecar and try out the state API"
|
description: "Run a Dapr sidecar and try out the state management API"
|
||||||
---
|
---
|
||||||
|
|
||||||
In this guide, you'll simulate an application by running the sidecar and calling the API directly. After running Dapr using the Dapr CLI, you'll:
|
In this guide, you'll simulate an application by running the sidecar and calling the state management API directly.
|
||||||
|
After running Dapr using the Dapr CLI, you'll:
|
||||||
|
|
||||||
- Save a state object.
|
- Save a state object.
|
||||||
- Read/get the state object.
|
- Read/get the state object.
|
||||||
|
@ -21,7 +22,8 @@ In this guide, you'll simulate an application by running the sidecar and calling
|
||||||
|
|
||||||
### Step 1: Run the Dapr sidecar
|
### Step 1: Run the Dapr sidecar
|
||||||
|
|
||||||
The [`dapr run`]({{< ref dapr-run.md >}}) command launches an application, together with a sidecar.
|
The [`dapr run`]({{< ref dapr-run.md >}}) command normally runs your application and a Dapr sidecar. In this case,
|
||||||
|
it only runs the sidecar since you are interacting with the state management API directly.
|
||||||
|
|
||||||
Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`:
|
Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`:
|
||||||
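If you're following along, the command for this sidecar-only run looks like the following (the app ID and HTTP port match the values used in this guide):

```bash
dapr run --app-id myapp --dapr-http-port 3500
```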
|
|
||||||
|
|
|
@ -15,6 +15,11 @@ You'll use the Dapr CLI as the main tool for various Dapr-related tasks. You can
|
||||||
|
|
||||||
The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments.
|
The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments.
|
||||||
|
|
||||||
|
{{% alert title="Before you begin" color="primary" %}}
|
||||||
|
In Docker Desktop's advanced options, verify you've allowed the default Docker socket to be used.
|
||||||
|
<img src="/images/docker-desktop-setting.png" width=800 style="padding-bottom:15px;">
|
||||||
|
{{% /alert %}}
|
||||||
|
|
||||||
### Step 1: Install the Dapr CLI
|
### Step 1: Install the Dapr CLI
|
||||||
|
|
||||||
{{< tabs Linux Windows MacOS Binaries>}}
|
{{< tabs Linux Windows MacOS Binaries>}}
|
||||||
|
|
|
@ -22,8 +22,12 @@ Dapr initialization includes:
|
||||||
1. Creating a **default components folder** with component definitions for the above.
|
1. Creating a **default components folder** with component definitions for the above.
|
||||||
1. Running a **Dapr placement service container instance** for local actor support.
|
1. Running a **Dapr placement service container instance** for local actor support.
|
||||||
|
|
||||||
|
{{% alert title="Kubernetes Development Environment" color="primary" %}}
|
||||||
|
To initialize Dapr in your local or remote **Kubernetes** cluster for development (including the Redis and Zipkin containers listed above), see [how to initialize Dapr for development on Kubernetes]({{<ref "kubernetes-deploy.md#install-dapr-from-the-official-dapr-helm-chart-with-development-flag">}})
|
||||||
|
{{% /alert %}}
|
||||||
|
|
||||||
{{% alert title="Docker" color="primary" %}}
|
{{% alert title="Docker" color="primary" %}}
|
||||||
The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{< ref self-hosted-no-docker.md >}})), the next steps in this guide assume the recommended Docker development environment.
|
The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{< ref self-hosted-no-docker.md >}}), the next steps in this guide assume the recommended Docker development environment.
|
||||||
|
|
||||||
You can also install [Podman](https://podman.io/) in place of Docker. Read more about [initializing Dapr using Podman]({{< ref dapr-init.md >}}).
|
You can also install [Podman](https://podman.io/) in place of Docker. Read more about [initializing Dapr using Podman]({{< ref dapr-init.md >}}).
|
||||||
{{% /alert %}}
|
{{% /alert %}}
|
||||||
|
@ -66,7 +70,7 @@ dapr init
|
||||||
|
|
||||||
**If you are installing on macOS with Apple silicon and Docker,** you may need to perform the following workaround to enable `dapr init` to talk to Docker without using Kubernetes.
|
**If you are installing on macOS with Apple silicon and Docker,** you may need to perform the following workaround to enable `dapr init` to talk to Docker without using Kubernetes.
|
||||||
1. Navigate to **Docker Desktop** > **Settings** > **Advanced**.
|
1. Navigate to **Docker Desktop** > **Settings** > **Advanced**.
|
||||||
1. Select the **Enable default Docker socket** checkbox.
|
1. Select the **Allow the default Docker socket to be used (requires password)** checkbox.
|
||||||
|
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
|
@ -82,6 +86,7 @@ dapr init
|
||||||
|
|
||||||
{{< /tabs >}}
|
{{< /tabs >}}
|
||||||
|
|
||||||
|
[See the troubleshooting guide if you encounter any error messages regarding Docker not being installed or running.]({{< ref "common_issues.md#dapr-cant-connect-to-docker-when-installing-the-dapr-cli" >}})
|
||||||
|
|
||||||
### Step 3: Verify Dapr version
|
### Step 3: Verify Dapr version
|
||||||
|
|
||||||
|
@ -135,9 +140,14 @@ ls $HOME/.dapr
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
{{% codetab %}}
|
{{% codetab %}}
|
||||||
|
You can verify using either PowerShell or the command line. If using PowerShell, run:
|
||||||
```powershell
|
```powershell
|
||||||
explorer "%USERPROFILE%\.dapr\"
|
explorer "$env:USERPROFILE\.dapr"
|
||||||
|
```
|
||||||
|
|
||||||
|
If using the command line, run:
|
||||||
|
```cmd
|
||||||
|
explorer "%USERPROFILE%\.dapr"
|
||||||
```
|
```
|
||||||
|
|
||||||
**Result:**
|
**Result:**
|
||||||
|
|
|
@ -72,6 +72,51 @@ The `-k` flag initializes Dapr on the Kubernetes cluster in your current context
|
||||||
dapr dashboard -k -n <your-namespace>
|
dapr dashboard -k -n <your-namespace>
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
|
#### Install Dapr from the official Dapr Helm chart (with development flag)
|
||||||
|
|
||||||
|
Adding the `--dev` flag initializes Dapr on the Kubernetes cluster in your current context, with the addition of Redis and Zipkin deployments.
|
||||||
|
|
||||||
|
The steps are similar to [installing from the Dapr Helm chart](#install-dapr-from-an-official-dapr-helm-chart), except for appending the `--dev` flag to the `init` command:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dapr init -k --dev
|
||||||
|
```
|
||||||
|
|
||||||
|
Expected output:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
⌛ Making the jump to hyperspace...
|
||||||
|
ℹ️ Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced
|
||||||
|
|
||||||
|
ℹ️ Container images will be pulled from Docker Hub
|
||||||
|
✅ Deploying the Dapr control plane with latest version to your cluster...
|
||||||
|
✅ Deploying the Dapr dashboard with latest version to your cluster...
|
||||||
|
✅ Deploying the Dapr Redis with latest version to your cluster...
|
||||||
|
✅ Deploying the Dapr Zipkin with latest version to your cluster...
|
||||||
|
ℹ️ Applying "statestore" component to Kubernetes "default" namespace.
|
||||||
|
ℹ️ Applying "pubsub" component to Kubernetes "default" namespace.
|
||||||
|
ℹ️ Applying "appconfig" zipkin configuration to Kubernetes "default" namespace.
|
||||||
|
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
|
||||||
|
```
|
||||||
|
|
||||||
|
After a short period of time (or using the `--wait` flag and specifying an amount of time to wait), you can check that the Redis and Zipkin components have been deployed to the cluster.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
kubectl get pods --namespace default
|
||||||
|
```
|
||||||
|
|
||||||
|
Expected output:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
NAME READY STATUS RESTARTS AGE
|
||||||
|
dapr-dev-zipkin-bfb4b45bb-sttz7 1/1 Running 0 159m
|
||||||
|
dapr-dev-redis-master-0 1/1 Running 0 159m
|
||||||
|
dapr-dev-redis-replicas-0 1/1 Running 0 159m
|
||||||
|
dapr-dev-redis-replicas-1 1/1 Running 0 159m
|
||||||
|
dapr-dev-redis-replicas-2 1/1 Running 0 158m
|
||||||
|
```
|
||||||
|
|
||||||
#### Install Dapr from a private Dapr Helm chart
|
#### Install Dapr from a private Dapr Helm chart
|
||||||
|
|
||||||
Installing [Dapr from a private Helm chart](#install-dapr-from-an-official-dapr-helm-chart) can be helpful for when you:
|
Installing [Dapr from a private Helm chart](#install-dapr-from-an-official-dapr-helm-chart) can be helpful for when you:
|
||||||
|
|
|
@ -88,7 +88,11 @@ kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name
|
||||||
|
|
||||||
## Adding API token to client API invocations
|
## Adding API token to client API invocations
|
||||||
|
|
||||||
Once token authentication is configured in Dapr, all clients invoking Dapr API will have to append the API token token to every request:
|
Once token authentication is configured in Dapr, all clients invoking the Dapr API need to include the API token in every request using the `dapr-api-token` HTTP header or gRPC metadata.
|
||||||
|
|
||||||
|
> **Note:** The Dapr SDKs read the [DAPR_API_TOKEN]({{< ref environment >}}) environment variable and set it for you by default.
|
||||||
|
|
||||||
|
<img src="/images/tokens-auth.png" width=800 style="padding-bottom:15px;">
|
||||||
|
|
||||||
### HTTP
|
### HTTP
|
||||||
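For example, a client calling the state API over HTTP would pass the token in the `dapr-api-token` header. The request below is only an illustration; the state store name and payload are placeholders:

```bash
curl http://localhost:3500/v1.0/state/statestore \
  -X POST \
  -H "Content-Type: application/json" \
  -H "dapr-api-token: <token>" \
  -d '[{ "key": "name", "value": "Bruce Wayne" }]'
```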
|
|
||||||
|
|
|
@ -89,11 +89,13 @@ kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name
|
||||||
|
|
||||||
## Authenticating requests from Dapr
|
## Authenticating requests from Dapr
|
||||||
|
|
||||||
Once app token authentication is configured in Dapr, all requests *coming from Dapr* include the token.
|
Once app token authentication is configured using the environment variable or Kubernetes secret `app-api-token`, the Dapr sidecar always includes the HTTP header/gRPC metadata `dapr-api-token: <token>` in its calls to the app. On the app side, authenticate requests coming from Dapr by verifying that the `dapr-api-token` value matches the `app-api-token` value you configured.
|
||||||
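For example, an ASP.NET Core app could validate the token with a small piece of middleware. This is only a sketch: the `APP_API_TOKEN` environment variable and the `/neworder` route are placeholders for however your app receives its copy of the token and whatever endpoints it exposes:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Placeholder: however your app is given the same value as the app-api-token secret.
var expectedToken = Environment.GetEnvironmentVariable("APP_API_TOKEN");

app.Use(async (context, next) =>
{
    // Every call from the Dapr sidecar carries the dapr-api-token header.
    if (!context.Request.Headers.TryGetValue("dapr-api-token", out var token) ||
        token != expectedToken)
    {
        context.Response.StatusCode = StatusCodes.Status401Unauthorized;
        return;
    }

    await next(context);
});

app.MapPost("/neworder", () => Results.Ok("order received"));

app.Run();
```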
|
|
||||||
|
<img src="/images/tokens-auth.png" width=800 style="padding-bottom:15px;">
|
||||||
|
|
||||||
### HTTP
|
### HTTP
|
||||||
|
|
||||||
In case of HTTP, in your code look for the HTTP header `dapr-api-token` in incoming requests:
|
In your code, look for the HTTP header `dapr-api-token` in incoming requests:
|
||||||
|
|
||||||
```text
|
```text
|
||||||
dapr-api-token: <token>
|
dapr-api-token: <token>
|
||||||
|
|
|
@ -6,6 +6,24 @@ weight: 1000
|
||||||
description: "Common issues and problems faced when running Dapr applications"
|
description: "Common issues and problems faced when running Dapr applications"
|
||||||
---
|
---
|
||||||
|
|
||||||
|
This guide covers common issues you may encounter while installing and running Dapr.
|
||||||
|
|
||||||
|
## Dapr can't connect to Docker when installing the Dapr CLI
|
||||||
|
|
||||||
|
If you see the following error message after installing the Dapr CLI and running `dapr init`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
⌛ Making the jump to hyperspace...
|
||||||
|
❌ could not connect to docker. docker may not be installed or running
|
||||||
|
```
|
||||||
|
|
||||||
|
Troubleshoot the error by ensuring:
|
||||||
|
|
||||||
|
1. [The correct containers are running.]({{< ref "install-dapr-selfhost.md#step-4-verify-containers-are-running" >}})
|
||||||
|
1. In Docker Desktop, verify the **Allow the default Docker socket to be used (requires password)** option is selected.
|
||||||
|
|
||||||
|
<img src="/images/docker-desktop-setting.png" width=800 style="padding-bottom:15px;">
|
||||||
|
|
||||||
## I don't see the Dapr sidecar injected to my pod
|
## I don't see the Dapr sidecar injected to my pod
|
||||||
|
|
||||||
There could be several reasons why a sidecar is not injected into a pod.
|
There could be several reasons why a sidecar is not injected into a pod.
|
||||||
|
|
|
@ -182,8 +182,7 @@ POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanc
|
||||||
```
|
```
|
||||||
|
|
||||||
{{% alert title="Note" color="primary" %}}
|
{{% alert title="Note" color="primary" %}}
|
||||||
Purging a workflow purges all of the child workflows created by the workflow instance.
|
Only `COMPLETED`, `FAILED`, or `TERMINATED` workflows can be purged.
|
||||||
|
|
||||||
{{% /alert %}}
|
{{% /alert %}}
|
||||||
|
|
||||||
### URL parameters
|
### URL parameters
|
||||||
|
@ -247,7 +246,7 @@ The API call will provide a JSON response similar to this:
|
||||||
|
|
||||||
Parameter | Description
|
Parameter | Description
|
||||||
--------- | -----------
|
--------- | -----------
|
||||||
`runtimeStatus` | The status of the workflow instance. Values include: `RUNNING`, `TERMINATED`, `PAUSED`
|
`runtimeStatus` | The status of the workflow instance. Values include: `"RUNNING"`, `"COMPLETED"`, `"CONTINUED_AS_NEW"`, `"FAILED"`, `"CANCELED"`, `"TERMINATED"`, `"PENDING"`, `"SUSPENDED"`
|
||||||
|
|
||||||
## Component format
|
## Component format
|
||||||
|
|
||||||
|
|
|
@ -162,6 +162,12 @@ dapr uninstall --all --network mynet
|
||||||
dapr init -k
|
dapr init -k
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Using the `--dev` flag initializes Dapr in dev mode, which includes Zipkin and Redis.
|
||||||
|
```bash
|
||||||
|
dapr init -k --dev
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
You can wait for the installation to complete its deployment with the `--wait` flag.
|
You can wait for the installation to complete its deployment with the `--wait` flag.
|
||||||
The default timeout is 300s (5 min), but can be customized with the `--timeout` flag.
|
The default timeout is 300s (5 min), but can be customized with the `--timeout` flag.
|
||||||
|
|
||||||
|
|
|
@ -276,6 +276,11 @@ The response body contains the following JSON:
|
||||||
[0.018574921,-0.00023652936,-0.0057790717,.... (1536 floats total for ada)]
|
[0.018574921,-0.00023652936,-0.0057790717,.... (1536 floats total for ada)]
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Learn more about the Azure OpenAI output binding
|
||||||
|
|
||||||
|
Watch [the following Community Call presentation](https://youtu.be/rTovKpG0rhY?si=g7hZTQSpSEXz4pV1&t=80) to learn more about the Azure OpenAI output binding.
|
||||||
|
|
||||||
|
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/rTovKpG0rhY?si=XP1S-80SIg1ptJuG&start=80" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||||
|
|
||||||
## Related links
|
## Related links
|
||||||
|
|
||||||
|
|
|
@ -43,8 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
||||||
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. | `""`, `"default"`
|
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. | `""`, `"default"`
|
||||||
| consumerID | N | The consumer group ID | `"myGroup"`
|
| consumerID | N | The consumer group ID | `"myGroup"`
|
||||||
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
|
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
|
||||||
| redeliverInterval | N | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
|
| redeliverInterval | N | The interval between checking for pending messages to redeliver. Can be either a Go duration string (for example, "ms", "s", "m") or a number of milliseconds. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"`
|
||||||
| processingTimeout | N | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
|
| processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Can be either a Go duration string (for example, "ms", "s", "m") or a number of milliseconds. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"`
|
||||||
| queueDepth | N | The size of the message queue for processing. Defaults to `"100"`. | `"1000"`
|
| queueDepth | N | The size of the message queue for processing. Defaults to `"100"`. | `"1000"`
|
||||||
| concurrency | N | The number of concurrent workers that are processing messages. Defaults to `"10"`. | `"15"`
|
| concurrency | N | The number of concurrent workers that are processing messages. Defaults to `"10"`. | `"15"`
|
||||||
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`
|
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`
|
||||||
|
|
|
@ -47,20 +47,23 @@ spec:
|
||||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||||
{{% /alert %}}
|
{{% /alert %}}
|
||||||
|
|
||||||
If you wish to use MongoDB as an actor store, append the following to the yaml.
|
### Actor state store and transactions support
|
||||||
|
|
||||||
|
When using as an actor state store or to leverage transactions, MongoDB must be running in a [Replica Set](https://www.mongodb.com/docs/manual/replication/).
|
||||||
|
|
||||||
|
If you wish to use MongoDB as an actor store, add this metadata option to your Component YAML:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
- name: actorStateStore
|
- name: actorStateStore
|
||||||
value: "true"
|
value: "true"
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
## Spec metadata fields
|
## Spec metadata fields
|
||||||
|
|
||||||
| Field | Required | Details | Example |
|
| Field | Required | Details | Example |
|
||||||
|--------------------|:--------:|---------|---------|
|
|--------------------|:--------:|---------|---------|
|
||||||
| server | Y<sup>*</sup> | The server to connect to, when using DNS SRV record | `"server.example.com"`
|
| server | Y<sup>1</sup> | The server to connect to, when using DNS SRV record | `"server.example.com"`
|
||||||
| host | Y<sup>*</sup> | The host to connect to | `"mongo-mongodb.default.svc.cluster.local:27017"`
|
| host | Y<sup>1</sup> | The host to connect to | `"mongo-mongodb.default.svc.cluster.local:27017"`
|
||||||
| username | N | The username of the user to connect with (applicable in conjunction with `host`) | `"admin"`
|
| username | N | The username of the user to connect with (applicable in conjunction with `host`) | `"admin"`
|
||||||
| password | N | The password of the user (applicable in conjunction with `host`) | `"password"`
|
| password | N | The password of the user (applicable in conjunction with `host`) | `"password"`
|
||||||
| databaseName | N | The name of the database to use. Defaults to `"daprStore"` | `"daprStore"`
|
| databaseName | N | The name of the database to use. Defaults to `"daprStore"` | `"daprStore"`
|
||||||
|
@ -68,46 +71,36 @@ If you wish to use MongoDB as an actor store, append the following to the yaml.
|
||||||
| writeConcern | N | The write concern to use | `"majority"`
|
| writeConcern | N | The write concern to use | `"majority"`
|
||||||
| readConcern | N | The read concern to use | `"majority"`, `"local"`,`"available"`, `"linearizable"`, `"snapshot"`
|
| readConcern | N | The read concern to use | `"majority"`, `"local"`,`"available"`, `"linearizable"`, `"snapshot"`
|
||||||
| operationTimeout | N | The timeout for the operation. Defaults to `"5s"` | `"5s"`
|
| operationTimeout | N | The timeout for the operation. Defaults to `"5s"` | `"5s"`
|
||||||
| params | N<sup>**</sup> | Additional parameters to use | `"?authSource=daprStore&ssl=true"`
|
| params | N<sup>2</sup> | Additional parameters to use | `"?authSource=daprStore&ssl=true"`
|
||||||
|
|
||||||
> <sup>[*]</sup> The `server` and `host` fields are mutually exclusive. If neither or both are set, Dapr will return an error.
|
> <sup>[1]</sup> The `server` and `host` fields are mutually exclusive. If neither or both are set, Dapr returns an error.
|
||||||
|
|
||||||
> <sup>[**]</sup> The `params` field accepts a query string that specifies connection specific options as `<name>=<value>` pairs, separated by `"&"` and prefixed with `"?"`. e.g. to use "daprStore" db as authentication database and enabling SSL/TLS in connection, specify params as `"?authSource=daprStore&ssl=true"`. See [the mongodb manual](https://docs.mongodb.com/manual/reference/connection-string/#std-label-connections-connection-options) for the list of available options and their use cases.
|
> <sup>[2]</sup> The `params` field accepts a query string that specifies connection-specific options as `<name>=<value>` pairs, separated by `&` and prefixed with `?`. For example, to use the "daprStore" database as the authentication database and enable SSL/TLS for the connection, specify the params as `?authSource=daprStore&ssl=true`. See [the MongoDB manual](https://docs.mongodb.com/manual/reference/connection-string/#std-label-connections-connection-options) for the list of available options and their use cases.
|
||||||
|
|
||||||
## Setup MongoDB
|
## Setup MongoDB
|
||||||
|
|
||||||
{{< tabs "Self-Hosted" "Kubernetes" >}}
|
{{< tabs "Self-Hosted" "Kubernetes" >}}
|
||||||
|
|
||||||
{{% codetab %}}
|
{{% codetab %}}
|
||||||
You can run MongoDB locally using Docker:
|
You can run a single MongoDB instance locally using Docker:
|
||||||
|
|
||||||
```
|
```sh
|
||||||
docker run --name some-mongo -d mongo
|
docker run --name some-mongo -d mongo
|
||||||
```
|
```
|
||||||
|
|
||||||
You can then interact with the server using `localhost:27017`.
|
You can then interact with the server at `localhost:27017`. If you do not specify a `databaseName` value in your component definition, make sure to create a database named `daprStore`.
|
||||||
|
|
||||||
If you do not specify a `databaseName` value in your component definition, make sure to create a database named `daprStore`.
|
|
||||||
|
|
||||||
|
In order to use the MongoDB state store for transactions and as an actor state store, you need to run MongoDB as a Replica Set. Refer to [the official documentation](https://www.mongodb.com/compatibility/deploying-a-mongodb-cluster-with-docker) for how to create a 3-node Replica Set using Docker.
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
{{% codetab %}}
|
{{% codetab %}}
|
||||||
The easiest way to install MongoDB on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/mongodb):
|
You can conveniently install MongoDB on Kubernetes using the [Helm chart packaged by Bitnami](https://github.com/bitnami/charts/tree/main/bitnami/mongodb/). Refer to the chart's documentation for deploying MongoDB, both as a standalone server and as a Replica Set (required for using transactions and actors).
|
||||||
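For example, a basic standalone installation with the release name `mongo` (which produces the service name used below) could look like this; consult the Bitnami chart documentation for the values needed to deploy a Replica Set:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mongo bitnami/mongodb
```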
|
|
||||||
```
|
|
||||||
helm install mongo stable/mongodb
|
|
||||||
```
|
|
||||||
|
|
||||||
This installs MongoDB into the `default` namespace.
|
This installs MongoDB into the `default` namespace.
|
||||||
To interact with MongoDB, find the service with: `kubectl get svc mongo-mongodb`.
|
To interact with MongoDB, find the service with: `kubectl get svc mongo-mongodb`.
|
||||||
|
For example, if installing using the Helm defaults above, the MongoDB host address would be:
|
||||||
For example, if installing using the example above, the MongoDB host address would be:
|
|
||||||
|
|
||||||
`mongo-mongodb.default.svc.cluster.local:27017`
|
`mongo-mongodb.default.svc.cluster.local:27017`
|
||||||
|
|
||||||
|
|
||||||
Follow the on-screen instructions to get the root password for MongoDB.
|
Follow the on-screen instructions to get the root password for MongoDB.
|
||||||
The username is `admin` by default.
|
The username is typically `admin`.
|
||||||
{{% /codetab %}}
|
{{% /codetab %}}
|
||||||
|
|
||||||
{{< /tabs >}}
|
{{< /tabs >}}
|
||||||
|
@ -117,6 +110,7 @@ The username is `admin` by default.
|
||||||
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate when the data should be considered "expired".
|
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate when the data should be considered "expired".
|
||||||
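For example, with the Dapr .NET SDK, the TTL can be passed as request metadata when saving state. This is a minimal sketch; the store name `statestore` and the key/value are placeholders, and the same `ttlInSeconds` metadata can be sent through the HTTP and gRPC state APIs:

```csharp
using System.Collections.Generic;
using Dapr.Client;

var client = new DaprClientBuilder().Build();

// Ask the state store to consider this record expired after 120 seconds.
var metadata = new Dictionary<string, string>
{
    ["ttlInSeconds"] = "120"
};

await client.SaveStateAsync("statestore", "order_1", new { Item = "espresso" }, metadata: metadata);
```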
|
|
||||||
## Related links
|
## Related links
|
||||||
|
|
||||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||||
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
|
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
|
||||||
- [State management building block]({{< ref state-management >}})
|
- [State management building block]({{< ref state-management >}})
|
||||||
|
|
|
@ -30,18 +30,14 @@ spec:
|
||||||
value: <PASSWORD>
|
value: <PASSWORD>
|
||||||
- name: enableTLS
|
- name: enableTLS
|
||||||
value: <bool> # Optional. Allowed: true, false.
|
value: <bool> # Optional. Allowed: true, false.
|
||||||
- name: failover
|
|
||||||
value: <bool> # Optional. Allowed: true, false.
|
|
||||||
- name: sentinelMasterName
|
|
||||||
value: <string> # Optional
|
|
||||||
- name: maxRetries
|
- name: maxRetries
|
||||||
value: # Optional
|
value: # Optional
|
||||||
- name: maxRetryBackoff
|
- name: maxRetryBackoff
|
||||||
value: # Optional
|
value: # Optional
|
||||||
- name: failover
|
- name: failover
|
||||||
value: # Optional
|
value: <bool> # Optional. Allowed: true, false.
|
||||||
- name: sentinelMasterName
|
- name: sentinelMasterName
|
||||||
value: # Optional
|
value: <string> # Optional
|
||||||
- name: redeliverInterval
|
- name: redeliverInterval
|
||||||
value: # Optional
|
value: # Optional
|
||||||
- name: processingTimeout
|
- name: processingTimeout
|
||||||
|
|
|
@ -14,6 +14,14 @@
|
||||||
features:
|
features:
|
||||||
input: true
|
input: true
|
||||||
output: true
|
output: true
|
||||||
|
- component: Azure OpenAI
|
||||||
|
link: openai
|
||||||
|
state: Alpha
|
||||||
|
version: v1
|
||||||
|
since: "1.11"
|
||||||
|
features:
|
||||||
|
input: true
|
||||||
|
output: true
|
||||||
- component: Azure SignalR
|
- component: Azure SignalR
|
||||||
link: signalr
|
link: signalr
|
||||||
state: Alpha
|
state: Alpha
|
||||||
|
|
Binary file not shown.
After Width: | Height: | Size: 255 KiB |
Binary file not shown.
After Width: | Height: | Size: 24 KiB |
Binary file not shown.
Binary file not shown.
|
@ -1 +1 @@
|
||||||
Subproject commit d023a43ba4fd4cddb7aa2c0962cf786f01f58c24
|
Subproject commit c07eb698ac5d1b152a60d76c64af4841ffa07397
|
|
@ -1 +1 @@
|
||||||
Subproject commit a65eddaa4e9217ed5cdf436b3438d2ffd837ba55
|
Subproject commit 5ef7aa2234d4d4c07769ad31cde223ef11c4e33e
|
|
@ -1 +1 @@
|
||||||
Subproject commit a9a09ba2acc39bc7e54a5a7092e1c5820818e23c
|
Subproject commit 2f5947392a33bc7911e6669601ddb9e8b59b58fe
|
|
@ -1 +1 @@
|
||||||
Subproject commit 5c2b40ac94b50f6a5bdb32008f6a47da69946d95
|
Subproject commit 4189a3d2ad6897406abd766f4ccbf2300c8f8852
|
|
@ -1 +1 @@
|
||||||
Subproject commit ef732090e8e04629ca573d127c5ee187a505aba4
|
Subproject commit 0b7aafdab1d4fade424b1b6c9569329ad10bb516
|