Merge branch 'v1.15' into trace-info-propagation

This commit is contained in:
Hannah Hunter 2024-09-04 12:17:34 -04:00 committed by GitHub
commit 9f5b30b4a7
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
62 changed files with 2058 additions and 414 deletions

View File

@ -34,16 +34,17 @@ This assumes you have an existing [user-assigned managed identity](https://learn
2) Deploy using the Azure Dev CLI
Run this the first time you deploy, and for any subsequent updates to this environment:
```bash
azd up
```
For subsequent environments/sites, create a side-by-side environment like this:
Start by creating a side-by-side azd environment:
```bash
azd env new
```
For example, you can name the new environment something like: `dapr-docs-v1-15`.
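Recent `azd` versions also accept the environment name as an argument, so the following one-liner should be equivalent (verify with `azd env new --help`):
```bash
azd env new dapr-docs-v1-15
```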
Now, deploy the Dapr docs SWA in the new azd environment using the following command:
```bash
azd up
```

View File

@ -1,14 +1,14 @@
name: Azure Static Web App v1.14
name: Azure Static Web App v1.15
on:
workflow_dispatch:
push:
branches:
- v1.14
- v1.15
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.14
- v1.15
jobs:
build_and_deploy_job:
@ -29,7 +29,7 @@ jobs:
HUGO_ENV: production
HUGO_VERSION: "0.100.2"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_14 }}
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_15 }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
@ -50,6 +50,6 @@ jobs:
id: closepullrequest
uses: Azure/static-web-apps-deploy@v1
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_14 }}
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_15 }}
skip_deploy_on_missing_secrets: true
action: "close"

View File

@ -1,6 +1,6 @@
# Dapr documentation
[![GitHub License](https://img.shields.io/github/license/dapr/docs?style=flat&label=License&logo=github)](https://github.com/dapr/docs/blob/v1.13/LICENSE) [![GitHub issue custom search in repo](https://img.shields.io/github/issues-search/dapr/docs?query=type%3Aissue%20is%3Aopen%20label%3A%22good%20first%20issue%22&label=Good%20first%20issues&style=flat&logo=github)](https://github.com/dapr/docs/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) [![Discord](https://img.shields.io/discord/778680217417809931?label=Discord&style=flat&logo=discord)](http://bit.ly/dapr-discord) [![YouTube Channel Views](https://img.shields.io/youtube/channel/views/UCtpSQ9BLB_3EXdWAUQYwnRA?style=flat&label=YouTube%20views&logo=youtube)](https://youtube.com/@daprdev) [![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/daprdev?logo=x&style=flat)](https://twitter.com/daprdev)
[![GitHub License](https://img.shields.io/github/license/dapr/docs?style=flat&label=License&logo=github)](https://github.com/dapr/docs/blob/v1.13/LICENSE) [![GitHub issue custom search in repo](https://img.shields.io/github/issues-search/dapr/docs?query=type%3Aissue%20is%3Aopen%20label%3A%22good%20first%20issue%22&label=Good%20first%20issues&style=flat&logo=github)](https://github.com/dapr/docs/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) [![Discord](https://img.shields.io/discord/778680217417809931?label=Discord&style=flat&logo=discord)](https://bit.ly/dapr-discord) [![YouTube Channel Views](https://img.shields.io/youtube/channel/views/UCtpSQ9BLB_3EXdWAUQYwnRA?style=flat&label=YouTube%20views&logo=youtube)](https://youtube.com/@daprdev) [![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/daprdev?logo=x&style=flat)](https://twitter.com/daprdev)
If you are looking to explore the Dapr documentation, please go to the documentation website:
@ -16,14 +16,14 @@ The following branches are currently maintained:
| Branch | Website | Description |
| ------------------------------------------------------------ | -------------------------- | ------------------------------------------------------------------------------------------------ |
| [v1.13](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here. |
| [v1.14](https://github.com/dapr/docs/tree/v1.14) (pre-release) | https://v1-14.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.14+ go here. |
| [v1.14](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here. |
| [v1.15](https://github.com/dapr/docs/tree/v1.15) (pre-release) | https://v1-15.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.15+ go here. |
For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/docs-contrib/contributing-docs/#branch-guidance) document.
## Contribution guidelines
Before making your first contribution, make sure to review the [contributing section](http://docs.dapr.io/contributing/) in the docs.
Before making your first contribution, make sure to review the [contributing section](https://docs.dapr.io/contributing/) in the docs.
## Overview

View File

@ -1,5 +1,5 @@
# Site Configuration
baseURL = "https://v1-14.docs.dapr.io"
baseURL = "https://v1-15.docs.dapr.io"
title = "Dapr Docs"
theme = "docsy"
disableFastRender = true
@ -196,20 +196,23 @@ offlineSearch = false
github_repo = "https://github.com/dapr/docs"
github_project_repo = "https://github.com/dapr/dapr"
github_subdir = "daprdocs"
github_branch = "v1.14"
github_branch = "v1.15"
# Versioning
version_menu = "v1.14 (preview)"
version = "v1.14"
version_menu = "v1.15 (preview)"
version = "v1.15"
archived_version = false
url_latest_version = "https://docs.dapr.io"
[[params.versions]]
version = "v1.14 (preview)"
version = "v1.15 (preview)"
url = "#"
[[params.versions]]
version = "v1.13 (latest)"
version = "v1.14 (latest)"
url = "https://docs.dapr.io"
[[params.versions]]
version = "v1.13"
url = "https://v1-13.docs.dapr.io"
[[params.versions]]
version = "v1.12"
url = "https://v1-12.docs.dapr.io"

View File

@ -7,7 +7,7 @@ description: >
How Dapr compares to and works with service meshes
---
Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as service invocation, network security, and distributed tracing. This often raises the question: how does Dapr compare to service mesh solutions such as [Linkerd](https://linkerd.io/), [Istio](https://istio.io/) and [Open Service Mesh](https://openservicemesh.io/) among others?
Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as service invocation, network security, and [distributed tracing](https://middleware.io/blog/what-is-distributed-tracing/). This often raises the question: how does Dapr compare to service mesh solutions such as [Linkerd](https://linkerd.io/), [Istio](https://istio.io/) and [Open Service Mesh](https://openservicemesh.io/) among others?
## How Dapr and service meshes compare
While Dapr and service meshes do offer some overlapping capabilities, **Dapr is not a service mesh**, where a service mesh is defined as a *networking* service mesh. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, versus service meshes which are infrastructure-centric.

View File

@ -139,7 +139,7 @@ Dapr can be used from any developer framework. Here are some that have been inte
| [.NET]({{< ref dotnet >}}) | [ASP.NET Core](https://github.com/dapr/dotnet-sdk/tree/master/examples/AspNetCore) | Brings stateful routing controllers that respond to pub/sub events from other services. Can also take advantage of [ASP.NET Core gRPC Services](https://docs.microsoft.com/aspnet/core/grpc/).
| [Java]({{< ref java >}}) | [Spring Boot](https://spring.io/) | Build Spring boot applications with Dapr APIs
| [Python]({{< ref python >}}) | [Flask]({{< ref python-flask.md >}}) | Build Flask applications with Dapr APIs
| [JavaScript](https://github.com/dapr/js-sdk) | [Express](http://expressjs.com/) | Build Express applications with Dapr APIs
| [JavaScript](https://github.com/dapr/js-sdk) | [Express](https://expressjs.com/) | Build Express applications with Dapr APIs
| [PHP]({{< ref php >}}) | | You can serve with Apache, Nginx, or Caddyserver.
#### Integrations and extensions

View File

@ -83,6 +83,7 @@ After upmerge, prepare the docs branches for the release. In two separate PRs, y
- Archive the latest release.
- Bring the preview/release branch as the current, live version of the docs.
- Create a new preview branch.
#### Latest release
@ -193,79 +194,24 @@ These steps will prepare the upcoming release branch for promotion to latest rel
| [v1.2](https://github.com/dapr/docs/tree/v1.2) (pre-release) | https://v1-2.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.2+ go here. |
```
1. In VS Code, search for any `v1.0` references and replace them with `v1.1` as applicable.
1. Update the `dapr-latest-version.html` shortcode partial to the new minor/patch version (in this example, `1.1.0` and `1.1`).
1. Commit the staged changes and push to your branch (`release_v1.1`); see the example commands after this list.
1. Open a PR from `release/v1.1` to `v1.1`.
1. Have a docs maintainer or approver review. Wait to merge the PR until release.
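For reference, the commit-and-push step above might look like the following, assuming the branch name `release_v1.1` used in this example:
```bash
git add -A
git commit -m "Update docs for the v1.1 release"
git push origin release_v1.1
```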
### Create new website for future release
#### Future preview branch
Next, create a new website for the future Dapr release, which you point to from the latest website. To do this, you'll need to:
##### Create preview branch
- Deploy an Azure Static Web App.
- Configure DNS via request from CNCF.
1. In GitHub UI, select the branch drop-down menu and select **View all branches**.
1. Click **New branch**.
1. In **New branch name**, enter the preview branch version number. In this example, it would be `v1.2`.
1. Select **v1.1** as the source.
1. Click **Create new branch**.
These steps require authentication.
##### Configure preview branch
#### Deploy Azure Static Web App
Deploy a new Azure Static Web App for the future Dapr release. For this example, we use v1.2 as the future release.
{{% alert title="Important" color="primary" %}}
You need Microsoft employee access to create a new Azure Static Web App.
{{% /alert %}}
1. Use Azure PIM to [elevate into the Owner role](https://eng.ms/docs/cloud-ai-platform/devdiv/devdiv-azure-service-dmitryr/azure-devex-philon/dapr/dapr/assets/azure) for the Dapr Prod subscription.
1. Navigate to the [docs-website](https://ms.portal.azure.com/#@microsoft.onmicrosoft.com/resource/subscriptions/38875a89-0178-4f27-a141-dc6fc01f183d/resourceGroups/docs-website/overview) resource group.
1. Select **+ Create** and search for **Static Web App**. Select **Create**.
1. Enter in the following information:
- Subscription: `Dapr Prod`
- Resource Group: `docs-website`
- Name: `daprdocs-v1-2`
- Hosting Plan: `Free`
- Region: `West US 2`
- Source: `Other`
1. Select **Review + create**, and then deploy the static web app.
1. Wait for deployment, and navigate to the new static web app resource.
1. Select **Manage deployment token** and copy the value.
1. Navigate to the docs repo **Secrets management** page under **Settings** and create a new secret named `AZURE_STATIC_WEB_APPS_V1_2`, and provide the value of the deployment token.
#### Configure DNS
{{% alert title="Important" color="primary" %}}
This section can only be completed on a Secure Admin Workstation (SAW). If you do not have a SAW device, ask a team member with one to assist.
{{% /alert %}}
1. Ensure you are a member of the `DMAdaprweb` security group in IDWeb.
1. Navigate to [https://prod.msftdomains.com/dns/form?environment=0](https://prod.msftdomains.com/dns/form?environment=0) on a SAW
1. Enter the following details in the left-hand pane:
- Team Owning Alias: `DMAdaprweb`
- Business Justification/Notes: `Configuring DNS for new Dapr docs website`
- Environment: `Internet/Public-facing`
- Zone: `dapr.io`
- Action: `Add`
- Incident ID: Leave blank
1. Back in the new static web app you just deployed, navigate to the **Custom domains** blade and select **+ Add**
1. Enter `v1-2.docs.dapr.io` under **Domain name**. Click **Next**.
1. Keep **Hostname record type** as `CNAME`, and copy the value of **Value**.
1. Back in the domain portal, enter the following information in the main pane:
- Name: `v1-2.docs`
- Type: `CNAME`
- Data: Value you just copied from the static web app
1. Click **Submit** in the top right corner.
1. Wait for two emails:
- One saying your request was received.
- One saying the request was completed.
1. Back in the Azure Portal, click **Add**. You may need to click a couple times to account for DNS delay.
1. A TLS certificate is now generated for you and the DNS record is saved. This may take 2-3 minutes.
1. Navigate to `https://v1-2.docs.dapr.io` and verify a blank website loads correctly.
### Configure future website branch
1. Open VS Code to the Dapr docs repo.
1. In a terminal window, navigate to the `docs` repo.
1. Switch to the upcoming release branch (`v1.1`) and synchronize changes:
```bash
@ -339,15 +285,94 @@ You need Microsoft employee access to create a new Azure Static Web App.
url = "https://v1-0.docs.dapr.io"
```
1. Commit the staged changes and push to the v1.2 branch.
1. Navigate to the [docs Actions page](https://github.com/dapr/docs/actions) and make sure the build & release successfully completed.
1. Navigate to the new `https://v1-2.docs.dapr.io` website and verify that the new version is displayed.
1. Commit the staged changes and push to a new PR against the v1.2 branch.
1. Hold on merging the PR until after release and the other `v1.0` and `v1.1` PRs have been merged.
### Create new website for future release
Next, create a new website for the future Dapr release. To do this, you'll need to:
- Deploy an Azure Static Web App.
- Configure DNS via request from CNCF.
#### Prerequisites
- Docs maintainer status in the `dapr/docs` repo.
- Access to the active Dapr Azure Subscription with Contributor or Owner access to create resources.
- [Azure Developer CLI](https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) installed on your machine.
- Your own fork of the [`dapr/docs` repo](https://github.com/dapr/docs) cloned to your machine.
#### Deploy Azure Static Web App
Deploy a new Azure Static Web App for the future Dapr release. For this example, we use v1.1 as the future release.
1. In a terminal window, navigate to the `.github/iac/swa` folder in the `dapr/docs` directory.
```bash
cd .github/iac/swa
```
1. Log into Azure Developer CLI (`azd`) using the Dapr Azure subscription.
```bash
azd login
```
1. In the browser prompt, verify you're logging in as Dapr and complete the login.
1. In a new terminal, replace the following values with the website values you prefer.
```bash
export AZURE_RESOURCE_GROUP=rg-dapr-docs-test
export IDENTITY_RESOURCE_GROUP=rg-my-identities
export AZURE_STATICWEBSITE_NAME=daprdocs-latest
```
1. Create a new [`azd` environment](https://learn.microsoft.com/azure/developer/azure-developer-cli/faq#what-is-an-environment-name).
```bash
azd env new
```
1. When prompted, enter a new environment name. For this example, you'd name the environment something like: `dapr-docs-v1-1`.
1. Once the environment is created, deploy the Dapr docs SWA into the new environment using the following command:
```bash
azd up
```
1. When prompted, select an Azure subscription and location. Match these to the Dapr Azure subscription.
#### Configure the SWA in the Azure portal
Head over to the Dapr subscription in the [Azure portal](https://portal.azure.com) and verify that your new Dapr docs site has been deployed.
Optionally, grant the correct minimal permissions for inbound publishing and outbound access to dependencies using the **Static Web App** > **Access control (IAM)** blade in the portal.
#### Configure DNS
1. In the Azure portal, from the new SWA you just created, navigate to **Custom domains** from the left side menu.
1. Copy the "CNAME" value of the web app.
1. Using your own account, [submit a CNCF ticket](https://jira.linuxfoundation.org/secure/Dashboard.jspa) to create a new domain name mapped to the CNAME value you copied. For this example, to create a new domain for Dapr v1.1, you'd request to map to `v1-1.docs.dapr.io`.
Request resolution may take some time.
1. Once the new domain has been confirmed, return to the static web app in the portal.
1. Navigate to the **Custom domains** blade and select **+ Add**.
1. Select **Custom domain on other DNS**.
1. Enter `v1-1.docs.dapr.io` under **Domain name**. Click **Next**.
1. Keep **Hostname record type** as `CNAME`, and copy the value of **Value**.
1. Click **Add**.
1. Navigate to `https://v1-1.docs.dapr.io` and verify a blank website loads correctly.
You can repeat these steps for any preview versions.
### On the new Dapr release date
1. Wait for all code/containers/Helm charts to be published.
1. Merge the the PR from `release_v1.0` to `v1.0`. Delete the release/v1.0 branch.
1. Merge the the PR from `release_v1.1` to `v1.1`. Delete the release/v1.1 branch.
1. Merge the PR from `release_v1.0` to `v1.0`. Delete the release/v1.0 branch.
1. Merge the PR from `release_v1.1` to `v1.1`. Delete the release/v1.1 branch.
1. Merge the PR from `release_v1.2` to `v1.2`. Delete the release/v1.2 branch.
Congrats on the new docs release! 🚀 🎉 🎈

View File

@ -15,7 +15,39 @@ Dapr cryptography is currently in alpha.
## Encrypt
{{< tabs "JavaScript" "Go" ".NET" >}}
{{< tabs "Python" "JavaScript" ".NET" "Go" >}}
{{% codetab %}}
<!--Python-->
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt a stream of data, such as a file or a string:
```python
# Imports assumed from the Dapr Python SDK
from dapr.clients import DaprClient
from dapr.clients.grpc._crypto import EncryptOptions

# Illustrative values; match these to your cryptography component and key
CRYPTO_COMPONENT_NAME = 'crypto'
RSA_KEY_NAME = 'rsa-private-key.pem'

# When passing data (a buffer or string), `encrypt` returns a readable stream with the encrypted message
def encrypt_decrypt_string(dapr: DaprClient):
    message = 'The secret is "passw0rd"'

    # Encrypt the message
    resp = dapr.encrypt(
        data=message.encode(),
        options=EncryptOptions(
            # Name of the cryptography component (required)
            component_name=CRYPTO_COMPONENT_NAME,
            # Key stored in the cryptography component (required)
            key_name=RSA_KEY_NAME,
            # Algorithm used for wrapping the key, which must be supported by the key named above.
            # Options include: "RSA", "AES"
            key_wrap_algorithm='RSA',
        ),
    )

    # The method returns a readable stream, which we read in full in memory
    encrypt_bytes = resp.read()
    print(f'Encrypted the message, got {len(encrypt_bytes)} bytes')
```
{{% /codetab %}}
{{% codetab %}}
@ -59,6 +91,26 @@ await pipeline(
{{% codetab %}}
<!-- .NET -->
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt data in a string or a byte array:
```csharp
using var client = new DaprClientBuilder().Build();
const string componentName = "azurekeyvault"; //Change this to match your cryptography component
const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store
const string plainText = "This is the value we're going to encrypt today";
//Encode the string to a UTF-8 byte array and encrypt it
var plainTextBytes = Encoding.UTF8.GetBytes(plainText);
var encryptedBytesResult = await client.EncryptAsync(componentName, plainTextBytes, keyName, new EncryptionOptions(KeyWrapAlgorithm.Rsa));
```
{{% /codetab %}}
{{% codetab %}}
<!--go-->
Using the Dapr SDK in your project, you can encrypt a stream of data, such as a file.
@ -136,32 +188,45 @@ if err != nil {
{{% /codetab %}}
{{% codetab %}}
<!-- .NET -->
Using the Dapr SDK in your project, with the gRPC APIs, you can encrypt data in a string or a byte array:
```csharp
using var client = new DaprClientBuilder().Build();
const string componentName = "azurekeyvault"; //Change this to match your cryptography component
const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store
const string plainText = "This is the value we're going to encrypt today";
//Encode the string to a UTF-8 byte array and encrypt it
var plainTextBytes = Encoding.UTF8.GetBytes(plainText);
var encryptedBytesResult = await client.EncryptAsync(componentName, plainTextBytes, keyName, new EncryptionOptions(KeyWrapAlgorithm.Rsa));
```
{{% /codetab %}}
{{< /tabs >}}
## Decrypt
{{< tabs "JavaScript" "Go" ".NET" >}}
{{< tabs "Python" "JavaScript" ".NET" "Go" >}}
{{% codetab %}}
<!--python-->
To decrypt a stream of data, use `decrypt`.
```python
# Import assumed from the Dapr Python SDK
from dapr.clients.grpc._crypto import DecryptOptions

def encrypt_decrypt_string(dapr: DaprClient):
    message = 'The secret is "passw0rd"'

    # ... (encryption as shown above)

    # Decrypt the encrypted data
    resp = dapr.decrypt(
        data=encrypt_bytes,
        options=DecryptOptions(
            # Name of the cryptography component (required)
            component_name=CRYPTO_COMPONENT_NAME,
            # Key stored in the cryptography component (required)
            key_name=RSA_KEY_NAME,
        ),
    )

    # The method returns a readable stream, which we read in full in memory
    decrypt_bytes = resp.read()
    print(f'Decrypted the message, got {len(decrypt_bytes)} bytes')

    print(decrypt_bytes.decode())
    assert message == decrypt_bytes.decode()
```
{{% /codetab %}}
{{% codetab %}}
@ -191,23 +256,6 @@ await pipeline(
{{% codetab %}}
<!--go-->
To decrypt a file, use the `Decrypt` gRPC API in your project.
In the following example, `out` is a stream that can be written to file or read in memory, as in the examples above.
```go
out, err := sdkClient.Decrypt(context.Background(), rf, dapr.EncryptOptions{
// Only required option is the component name
ComponentName: "mycryptocomponent",
})
```
{{% /codetab %}}
{{% codetab %}}
<!-- .NET -->
To decrypt a string, use the `DecryptAsync` gRPC API in your project.
@ -229,6 +277,23 @@ public async Task<string> DecryptBytesAsync(byte[] encryptedBytes)
{{% /codetab %}}
{{% codetab %}}
<!--go-->
To decrypt a file, use the `Decrypt` gRPC API in your project.
In the following example, `out` is a stream that can be written to file or read in memory, as in the examples above.
```go
out, err := sdkClient.Decrypt(context.Background(), rf, dapr.EncryptOptions{
// Only required option is the component name
ComponentName: "mycryptocomponent",
})
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps

View File

@ -0,0 +1,149 @@
---
type: docs
title: "How-To: Schedule and handle triggered jobs"
linkTitle: "How-To: Schedule and handle triggered jobs"
weight: 2000
description: "Learn how to use the jobs API to schedule and handle triggered jobs"
---
Now that you've learned what the [jobs building block]({{< ref jobs-overview.md >}}) provides, let's look at an example of how to use the API. The code example below describes an application that schedules jobs for a database backup application and handles them at trigger time, also known as the time the job was sent back to the application because it reached its `dueTime`.
<!--
Include a diagram or image, if possible.
-->
## Start the Scheduler service
When you [run `dapr init` in either self-hosted mode or on Kubernetes]({{< ref install-dapr-selfhost.md >}}), the Dapr Scheduler service is started.
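In self-hosted mode, you can verify the service is running by checking for its container; the `dapr_scheduler` name below matches the container reported by `dapr init`:
```bash
docker ps --filter "name=dapr_scheduler"
```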
## Set up the Jobs API
In your code, set up and schedule jobs within your application.
{{< tabs "Go" >}}
{{% codetab %}}
<!--go-->
The following Go SDK code sample schedules the job named `prod-db-backup`. Job data is housed in a backup database (`"my-prod-db"`) and is scheduled with `ScheduleJobAlpha1`. This provides the `jobData`, which includes:
- The backup `Task` name
- The backup task's `Metadata`, including:
- The database name (`DBName`)
- The database location (`BackupLocation`)
```go
package main
import (
//...
daprc "github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/examples/dist-scheduler/api"
"github.com/dapr/go-sdk/service/common"
daprs "github.com/dapr/go-sdk/service/grpc"
)
func main() {
// Initialize the server
server, err := daprs.NewService(":50070")
// ...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
log.Fatalf("failed to register job event handler: %v", err)
}
log.Println("starting server")
go func() {
if err = server.Start(); err != nil {
log.Fatalf("failed to start server: %v", err)
}
}()
// ...
// Set up backup location
jobData, err := json.Marshal(&api.DBBackup{
Task: "db-backup",
Metadata: api.Metadata{
DBName: "my-prod-db",
BackupLocation: "/backup-dir",
},
},
)
// ...
}
```
The job is scheduled with a `Schedule` and the desired number of `Repeats`. These settings determine the maximum number of times the job is triggered and sent back to the app.
In this example, the job triggers every second (`@every 1s` per the `Schedule`) and is sent back to the application up to the maximum of 10 `Repeats`.
```go
// ...
// Set up the job
job := daprc.Job{
Name: "prod-db-backup",
Schedule: "@every 1s",
Repeats: 10,
Data: &anypb.Any{
Value: jobData,
},
}
```
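The sample does not show the scheduling call itself; a minimal sketch, assuming a client created with `daprc.NewClient()` and the `job` defined above, might look like:
```go
// ...
// Create a client and schedule the job using the alpha jobs API
client, err := daprc.NewClient()
if err != nil {
    log.Fatalf("failed to create Dapr client: %v", err)
}
defer client.Close()

if err = client.ScheduleJobAlpha1(context.Background(), &job); err != nil {
    log.Fatalf("failed to schedule job: %v", err)
}
```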
At trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job. For example:
```go
// ...
// At job trigger time this function is called
func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
// ...
}
decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
// ...
var jobPayload api.DBBackup
if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
// ...
}
fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
jobCount++
return nil
}
```
{{% /codetab %}}
{{< /tabs >}}
## Run the Dapr sidecar
Once you've set up the Jobs API in your application, in a terminal window run the Dapr sidecar with the following command.
{{< tabs "Go" >}}
{{% codetab %}}
```bash
dapr run --app-id=distributed-scheduler \
--metrics-port=9091 \
--dapr-grpc-port 50001 \
--app-port 50070 \
--app-protocol grpc \
--log-level debug \
go run ./main.go
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

View File

@ -1,32 +0,0 @@
---
type: docs
title: "How-To: Schedule jobs"
linkTitle: "How-To: Schedule jobs"
weight: 2000
description: "Learn how to use the jobs API to schedule jobs"
---
Now that you've learned what the [jobs building block]({{< ref jobs-overview.md >}}) provides, let's look at an example of how to use the API. The code example below describes an application that schedules jobs for a **TBD** application.
<!--
Include a diagram or image, if possible.
-->
## Set up the Scheduler service
When you run `dapr init` in either self-hosted mode or on Kubernetes, the Dapr scheduler service is started.
## Run the Dapr sidecar
Run the Dapr sidecar alongside your application.
```bash
dapr run --app-id=jobs --app-port 50070 --app-protocol grpc --log-level debug -- go run main.go
```
## Next steps
- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

View File

@ -8,10 +8,10 @@ description: "Overview of the jobs API building block"
Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or for a specific interval.
Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the scheduler service to schedule actor reminders.
Jobs in Dapr consist of:
- The jobs API building block
- [The jobs API building block]({{< ref jobs_api.md >}})
- [The Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
[See example scenarios.]({{< ref "#scenarios" >}})
@ -27,26 +27,26 @@ The jobs API is a job scheduler, not the executor which runs the job. The design
All job details and user-associated data for scheduled jobs are stored in an embedded Etcd database in the Scheduler service.
You can use jobs to:
- **Delay your [pub/sub messaging]({<< ref pubsub-overview.md >>}).** You can publish a message in a future specific time (for example: a week from today, or a specific UTC date/time).
- **Delay your [pub/sub messaging]({{< ref pubsub-overview.md >}}).** You can publish a message in a future specific time (for example: a week from today, or a specific UTC date/time).
- **Schedule [service invocation]({{< ref service-invocation-overview.md >}}) method calls between applications.**
## Scenarios
Job scheduling can prove helpful in the following scenarios:
- **Automated Database Backups**:
Ensure a database is backed up daily to prevent data loss. Schedule a backup script to run every night at 2 AM, which will create a backup of the database and store it in a secure location.
- **Regular Data Processing and ETL (Extract, Transform, Load)**:
Process and transform raw data from various sources and load it into a data warehouse. Schedule ETL jobs to run at specific times (for example: hourly, daily) to fetch new data, process it, and update the data warehouse with the latest information.
- **Email Notifications and Reports**:
Receive daily sales reports and weekly performance summaries via email. Schedule a job that generates the required reports and sends them via email at 6 a.m. every day for daily reports and 8 a.m. every Monday for weekly summaries.
- **Maintenance Tasks and System Updates**:
Perform regular maintenance tasks such as clearing temporary files, updating software, and checking system health. Schedule various maintenance scripts to run at off-peak hours, such as weekends or late nights, to minimize disruption to users.
- **Batch Processing for Financial Transactions**:
Process a large number of transactions that need to be batched and settled at the end of each business day. Schedule batch processing jobs to run at 5 PM every business day, aggregating the day's transactions and performing necessary settlements and reconciliations.
Dapr's jobs API ensures the tasks represented in these scenarios are performed consistently and reliably without manual intervention, improving efficiency and reducing the risk of errors.
@ -65,10 +65,10 @@ Actors have actor reminders, but present some limitations involving scalability
## Try out the jobs API
You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-jobs.md >}}).
You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-and-handle-triggered-jobs.md >}}).
## Next steps
- [Learn how to use the jobs API]({{< ref howto-schedule-jobs.md >}})
- [Learn how to use the jobs API]({{< ref howto-schedule-and-handle-triggered-jobs.md >}})
- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

View File

@ -69,7 +69,7 @@ spec:
allowedSecrets: ["secret1", "secret2"]
```
The default access to the `vault` secret store is `deny`, while some secrets are accessible by the application, based on the `allowedSecrets` list. [Learn how to apply configuration to the sidecar]]({{< ref configuration-concept.md >}}).
The default access to the `vault` secret store is `deny`, while some secrets are accessible by the application, based on the `allowedSecrets` list. [Learn how to apply configuration to the sidecar]({{< ref configuration-concept.md >}}).
## Scenario 3: Deny access to certain sensitive secrets in a secret store
@ -88,7 +88,7 @@ spec:
deniedSecrets: ["secret1", "secret2"]
```
This example configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault` while allowing access to all other secrets. [Learn how to apply configuration to the sidecar]]({{< ref configuration-concept.md >}}).
This example configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault` while allowing access to all other secrets. [Learn how to apply configuration to the sidecar]({{< ref configuration-concept.md >}}).
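For context, a minimal sketch of the full `Configuration` resource this scenario assumes (the resource name `appconfig` is illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: allow
        deniedSecrets: ["secret1", "secret2"]
```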
## Permission priority

View File

@ -242,12 +242,10 @@ namespace EventService
var orderId = random.Next(1,1000);
//Using Dapr SDK to invoke a method
var order = new Order("1");
var orderJson = JsonSerializer.Serialize<Order>(order);
var content = new StringContent(orderJson, Encoding.UTF8, "application/json");
var order = new Order(orderId.ToString());
var httpClient = DaprClient.CreateInvokeHttpClient();
var response = await httpClient.PostAsJsonAsync("http://order-processor/orders", content);
var response = await httpClient.PostAsJsonAsync("http://order-processor/orders", order);
var result = await response.Content.ReadAsStringAsync();
Console.WriteLine("Order requested: " + orderId);

View File

@ -34,7 +34,7 @@ The outbox feature can be used with any [transactional state store]({{< re
Message brokers that work with the competing consumer pattern (for example, [Apache Kafka]({{< ref setup-apache-kafka>}})) are encouraged to reduce the chances of duplicate events.
{{% /alert %}}
## Usage
## Enable the outbox pattern
To enable the outbox feature, add the following required and optional fields on a state store component:
@ -68,6 +68,8 @@ spec:
| outboxPubsub | No | `outboxPublishPubsub` | Sets the pub/sub component used by Dapr to coordinate the state and pub/sub transactions. If not set, the pub/sub component configured with `outboxPublishPubsub` is used. This is useful if you want to separate the pub/sub component used to send the notification state changes from the one used to coordinate the transaction
| outboxDiscardWhenMissingState | No | `false` | By setting `outboxDiscardWhenMissingState` to `true`, Dapr discards the transaction if it cannot find the state in the database and does not retry. This setting can be useful if the state store data has been deleted for any reason before Dapr was able to deliver the message and you would like Dapr to drop the items from the pub/sub and stop retrying to fetch the state
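As a sketch, a state store component with the fields above might look like the following (the component name, state store type, and connection string are illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mysql-outbox
spec:
  type: state.mysql
  version: v1
  metadata:
    - name: connectionString
      value: "<CONNECTION STRING>"
    - name: outboxPublishPubsub # Required
      value: "mypubsub"
    - name: outboxPublishTopic # Required
      value: "newOrder"
```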
## Additional configurations
### Combining outbox and non-outbox messages on the same state store
If you want to use the same state store for sending both outbox and non-outbox messages, simply define two state store components that connect to the same state store, where one has the outbox feature and the other does not.
@ -106,6 +108,575 @@ spec:
value: "newOrder"
```
### Shape the outbox pattern message
You can override the outbox pattern message published to the pub/sub broker by adding another transaction item that is not saved to the database and is explicitly marked as a projection. This item carries a metadata key named `outbox.projection` with a value set to `true`. When added to the state array saved in a transaction, this payload is ignored when the state is written, and its data is used as the payload sent to the upstream subscriber.
To use this feature correctly, the `key` values must match between the operation on the state store and the message projection. If the keys do not match, the whole transaction fails.
If you have two or more `outbox.projection` enabled state items for the same key, the first one defined is used and the others are ignored.
[Learn more about default and custom CloudEvent messages.]({{< ref pubsub-cloudevents.md >}})
{{< tabs Python JavaScript ".NET" Java Go HTTP >}}
{{% codetab %}}
<!--python-->
In the following Python SDK example of a state transaction, the value of `"2"` is saved to the database, but the value of `"3"` is published to the end-user topic.
```python
DAPR_STORE_NAME = "statestore"

async def main():
    client = DaprClient()

    # Define the first state operation to save the value "2"
    op1 = StateItem(
        key="key1",
        value=b"2"
    )

    # Define the second state operation to publish the value "3" with metadata
    op2 = StateItem(
        key="key1",
        value=b"3",
        options=StateOptions(
            metadata={
                "outbox.projection": "true"
            }
        )
    )

    # Create the list of state operations
    ops = [op1, op2]

    # Execute the state transaction
    await client.state.transaction(DAPR_STORE_NAME, operations=ops)
    print("State transaction executed.")
```
By setting the metadata item `"outbox.projection"` to `"true"` and making sure the `key` values match (`key1`):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
{{% /codetab %}}
{{% codetab %}}
<!--javascript-->
In the following JavaScript SDK example of a state transaction, the value of `"2"` is saved to the database, but the value of `"3"` is published to the end-user topic.
```javascript
const { DaprClient, StateOperationType } = require('@dapr/dapr');
const DAPR_STORE_NAME = "statestore";
async function main() {
const client = new DaprClient();
// Define the first state operation to save the value "2"
const op1 = {
operation: StateOperationType.UPSERT,
request: {
key: "key1",
value: "2"
}
};
// Define the second state operation to publish the value "3" with metadata
const op2 = {
operation: StateOperationType.UPSERT,
request: {
key: "key1",
value: "3",
metadata: {
"outbox.projection": "true"
}
}
};
// Create the list of state operations
const ops = [op1, op2];
// Execute the state transaction
await client.state.transaction(DAPR_STORE_NAME, ops);
console.log("State transaction executed.");
}
main().catch(err => {
console.error(err);
});
```
By setting the metadata item `"outbox.projection"` to `"true"` and making sure the `key` values match (`key1`):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
{{% /codetab %}}
{{% codetab %}}
<!--dotnet-->
In the following .NET SDK example of a state transaction, the value of `"2"` is saved to the database, but the value of `"3"` is published to the end-user topic.
```csharp
public class Program
{
private const string DAPR_STORE_NAME = "statestore";
public static async Task Main(string[] args)
{
var client = new DaprClientBuilder().Build();
// Define the first state operation to save the value "2"
var op1 = new StateTransactionRequest(
key: "key1",
value: Encoding.UTF8.GetBytes("2"),
operationType: StateOperationType.Upsert
);
// Define the second state operation to publish the value "3" with metadata
var metadata = new Dictionary<string, string>
{
{ "outbox.projection", "true" }
};
var op2 = new StateTransactionRequest(
key: "key1",
value: Encoding.UTF8.GetBytes("3"),
operationType: StateOperationType.Upsert,
metadata: metadata
);
// Create the list of state operations
var ops = new List<StateTransactionRequest> { op1, op2 };
// Execute the state transaction
await client.ExecuteStateTransactionAsync(DAPR_STORE_NAME, ops);
Console.WriteLine("State transaction executed.");
}
}
```
By setting the metadata item `"outbox.projection"` to `"true"` and making sure the `key` values match (`key1`):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
{{% /codetab %}}
{{% codetab %}}
<!--java-->
In the following Java SDK example of a state transaction, the value of `"2"` is saved to the database, but the value of `"3"` is published to the end-user topic.
```java
public class Main {
private static final String DAPR_STORE_NAME = "statestore";
public static void main(String[] args) {
try (DaprClient client = new DaprClientBuilder().build()) {
// Define the first state operation to save the value "2"
StateOperation<String> op1 = new StateOperation<>(
StateOperationType.UPSERT,
"key1",
"2"
);
// Define the second state operation to publish the value "3" with metadata
Map<String, String> metadata = new HashMap<>();
metadata.put("outbox.projection", "true");
StateOperation<String> op2 = new StateOperation<>(
StateOperationType.UPSERT,
"key1",
"3",
metadata
);
// Create the list of state operations
List<StateOperation<?>> ops = new ArrayList<>();
ops.add(op1);
ops.add(op2);
// Execute the state transaction
client.executeStateTransaction(DAPR_STORE_NAME, ops).block();
System.out.println("State transaction executed.");
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
By setting the metadata item `"outbox.projection"` to `"true"` and making sure the `key` values match (`key1`):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
{{% /codetab %}}
{{% codetab %}}
<!--go-->
In the following Go SDK example of a state transaction, the value of `"2"` is saved to the database, but the value of `"3"` is published to the end-user topic.
```go
ops := make([]*dapr.StateOperation, 0)
op1 := &dapr.StateOperation{
Type: dapr.StateOperationTypeUpsert,
Item: &dapr.SetStateItem{
Key: "key1",
Value: []byte("2"),
},
}
op2 := &dapr.StateOperation{
Type: dapr.StateOperationTypeUpsert,
Item: &dapr.SetStateItem{
Key: "key1",
Value: []byte("3"),
// Override the data payload saved to the database
Metadata: map[string]string{
"outbox.projection": "true",
},
},
}
ops = append(ops, op1, op2)
meta := map[string]string{}
err := testClient.ExecuteStateTransaction(ctx, store, meta, ops)
```
By setting the metadata item `"outbox.projection"` to `"true"` and making sure the `key` values match (`key1`):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
{{% /codetab %}}
{{% codetab %}}
<!--http-->
You can pass the message override using the following HTTP request:
```bash
curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
-H "Content-Type: application/json" \
-d '{
"operations": [
{
"operation": "upsert",
"request": {
"key": "order1",
"value": {
"orderId": "7hf8374s",
"type": "book",
"name": "The name of the wind"
}
}
},
{
"operation": "upsert",
"request": {
"key": "order1",
"value": {
"orderId": "7hf8374s"
},
"metadata": {
"outbox.projection": "true"
},
"contentType": "application/json"
}
}
]
}'
```
By setting the metadata item `"outbox.projection"` to `"true"` and making sure the `key` values match (`order1`):
- The first operation is written to the state store and no message is written to the message broker.
- The second operation value is published to the configured pub/sub topic.
{{% /codetab %}}
{{< /tabs >}}
### Override Dapr-generated CloudEvent fields
You can override the [Dapr-generated CloudEvent fields]({{< ref "pubsub-cloudevents.md#dapr-generated-cloudevents-example" >}}) on the published outbox event with custom CloudEvent metadata.
{{< tabs Python JavaScript ".NET" Java Go HTTP >}}
{{% codetab %}}
<!--python-->
```python
# Imports assumed from the standard library and the Dapr Python SDK
import asyncio
from dapr.clients import DaprClient

async def execute_state_transaction():
    async with DaprClient() as client:
        # Define state operations
        ops = []

        op1 = {
            'operation': 'upsert',
            'request': {
                'key': 'key1',
                'value': b'2',  # Convert string to byte array
                'metadata': {
                    'cloudevent.id': 'unique-business-process-id',
                    'cloudevent.source': 'CustomersApp',
                    'cloudevent.type': 'CustomerCreated',
                    'cloudevent.subject': '123',
                    'my-custom-ce-field': 'abc'
                }
            }
        }
        ops.append(op1)

        # Execute state transaction
        store_name = 'your-state-store-name'
        try:
            await client.execute_state_transaction(store_name, ops)
            print('State transaction executed.')
        except Exception as e:
            print('Error executing state transaction:', e)

# Run the async function
if __name__ == "__main__":
    asyncio.run(execute_state_transaction())
```
{{% /codetab %}}
{{% codetab %}}
<!--javascript-->
```javascript
const { DaprClient, StateOperationType } = require('@dapr/dapr');

async function executeStateTransaction() {
    // Initialize Dapr client
    const daprClient = new DaprClient();

    // Define state operations
    const ops = [];

    const op1 = {
        operation: StateOperationType.UPSERT,
        request: {
            key: 'key1',
            value: Buffer.from('2'),
            metadata: {
                'id': 'unique-business-process-id',
                'source': 'CustomersApp',
                'type': 'CustomerCreated',
                'subject': '123',
                'my-custom-ce-field': 'abc'
            }
        }
    };
    ops.push(op1);

    // Execute the state transaction (same call as in the projection example above)
    const storeName = 'your-state-store-name';
    await daprClient.state.transaction(storeName, ops);
}

executeStateTransaction();
```
{{% /codetab %}}
{{% codetab %}}
<!--csharp-->
```csharp
public class StateOperationExample
{
public async Task ExecuteStateTransactionAsync()
{
var daprClient = new DaprClientBuilder().Build();
// Define the value "2" as a string and serialize it to a byte array
var value = "2";
var valueBytes = JsonSerializer.SerializeToUtf8Bytes(value);
// Define the first state operation to save the value "2" with metadata
// Override Cloudevent metadata
var metadata = new Dictionary<string, string>
{
{ "cloudevent.id", "unique-business-process-id" },
{ "cloudevent.source", "CustomersApp" },
{ "cloudevent.type", "CustomerCreated" },
{ "cloudevent.subject", "123" },
{ "my-custom-ce-field", "abc" }
};
var op1 = new StateTransactionRequest(
key: "key1",
value: valueBytes,
operationType: StateOperationType.Upsert,
metadata: metadata
);
// Create the list of state operations
var ops = new List<StateTransactionRequest> { op1 };
// Execute the state transaction
var storeName = "your-state-store-name";
await daprClient.ExecuteStateTransactionAsync(storeName, ops);
Console.WriteLine("State transaction executed.");
}
public static async Task Main(string[] args)
{
var example = new StateOperationExample();
await example.ExecuteStateTransactionAsync();
}
}
```
{{% /codetab %}}
{{% codetab %}}
<!--java-->
```java
public class StateOperationExample {
public static void main(String[] args) {
executeStateTransaction();
}
public static void executeStateTransaction() {
// Build Dapr client
try (DaprClient daprClient = new DaprClientBuilder().build()) {
// Define the value "2"
String value = "2";
// Override CloudEvent metadata
Map<String, String> metadata = new HashMap<>();
metadata.put("cloudevent.id", "unique-business-process-id");
metadata.put("cloudevent.source", "CustomersApp");
metadata.put("cloudevent.type", "CustomerCreated");
metadata.put("cloudevent.subject", "123");
metadata.put("my-custom-ce-field", "abc");
// Define state operations
List<StateOperation<?>> ops = new ArrayList<>();
StateOperation<String> op1 = new StateOperation<>(
StateOperationType.UPSERT,
"key1",
value,
metadata
);
ops.add(op1);
// Execute state transaction
String storeName = "your-state-store-name";
daprClient.executeStateTransaction(storeName, ops).block();
System.out.println("State transaction executed.");
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
{{% /codetab %}}
{{% codetab %}}
<!--go-->
```go
func main() {
// Create a Dapr client
client, err := dapr.NewClient()
if err != nil {
log.Fatalf("failed to create Dapr client: %v", err)
}
defer client.Close()
ctx := context.Background()
store := "your-state-store-name"
// Define state operations
ops := make([]*dapr.StateOperation, 0)
op1 := &dapr.StateOperation{
Type: dapr.StateOperationTypeUpsert,
Item: &dapr.SetStateItem{
Key: "key1",
Value: []byte("2"),
// Override Cloudevent metadata
Metadata: map[string]string{
"cloudevent.id": "unique-business-process-id",
"cloudevent.source": "CustomersApp",
"cloudevent.type": "CustomerCreated",
"cloudevent.subject": "123",
"my-custom-ce-field": "abc",
},
},
}
ops = append(ops, op1)
// Metadata for the transaction (if any)
meta := map[string]string{}
// Execute state transaction
err = client.ExecuteStateTransaction(ctx, store, meta, ops)
if err != nil {
log.Fatalf("failed to execute state transaction: %v", err)
}
log.Println("State transaction executed.")
}
```
{{% /codetab %}}
{{% codetab %}}
<!--http-->
```bash
curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
-H "Content-Type: application/json" \
-d '{
"operations": [
{
"operation": "upsert",
"request": {
"key": "key1",
"value": "2"
}
}
],
"metadata": {
"id": "unique-business-process-id",
"source": "CustomersApp",
"type": "CustomerCreated",
"subject": "123",
"my-custom-ce-field": "abc",
}
}'
```
{{% /codetab %}}
{{< /tabs >}}
{{% alert title="Note" color="primary" %}}
The `data` CloudEvent field is reserved for Dapr's use only, and is non-customizable.
{{% /alert %}}
## Demo
Watch [this video for an overview of the outbox pattern](https://youtu.be/rTovKpG0rhY?t=1338):

View File

@ -51,7 +51,9 @@ If you're familiar with Dapr actors, you may notice a few differences in terms o
The `durabletask-go` core used by the workflow engine writes distributed traces using Open Telemetry SDKs. These traces are captured automatically by the Dapr sidecar and exported to the configured Open Telemetry provider, such as Zipkin.
Each workflow instance managed by the engine is represented as one or more spans. There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers. Workflow activity code also has access to the trace context, allowing distributed trace context to flow to external services that are invoked by the workflow.
Each workflow instance managed by the engine is represented as one or more spans. There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers.
> Workflow activity code currently **does not** have access to the trace context.
## Internal workflow actors

View File

@ -16,7 +16,7 @@ You'll use the Dapr CLI as the main tool for various Dapr-related tasks. You can
The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments.
{{% alert title="Before you begin" color="primary" %}}
In Docker Desktop's advanced options, verify you've allowed the default Docker socket to be used.
In Docker Desktop's advanced options, verify you've allowed the default Docker socket to be used. This option is not available if you are using WSL integration on Windows.
<img src="/images/docker-desktop-setting.png" width=800 style="padding-bottom:15px;">
{{% /alert %}}

View File

@ -95,28 +95,11 @@ dapr init
**Expected output:**
```
⌛ Making the jump to hyperspace...
✅ Downloaded binaries and completed components set up.
daprd binary has been installed to $HOME/.dapr/bin.
dapr_placement container is running.
dapr_scheduler container is running.
dapr_redis container is running.
dapr_zipkin container is running.
Use `docker ps` to check running containers.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
```
<img src="/images/install-dapr-selfhost/dapr-init-output.png" style=
"padding-bottom: 5px" >
[See the troubleshooting guide if you encounter any error messages regarding Docker not being installed or running.]({{< ref "common_issues.md#dapr-cant-connect-to-docker-when-installing-the-dapr-cli" >}})
#### Slim init
To install the CLI without any default configuration files or Docker containers, use the `--slim` flag. [Learn more about the `init` command and its flags.]({{< ref dapr-init.md >}})
```bash
dapr init --slim
```
### Step 3: Verify Dapr version
```bash
@ -138,7 +121,7 @@ docker ps
**Output:**
<img src="/images/install-dapr-selfhost/docker-containers.png" width=800>
<img src="/images/install-dapr-selfhost/docker-containers.png">
### Step 5: Verify components directory has been initialized
@ -189,5 +172,14 @@ explorer "%USERPROFILE%\.dapr"
<br>
### Slim init
To install the CLI without any default configuration files or Docker containers, use the `--slim` flag. [Learn more about the `init` command and its flags.]({{< ref dapr-init.md >}})
```bash
dapr init --slim
```
{{< button text="Next step: Use the Dapr API >>" page="getting-started/get-started-api.md" >}}

View File

@ -32,3 +32,5 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| [Configuration]({{< ref configuration-quickstart.md >}}) | Get configuration items and subscribe for configuration updates. |
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |
| [Cryptography]({{< ref cryptography-quickstart.md >}}) | Encrypt and decrypt data using Dapr's cryptographic APIs. |
| [Jobs]({{< ref jobs-quickstart.md >}}) | Schedule, retrieve, and delete jobs using Dapr's jobs APIs. |

View File

@ -37,7 +37,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/actors).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/actors/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -33,7 +33,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -244,7 +244,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -450,7 +450,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -651,7 +651,7 @@ In the YAML file:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -661,7 +661,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -872,7 +872,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/bindings/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -35,7 +35,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -151,14 +151,14 @@ if unsubscribed == True:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Latest Node.js installed](https://nodejs.org/download/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -279,7 +279,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -389,7 +389,7 @@ try
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -399,7 +399,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -514,7 +514,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -43,7 +43,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/cryptography)
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/cryptography/javascript/sdk)
```bash
git clone https://github.com/dapr/quickstarts.git
@ -245,7 +245,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/cryptography)
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/cryptography/go/sdk)
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -0,0 +1,533 @@
---
type: docs
title: "Quickstart: Jobs"
linkTitle: Jobs
weight: 80
description: Get started with the Dapr jobs building block
---
{{% alert title="Alpha" color="warning" %}}
The jobs building block is currently in **alpha**.
{{% /alert %}}
Let's take a look at the [Dapr jobs building block]({{< ref jobs-overview.md >}}), which schedules and runs jobs at a specific time or interval. In this Quickstart, you'll schedule, get, and delete a job using Dapr's jobs API.
You can try out this jobs quickstart by either:
- [Running all applications in this sample simultaneously with the Multi-App Run template file]({{< ref "#run-using-multi-app-run" >}}), or
- [Running one application at a time]({{< ref "#run-one-job-application-at-a-time" >}})
## Run using Multi-App Run
Select your preferred language-specific Dapr SDK before proceeding with the Quickstart. Currently, you can experiment with the jobs API with the Go SDK.
{{< tabs Go >}}
<!-- Go -->
{{% codetab %}}
This quickstart includes two apps:
- **`job-scheduler.go`:** schedules, retrieves, and deletes jobs.
- **`job-service.go`:** handles the scheduled jobs.
### Step 1: Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/jobs/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
```
From the root of the Quickstarts directory, navigate into the jobs directory:
```bash
cd jobs/go/sdk
```
### Step 3: Schedule jobs
Run the application and schedule jobs with one command:
```bash
dapr run -f .
```
**Expected output**
```text
== APP - job-service == dapr client initializing for: 127.0.0.1:6281
== APP - job-service == Registered job handler for: R2-D2
== APP - job-service == Registered job handler for: C-3PO
== APP - job-service == Registered job handler for: BB-8
== APP - job-service == Starting server on port: 6200
== APP - job-service == Job scheduled: R2-D2
== APP - job-service == Job scheduled: C-3PO
== APP - job-service == 2024/07/17 18:09:59 job:{name:"C-3PO" due_time:"10s" data:{value:"{\"droid\":\"C-3PO\",\"Task\":\"Memory Wipe\"}"}}
== APP - job-scheduler == Get job response: {"droid":"C-3PO","Task":"Memory Wipe"}
== APP - job-service == Job scheduled: BB-8
== APP - job-service == 2024/07/17 18:09:59 job:{name:"BB-8" due_time:"15s" data:{value:"{\"droid\":\"BB-8\",\"Task\":\"Internal Gyroscope Check\"}"}}
== APP - job-scheduler == Get job response: {"droid":"BB-8","Task":"Internal Gyroscope Check"}
== APP - job-scheduler == Deleted job: BB-8
```
After 5 seconds, the terminal output should show the `R2-D2` job being processed:
```text
== APP - job-service == Starting droid: R2-D2
== APP - job-service == Executing maintenance job: Oil Change
```
After 10 seconds, the terminal output should show the `C-3PO` job being processed:
```text
== APP - job-service == Starting droid: C-3PO
== APP - job-service == Executing maintenance job: Memory Wipe
```
Once the process has completed, you can stop and clean up application processes with a single command.
```bash
dapr stop -f .
```
### What happened?
When you ran `dapr init` during Dapr install:
- The `dapr_scheduler` control plane was started alongside other Dapr services.
- [The `dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) was generated in the `.dapr/components` directory.
Running `dapr run -f .` in this Quickstart started both the `job-scheduler` and the `job-service`. In the terminal output, you can see the following jobs being scheduled, retrieved, and deleted.
- The `R2-D2` job is being scheduled.
- The `C-3PO` job is being scheduled.
- The `C-3PO` job is being retrieved.
- The `BB-8` job is being scheduled.
- The `BB-8` job is being retrieved.
- The `BB-8` job is being deleted.
- The `R2-D2` job is being executed after 5 seconds.
- The `C-3PO` job is being executed after 10 seconds.
#### `dapr.yaml` Multi-App Run template file
Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. In this Quickstart, the `dapr.yaml` file contains the following:
```yml
version: 1
apps:
- appDirPath: ./job-service/
appID: job-service
appPort: 6200
daprGRPCPort: 6281
appProtocol: grpc
command: ["go", "run", "."]
- appDirPath: ./job-scheduler/
appID: job-scheduler
appPort: 6300
command: ["go", "run", "."]
```
#### `job-service` app
The `job-service` application creates service invocation handlers to manage the lifecycle of the job (`scheduleJob`, `getJob`, and `deleteJob`).
```go
if err := server.AddServiceInvocationHandler("scheduleJob", scheduleJob); err != nil {
log.Fatalf("error adding invocation handler: %v", err)
}
if err := server.AddServiceInvocationHandler("getJob", getJob); err != nil {
log.Fatalf("error adding invocation handler: %v", err)
}
if err := server.AddServiceInvocationHandler("deleteJob", deleteJob); err != nil {
log.Fatalf("error adding invocation handler: %v", err)
}
```
Next, job event handlers are registered for all droids:
```go
for _, jobName := range jobNames {
if err := server.AddJobEventHandler(jobName, handleJob); err != nil {
log.Fatalf("failed to register job event handler: %v", err)
}
fmt.Println("Registered job handler for: ", jobName)
}
fmt.Println("Starting server on port: " + appPort)
if err = server.Start(); err != nil {
log.Fatalf("failed to start server: %v", err)
}
```
The `job-service` then calls the functions that schedule, get, delete, and handle job events.
```go
// Handler that schedules a DroidJob
func scheduleJob(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
if in == nil {
err = errors.New("no invocation parameter")
return
}
droidJob := DroidJob{}
err = json.Unmarshal(in.Data, &droidJob)
if err != nil {
fmt.Println("failed to unmarshal job: ", err)
return nil, err
}
jobData := JobData{
Droid: droidJob.Name,
Task: droidJob.Job,
}
content, err := json.Marshal(jobData)
if err != nil {
fmt.Printf("Error marshalling job content")
return nil, err
}
// schedule job
job := daprc.Job{
Name: droidJob.Name,
DueTime: droidJob.DueTime,
Data: &anypb.Any{
Value: content,
},
}
err = app.daprClient.ScheduleJobAlpha1(ctx, &job)
if err != nil {
fmt.Println("failed to schedule job. err: ", err)
return nil, err
}
fmt.Println("Job scheduled: ", droidJob.Name)
out = &common.Content{
Data: in.Data,
ContentType: in.ContentType,
DataTypeURL: in.DataTypeURL,
}
return out, err
}
// Handler that gets a job by name
func getJob(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
if in == nil {
err = errors.New("no invocation parameter")
return nil, err
}
job, err := app.daprClient.GetJobAlpha1(ctx, string(in.Data))
if err != nil {
fmt.Println("failed to get job. err: ", err)
}
out = &common.Content{
Data: job.Data.Value,
ContentType: in.ContentType,
DataTypeURL: in.DataTypeURL,
}
return out, err
}
// Handler that deletes a job by name
func deleteJob(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
if in == nil {
err = errors.New("no invocation parameter")
return nil, err
}
err = app.daprClient.DeleteJobAlpha1(ctx, string(in.Data))
if err != nil {
fmt.Println("failed to delete job. err: ", err)
}
out = &common.Content{
Data: in.Data,
ContentType: in.ContentType,
DataTypeURL: in.DataTypeURL,
}
return out, err
}
// Handler that handles job events
func handleJob(ctx context.Context, job *common.JobEvent) error {
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
return fmt.Errorf("failed to unmarshal job: %v", err)
}
decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
if err != nil {
return fmt.Errorf("failed to decode job payload: %v", err)
}
var jobPayload JobData
if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
return fmt.Errorf("failed to unmarshal payload: %v", err)
}
fmt.Println("Starting droid:", jobPayload.Droid)
fmt.Println("Executing maintenance job:", jobPayload.Task)
return nil
}
```
#### `job-scheduler` app
In the `job-scheduler` application, the R2-D2, C-3PO, and BB-8 jobs are first defined as a `[]DroidJob`:
```go
droidJobs := []DroidJob{
{Name: "R2-D2", Job: "Oil Change", DueTime: "5s"},
{Name: "C-3PO", Job: "Memory Wipe", DueTime: "15s"},
{Name: "BB-8", Job: "Internal Gyroscope Check", DueTime: "30s"},
}
```
The jobs are then scheduled, retrieved, and deleted using the jobs API. As you can see from the terminal output, first the R2-D2 job is scheduled:
```go
// Schedule R2D2 job
err = schedule(droidJobs[0])
if err != nil {
log.Fatalln("Error scheduling job: ", err)
}
```
Then, the C-3PO job is scheduled and retrieved, returning its job data:
```go
// Schedule C-3PO job
err = schedule(droidJobs[1])
if err != nil {
log.Fatalln("Error scheduling job: ", err)
}
// Get C-3PO job
resp, err := get(droidJobs[1])
if err != nil {
log.Fatalln("Error retrieving job: ", err)
}
fmt.Println("Get job response: ", resp)
```
The BB-8 job is then scheduled, retrieved, and deleted:
```go
// Schedule BB-8 job
err = schedule(droidJobs[2])
if err != nil {
log.Fatalln("Error scheduling job: ", err)
}
// Get BB-8 job
resp, err = get(droidJobs[2])
if err != nil {
log.Fatalln("Error retrieving job: ", err)
}
fmt.Println("Get job response: ", resp)
// Delete BB-8 job
err = delete(droidJobs[2])
if err != nil {
log.Fatalln("Error deleting job: ", err)
}
fmt.Println("Job deleted: ", droidJobs[2].Name)
```
The `job-scheduler.go` file also defines the `schedule`, `get`, and `delete` functions, which invoke the corresponding handlers in `job-service.go` through service invocation.
```go
// Schedules a job by invoking the job-service gRPC service, passing a DroidJob as an argument
func schedule(droidJob DroidJob) error {
jobData, err := json.Marshal(droidJob)
if err != nil {
fmt.Println("Error marshalling job content")
return err
}
content := &daprc.DataContent{
ContentType: "application/json",
Data: []byte(jobData),
}
// Schedule Job
_, err = app.daprClient.InvokeMethodWithContent(context.Background(), "job-service", "scheduleJob", "POST", content)
if err != nil {
fmt.Println("Error invoking method: ", err)
return err
}
return nil
}
// Gets a job by invoking the job-service gRPC service, passing the job name as an argument
func get(droidJob DroidJob) (string, error) {
content := &daprc.DataContent{
ContentType: "text/plain",
Data: []byte(droidJob.Name),
}
// get job
resp, err := app.daprClient.InvokeMethodWithContent(context.Background(), "job-service", "getJob", "GET", content)
if err != nil {
fmt.Println("Error invoking method: ", err)
return "", err
}
return string(resp), nil
}
// Deletes a job by invoking the job-service gRPC service, passing the job name as an argument
func delete(droidJob DroidJob) error {
content := &daprc.DataContent{
ContentType: "text/plain",
Data: []byte(droidJob.Name),
}
_, err := app.daprClient.InvokeMethodWithContent(context.Background(), "job-service", "deleteJob", "DELETE", content)
if err != nil {
fmt.Println("Error invoking method: ", err)
return err
}
return nil
}
```
{{% /codetab %}}
{{< /tabs >}}
## Run one job application at a time
{{< tabs Go >}}
<!-- Go -->
{{% codetab %}}
This quickstart includes two apps:
- **`job-scheduler.go`:** schedules, retrieves, and deletes jobs.
- **`job-service.go`:** handles the scheduled jobs.
### Step 1: Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/jobs).
```bash
git clone https://github.com/dapr/quickstarts.git
```
From the root of the Quickstarts directory, navigate into the jobs directory:
```bash
cd jobs/go/sdk
```
### Step 3: Schedule jobs
In the terminal, run the `job-service` app:
```bash
dapr run --app-id job-service --app-port 6200 --dapr-grpc-port 6281 --app-protocol grpc -- go run .
```
**Expected output**
```text
== APP == dapr client initializing for: 127.0.0.1:6281
== APP == Registered job handler for: R2-D2
== APP == Registered job handler for: C-3PO
== APP == Registered job handler for: BB-8
== APP == Starting server on port: 6200
```
In a new terminal window, run the `job-scheduler` app:
```bash
dapr run --app-id job-scheduler --app-port 6300 -- go run .
```
**Expected output**
```text
== APP == dapr client initializing for:
== APP == Get job response: {"droid":"C-3PO","Task":"Memory Wipe"}
== APP == Get job response: {"droid":"BB-8","Task":"Internal Gyroscope Check"}
== APP == Job deleted: BB-8
```
Return to the `job-service` app terminal window. The output should be:
```text
== APP == Job scheduled: R2-D2
== APP == Job scheduled: C-3PO
== APP == 2024/07/17 18:25:36 job:{name:"C-3PO" due_time:"10s" data:{value:"{\"droid\":\"C-3PO\",\"Task\":\"Memory Wipe\"}"}}
== APP == Job scheduled: BB-8
== APP == 2024/07/17 18:25:36 job:{name:"BB-8" due_time:"15s" data:{value:"{\"droid\":\"BB-8\",\"Task\":\"Internal Gyroscope Check\"}"}}
== APP == Starting droid: R2-D2
== APP == Executing maintenance job: Oil Change
== APP == Starting droid: C-3PO
== APP == Executing maintenance job: Memory Wipe
```
Unpack what happened in the [`job-service`]({{< ref "#job-service-app" >}}) and [`job-scheduler`]({{< ref "#job-scheduler-app" >}}) applications when you ran `dapr run`.
{{% /codetab %}}
{{< /tabs >}}
## Watch the demo
See the jobs API in action using a Go HTTP example, recorded during [Dapr Community Call #107](https://www.youtube.com/live/WHGOc7Ec_YQ?si=JlOlcJKkhRuhf5R1&t=849).
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/WHGOc7Ec_YQ?si=JlOlcJKkhRuhf5R1&amp;start=849" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
## Next steps
- HTTP samples of this quickstart:
- [Go](https://github.com/dapr/quickstarts/tree/master/jobs/go/http)
- Learn more about [the jobs building block]({{< ref jobs-overview.md >}})
- Learn more about [the scheduler control plane]({{< ref scheduler.md >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -39,7 +39,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -217,7 +217,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -365,7 +365,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -512,7 +512,7 @@ Console.WriteLine("Published data: " + order);
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -522,7 +522,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -683,7 +683,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -843,7 +843,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -1017,7 +1017,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -1175,7 +1175,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -1321,7 +1321,7 @@ In the YAML file:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -1331,7 +1331,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -1493,7 +1493,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/pub_sub/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -32,7 +32,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/python/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -238,7 +238,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/javascript/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -468,7 +468,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/csharp/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -693,7 +693,7 @@ dapr run --app-port 7001 --app-id order-processor --app-protocol http --dapr-htt
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -703,7 +703,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/java/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -933,7 +933,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/go/http).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -32,7 +32,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -61,27 +61,28 @@ Run the `order-processor` service alongside a Dapr sidecar. The Dapr sidecar the
metadata:
name: myresiliency
scopes:
- checkout
- order-processor
spec:
policies:
retries:
retryForever:
policy: constant
maxInterval: 5s
maxRetries: -1
duration: 5s
maxRetries: -1
circuitBreakers:
simpleCB:
maxRequests: 1
timeout: 5s
timeout: 5s
trip: consecutiveFailures >= 5
targets:
apps:
order-processor:
retry: retryForever
circuitBreaker: simpleCB
components:
statestore:
outbound:
retry: retryForever
circuitBreaker: simpleCB
```
@ -202,7 +203,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -371,7 +372,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -533,7 +534,7 @@ INFO[0036] Recovered processing operation component[statestore] output.
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -543,7 +544,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -711,7 +712,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -32,7 +32,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -141,7 +141,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -254,7 +254,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -356,7 +356,7 @@ Order-processor output:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -366,7 +366,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -471,7 +471,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/secrets_management/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -36,7 +36,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/python/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -182,7 +182,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/javascript/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -322,7 +322,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/csharp/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -458,7 +458,7 @@ var response = await client.PostAsync($"{baseURL}/orders", content);
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -468,7 +468,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/java/http).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -606,7 +606,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/service_invocation/go/http).
```bash
@ -1156,7 +1156,7 @@ Dapr invokes an application on any Dapr instance. In the code, the sidecar progr
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.

View File

@ -34,7 +34,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -163,7 +163,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -295,7 +295,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -419,7 +419,7 @@ In the YAML file:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -429,7 +429,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -562,7 +562,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -702,7 +702,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -818,7 +818,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -940,7 +940,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -1050,7 +1050,7 @@ In the YAML file:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- Java JDK 17 (or greater):
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
@ -1060,7 +1060,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -1179,7 +1179,7 @@ For this example, you will need:
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/state_management/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git

View File

@ -7,10 +7,10 @@ description: Get started with the Dapr Workflow building block
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
Let's take a look at the Dapr [Workflow building block]({{< ref workflow-overview.md >}}). In this Quickstart, you'll create a simple console application to demonstrate Dapr's workflow programming model and the workflow management APIs.
Let's take a look at the Dapr [Workflow building block]({{< ref workflow-overview.md >}}). In this Quickstart, you'll create a simple console application to demonstrate Dapr's workflow programming model and the workflow management APIs.
In this guide, you'll:
@ -29,8 +29,8 @@ Select your preferred language-specific Dapr SDK before proceeding with the Quic
The `order-processor` console app starts and manages the `order_processing_workflow`, which simulates purchasing items from a store. The workflow consists of five unique workflow activities, or tasks:
- `notify_activity`: Utilizes a logger to print out messages throughout the workflow. These messages notify you when:
- You have insufficient inventory
- Your payment couldn't be processed, etc.
- You have insufficient inventory
- Your payment couldn't be processed, etc.
- `process_payment_activity`: Processes and authorizes the payment.
- `verify_inventory_activity`: Checks the state store to ensure there is enough inventory present for purchase.
- `update_inventory_activity`: Removes the requested items from the state store and updates the store with the new remaining inventory value.
@ -48,7 +48,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows/python/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -71,10 +71,11 @@ pip3 install -r requirements.txt
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
```bash
cd workflows/python/sdk
dapr run -f .
```
This starts the `order-processor` app with unique workflow ID and runs the workflow activities.
This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.
Expected output:
@ -105,7 +106,7 @@ Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/op
docker run -d -p 9411:9411 openzipkin/zipkin
```
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;">
@ -122,9 +123,10 @@ When you ran `dapr run -f .`:
1. The `NotifyActivity` workflow activity sends a notification saying that order `f4e1926e-3721-478d-be8a-f5bebd1995da` has completed.
1. The workflow terminates as completed.
#### `order-processor/app.py`
#### `order-processor/app.py`
In the application's program file:
- The unique workflow order ID is generated
- The workflow is scheduled
- The workflow status is retrieved
@ -276,7 +278,6 @@ The `order-processor` console app starts and manages the lifecycle of an order p
- `processPaymentActivity`: Processes and authorizes the payment.
- `updateInventoryActivity`: Updates the state store with the new remaining inventory value.
### Step 1: Pre-requisites
For this example, you will need:
@ -289,7 +290,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows/javascript/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -318,11 +319,11 @@ In the terminal, start the order processor app alongside a Dapr sidecar using [M
dapr run -f .
```
This starts the `order-processor` app with unique workflow ID and runs the workflow activities.
This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.
Expected output:
```
```log
== APP - workflowApp == == APP == Orchestration scheduled with ID: 0c332155-1e02-453a-a333-28cfc7777642
== APP - workflowApp == == APP == Waiting 30 seconds for instance 0c332155-1e02-453a-a333-28cfc7777642 to complete...
== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
@ -393,7 +394,7 @@ Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/op
docker run -d -p 9411:9411 openzipkin/zipkin
```
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;">
@ -410,9 +411,10 @@ When you ran `dapr run -f .`:
1. The `notifyActivity` workflow activity sends a notification saying that order `0c332155-1e02-453a-a333-28cfc7777642` has completed.
1. The workflow terminates as completed.
#### `order-processor/workflowApp.ts`
#### `order-processor/workflowApp.ts`
In the application file:
- The unique workflow order ID is generated
- The workflow is scheduled
- The workflow status is retrieved
@ -489,12 +491,12 @@ start().catch((e) => {
{{% codetab %}}
The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of four workflow activities, or tasks:
- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow
- `ReserveInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase
- `ProcessPaymentActivity`: Processes and authorizes the payment
- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value
### Step 1: Pre-requisites
For this example, you will need:
@ -507,16 +509,16 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows/csharp/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a new terminal window, navigate to the `order-processor` directory:
In a new terminal window, navigate to the `sdk` directory:
```bash
cd workflows/csharp/sdk/order-processor
cd workflows/csharp/sdk
```
### Step 3: Run the order processor app
@ -527,7 +529,7 @@ In the terminal, start the order processor app alongside a Dapr sidecar using [M
dapr run -f .
```
This starts the `order-processor` app with unique workflow ID and runs the workflow activities.
This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.
Expected output:
@ -567,7 +569,7 @@ Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/op
docker run -d -p 9411:9411 openzipkin/zipkin
```
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;">
@ -584,9 +586,10 @@ When you ran `dapr run -f .`:
1. The `NotifyActivity` workflow activity sends a notification saying that order `6d2abcc9` has completed.
1. The workflow terminates as completed.
#### `order-processor/Program.cs`
#### `order-processor/Program.cs`
In the application's program file:
- The unique workflow order ID is generated
- The workflow is scheduled
- The workflow status is retrieved
@ -717,6 +720,7 @@ class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
#### `order-processor/Activities` directory
The `Activities` directory holds the four workflow activities used by the workflow, defined in the following files:
- `NotifyActivity.cs`
- `ReserveInventoryActivity.cs`
- `ProcessPaymentActivity.cs`
@ -734,22 +738,22 @@ Watch [this video to walk through the Dapr Workflow .NET demo](https://youtu.be/
{{% codetab %}}
The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of four workflow activities, or tasks:
- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow
- `RequestApprovalActivity`: Requests approval for processing payment
- `ReserveInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase
- `ProcessPaymentActivity`: Processes and authorizes the payment
- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value
### Step 1: Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- [Microsoft JDK 11](https://docs.microsoft.com/java/openjdk/download#openjdk-11)
- [Oracle JDK 11](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11)
- [OpenJDK 11](https://jdk.java.net/11/)
- Java JDK 17 (or greater):
- [Microsoft JDK 17](https://docs.microsoft.com/java/openjdk/download#openjdk-17)
- [Oracle JDK 17](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK17)
- [OpenJDK 17](https://jdk.java.net/17/)
- [Apache Maven](https://maven.apache.org/install.html) version 3.x.
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
@ -757,7 +761,7 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
@ -780,10 +784,11 @@ mvn clean install
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
```bash
cd workflows/java/sdk
dapr run -f .
```
This starts the `order-processor` app with unique workflow ID and runs the workflow activities.
This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.
Expected output:
@ -826,7 +831,7 @@ Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/op
docker run -d -p 9411:9411 openzipkin/zipkin
```
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;">
@ -1073,7 +1078,6 @@ The `Activities` directory holds the four workflow activities used by the workfl
<!-- Go -->
{{% codetab %}}
The `order-processor` console app starts and manages the `OrderProcessingWorkflow` workflow, which simulates purchasing items from a store. The workflow consists of five unique workflow activities, or tasks:
- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow. These messages notify you when:
@ -1096,16 +1100,16 @@ For this example, you will need:
### Step 2: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows/go/sdk).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a new terminal window, navigate to the `order-processor` directory:
In a new terminal window, navigate to the `sdk` directory:
```bash
cd workflows/go/sdk/order-processor
cd workflows/go/sdk
```
### Step 3: Run the order processor app
@ -1116,7 +1120,7 @@ In the terminal, start the order processor app alongside a Dapr sidecar using [M
dapr run -f .
```
This starts the `order-processor` app with unique workflow ID and runs the workflow activities.
This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.
Expected output:
@ -1157,7 +1161,7 @@ Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/op
docker run -d -p 9411:9411 openzipkin/zipkin
```
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
<img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;">
@ -1174,9 +1178,10 @@ When you ran `dapr run`:
1. The `NotifyActivity` workflow activity sends a notification saying that order `48ee83b7-5d80-48d5-97f9-6b372f5480a5` has completed.
1. The workflow terminates as completed.
#### `order-processor/main.go`
#### `order-processor/main.go`
In the application's program file:
- The unique workflow order ID is generated
- The workflow is scheduled
- The workflow status is retrieved
@ -1317,6 +1322,7 @@ Meanwhile, the `OrderProcessingWorkflow` and its activities are defined as metho
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.com/channels/778680217417809931/953427615916638238).

View File

@ -114,11 +114,12 @@ The `name` field takes the name of the Dapr API you would like to enable.
See this list of values corresponding to the different Dapr APIs:
| API group | HTTP API | [gRPC API](https://github.com/dapr/dapr/blob/master/pkg/grpc/endpoints.go) |
| API group | HTTP API | [gRPC API](https://github.com/dapr/dapr/tree/master/pkg/api/grpc) |
| ----- | ----- | ----- |
| [Service Invocation]({{< ref service_invocation_api.md >}}) | `invoke` (`v1.0`) | `invoke` (`v1`) |
| [State]({{< ref state_api.md>}})| `state` (`v1.0` and `v1.0-alpha1`) | `state` (`v1` and `v1alpha1`) |
| [Pub/Sub]({{< ref pubsub.md >}}) | `publish` (`v1.0` and `v1.0-alpha1`) | `publish` (`v1` and `v1alpha1`) |
| Subscribe | n/a | `subscribe` (`v1alpha1`) |
| [(Output) Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) |
| [Secrets]({{< ref secrets_api.md >}})| `secrets` (`v1.0`) | `secrets` (`v1`) |
| [Actors]({{< ref actors_api.md >}}) | `actors` (`v1.0`) |`actors` (`v1`) |
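For reference, the following is a minimal sketch of a `Configuration` resource that allows only the HTTP service invocation and state APIs. The `myappconfig` name is illustrative; once an allowlist is defined, any API not listed is blocked.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig   # illustrative name
spec:
  api:
    allowed:
      # Enable only the HTTP service invocation and state APIs
      - name: invoke
        version: v1.0
        protocol: http
      - name: state
        version: v1.0
        protocol: http
```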

View File

@ -108,6 +108,7 @@ The `metrics` section under the `Configuration` spec contains the following prop
metrics:
enabled: true
rules: []
latencyDistributionBuckets: []
http:
increasedCardinality: true
pathMatching:
@ -121,17 +122,18 @@ metrics:
excludeVerbs: false
```
In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return a single metric count matching all the `orderIDs` and all the `itemIDs`, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}).
In the examples above, this path filter `/orders/{orderID}/items/{itemID}` would return a single metric count matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}).
The following table lists the properties for metrics:
| Property | Type | Description |
|--------------|--------|-------------|
| `enabled` | boolean | When set to true, the default, enables metrics collection and the metrics endpoint. |
| `rules` | array | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. |
| `http.increasedCardinality` | boolean | When set to `true` (default), in the Dapr HTTP server, each request path causes the creation of a new "bucket" of metrics. This can cause issues, including excessive memory consumption when there many different requested endpoints (such as when interacting with RESTful APIs).<br> To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}) with the HTTP server, you should set the `metrics.http.increasedCardinality` property to `false`.|
| `http.pathMatching` | array | Paths used for path matching, allowing users to define matching paths in order to manage cardinality. |
| `http.excludeVerbs` | boolean | When set to `true` (default is `false`), the Dapr HTTP server ignores each request HTTP verb when building the method metric label. |
| Property | Type | Description |
|------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `enabled` | boolean | When set to `true` (the default), enables metrics collection and the metrics endpoint. |
| `rules` | array | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. |
| `latencyDistributionBuckets` | array | Array of latency distribution buckets in milliseconds for latency metrics histograms. |
| `http.increasedCardinality` | boolean | When set to `true` (default), in the Dapr HTTP server each request path causes the creation of a new "bucket" of metrics. This can cause issues, including excessive memory consumption, when there many different requested endpoints (such as when interacting with RESTful APIs).<br> To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}) with the HTTP server, you should set the `metrics.http.increasedCardinality` property to `false`. |
| `http.pathMatching` | array | Array of paths for path matching, allowing users to define matching paths to manage cardinality. |
| `http.excludeVerbs` | boolean | When set to `true` (default is `false`), the Dapr HTTP server ignores each request HTTP verb when building the method metric label. |
To further help manage cardinality, path matching lets you match specified paths against defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption.
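As an illustration, a minimal sketch combining these properties might look like the following; the bucket values and paths are examples, not recommendations.
```yaml
spec:
  metrics:
    enabled: true
    # Histogram buckets for latency metrics, in milliseconds (example values)
    latencyDistributionBuckets: [50, 100, 250, 500, 1000]
    http:
      # Avoid creating a metrics "bucket" per unique request path
      increasedCardinality: false
      pathMatching:
        - /orders/{orderID}
        - /orders/{orderID}/items/{itemID}
      excludeVerbs: false
```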

View File

@ -0,0 +1,68 @@
---
type: docs
title: "Set up an Elastic Kubernetes Service (EKS) cluster"
linkTitle: "Elastic Kubernetes Service (EKS)"
weight: 4000
description: >
Learn how to set up an EKS Cluster
---
This guide walks you through installing an Elastic Kubernetes Service (EKS) cluster. If you need more information, refer to [Create an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html).
## Prerequisites
- Install:
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [AWS CLI](https://aws.amazon.com/cli/)
- [eksctl](https://eksctl.io/)
- [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
## Deploy an EKS cluster
1. In the terminal, log into AWS.
```bash
aws configure
```
1. Create an EKS cluster. To use a specific version of Kubernetes, use `--version` (1.13.x or newer version required).
```bash
eksctl create cluster --name [your_eks_cluster_name] --region [your_aws_region] --version [kubernetes_version] --vpc-private-subnets [subnet_list_separated_by_comma] --without-nodegroup
```
Change the values for `vpc-private-subnets` to meet your requirements; you must specify at least two subnet IDs, and you can add more. If you'd rather specify public subnets, change `--vpc-private-subnets` to `--vpc-public-subnets`.
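For example, a filled-in command might look like this (all values are illustrative placeholders):

```bash
eksctl create cluster --name dapr-eks --region us-east-1 --version 1.29 \
  --vpc-private-subnets subnet-0a1b2c3d,subnet-4e5f6a7b \
  --without-nodegroup
```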
1. Verify kubectl context:
```bash
kubectl config current-context
```
1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000.
```bash
aws ec2 authorize-security-group-ingress --region [your_aws_region] \
--group-id [your_security_group] \
--protocol tcp \
--port 4000 \
--source-group [your_security_group]
```
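To confirm the rule was added, you can inspect the security group (same placeholders as above):

```bash
aws ec2 describe-security-groups --region [your_aws_region] \
  --group-ids [your_security_group]
```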
## Troubleshooting
### Access permissions
If you face any access permission issues, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile:
```bash
aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]
```
## Related links
- [Learn more about EKS clusters](https://docs.aws.amazon.com/eks/latest/userguide/clusters.html)
- [Learn more about eksctl](https://eksctl.io/getting-started/)
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})
- Learn how to [deploy Dapr on your cluster]({{< ref kubernetes-deploy.md >}})
- [Kubernetes production guidelines]({{< ref kubernetes-production.md >}})

View File

@@ -0,0 +1,92 @@
---
type: docs
title: "How-to: Persist Scheduler Jobs"
linkTitle: "How-to: Persist Scheduler Jobs"
weight: 50000
description: "Configure Scheduler to persist its database to make it resilient to restarts"
---
The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution.
By default, the Scheduler service writes this data to a Persistent Volume Claim of 1GB in size, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). This means that no additional parameters are required to run the Scheduler service reliably on most Kubernetes deployments, although some deployments or production environments require additional configuration.
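You can verify the claim that Scheduler is using with, for example (assuming Dapr is installed in the `dapr-system` namespace):

```bash
kubectl get pvc --namespace dapr-system
```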
## Production Setup
If your Kubernetes deployment does not have a default storage class, or if you are configuring a production cluster, defining a storage class is required.
A persistent volume is backed by a real disk, provided by the hosting cloud provider or Kubernetes infrastructure platform.
Disk size is determined by how many jobs are expected to be persisted at once; however, 64GB should be more than sufficient for most production scenarios.
Some Kubernetes providers recommend using a [CSI driver](https://kubernetes.io/docs/concepts/storage/volumes/#csi) to provision the underlying disks.
Below is a list of useful links to the relevant documentation for creating a persistent disk with the major cloud providers:
- [Google Cloud Persistent Disk](https://cloud.google.com/compute/docs/disks)
- [Amazon EBS Volumes](https://aws.amazon.com/blogs/storage/persistent-storage-for-kubernetes/)
- [Azure AKS Storage Options](https://learn.microsoft.com/azure/aks/concepts-storage)
- [Digital Ocean Block Storage](https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/)
- [VMWare vSphere Storage](https://docs.vmware.com/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-A19F6480-40DC-4343-A5A9-A5D3BFC0742E.html)
- [OpenShift Persistent Storage](https://docs.openshift.com/container-platform/4.6/storage/persistent_storage/persistent-storage-aws-efs.html)
- [Alibaba Cloud Disk Storage](https://www.alibabacloud.com/help/ack/ack-managed-and-ack-dedicated/user-guide/create-a-pvc)
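As a minimal sketch, a StorageClass for the AWS EBS CSI driver might look like the following; the `provisioner` varies by provider, and the name matches the `my-storage-class` placeholder used in the install commands below:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
# Provider-specific CSI driver; this one is for Amazon EBS
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```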
Once the storage class is available, you can install Dapr using the following command, with Scheduler configured to use the storage class (replace `my-storage-class` with the name of the storage class):
{{% alert title="Note" color="primary" %}}
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated with the new persistent volume.
{{% /alert %}}
{{< tabs "Dapr CLI" "Helm" >}}
<!-- Dapr CLI -->
{{% codetab %}}
```bash
dapr init -k --set dapr_scheduler.cluster.storageClassName=my-storage-class
```
{{% /codetab %}}
<!-- Helm -->
{{% codetab %}}
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set dapr_scheduler.cluster.storageClassName=my-storage-class \
--wait
```
{{% /codetab %}}
{{< /tabs >}}
## Ephemeral Storage
Scheduler can optionally use ephemeral storage: in-memory storage that is **not** resilient to restarts, meaning all job data is lost after a Scheduler restart.
This is useful for deployments where storage is not available or required, or for testing purposes.
{{% alert title="Note" color="primary" %}}
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated without the persistent volume.
{{% /alert %}}
{{< tabs "Dapr CLI" "Helm" >}}
<!-- Dapr CLI -->
{{% codetab %}}
```bash
dapr init -k --set dapr_scheduler.cluster.inMemoryStorage=true
```
{{% /codetab %}}
<!-- Helm -->
{{% codetab %}}
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set dapr_scheduler.cluster.inMemoryStorage=true \
--wait
```
{{% /codetab %}}
{{< /tabs >}}

View File

@@ -0,0 +1,27 @@
---
type: docs
title: "How-to: Persist Scheduler Jobs"
linkTitle: "How-to: Persist Scheduler Jobs"
weight: 50000
description: "Configure Scheduler to persist its database to make it resilient to restarts"
---
The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded database and scheduling them for execution.
By default, the Scheduler service database writes this data to the local volume `dapr_scheduler`, meaning that **this data is persisted across restarts**.
The host file location for this local volume is typically located at either `/var/lib/docker/volumes/dapr_scheduler/_data` or `~/.local/share/containers/storage/volumes/dapr_scheduler/_data`, depending on your container runtime.
Note that if you are using Docker Desktop, this volume is located in the Docker Desktop VM's filesystem, which can be accessed using:
```bash
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
```
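Once inside that shell, you can inspect the job data directory at the path noted above, for example:

```bash
ls /var/lib/docker/volumes/dapr_scheduler/_data
```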
The Scheduler persistent volume can be replaced with a custom volume that either pre-exists or is created by Dapr.
{{% alert title="Note" color="primary" %}}
By default `dapr init` creates a local persistent volume on your drive called `dapr_scheduler`. If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler container to be recreated with the new persistent volume.
{{% /alert %}}
```bash
dapr init --scheduler-volume my-scheduler-volume
```
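If you want the volume to pre-exist, you can create it with your container runtime first and then point `dapr init` at it; a minimal sketch with an illustrative volume name:

```bash
docker volume create my-scheduler-volume
dapr init --scheduler-volume my-scheduler-volume
```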

View File

@@ -74,7 +74,7 @@ spec:
When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.
Path matching allows you to manage and control the cardinality of HTTP metrics in Dapr. This is an aggregation of metrics, so rather than having a metric for each event, you can reduce the number of metrics events and report an overall number. For details on how to set the cardinality in configuration see ({{< ref "configuration-overview.md#metrics" >}})
Path matching allows you to manage and control the cardinality of HTTP metrics in Dapr. This is an aggregation of metrics, so rather than having a metric for each event, you can reduce the number of metrics events and report an overall number. [Learn more about how to set the cardinality in configuration]({{< ref "configuration-overview.md#metrics" >}}).
This configuration is opt-in and is enabled via the Dapr configuration `spec.metrics.http.pathMatching`. When defined, it enables path matching, which standardizes the specified paths across metrics. This reduces the number of unique metrics paths, making metrics more manageable and reducing resource consumption in a controlled way.
@@ -198,7 +198,58 @@ dapr_http_server_request_count{app_id="order-service",method="",path="/orders",s
In this example, the HTTP method is excluded from the metrics, resulting in a single metric for all requests to the `/orders` endpoint.
## Configuring custom latency histogram buckets
Dapr uses cumulative histogram metrics to group latency values into buckets, where each bucket contains:
- A count of the number of requests with that latency
- All the requests with lower latency
### Using the default latency bucket configurations
By default, Dapr groups request latency metrics into the following buckets:
```
1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000
```
Grouping latency values in a cumulative fashion allows buckets to be used or dropped as needed for increased or decreased granularity of data.
For example, if a request takes 3ms, it's counted in the 3ms bucket, the 4ms bucket, the 5ms bucket, and so on.
Similarly, if a request takes 10ms, it's counted in the 10ms bucket, the 13ms bucket, the 16ms bucket, and so on.
After these two requests have completed, the 3ms bucket has a count of 1 and the 10ms bucket has a count of 2, since both the 3ms and 10ms requests are included here.
This shows up as follows:
|1|2|3|4|5|6|8|10|13|16|20|25|30|40|50|65|80|100|130|160| ..... | 100000 |
|-|-|-|-|-|-|-|--|--|--|--|--|--|--|--|--|--|---|---|---|-------|--------|
|0|0|1|1|1|1|1| 2| 2| 2| 2| 2| 2| 2| 2| 2| 2| 2 | 2 | 2 | ..... | 2 |
The default number of buckets works well for most use cases, but can be adjusted as needed. Each request creates 34 different metrics, so this value can grow considerably when running a large number of applications.
More accurate latency percentiles can be achieved by increasing the number of buckets. However, a higher number of buckets increases the amount of memory used to store the metrics, potentially negatively impacting your monitoring system.
It is recommended to keep the number of latency buckets set to the default value, unless you are seeing unwanted memory pressure in your monitoring system. Configuring the number of buckets allows you to choose, per application, whether:
- You want to see more detail, with a higher number of buckets
- Broader values are sufficient, with fewer buckets
Take note of the default latency values your applications are producing before configuring the number of buckets.
### Customizing latency buckets to your scenario
Tailor the latency buckets to your needs by modifying the `spec.metrics.latencyDistributionBuckets` field in the [Dapr configuration spec]({{< ref configuration-schema.md >}}) for your application(s).
For example, if you aren't interested in extremely low latency values (1-10ms), you can group them in a single 10ms bucket. Similarly, you can group the high values in a single bucket (1000-5000ms), while keeping more detail in the middle range of values that you are most interested in.
The following Configuration spec example replaces the default 34 buckets with 11 buckets, giving a higher level of granularity in the middle range of values:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: custom-metrics
spec:
metrics:
enabled: true
latencyDistributionBuckets: [10, 25, 40, 50, 70, 100, 150, 200, 500, 1000, 5000]
```
## Transform metrics with regular expressions
@@ -231,4 +282,4 @@ Using regular expressions to reduce metrics cardinality is considered legacy. We
## References
* [Howto: Run Prometheus locally]({{< ref prometheus.md >}})
* [Howto: Set up Prometheus and Grafana for metrics]({{< ref grafana.md >}})
* [Howto: Set up Prometheus and Grafana for metrics]({{< ref grafana.md >}})

View File

@@ -19,9 +19,7 @@ For CLI there is no explicit opt-in, just the version that this was first made available.
| **Multi-App Run for Kubernetes** | Configure multiple Dapr applications from a single configuration file and run from a single command on Kubernetes | `dapr run -k -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.12 |
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
| **Cryptography** | Encrypt or decrypt data without having to manage secrets keys | N/A | [Cryptography concept]({{< ref "components-concept#cryptography" >}})| v1.11 |
| **Service invocation for non-Dapr endpoints** | Allow the invocation of non-Dapr endpoints by Dapr using the [Service invocation API]({{< ref service_invocation_api.md >}}). Read ["How-To: Invoke Non-Dapr Endpoints using HTTP"]({{< ref howto-invoke-non-dapr-endpoints.md >}}) for more information. | N/A | [Service invocation API]({{< ref service_invocation_api.md >}}) | v1.11 |
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Transactional Outbox** | Allows state operations for inserts and updates to be published to a configured pub/sub topic using a single transaction across the state store and the pub/sub | N/A | [Transactional Outbox Feature]({{< ref howto-outbox.md >}}) | v1.12 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 |
| **Job actor reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, job actor reminders (used for scheduling actor reminders) are enabled through a preview feature and need a feature flag. | `SchedulerReminders`| [Job actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 |

View File

@@ -45,23 +45,24 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
| May 29th 2024 | 1.13.4</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
| April 3rd 2024 | 1.13.2</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) |
| March 26th 2024 | 1.13.1</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.1) |
| March 6th 2024 | 1.13.0</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.0) |
| January 17th 2024 | 1.12.4</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported (current) | [v1.12.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.4) |
| January 2nd 2024 | 1.12.3</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported (current) | [v1.12.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.3) |
| November 18th 2023 | 1.12.2</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported (current) | [v1.12.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.2) |
| August 14th 2024 | 1.14.0</br> | 1.14.0 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
| May 29th 2024 | 1.13.4</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
| April 3rd 2024 | 1.13.2</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) |
| March 26th 2024 | 1.13.1</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.1) |
| March 6th 2024 | 1.13.0</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.0) |
| January 17th 2024 | 1.12.4</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported | [v1.12.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.4) |
| January 2nd 2024 | 1.12.3</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported | [v1.12.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.3) |
| November 18th 2023 | 1.12.2</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported | [v1.12.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.2) |
| November 16th 2023 | 1.12.1</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.1 </br>PHP 1.2.0 </br>Python 1.12.0 </br>.NET 1.12.0 </br>JS 3.2.0 | 0.14.0 | Supported | [v1.12.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.1) |
| October 11th 2023 | 1.12.0</br> | 1.12.0 | Java 1.10.0 </br>Go 1.9.0 </br>PHP 1.1.0 </br>Python 1.11.0 </br>.NET 1.12.0 </br>JS 3.1.2 | 0.14.0 | Supported | [v1.12.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.12.0) |
| November 18th 2023 | 1.11.6</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.6 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.6) |
| November 3rd 2023 | 1.11.5</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.5 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.5) |
| October 5th 2023 | 1.11.4</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.4) |
| August 31st 2023 | 1.11.3</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.3) |
| July 20th 2023 | 1.11.2</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.2) |
| June 22nd 2023 | 1.11.1</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.1) |
| June 12th 2023 | 1.11.0</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Supported | [v1.11.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.0) |
| November 18th 2023 | 1.11.6</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.6 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.6) |
| November 3rd 2023 | 1.11.5</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.5 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.5) |
| October 5th 2023 | 1.11.4</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.4) |
| August 31st 2023 | 1.11.3</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.3) |
| July 20th 2023 | 1.11.2</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.2) |
| June 22nd 2023 | 1.11.1</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.1) |
| June 12th 2023 | 1.11.0</br> | 1.11.0 | Java 1.9.0 </br>Go 1.8.0 </br>PHP 1.1.0 </br>Python 1.10.0 </br>.NET 1.11.0 </br>JS 3.1.0 | 0.13.0 | Unsupported | [v1.11.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.11.0) |
| November 18th 2023 | 1.10.10</br> | 1.10.0 | Java 1.8.0 </br>Go 1.7.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Unsupported | |
| July 20th 2023 | 1.10.9</br> | 1.10.0 | Java 1.8.0 </br>Go 1.7.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Unsupported | |
| June 22nd 2023 | 1.10.8</br> | 1.10.0 | Java 1.8.0 </br>Go 1.7.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 3.0.0 | 0.11.0 | Unsupported | |
@@ -137,9 +138,9 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| 1.10.0 | N/A | 1.10.8 |
| 1.11.0 | N/A | 1.11.4 |
| 1.12.0 | N/A | 1.12.4 |
| 1.13.0 | N/A | 1.13.2 |
| 1.13.0 | N/A | 1.13.3 |
| 1.12.0 to 1.13.0 | N/A | 1.13.4 |
| 1.13.0 | N/A | 1.13.4 |
| 1.13.0 to 1.14.0 | N/A | 1.14.0 |
## Upgrade on Hosting platforms

View File

@@ -12,6 +12,9 @@ The jobs API is currently in alpha.
With the jobs API, you can schedule jobs and tasks in the future.
> The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
> recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs.
## Schedule a job
Schedule a job with a name.
@@ -22,22 +25,50 @@ POST http://localhost:3500/v1.0-alpha1/jobs/<name>
### URL parameters
{{% alert title="Note" color="primary" %}}
At least one of `schedule` or `dueTime` must be provided, but they can also be provided together.
{{% /alert %}}
Parameter | Description
--------- | -----------
`name` | Name of the job you're scheduling
`data` | A string value and can be any related content. Content is returned when the reminder expires. For example, this may be useful for returning a URL or anything related to the content.
`dueTime` | Specifies the time after which this job is invoked. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration)
`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data.
`schedule` | An optional schedule at which the job is to be run. Details of the format are below.
`dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601.
`repeats` | An optional number of times the job should be triggered. If not set, the job runs indefinitely or until expiration.
`ttl` | An optional time to live or expiration of the job. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from job creation time), or non-repeating ISO8601.
#### schedule
`schedule` accepts both systemd timer-style cron expressions and human-readable '@'-prefixed period strings, as defined below.
Systemd timer style cron accepts 6 fields:

seconds | minutes | hours | day of month | month | day of week
------- | ------- | ----- | ------------ | ------------ | -----------
0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat

For example:

- `"0 30 * * * *"` - every hour on the half hour
- `"0 15 3 * * *"` - every day at 03:15
Period string expressions:
Entry | Description | Equivalent To
----- | ----------- | -------------
@every <duration> | Run every <duration> (e.g. '@every 1h30m') | N/A
@yearly (or @annually) | Run once a year, midnight, Jan. 1st | 0 0 0 1 1 *
@monthly | Run once a month, midnight, first of month | 0 0 0 1 * *
@weekly | Run once a week, midnight on Sunday | 0 0 0 * * 0
@daily (or @midnight) | Run once a day, midnight | 0 0 0 * * *
@hourly | Run once an hour, beginning of hour | 0 0 * * * *
### Request body
```json
{
"job": {
"data": {
"@type": "type.googleapis.com/google.type.Expr",
"expression": "<expression>"
},
"dueTime": "30s"
"data": {
"@type": "type.googleapis.com/google.protobuf.StringValue",
"value": "\"someData\""
},
"dueTime": "30s"
}
}
```
@@ -46,24 +77,26 @@ Parameter | Description
Code | Description
---- | -----------
`202` | Accepted
`204` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in dapr code or Scheduler control plane service
### Response content
The following example curl command creates a job, naming the job `jobforjabba` and specifying the `dueTime` and the `data`.
The following example curl command creates a job named `jobforjabba`, specifying the `schedule`, `repeats`, and the `data`.
```bash
$ curl -X POST \
http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
-d '{
"job": {
"data": {
"HanSolo": "Running spice"
"@type": "type.googleapis.com/google.protobuf.StringValue",
"value": "Running spice"
},
"dueTime": "30s"
"schedule": "@every 1m",
"repeats": 5
}
}'
```
@@ -87,33 +120,35 @@ Parameter | Description
Code | Description
---- | -----------
`202` | Accepted
`200` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in dapr code or Scheduler control plane service
`500` | Request formatted correctly, but the job doesn't exist or there was an error in Dapr code or the Scheduler control plane service
### Response content
After running the following example curl command, the returned response is JSON containing the `name` of the job, the `schedule`, `repeats`, and the `data`.
```bash
$ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Type: application/json"
```
```json
{
"name":"test1",
"dueTime":"30s",
"name": "jobforjabba",
"schedule": "@every 1m",
"repeats": 5,
"data": {
"HanSolo": "Running spice"
}
}
"@type": "type.googleapis.com/google.protobuf.StringValue",
"value": "Running spice"
}
}
```
## Delete a job
Delete a named job.
```
DELETE http://localhost:3500/v1.0-alpha1/jobs/<name>
```
### URL parameters
@@ -126,7 +161,7 @@ Parameter | Description
Code | Description
---- | -----------
`202` | Accepted
`204` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in dapr code or Scheduler control plane service
@@ -135,7 +170,7 @@ Code | Description
In the following example curl command, the job named `jobforjabba` is deleted
```bash
$ curl -X DELETE http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Type: application/json"
```

View File

@@ -45,6 +45,7 @@ dapr init [flags]
| N/A | DAPR_HELM_REPO_PASSWORD | A password for a private Helm chart |The password required to access the private Dapr Helm chart. If it can be accessed publicly, this env variable does not need to be set| |
| `--container-runtime` | | `docker` | Used to pass in a different container runtime other than Docker. Supported container runtimes are: `docker`, `podman` |
| `--dev` | | | Creates Redis and Zipkin deployments when run in Kubernetes. |
| `--scheduler-volume` | | | Self-hosted only. Optionally, you can specify a volume for the scheduler service data directory. By default, without this flag, scheduler data is not persisted and not resilient to restarts. |
### Examples
@@ -55,7 +56,9 @@ dapr init [flags]
**Install**
Install Dapr by pulling container images for Placement, Scheduler, Redis, and Zipkin. By default, these images are pulled from Docker Hub.
> By default, a `dapr_scheduler` local volume is created for the Scheduler service to use as its database directory. The host file location for this volume is typically `/var/lib/docker/volumes/dapr_scheduler/_data` or `~/.local/share/containers/storage/volumes/dapr_scheduler/_data`, depending on your container runtime.
```bash
dapr init

View File

@@ -24,10 +24,10 @@ dapr uninstall [flags]
| Name | Environment Variable | Default | Description |
| -------------------- | -------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--all` | | `false` | Remove Redis, Zipkin containers in addition to the scheduler service and the actor placement container. Remove default dapr dir located at `$HOME/.dapr or %USERPROFILE%\.dapr\`. |
| `--all` | | `false` | Remove Redis, Zipkin containers in addition to the Scheduler service and the actor Placement service containers. Remove default Dapr dir located at `$HOME/.dapr or %USERPROFILE%\.dapr\`. |
| `--help`, `-h` | | | Print this help message |
| `--kubernetes`, `-k` | | `false` | Uninstall Dapr from a Kubernetes cluster |
| `--namespace`, `-n` | | `dapr-system` | The Kubernetes namespace to uninstall Dapr from |
| `--namespace`, `-n` | | `dapr-system` | The Kubernetes namespace from which Dapr is uninstalled |
| `--container-runtime` | | `docker` | Used to pass in a different container runtime other than Docker. Supported container runtimes are: `docker`, `podman` |
### Examples

View File

@@ -39,7 +39,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| `redisHost` | Y | Output | The Redis host address | `"localhost:6379"` |
| `redisPassword` | Y | Output | The Redis password | `"password"` |
| `redisPassword` | N | Output | The Redis password | `"password"` |
| `redisUsername` | N | Output | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `"username"` |
| `useEntraID` | N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#create-a-redis-instance" >}}) | `"true"`, `"false"` |
| `enableTLS` | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |

View File

@@ -73,7 +73,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `namespaceName`| N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | `"namespace.servicebus.windows.net"` |
| `disableEntityManagement` | N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| `lockDurationInSec` | N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `"30"`
| `autoDeleteOnIdleInSec` | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `"0"` (disabled) | `"3600"`
| `autoDeleteOnIdleInSec` | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `"0"` (disabled) | `"3600"`
| `defaultMessageTimeToLiveInSec` | N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | `"10"`
| `maxDeliveryCount` | N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `"10"`
| `minConnectionRecoveryInSec` | N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `"2"` | `"5"`
@@ -164,7 +164,9 @@ In addition to the [settable metadata listed above](#sending-a-message-with-meta
- `metadata.EnqueuedTimeUtc`
- `metadata.SequenceNumber`
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
To find out more details on the purpose of any of these metadata properties, refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
In addition, all entries of `ApplicationProperties` from the original Azure Service Bus message are appended as `metadata.<application property's name>`.
{{% alert title="Note" color="primary" %}}
All times are populated by the server and are not adjusted for clock skews.

View File

@@ -39,27 +39,26 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|---------|---------|
| redisHost | Y | The Redis host address | `"localhost:6379"` |
| redisPassword | Y | The Redis password | `"password"` |
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and have created acl rule correctly. | `"username"` |
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| failover | N | Property to enabled failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redisType | N | The type of Redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"`
| redisDB | N | Database selected after connecting to Redis. If `"redisType"` is `"cluster"`, this option is ignored. Defaults to `"0"`. | `"0"`
| redisMaxRetries | N | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | `"5"`
| redisMinRetryInterval | N | Minimum backoff for Redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
| redisMaxRetryInterval | N | Maximum backoff for Redis commands between each retry. Default is `"512ms"`;`"-1"` disables backoff. | `"5s"`
| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
| readTimeout | N | Timeout for socket reads. If reached, Redis commands fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"`
| writeTimeout | N | Timeout for socket writes. If reached, Redis commands fail with a timeout instead of blocking. Defaults is readTimeout. | `"3s"`
| poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"`
| poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"`
| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| redisHost | Y | Output | The Redis host address | `"localhost:6379"` |
| redisPassword | N | Output | The Redis password | `"password"` |
| redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and have created acl rule correctly. | `"username"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| failover | N | Output | Property to enabled failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | Output | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redisType | N | Output | The type of Redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"`
| redisDB | N | Output | Database selected after connecting to Redis. If `"redisType"` is `"cluster"`, this option is ignored. Defaults to `"0"`. | `"0"`
| redisMaxRetries | N | Output | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | `"5"`
| redisMinRetryInterval | N | Output | Minimum backoff for Redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
| redisMaxRetryInterval | N | Output | Maximum backoff for Redis commands between each retry. Default is `"512ms"`;`"-1"` disables backoff. | `"5s"`
| dialTimeout | N | Output | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
| readTimeout | N | Output | Timeout for socket reads. If reached, Redis commands fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"`
| writeTimeout | N | Output | Timeout for socket writes. If reached, Redis commands fail with a timeout instead of blocking. Defaults is readTimeout. | `"3s"`
| poolSize | N | Output | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"`
| poolTimeout | N | Output | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"`
| maxConnAge | N | Output | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
| minIdleConns | N | Output | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Output | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Output | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
## Setup Redis

View File

@@ -20,7 +20,7 @@ spec:
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
- name: redisPassword #Optional.
value: <PASSWORD>
- name: useEntraID
value: <bool> # Optional. Allowed: true, false.
@@ -82,7 +82,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| redisHost | Y | Connection-string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
| redisPassword | Y | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`

View File

@@ -80,7 +80,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
| `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600`
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
@@ -151,7 +151,9 @@ In addition to the [settable metadata listed above](#sending-a-message-with-meta
- `metadata.EnqueuedTimeUtc`
- `metadata.SequenceNumber`
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
To find out more details on the purpose of any of these metadata properties, refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
In addition, all entries of `ApplicationProperties` from the original Azure Service Bus message are appended as `metadata.<application property's name>`.
{{% alert title="Note" color="primary" %}}
All times are populated by the server and are not adjusted for clock skews.
@@ -181,6 +183,23 @@ When subscribing to a topic, you can configure `bulkSubscribe` options. Refer to
Follow the instructions [here](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-portal) on setting up Azure Service Bus Queues.
{{% alert title="Note" color="primary" %}}
Your queue must have the same name as the topic you are publishing to with Dapr. For example, if you are publishing to the pub/sub `"myPubsub"` on the topic `"orders"`, your queue must be named `"orders"`.
If you are using a shared access policy to connect to the queue, that policy must be able to "manage" the queue. To work with a dead-letter queue, the policy must live on the Service Bus Namespace that contains both the main queue and the dead-letter queue.
{{% /alert %}}
### Retry policy and dead-letter queues
By default, an Azure Service Bus Queue has a dead-letter queue. Messages are retried the number of times specified by `maxDeliveryCount`, which defaults to 10 but can be set as high as 2000. These retries happen very rapidly, and the message is put in the dead-letter queue if no success is returned.
Dapr Pub/sub offers its own dead-letter queue concept that lets you control the retry policy and subscribe to the dead-letter queue through Dapr.
1. Set up a separate queue as the dead-letter queue in the Azure Service Bus namespace, and a resiliency policy that defines how to retry.
1. Subscribe to the topic to get the failed messages and deal with them.
For example, setting up a dead-letter queue `orders-dlq` in the subscription and a resiliency policy lets you subscribe to the topic `orders-dlq` to handle failed messages.
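A declarative subscription along those lines might look like this sketch, where the names are illustrative and `deadLetterTopic` routes failed messages to `orders-dlq`:

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order
spec:
  topic: orders
  routes:
    default: /orders
  pubsubname: myPubsub
  # Failed messages are forwarded here after retries are exhausted
  deadLetterTopic: orders-dlq
```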
For more details on setting up dead-letter queues, see the [dead-letter article]({{< ref pubsub-deadletter >}}).
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@@ -84,7 +84,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600`
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
@@ -155,7 +155,9 @@ In addition to the [settable metadata listed above](#sending-a-message-with-meta
- `metadata.EnqueuedTimeUtc`
- `metadata.SequenceNumber`
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
To find out more details on the purpose of any of these metadata properties, refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
In addition, all entries of `ApplicationProperties` from the original Azure Service Bus message are appended as `metadata.<application property's name>`.
> Note: all times are populated by the server and are not adjusted for clock skews.

View File

@@ -99,6 +99,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| maxOutstandingMessages | N | Maximum number of outstanding messages a given [streaming-pull](https://cloud.google.com/pubsub/docs/pull#streamingpull_api) connection can have. Default: `1000` | `50`
| maxOutstandingBytes | N | Maximum number of outstanding bytes a given [streaming-pull](https://cloud.google.com/pubsub/docs/pull#streamingpull_api) connection can have. Default: `1000000000` | `1000000000`
| maxConcurrentConnections | N | Maximum number of concurrent [streaming-pull](https://cloud.google.com/pubsub/docs/pull#streamingpull_api) connections to be maintained. Default: `10` | `2`
| ackDeadline | N | Message acknowledgement duration deadline. Default: `20s` | `1m`

View File

@@ -451,6 +451,30 @@ You can set a time-to-live (TTL) value at either the message or component level.
If you set both component-level and message-level TTL, the default component-level TTL is ignored in favor of the message-level TTL.
{{% /alert %}}
## Single Active Consumer
The RabbitMQ [Single Active Consumer](https://www.rabbitmq.com/docs/consumers#single-active-consumer) setup ensures that only one consumer at a time processes messages from a queue and switches to another registered consumer if the active one is canceled or fails. This approach might be required when it is crucial for messages to be consumed in the exact order they arrive in the queue and if distributed processing with multiple instances is not supported.
When this option is enabled on a queue by Dapr, an instance of the Dapr runtime is the single active consumer. To allow another application instance to take over in case of failure, the Dapr runtime must [probe the application's health]({{< ref "app-health.md" >}}) and unsubscribe from the pub/sub component.
{{% alert title="Note" color="primary" %}}
This pattern prevents the application from scaling, as only one instance can process the load. While it might be useful for integrating Dapr with legacy or sensitive applications, you should consider a design that allows distributed processing if you need scalability.
{{% /alert %}}
```yml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: pubsub
spec:
topic: orders
routes:
default: /orders
pubsubname: order-pub-sub
metadata:
singleActiveConsumer: "true"
```
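Since Dapr must probe the application's health to release the consumer on failure, you typically enable app health checks for the application; for example, in self-hosted mode (app details are illustrative):

```bash
dapr run --app-id order-processor --app-port 6001 \
  --enable-app-health-check --app-health-check-path /healthz \
  -- node app.js
```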
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@@ -41,7 +41,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| redisHost | Y | Connection-string for the redis host. If `"redisType"` is `"cluster"` it can be multiple hosts separated by commas or just a single host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
| redisPassword | Y | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
| consumerID | N | The consumer group ID. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |

View File

@@ -26,7 +26,7 @@ spec:
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
- name: redisPassword # Optional.
value: <PASSWORD>
- name: useEntraID
value: <bool> # Optional. Allowed: true, false.
@@ -98,7 +98,7 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| redisHost | Y | Connection-string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
| redisPassword | Y | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`

View File

@@ -36,6 +36,9 @@ spec:
labels:
- name: <LABEL-NAME>
regex: {}
latencyDistributionBuckets:
- <BUCKET-VALUE-MS-0>
- <BUCKET-VALUE-MS-1>
http:
increasedCardinality: <TRUE-OR-FALSE>
pathMatching:

View File

@@ -1 +1 @@
{{- if .Get "short" }}1.13{{ else if .Get "long" }}1.13.4{{ else if .Get "cli" }}1.13.0{{ else }}1.13.4{{ end -}}
{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.0{{ else if .Get "cli" }}1.14.0{{ else }}1.14.0{{ end -}}

View File

@@ -7,4 +7,4 @@ spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v2/spans"
endpointAddress: "https://otel-collector.default.svc.cluster.local:9411/api/v2/spans"

Binary file not shown.

After

Width:  |  Height:  |  Size: 41 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 21 KiB

After

Width:  |  Height:  |  Size: 23 KiB

@@ -1 +1 @@
Subproject commit c07eb698ac5d1b152a60d76c64af4841ffa07397
Subproject commit b8e276728935c66b0a335b5aa2ca4102c560dd3d

@@ -1 +1 @@
Subproject commit 5ef7aa2234d4d4c07769ad31cde223ef11c4e33e
Subproject commit 7c03c7ce58d100a559ac1881bc0c80d6dedc5ab9

@@ -1 +1 @@
Subproject commit 2f5947392a33bc7911e6669601ddb9e8b59b58fe
Subproject commit a98327e7d9a81611b0d7e91e59ea23ad48271948

@@ -1 +1 @@
Subproject commit 4189a3d2ad6897406abd766f4ccbf2300c8f8852
Subproject commit 7350742b6869cc166633d1f4d17d76fbdbb12921

@@ -1 +1 @@
Subproject commit 0b7aafdab1d4fade424b1b6c9569329ad10bb516
Subproject commit 64a4f2f6658e9023e8ea080eefdb019645cae802

@@ -1 +1 @@
Subproject commit ed283c2e259c21cc77a24b3dbc03733103455f1b
Subproject commit 4abf5aa6504f7c0b0018d20f8dc038a486a67e3a