mirror of https://github.com/dapr/docs.git
Merge branch 'v1.6' into feat/1882-kafka-oauth2
This commit is contained in:
commit
efa31dbb3b
|
@ -1,7 +1,7 @@
|
|||
Thank you for helping make the Dapr documentation better!
|
||||
|
||||
**Please follow this checklist before submitting:**
|
||||
|
||||
- [ ] Commits are signed with Developer Certificate of Origin (DCO - [learn more](https://docs.dapr.io/contributing/contributing-overview/#developer-certificate-of-origin-signing-your-work))
|
||||
- [ ] [Read the contribution guide](https://docs.dapr.io/contributing/contributing-docs/)
|
||||
- [ ] Commands include options for Linux, MacOS, and Windows within codetabs
|
||||
- [ ] New file and folder names are globally unique
|
||||
|
|
|
@ -7,7 +7,7 @@ description: >
|
|||
How Dapr compares to and works with service meshes
|
||||
---
|
||||
|
||||
Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as service invocation, network security, and distributed tracing. This often raises the question: how does Dapr compare to service mesh solutions such as [Linkerd](https://linkerd.io/), [Istio](https://istio.io/) and [Open Service Mesh](https://openservicemesh.io/) amoung others?
|
||||
Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as service invocation, network security, and distributed tracing. This often raises the question: how does Dapr compare to service mesh solutions such as [Linkerd](https://linkerd.io/), [Istio](https://istio.io/) and [Open Service Mesh](https://openservicemesh.io/) among others?
|
||||
|
||||
## How Dapr and service meshes compare
|
||||
While Dapr and service meshes do offer some overlapping capabilities, **Dapr is not a service mesh**, where a service mesh is defined as a *networking* service mesh. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, versus service meshes which are infrastructure-centric.
|
||||
|
|
|
@ -43,6 +43,7 @@ Before you submit an issue, make sure you've checked the following:
|
|||
- Many changes to the Dapr runtime may require changes to the API. In that case, the best place to discuss the potential feature is the main [Dapr repo](https://github.com/dapr/dapr).
|
||||
- Other examples could include bindings, state stores or entirely new components.
|
||||
|
||||
|
||||
## Pull Requests
|
||||
|
||||
All contributions come through pull requests. To submit a proposed change, follow this workflow:
|
||||
|
@ -53,18 +54,53 @@ All contributions come through pull requests. To submit a proposed change, follo
|
|||
1. Create your change
|
||||
- Code changes require tests
|
||||
1. Update relevant documentation for the change
|
||||
1. Commit and open a PR
|
||||
1. Commit with [DCO sign-off]({{< ref "contributing-overview.md#developer-certificate-of-origin-signing-your-work" >}}) and open a PR
|
||||
1. Wait for the CI process to finish and make sure all checks are green
|
||||
1. A maintainer of the project will be assigned, and you can expect a review within a few days
|
||||
|
||||
|
||||
#### Use work-in-progress PRs for early feedback
|
||||
|
||||
A good way to communicate before investing too much time is to create a "work-in-progress" PR and share it with your reviewers. The standard way of doing this is to add a "[WIP]" prefix to your PR's title and assign the **do-not-merge** label. This lets people looking at your PR know that it is not fully baked yet.
|
||||
|
||||
### Use of Third-party code
|
||||
## Use of Third-party code
|
||||
|
||||
- Third-party code must include licenses.
|
||||
|
||||
## Developer Certificate of Origin: Signing your work
|
||||
#### Every commit needs to be signed
|
||||
|
||||
The Developer Certificate of Origin (DCO) is a lightweight way for contributors to certify that they wrote or otherwise have the right to submit the code they are contributing to the project. Here is the full text of the [DCO](https://developercertificate.org/), reformatted for readability:
|
||||
```
|
||||
By making a contribution to this project, I certify that:
|
||||
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
|
||||
(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
|
||||
(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
|
||||
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
|
||||
```
|
||||
Contributors sign-off that they adhere to these requirements by adding a `Signed-off-by` line to commit messages.
|
||||
|
||||
```
|
||||
This is my commit message
|
||||
Signed-off-by: Random J Developer <random@developer.example.org>
|
||||
```
|
||||
Git even has a `-s` command line option to append this automatically to your commit message:
|
||||
```
|
||||
$ git commit -s -m 'This is my commit message'
|
||||
```
|
||||
|
||||
Each pull request is checked to verify that all of its commits contain a valid Signed-off-by line.
|
||||
|
||||
#### I didn't sign my commit, now what?!
|
||||
|
||||
No worries - you can easily replay your changes, sign them, and force push them!
|
||||
|
||||
```
|
||||
git checkout <branch-name>
|
||||
git commit --amend --no-edit --signoff
|
||||
git push --force-with-lease <remote-name> <branch-name>
|
||||
```
|
||||
|
||||
## Code of Conduct
|
||||
|
||||
Please see the [Dapr community code of conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md).
|
||||
|
|
|
@ -9,11 +9,11 @@ weight: 300
|
|||
Output bindings enable you to invoke external resources without taking dependencies on special SDKs or libraries.
|
||||
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
|
||||
|
||||
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
|
||||
## Example:
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
</div>
|
||||
The below code example loosely describes an application that processes orders. In the example, there is an order processing service which has a Dapr sidecar. The order processing service uses Dapr to invoke an external resource, in this case Kafka, via an output binding.
|
||||
|
||||
<img src="/images/building-block-output-binding-example.png" width=1000 alt="Diagram showing bindings of example service">
|
||||
|
||||
## 1. Create a binding
|
||||
|
||||
|
@ -21,7 +21,7 @@ An output binding represents a resource that Dapr uses to invoke and send messag
|
|||
|
||||
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref setup-bindings >}}).
|
||||
|
||||
Create a new binding component with the name of `myevent`.
|
||||
Create a new binding component with the name of `checkout`.
|
||||
|
||||
Inside the `metadata` section, configure Kafka related properties such as the topic to publish the message to and the broker.
|
||||
|
||||
|
@ -36,16 +36,24 @@ Create the following YAML file, named `binding.yaml`, and save this to a `compon
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: myevent
|
||||
namespace: default
|
||||
name: checkout
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
# Kafka broker connection setting
|
||||
- name: brokers
|
||||
value: localhost:9092
|
||||
# consumer configuration: topic and consumer group
|
||||
- name: topics
|
||||
value: sample
|
||||
- name: consumerGroup
|
||||
value: group1
|
||||
# publisher configuration: topic
|
||||
- name: publishTopic
|
||||
value: topic1
|
||||
value: sample
|
||||
- name: authRequired
|
||||
value: "false"
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
@ -59,40 +67,273 @@ To deploy this into a Kubernetes cluster, fill in the `metadata` connection deta
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: myevent
|
||||
namespace: default
|
||||
name: checkout
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
# Kafka broker connection setting
|
||||
- name: brokers
|
||||
value: localhost:9092
|
||||
# consumer configuration: topic and consumer group
|
||||
- name: topics
|
||||
value: sample
|
||||
- name: consumerGroup
|
||||
value: group1
|
||||
# publisher configuration: topic
|
||||
- name: publishTopic
|
||||
value: topic1
|
||||
value: sample
|
||||
- name: authRequired
|
||||
value: "false"
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## 2. Send an event
|
||||
## 2. Send an event (Output binding)
|
||||
|
||||
Below are code examples that leverage Dapr SDKs to interact with an output binding.
|
||||
|
||||
{{< tabs Dotnet Java Python Go Javascript>}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
//dependencies
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Net.Http;
|
||||
using System.Net.Http.Headers;
|
||||
using System.Threading.Tasks;
|
||||
using Dapr.Client;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using System.Threading;
|
||||
|
||||
//code
|
||||
namespace EventService
|
||||
{
|
||||
class Program
|
||||
{
|
||||
static async Task Main(string[] args)
|
||||
{
|
||||
string BINDING_NAME = "checkout";
|
||||
string BINDING_OPERATION = "create";
|
||||
while(true) {
|
||||
System.Threading.Thread.Sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.Next(1,1000);
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
//Using Dapr SDK to invoke output binding
|
||||
await client.InvokeBindingAsync(BINDING_NAME, BINDING_OPERATION, orderId);
|
||||
Console.WriteLine("Sending message: " + orderId);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --app-ssl dotnet run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.domain.HttpExtension;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import java.util.Random;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
public class OrderProcessingServiceApplication {
|
||||
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
|
||||
public static void main(String[] args) throws InterruptedException{
|
||||
String BINDING_NAME = "checkout";
|
||||
String BINDING_OPERATION = "create";
|
||||
while(true) {
|
||||
TimeUnit.MILLISECONDS.sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.nextInt(1000-1) + 1;
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
//Using Dapr SDK to invoke output binding
|
||||
client.invokeBinding(BINDING_NAME, BINDING_OPERATION, orderId).block();
|
||||
log.info("Sending message: " + orderId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
#dependencies
|
||||
import random
|
||||
from time import sleep
|
||||
import requests
|
||||
import logging
|
||||
import json
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
BINDING_NAME = 'checkout'
|
||||
BINDING_OPERATION = 'create'
|
||||
while True:
|
||||
sleep(random.randrange(50, 5000) / 1000)
|
||||
orderId = random.randint(1, 1000)
|
||||
with DaprClient() as client:
|
||||
#Using Dapr SDK to invoke output binding
|
||||
resp = client.invoke_binding(BINDING_NAME, BINDING_OPERATION, json.dumps(orderId))
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
logging.info('Sending message: ' + str(orderId))
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --app-protocol grpc python3 OrderProcessingService.py
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
//dependencies
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"math/rand"
|
||||
"time"
|
||||
"strconv"
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
|
||||
)
|
||||
|
||||
//code
|
||||
func main() {
|
||||
BINDING_NAME := "checkout";
|
||||
BINDING_OPERATION := "create";
|
||||
for i := 0; i < 10; i++ {
|
||||
time.Sleep(5000 * time.Millisecond)
|
||||
orderId := rand.Intn(1000-1) + 1
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
//Using Dapr SDK to invoke output binding
|
||||
in := &dapr.InvokeBindingRequest{ Name: BINDING_NAME, Operation: BINDING_OPERATION , Data: []byte(strconv.Itoa(orderId))}
|
||||
err = client.InvokeOutputBinding(ctx, in)
|
||||
log.Println("Sending message: " + strconv.Itoa(orderId))
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
//dependencies
|
||||
|
||||
import { DaprServer, DaprClient, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = async function() {
|
||||
for(var i=0;i<10;i++) {
|
||||
await sleep(5000);
|
||||
var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
|
||||
start(orderId).catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
async function start(orderId) {
|
||||
const BINDING_NAME = "checkout";
|
||||
const BINDING_OPERATION = "create";
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
//Using Dapr SDK to invoke output binding
|
||||
const result = await client.binding.send(BINDING_NAME, BINDING_OPERATION, { orderId: orderId });
|
||||
console.log("Sending message: " + orderId);
|
||||
}
|
||||
|
||||
function sleep(ms) {
|
||||
return new Promise(resolve => setTimeout(resolve, ms));
|
||||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
All that's left now is to invoke the output bindings endpoint on a running Dapr instance.
|
||||
|
||||
You can do so using HTTP:
|
||||
You can also invoke the output bindings endpoint using HTTP:
|
||||
|
||||
```bash
|
||||
curl -X POST -H 'Content-Type: application/json' http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
|
||||
curl -X POST -H 'Content-Type: application/json' http://localhost:3601/v1.0/bindings/checkout -d '{ "data": { "orderId": "100" }, "operation": "create" }'
|
||||
```
|
||||
|
||||
As seen above, you invoked the `/binding` endpoint with the name of the binding to invoke, in our case its `myevent`.
|
||||
As seen above, you invoked the `/binding` endpoint with the name of the binding to invoke, in our case it's `checkout`.
|
||||
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
|
||||
|
||||
You'll also notice that there's an `operation` field that tells the binding what you need it to do.
|
||||
You can check [here]({{< ref supported-bindings >}}) which operations are supported for every output binding.
|
||||
|
||||
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
</div>
|
||||
|
||||
## References
|
||||
|
||||
- [Binding API]({{< ref bindings_api.md >}})
|
||||
- [Binding components]({{< ref bindings >}})
|
||||
- [Binding detailed specifications]({{< ref supported-bindings >}})
|
||||
- [Binding detailed specifications]({{< ref supported-bindings >}})
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Trigger your application with input bindings"
|
||||
linkTitle: "How-To: Triggers"
|
||||
linkTitle: "How-To: Input bindings"
|
||||
description: "Use Dapr input bindings to trigger event driven applications"
|
||||
weight: 200
|
||||
---
|
||||
|
@ -18,78 +18,270 @@ Dapr bindings allow you to:
|
|||
|
||||
For more info on bindings, read [this overview]({{<ref bindings-overview.md>}}).
|
||||
|
||||
For a quickstart sample showing bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
|
||||
## Example:
|
||||
|
||||
The below code example loosely describes an application that processes orders. In the example, there is an order processing service which has a Dapr sidecar. The checkout service uses Dapr to trigger the application via an input binding.
|
||||
|
||||
<img src="/images/building-block-input-binding-example.png" width=1000 alt="Diagram showing bindings of example service">
|
||||
|
||||
## 1. Create a binding
|
||||
|
||||
An input binding represents an event resource that Dapr uses to read events from and push to your application.
|
||||
An input binding represents a resource that Dapr uses to read events from and push to your application.
|
||||
|
||||
For the purpose of this HowTo, we'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref supported-bindings >}}).
|
||||
For the purpose of this guide, you'll use a Kafka binding. You can find a list of supported binding components [here]({{< ref setup-bindings >}}).
|
||||
|
||||
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
|
||||
Create a new binding component with the name of `checkout`.
|
||||
|
||||
Inside the `metadata` section, configure Kafka related properties, such as the topic to publish the message to and the broker.
|
||||
|
||||
{{< tabs "Self-Hosted (CLI)" Kubernetes >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Create the following YAML file, named `binding.yaml`, and save this to a `components` sub-folder in your application directory.
|
||||
(Use the `--components-path` flag with `dapr run` to point to your custom components directory.)
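For example, a minimal sketch of pointing the CLI at that folder when you later run the app (the app command and ports are illustrative, matching the run commands used later in this guide):

```bash
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --components-path ./components -- npm start
```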
|
||||
|
||||
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: myevent
|
||||
namespace: default
|
||||
name: checkout
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
- name: topics
|
||||
value: topic1
|
||||
# Kafka broker connection setting
|
||||
- name: brokers
|
||||
value: localhost:9092
|
||||
# consumer configuration: topic and consumer group
|
||||
- name: topics
|
||||
value: sample
|
||||
- name: consumerGroup
|
||||
value: group1
|
||||
# publisher configuration: topic
|
||||
- name: publishTopic
|
||||
value: sample
|
||||
- name: authRequired
|
||||
value: "false"
|
||||
```
|
||||
|
||||
Here, you create a new binding component with the name of `myevent`.
|
||||
{{% /codetab %}}
|
||||
|
||||
Inside the `metadata` section, configure the Kafka related properties such as the topics to listen on, the brokers and more.
|
||||
{{% codetab %}}
|
||||
|
||||
## 2. Listen for incoming events
|
||||
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired binding component]({{< ref setup-bindings >}}) in the yaml below (in this case kafka), save as `binding.yaml`, and run `kubectl apply -f binding.yaml`.
|
||||
|
||||
Now configure your application to receive incoming events. If using HTTP, you need to listen on a `POST` endpoint with the name of the binding as specified in `metadata.name` in the file. In this example, this is `myevent`.
|
||||
|
||||
*The following example shows how you would listen for the event in Node.js, but this is applicable to any programming language*
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: checkout
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
# Kafka broker connection setting
|
||||
- name: brokers
|
||||
value: localhost:9092
|
||||
# consumer configuration: topic and consumer group
|
||||
- name: topics
|
||||
value: sample
|
||||
- name: consumerGroup
|
||||
value: group1
|
||||
# publisher configuration: topic
|
||||
- name: publishTopic
|
||||
value: sample
|
||||
- name: authRequired
|
||||
value: "false"
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## 2. Listen for incoming events (input binding)
|
||||
|
||||
Now configure your application to receive incoming events. If using HTTP, you need to listen on a `POST` endpoint with the name of the binding as specified in `metadata.name` in the file.
|
||||
|
||||
Below are code examples that leverage Dapr SDKs to demonstrate an input binding.
|
||||
|
||||
{{< tabs Dotnet Java Python Go Javascript>}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
//dependencies
|
||||
using System.Collections.Generic;
|
||||
using System.Threading.Tasks;
|
||||
using System;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
|
||||
//code
|
||||
namespace CheckoutService.controller
|
||||
{
|
||||
[ApiController]
|
||||
public class CheckoutServiceController : Controller
|
||||
{
|
||||
[HttpPost("/checkout")]
|
||||
public ActionResult<string> getCheckout([FromBody] int orderId)
|
||||
{
|
||||
Console.WriteLine("Received Message: " + orderId);
|
||||
return "CID" + orderId;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 --app-ssl dotnet run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
//dependencies
|
||||
import org.springframework.web.bind.annotation.*;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import reactor.core.publisher.Mono;
|
||||
|
||||
//code
|
||||
@RestController
|
||||
@RequestMapping("/")
|
||||
public class CheckoutServiceController {
|
||||
private static final Logger log = LoggerFactory.getLogger(CheckoutServiceController.class);
|
||||
@PostMapping(path = "/checkout")
|
||||
public Mono<String> getCheckout(@RequestBody(required = false) byte[] body) {
|
||||
return Mono.fromRunnable(() ->
|
||||
log.info("Received Message: " + new String(body)));
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 mvn spring-boot:run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
#dependencies
|
||||
import logging
|
||||
from dapr.ext.grpc import App, BindingRequest
|
||||
|
||||
#code
|
||||
app = App()
|
||||
|
||||
@app.binding('checkout')
|
||||
def getCheckout(request: BindingRequest):
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
logging.info('Received Message : ' + request.text())
|
||||
|
||||
app.run(6002)
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --app-protocol grpc -- python3 CheckoutService.py
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
//dependencies
|
||||
import (
|
||||
"encoding/json"
|
||||
"log"
|
||||
"net/http"
|
||||
"github.com/gorilla/mux"
|
||||
)
|
||||
|
||||
//code
|
||||
func getCheckout(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
var orderId int
|
||||
err := json.NewDecoder(r.Body).Decode(&orderId)
|
||||
log.Println("Received Message: ", orderId)
|
||||
if err != nil {
|
||||
log.Printf("error parsing checkout input binding payload: %s", err)
|
||||
w.WriteHeader(http.StatusOK)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
r := mux.NewRouter()
|
||||
r.HandleFunc("/checkout", getCheckout).Methods("POST", "OPTIONS")
|
||||
http.ListenAndServe(":6002", r)
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 go run CheckoutService.go
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
const express = require('express')
|
||||
const bodyParser = require('body-parser')
|
||||
const app = express()
|
||||
app.use(bodyParser.json())
|
||||
//dependencies
|
||||
import { DaprServer, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
const port = 3000
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
const serverHost = "127.0.0.1";
|
||||
const serverPort = "6002";
|
||||
const daprPort = "3602";
|
||||
|
||||
app.post('/myevent', (req, res) => {
|
||||
console.log(req.body)
|
||||
res.status(200).send()
|
||||
})
|
||||
start().catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
|
||||
async function start() {
|
||||
const server = new DaprServer(serverHost, serverPort, daprHost, daprPort, CommunicationProtocolEnum.HTTP);
|
||||
await server.binding.receive('checkout', async (orderId) => console.log(`Received Message: ${JSON.stringify(orderId)}`));
|
||||
await server.startServer();
|
||||
}
|
||||
|
||||
app.listen(port, () => console.log(`Kafka consumer app listening on port ${port}!`))
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 npm start
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
### ACK-ing an event
|
||||
|
||||
In order to tell Dapr that you successfully processed an event in your application, return a `200 OK` response from your HTTP handler.
|
||||
|
||||
```javascript
|
||||
res.status(200).send()
|
||||
```
|
||||
|
||||
### Rejecting an event
|
||||
|
||||
In order to tell Dapr that the event wasn't processed correctly in your application and schedule it for redelivery, return any response different from `200 OK`. For example, a `500 Error`.
|
||||
|
||||
```javascript
|
||||
res.status(500).send()
|
||||
```
|
||||
In order to tell Dapr that the event was not processed correctly in your application and schedule it for redelivery, return any response other than `200 OK`. For example, a `500 Error`.
|
||||
|
||||
### Specifying a custom route
|
||||
|
||||
|
@ -108,7 +300,6 @@ spec:
|
|||
### Event delivery guarantees
|
||||
Event delivery guarantees are controlled by the binding implementation. Depending on the binding implementation, the event delivery can be exactly once or at least once.
|
||||
|
||||
|
||||
## References
|
||||
|
||||
* [Bindings building block]({{< ref bindings >}})
|
||||
|
|
|
@ -20,33 +20,48 @@ When publishing a message, it's important to specify the content type of the dat
|
|||
Unless specified, Dapr will assume `text/plain`. When using Dapr's HTTP API, the content type can be set in a `Content-Type` header.
|
||||
gRPC clients and SDKs have a dedicated content type parameter.
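For example, a minimal sketch of setting the content type on the HTTP publish API (the port and pub/sub names are illustrative and match the examples later in this guide):

```bash
curl -X POST http://localhost:3601/v1.0/publish/order_pub_sub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": "100"}'
```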
|
||||
|
||||
## Step 1: Setup the Pub/Sub component
|
||||
The following example creates applications to publish and subscribe to a topic called `deathStarStatus`.
|
||||
## Example:
|
||||
|
||||
<img src="/images/pubsub-publish-subscribe-example.png" width=1000>
|
||||
<br></br>
|
||||
The below code example loosely describes an application that processes orders. In the example, there are two services - an order processing service and a checkout service. Both services have Dapr sidecars. The order processing service uses Dapr to publish a message to RabbitMQ and the checkout service subscribes to the topic in the message queue.
|
||||
|
||||
<img src="/images/building-block-pub-sub-example.png" width=1000 alt="Diagram showing state management of example service">
|
||||
|
||||
## Step 1: Setup the Pub/Sub component
|
||||
The following example creates applications to publish and subscribe to a topic called `orders`.
|
||||
|
||||
The first step is to setup the Pub/Sub component:
|
||||
|
||||
{{< tabs "Self-Hosted (CLI)" Kubernetes >}}
|
||||
|
||||
{{% codetab %}}
|
||||
Redis Streams is installed by default on a local machine when running `dapr init`.
|
||||
The pubsub.yaml is created by default on your local machine when running `dapr init`. Verify by opening your components file under `%UserProfile%\.dapr\components\pubsub.yaml` on Windows or `~/.dapr/components/pubsub.yaml` on Linux/MacOS.
|
||||
|
||||
In this example, RabbitMQ is used for publish and subscribe. Replace `pubsub.yaml` file contents with the below contents.
|
||||
|
||||
Verify by opening your components file under `%UserProfile%\.dapr\components\pubsub.yaml` on Windows or `~/.dapr/components/pubsub.yaml` on Linux/MacOS:
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: pubsub
|
||||
name: order_pub_sub
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
type: pubsub.rabbitmq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: redisHost
|
||||
value: localhost:6379
|
||||
- name: redisPassword
|
||||
value: ""
|
||||
- name: host
|
||||
value: "amqp://localhost:5672"
|
||||
- name: durable
|
||||
value: "false"
|
||||
- name: deletedWhenUnused
|
||||
value: "false"
|
||||
- name: autoAck
|
||||
value: "false"
|
||||
- name: reconnectWait
|
||||
value: "0"
|
||||
- name: concurrency
|
||||
value: parallel
|
||||
scopes:
|
||||
- orderprocessing
|
||||
- checkout
|
||||
```
|
||||
|
||||
You can override this file with another [pubsub component]({{< ref setup-pubsub >}}) by creating a `components` directory containing the file and using the flag `--components-path` with the `dapr run` CLI command.
|
||||
|
@ -59,16 +74,27 @@ To deploy this into a Kubernetes cluster, fill in the `metadata` connection deta
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: pubsub
|
||||
name: order_pub_sub
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
type: pubsub.rabbitmq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: redisHost
|
||||
value: localhost:6379
|
||||
- name: redisPassword
|
||||
value: ""
|
||||
- name: host
|
||||
value: "amqp://localhost:5672"
|
||||
- name: durable
|
||||
value: "false"
|
||||
- name: deletedWhenUnused
|
||||
value: "false"
|
||||
- name: autoAck
|
||||
value: "false"
|
||||
- name: reconnectWait
|
||||
value: "0"
|
||||
- name: concurrency
|
||||
value: parallel
|
||||
scopes:
|
||||
- orderprocessing
|
||||
- checkout
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
@ -95,41 +121,72 @@ You can subscribe to a topic using the following Custom Resources Definition (CR
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Subscription
|
||||
metadata:
|
||||
name: myevent-subscription
|
||||
name: order_pub_sub
|
||||
spec:
|
||||
topic: deathStarStatus
|
||||
route: /dsstatus
|
||||
pubsubname: pubsub
|
||||
topic: orders
|
||||
route: /checkout
|
||||
pubsubname: order_pub_sub
|
||||
scopes:
|
||||
- app1
|
||||
- app2
|
||||
- orderprocessing
|
||||
- checkout
|
||||
```
|
||||
|
||||
The example above shows an event subscription to topic `deathStarStatus`, for the pubsub component `pubsub`.
|
||||
- The `route` field tells Dapr to send all topic messages to the `/dsstatus` endpoint in the app.
|
||||
- The `scopes` field enables this subscription for apps with IDs `app1` and `app2`.
|
||||
The example above shows an event subscription to topic `orders`, for the pubsub component `order_pub_sub`.
|
||||
- The `route` field tells Dapr to send all topic messages to the `/checkout` endpoint in the app.
|
||||
- The `scopes` field enables this subscription for apps with IDs `orderprocessing` and `checkout`.
|
||||
|
||||
Set the component with:
|
||||
{{< tabs "Self-Hosted (CLI)" Kubernetes>}}
|
||||
|
||||
{{% codetab %}}
|
||||
Place the CRD in your `./components` directory. When Dapr starts up, it loads subscriptions along with components.
|
||||
|
||||
Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows.
|
||||
|
||||
You can also override the default directory by pointing the Dapr CLI to a components path:
|
||||
|
||||
{{< tabs Dotnet Java Python Go Javascript Kubernetes>}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --components-path ./myComponents -- python3 app1.py
|
||||
dapr run --app-id myapp --components-path ./myComponents -- dotnet run
|
||||
```
|
||||
|
||||
*Note: If you place the subscription in a custom components path, make sure the Pub/Sub component is also present.*
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --components-path ./myComponents -- mvn spring-boot:run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --components-path ./myComponents -- python3 app.py
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --components-path ./myComponents -- go run app.go
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
dapr run --app-id myapp --components-path ./myComponents -- npm start
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
In Kubernetes, save the CRD to a file and apply it to the cluster:
|
||||
|
||||
```bash
|
||||
kubectl apply -f subscription.yaml
|
||||
```
|
||||
|
@ -137,251 +194,233 @@ kubectl apply -f subscription.yaml
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
#### Example
|
||||
Below are code examples that leverage Dapr SDKs to subscribe to a topic.
|
||||
|
||||
{{< tabs Python Node PHP>}}
|
||||
|
||||
{{% codetab %}}
|
||||
Create a file named `app1.py` and paste in the following:
|
||||
```python
|
||||
import flask
|
||||
from flask import request, jsonify
|
||||
from flask_cors import CORS
|
||||
import json
|
||||
import sys
|
||||
|
||||
app = flask.Flask(__name__)
|
||||
CORS(app)
|
||||
|
||||
@app.route('/dsstatus', methods=['POST'])
|
||||
def ds_subscriber():
|
||||
print(request.json, flush=True)
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
|
||||
app.run()
|
||||
```
|
||||
After creating `app1.py` ensure flask and flask_cors are installed:
|
||||
|
||||
```bash
|
||||
pip install flask
|
||||
pip install flask_cors
|
||||
```
|
||||
|
||||
Then run:
|
||||
|
||||
```bash
|
||||
dapr --app-id app1 --app-port 5000 run python app1.py
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
After setting up the subscription above, download this javascript (Node > 4.16) into a `app2.js` file:
|
||||
|
||||
```javascript
|
||||
const express = require('express')
|
||||
const bodyParser = require('body-parser')
|
||||
const app = express()
|
||||
app.use(bodyParser.json({ type: 'application/*+json' }));
|
||||
|
||||
const port = 3000
|
||||
|
||||
app.post('/dsstatus', (req, res) => {
|
||||
console.log(req.body);
|
||||
res.sendStatus(200);
|
||||
});
|
||||
|
||||
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
|
||||
```
|
||||
Run this app with:
|
||||
|
||||
```bash
|
||||
dapr --app-id app2 --app-port 3000 run node app2.js
|
||||
```
|
||||
{{% /codetab %}}
|
||||
{{< tabs Dotnet Java Python Go Javascript>}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Create a file named `app1.php` and paste in the following:
|
||||
```csharp
|
||||
//dependencies
|
||||
using System.Collections.Generic;
|
||||
using System.Threading.Tasks;
|
||||
using System;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using Dapr;
|
||||
using Dapr.Client;
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->post('/dsstatus', function(
|
||||
#[\Dapr\Attributes\FromBody]
|
||||
\Dapr\PubSub\CloudEvent $cloudEvent,
|
||||
\Psr\Log\LoggerInterface $logger
|
||||
) {
|
||||
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
|
||||
return ['status' => 'SUCCESS'];
|
||||
}
|
||||
);
|
||||
$app->start();
|
||||
```
|
||||
|
||||
After creating `app1.php`, and with the [SDK installed](https://docs.dapr.io/developing-applications/sdks/php/),
|
||||
go ahead and start the app:
|
||||
|
||||
```bash
|
||||
dapr --app-id app1 --app-port 3000 run -- php -S 0.0.0.0:3000 app1.php
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
### Programmatic subscriptions
|
||||
|
||||
To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
|
||||
The Dapr instance calls into your app at startup and expects a JSON response describing the topic subscriptions (a minimal response is sketched after this list), with:
|
||||
- `pubsubname`: Which pub/sub component Dapr should use.
|
||||
- `topic`: Which topic to subscribe to.
|
||||
- `route`: Which endpoint for Dapr to call on when a message comes to that topic.
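For instance, a minimal response matching the programmatic examples below looks like this:

```json
[
  {
    "pubsubname": "pubsub",
    "topic": "deathStarStatus",
    "route": "dsstatus"
  }
]
```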
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Python Node PHP>}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
import flask
|
||||
from flask import request, jsonify
|
||||
from flask_cors import CORS
|
||||
import json
|
||||
import sys
|
||||
|
||||
app = flask.Flask(__name__)
|
||||
CORS(app)
|
||||
|
||||
@app.route('/dapr/subscribe', methods=['GET'])
|
||||
def subscribe():
|
||||
subscriptions = [{'pubsubname': 'pubsub',
|
||||
'topic': 'deathStarStatus',
|
||||
'route': 'dsstatus'}]
|
||||
return jsonify(subscriptions)
|
||||
|
||||
@app.route('/dsstatus', methods=['POST'])
|
||||
def ds_subscriber():
|
||||
print(request.json, flush=True)
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
app.run()
|
||||
```
|
||||
After creating `app1.py` ensure flask and flask_cors are installed:
|
||||
|
||||
```bash
|
||||
pip install flask
|
||||
pip install flask_cors
|
||||
```
|
||||
|
||||
Then run:
|
||||
|
||||
```bash
|
||||
dapr --app-id app1 --app-port 5000 run python app1.py
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```javascript
|
||||
const express = require('express')
|
||||
const bodyParser = require('body-parser')
|
||||
const app = express()
|
||||
app.use(bodyParser.json({ type: 'application/*+json' }));
|
||||
|
||||
const port = 3000
|
||||
|
||||
app.get('/dapr/subscribe', (req, res) => {
|
||||
res.json([
|
||||
//code
|
||||
namespace CheckoutService.controller
|
||||
{
|
||||
[ApiController]
|
||||
public class CheckoutServiceController : Controller
|
||||
{
|
||||
//Subscribe to a topic
|
||||
[Topic("order_pub_sub", "orders")]
|
||||
[HttpPost("checkout")]
|
||||
public void getCheckout([FromBody] int orderId)
|
||||
{
|
||||
pubsubname: "pubsub",
|
||||
topic: "deathStarStatus",
|
||||
route: "dsstatus"
|
||||
Console.WriteLine("Subscriber received : " + orderId);
|
||||
}
|
||||
]);
|
||||
})
|
||||
|
||||
app.post('/dsstatus', (req, res) => {
|
||||
console.log(req.body);
|
||||
res.sendStatus(200);
|
||||
});
|
||||
|
||||
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
|
||||
}
|
||||
}
|
||||
```
|
||||
Run this app with:
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr --app-id app2 --app-port 3000 run node app2.js
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 --app-ssl dotnet run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Update `app1.php` with the following:
|
||||
```java
|
||||
//dependencies
|
||||
import io.dapr.Topic;
|
||||
import io.dapr.client.domain.CloudEvent;
|
||||
import org.springframework.web.bind.annotation.*;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import reactor.core.publisher.Mono;
|
||||
|
||||
```php
|
||||
<?php
|
||||
//code
|
||||
@RestController
|
||||
public class CheckoutServiceController {
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
|
||||
new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', route: '/dsstatus'),
|
||||
]]));
|
||||
$app->post('/dsstatus', function(
|
||||
#[\Dapr\Attributes\FromBody]
|
||||
\Dapr\PubSub\CloudEvent $cloudEvent,
|
||||
\Psr\Log\LoggerInterface $logger
|
||||
) {
|
||||
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
|
||||
return ['status' => 'SUCCESS'];
|
||||
private static final Logger log = LoggerFactory.getLogger(CheckoutServiceController.class);
|
||||
//Subscribe to a topic
|
||||
@Topic(name = "orders", pubsubName = "order_pub_sub")
|
||||
@PostMapping(path = "/checkout")
|
||||
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
|
||||
return Mono.fromRunnable(() -> {
|
||||
try {
|
||||
log.info("Subscriber received: " + cloudEvent.getData());
|
||||
} catch (Exception e) {
|
||||
throw new RuntimeException(e);
|
||||
}
|
||||
});
|
||||
}
|
||||
);
|
||||
$app->start();
|
||||
}
|
||||
```
|
||||
|
||||
Run this app with:
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr --app-id app1 --app-port 3000 run -- php -S 0.0.0.0:3000 app1.php
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 mvn spring-boot:run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
#dependencies
|
||||
from cloudevents.sdk.event import v1
|
||||
from dapr.ext.grpc import App
|
||||
import logging
|
||||
import json
|
||||
|
||||
#code
|
||||
app = App()
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
#Subscribe to a topic
|
||||
@app.subscribe(pubsub_name='order_pub_sub', topic='orders')
|
||||
def mytopic(event: v1.Event) -> None:
|
||||
data = json.loads(event.Data())
|
||||
logging.info('Subscriber received: ' + str(data))
|
||||
|
||||
app.run(6002)
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --app-protocol grpc -- python3 CheckoutService.py
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
//dependencies
|
||||
import (
|
||||
"log"
|
||||
"net/http"
|
||||
"context"
|
||||
|
||||
"github.com/dapr/go-sdk/service/common"
|
||||
daprd "github.com/dapr/go-sdk/service/http"
|
||||
)
|
||||
|
||||
//code
|
||||
var sub = &common.Subscription{
|
||||
PubsubName: "order_pub_sub",
|
||||
Topic: "orders",
|
||||
Route: "/checkout",
|
||||
}
|
||||
|
||||
func main() {
|
||||
s := daprd.NewService(":6002")
|
||||
//Subscribe to a topic
|
||||
if err := s.AddTopicEventHandler(sub, eventHandler); err != nil {
|
||||
log.Fatalf("error adding topic subscription: %v", err)
|
||||
}
|
||||
if err := s.Start(); err != nil && err != http.ErrServerClosed {
|
||||
log.Fatalf("error listenning: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
|
||||
log.Printf("Subscriber received: %s", e.Data)
|
||||
return false, nil
|
||||
}
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 go run CheckoutService.go
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
//dependencies
|
||||
import { DaprServer, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
const serverHost = "127.0.0.1";
|
||||
const serverPort = "6002";
|
||||
|
||||
start().catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
|
||||
async function start() {
|
||||
const server = new DaprServer(
|
||||
serverHost,
|
||||
serverPort,
|
||||
daprHost,
|
||||
process.env.DAPR_HTTP_PORT,
|
||||
CommunicationProtocolEnum.HTTP
|
||||
);
|
||||
//Subscribe to a topic
|
||||
await server.pubsub.subscribe("order_pub_sub", "orders", async (orderId) => {
|
||||
console.log(`Subscriber received: ${JSON.stringify(orderId)}`)
|
||||
});
|
||||
await server.startServer();
|
||||
}
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 npm start
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
The `/dsstatus` endpoint matches the `route` defined in the subscriptions and this is where Dapr will send all topic messages to.
|
||||
The `/checkout` endpoint matches the `route` defined in the subscription, and this is where Dapr sends all topic messages.
|
||||
|
||||
## Step 3: Publish a topic
|
||||
|
||||
To publish a message to a topic you need to run an instance of a Dapr sidecar that uses the pub/sub component configured above.
|
||||
|
||||
Start an instance of Dapr with an app-id called `testpubsub`:
|
||||
Start an instance of Dapr with an app-id called `orderprocessing`:
|
||||
|
||||
```bash
|
||||
dapr run --app-id testpubsub --dapr-http-port 3500
|
||||
dapr run --app-id orderprocessing --dapr-http-port 3601
|
||||
```
|
||||
{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Then publish a message to the `deathStarStatus` topic:
|
||||
Then publish a message to the `orders` topic:
|
||||
|
||||
```bash
|
||||
dapr publish --publish-app-id testpubsub --pubsub pubsub --topic deathStarStatus --data '{"status": "completed"}'
|
||||
dapr publish --publish-app-id orderprocessing --pubsub order_pub_sub --topic orders --data '{"orderId": "100"}'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Then publish a message to the `deathStarStatus` topic:
|
||||
Then publish a message to the `orders` topic:
|
||||
```bash
|
||||
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/json" -d '{"status": "completed"}'
|
||||
curl -X POST http://localhost:3601/v1.0/publish/order_pub_sub/orders -H "Content-Type: application/json" -d '{"orderId": "100"}'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Then publish a message to the `deathStarStatus` topic:
|
||||
Then publish a message to the `orders` topic:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status": "completed"}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"orderId": "100"}' -Uri 'http://localhost:3601/v1.0/publish/order_pub_sub/orders'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
@ -389,96 +428,240 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status":
|
|||
|
||||
Dapr automatically wraps the user payload in a CloudEvents v1.0 compliant envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
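As an illustration, a published payload of `{"orderId": "100"}` might reach the subscriber wrapped in an envelope roughly like the following (field values are illustrative and the exact attribute set depends on the Dapr version):

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "orderprocessing",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "pubsubname": "order_pub_sub",
  "topic": "orders",
  "data": { "orderId": "100" }
}
```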
|
||||
|
||||
## Step 4: ACK-ing a message
|
||||
Below are code examples that leverage Dapr SDKs to publish a message to a topic.
|
||||
|
||||
In order to tell Dapr that a message was processed successfully, return a `200 OK` response. If Dapr receives any other return status code than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics.
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Python Node>}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
@app.route('/dsstatus', methods=['POST'])
|
||||
def ds_subscriber():
|
||||
print(request.json, flush=True)
|
||||
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```javascript
|
||||
app.post('/dsstatus', (req, res) => {
|
||||
res.sendStatus(200);
|
||||
});
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
{{% alert title="Note on message redelivery" color="primary" %}}
|
||||
Some pubsub components (e.g. Redis) will redeliver a message if a response is not sent back within a specified time window. Make sure to configure metadata such as `processingTimeout` to customize this behavior. For more information refer to the respective [component references]({{< ref supported-pubsub >}}).
|
||||
{{% /alert %}}
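For instance, a sketch of tuning redelivery on a Redis pub/sub component might look like the following (the available metadata fields and their defaults depend on the component, so check its reference page):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order_pub_sub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  # assumption: redeliver a message if it is not acknowledged within 60 seconds
  - name: processingTimeout
    value: "60s"
```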
|
||||
|
||||
## (Optional) Step 5: Publishing a topic with code
|
||||
|
||||
{{< tabs Node PHP>}}
|
||||
|
||||
{{% codetab %}}
|
||||
If you prefer publishing a topic using code, here is an example.
|
||||
|
||||
```javascript
|
||||
const express = require('express');
|
||||
const path = require('path');
|
||||
const request = require('request');
|
||||
const bodyParser = require('body-parser');
|
||||
|
||||
const app = express();
|
||||
app.use(bodyParser.json());
|
||||
|
||||
const daprPort = process.env.DAPR_HTTP_PORT || 3500;
|
||||
const daprUrl = `http://localhost:${daprPort}/v1.0`;
|
||||
const port = 8080;
|
||||
const pubsubName = 'pubsub';
|
||||
|
||||
app.post('/publish', (req, res) => {
|
||||
console.log("Publishing: ", req.body);
|
||||
const publishUrl = `${daprUrl}/publish/${pubsubName}/deathStarStatus`;
|
||||
request( { uri: publishUrl, method: 'POST', json: req.body } );
|
||||
res.sendStatus(200);
|
||||
});
|
||||
|
||||
app.listen(process.env.PORT || port, () => console.log(`Listening on port ${port}!`));
|
||||
```
|
||||
{{% /codetab %}}
|
||||
{{< tabs Dotnet Java Python Go Javascript>}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
If you prefer to publish a message to a topic using code, here is an example.
|
||||
```csharp
|
||||
//dependencies
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Net.Http;
|
||||
using System.Net.Http.Headers;
|
||||
using System.Threading.Tasks;
|
||||
using Dapr.Client;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using System.Threading;
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(\DI\FactoryInterface $factory, \Psr\Log\LoggerInterface $logger) {
|
||||
$publisher = $factory->make(\Dapr\PubSub\Publish::class, ['pubsub' => 'pubsub']);
|
||||
$publisher->topic('deathStarStatus')->publish('operational');
|
||||
$logger->alert('published!');
|
||||
});
|
||||
//code
|
||||
namespace EventService
|
||||
{
|
||||
class Program
|
||||
{
|
||||
static async Task Main(string[] args)
|
||||
{
|
||||
string PUBSUB_NAME = "order_pub_sub";
|
||||
string TOPIC_NAME = "orders";
|
||||
while(true) {
|
||||
System.Threading.Thread.Sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.Next(1,1000);
|
||||
CancellationTokenSource source = new CancellationTokenSource();
|
||||
CancellationToken cancellationToken = source.Token;
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
//Using Dapr SDK to publish a topic
|
||||
await client.PublishEventAsync(PUBSUB_NAME, TOPIC_NAME, orderId, cancellationToken);
|
||||
Console.WriteLine("Published data: " + orderId);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
You can save this to `app2.php` and while `app1` is running in another terminal, execute:
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr --app-id app2 run -- php app2.php
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --app-ssl dotnet run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.domain.Metadata;
|
||||
import static java.util.Collections.singletonMap;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import java.util.Random;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
public class OrderProcessingServiceApplication {
|
||||
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
|
||||
public static void main(String[] args) throws InterruptedException{
|
||||
String MESSAGE_TTL_IN_SECONDS = "1000";
|
||||
String TOPIC_NAME = "orders";
|
||||
String PUBSUB_NAME = "order_pub_sub";
|
||||
|
||||
while(true) {
|
||||
TimeUnit.MILLISECONDS.sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.nextInt(1000-1) + 1;
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
//Using Dapr SDK to publish a topic
|
||||
client.publishEvent(
|
||||
PUBSUB_NAME,
|
||||
TOPIC_NAME,
|
||||
orderId,
|
||||
singletonMap(Metadata.TTL_IN_SECONDS, MESSAGE_TTL_IN_SECONDS)).block();
|
||||
log.info("Published data:" + orderId);
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 mvn spring-boot:run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
#dependencies
|
||||
import random
|
||||
from time import sleep
|
||||
import requests
|
||||
import logging
|
||||
import json
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
while True:
|
||||
sleep(random.randrange(50, 5000) / 1000)
|
||||
orderId = random.randint(1, 1000)
|
||||
PUBSUB_NAME = 'order_pub_sub'
|
||||
TOPIC_NAME = 'orders'
|
||||
with DaprClient() as client:
|
||||
#Using Dapr SDK to publish a topic
|
||||
result = client.publish_event(
|
||||
pubsub_name=PUBSUB_NAME,
|
||||
topic_name=TOPIC_NAME,
|
||||
data=json.dumps(orderId),
|
||||
data_content_type='application/json',
|
||||
)
|
||||
logging.info('Published data: ' + str(orderId))
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --app-protocol grpc python3 OrderProcessingService.py
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
//dependencies
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"math/rand"
|
||||
"time"
|
||||
"strconv"
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
)
|
||||
|
||||
//code
|
||||
var (
|
||||
PUBSUB_NAME = "order_pub_sub"
|
||||
TOPIC_NAME = "orders"
|
||||
)
|
||||
|
||||
func main() {
|
||||
for i := 0; i < 10; i++ {
|
||||
time.Sleep(5 * time.Second)
|
||||
orderId := rand.Intn(1000-1) + 1
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
//Using Dapr SDK to publish a topic
|
||||
if err := client.PublishEvent(ctx, PUBSUB_NAME, TOPIC_NAME, []byte(strconv.Itoa(orderId)));
|
||||
err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
log.Println("Published data: " + strconv.Itoa(orderId))
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run OrderProcessingService.go
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
//dependencies
|
||||
import { DaprServer, DaprClient, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = function() {
|
||||
for(var i=0;i<10;i++) {
|
||||
sleep(5000);
|
||||
var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
|
||||
start(orderId).catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
async function start(orderId) {
|
||||
const PUBSUB_NAME = "order_pub_sub"
|
||||
const TOPIC_NAME = "orders"
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
console.log("Published data:" + orderId)
|
||||
//Using Dapr SDK to publish a topic
|
||||
await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId);
|
||||
}
|
||||
|
||||
function sleep(ms) {
|
||||
return new Promise(resolve => setTimeout(resolve, ms));
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 npm start
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Step 4: ACK-ing a message
|
||||
|
||||
To tell Dapr that a message was processed successfully, return a `200 OK` response. If Dapr receives any return status code other than `200`, or if your app crashes, Dapr attempts to redeliver the message following at-least-once semantics.
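For example, a Node.js subscriber built with Express can signal success or failure through its response code. This is a minimal sketch; the `/orders` route, port, and payload handling are illustrative and not taken from the sample apps above:

```javascript
const express = require('express');
const app = express();
// Dapr delivers messages as CloudEvents, so accept that content type as JSON too
app.use(express.json({ type: ['application/json', 'application/cloudevents+json'] }));

app.post('/orders', (req, res) => {
  try {
    console.log('Received message:', req.body.data);
    res.sendStatus(200); // ACK: Dapr considers the message processed
  } catch (err) {
    console.error(err);
    res.sendStatus(500); // non-200: Dapr will attempt to redeliver the message
  }
});

app.listen(process.env.PORT || 8080);
```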
|
||||
|
||||
## Sending a custom CloudEvent
|
||||
|
||||
Dapr automatically takes the data sent on the publish request and wraps it in a CloudEvent 1.0 envelope.
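For example, when you publish a plain JSON payload with the JavaScript SDK (a sketch; the pubsub name, topic, and payload are illustrative), subscribers receive that payload inside a CloudEvent envelope, with the original content under the `data` field:

```javascript
import { DaprClient, CommunicationProtocolEnum } from 'dapr-client';

const client = new DaprClient("127.0.0.1", process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);

async function publishOrder() {
  // The payload below is published as-is; Dapr wraps it in a CloudEvent 1.0
  // envelope (id, source, type, specversion, data, ...) before delivery.
  await client.pubsub.publish('order_pub_sub', 'orders', { orderId: 100 });
}

publishOrder().catch((e) => console.error(e));
```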
|
||||
|
@ -491,23 +674,23 @@ Read about content types [here](#content-types), and about the [Cloud Events mes
|
|||
{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}}
|
||||
|
||||
{{% codetab %}}
|
||||
Publish a custom CloudEvent to the `deathStarStatus` topic:
|
||||
Publish a custom CloudEvent to the `orders` topic:
|
||||
```bash
|
||||
dapr publish --publish-app-id testpubsub --pubsub pubsub --topic deathStarStatus --data '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"status": "completed"}}'
|
||||
dapr publish --publish-app-id orderprocessing --pubsub order_pub_sub --topic orders --data '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Publish a custom CloudEvent to the `deathStarStatus` topic:
|
||||
Publish a custom CloudEvent to the `orders` topic:
|
||||
```bash
|
||||
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/cloudevents+json" -d '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"status": "completed"}}'
|
||||
curl -X POST http://localhost:3601/v1.0/publish/order_pub_sub/orders -H "Content-Type: application/cloudevents+json" -d '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
Publish a custom CloudEvent to the `deathStarStatus` topic:
|
||||
Publish a custom CloudEvent to the `orders` topic:
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"status": "completed"}}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}' -Uri 'http://localhost:3601/v1.0/publish/order_pub_sub/orders'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
@ -521,4 +704,4 @@ Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body
|
|||
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
|
||||
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
|
||||
- List of [pub/sub components]({{< ref setup-pubsub >}})
|
||||
- Read the [API reference]({{< ref pubsub_api.md >}})
|
||||
- Read the [API reference]({{< ref pubsub_api.md >}})
|
||||
|
|
|
@ -55,7 +55,6 @@ To configure a different kind of secret store see the guidance on [how to config
|
|||
|
||||
Run the Dapr sidecar with the application.
|
||||
|
||||
|
||||
{{< tabs Dotnet Java Python Go Javascript>}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
@ -110,9 +109,16 @@ Once you have a secret store, call Dapr to get the secrets from your application
|
|||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
|
||||
//dependencies
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Net.Http;
|
||||
using System.Net.Http.Headers;
|
||||
using System.Threading.Tasks;
|
||||
using Dapr.Client;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using System.Threading;
|
||||
using System.Text.Json;
|
||||
|
||||
//code
|
||||
namespace EventService
|
||||
|
@ -126,54 +132,59 @@ namespace EventService
|
|||
//Using Dapr SDK to get a secret
|
||||
var secret = await client.GetSecretAsync(SECRET_STORE_NAME, "secret");
|
||||
Console.WriteLine($"Result: {string.Join(", ", secret)}");
|
||||
Console.WriteLine($"Result for bulk: {string.Join(", ", secret.Keys)}");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
|
||||
//dependencies
|
||||
import com.fasterxml.jackson.core.JsonProcessingException;
|
||||
import com.fasterxml.jackson.databind.ObjectMapper;
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import java.util.Map;
|
||||
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
public class OrderProcessingServiceApplication {
|
||||
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
private static final ObjectMapper JSON_SERIALIZER = new ObjectMapper();
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
private static final ObjectMapper JSON_SERIALIZER = new ObjectMapper();
|
||||
|
||||
private static final String SECRET_STORE_NAME = "localsecretstore";
|
||||
private static final String SECRET_STORE_NAME = "localsecretstore";
|
||||
|
||||
public static void main(String[] args) throws InterruptedException, JsonProcessingException {
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
public static void main(String[] args) throws InterruptedException, JsonProcessingException {
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
//Using Dapr SDK to get a secret
|
||||
Map<String, String> secret = client.getSecret(SECRET_STORE_NAME, "secret").block();
|
||||
log.info("Result: " + JSON_SERIALIZER.writeValueAsString(secret));
|
||||
}
|
||||
Map<String, String> secret = client.getSecret(SECRET_STORE_NAME, "secret").block();
|
||||
log.info("Result: " + JSON_SERIALIZER.writeValueAsString(secret));
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
|
||||
#dependencies
|
||||
import random
|
||||
from time import sleep
|
||||
import requests
|
||||
import logging
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
|
||||
DAPR_STORE_NAME = "localsecretstore"
|
||||
key = 'secret'
|
||||
|
||||
|
@ -182,21 +193,21 @@ with DaprClient() as client:
|
|||
secret = client.get_secret(store_name=DAPR_STORE_NAME, key=key)
|
||||
logging.info('Result: ')
|
||||
logging.info(secret.secret)
|
||||
#Using Dapr SDK to get bulk secrets
|
||||
secret = client.get_bulk_secret(store_name=DAPR_STORE_NAME)
|
||||
logging.info('Result for bulk secret: ')
|
||||
logging.info(sorted(secret.secrets.items()))
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
|
||||
//dependencies
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
)
|
||||
|
||||
|
@ -209,35 +220,26 @@ func main() {
|
|||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
|
||||
//Using Dapr SDK to get a secret
|
||||
secret, err := client.GetSecret(ctx, SECRET_STORE_NAME, "secret", nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
if secret != nil {
|
||||
log.Println("Result : ")
|
||||
log.Println(secret)
|
||||
}
|
||||
|
||||
secretRandom, err := client.GetBulkSecret(ctx, SECRET_STORE_NAME, nil)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Got error for accessing key")
|
||||
}
|
||||
//Using Dapr SDK to get bulk secrets
|
||||
secretBulk, err := client.GetBulkSecret(ctx, SECRET_STORE_NAME, nil)
|
||||
|
||||
if secret != nil {
|
||||
log.Println("Result for bulk: ")
|
||||
log.Println(secretRandom)
|
||||
log.Println(secretBulk)
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
|
||||
//dependencies
|
||||
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
|
@ -247,14 +249,15 @@ const daprHost = "127.0.0.1";
|
|||
async function main() {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const SECRET_STORE_NAME = "localsecretstore";
|
||||
//Using Dapr SDK to get a secret
|
||||
var secret = await client.secret.get(SECRET_STORE_NAME, "secret");
|
||||
console.log("Result: " + secret);
|
||||
//Using Dapr SDK to get bulk secrets
|
||||
secret = await client.secret.getBulk(SECRET_STORE_NAME);
|
||||
console.log("Result for bulk: " + secret);
|
||||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
@ -267,4 +270,4 @@ main();
|
|||
- [Configure a secret store]({{<ref setup-secret-store>}})
|
||||
- [Supported secrets]({{<ref supported-secret-stores>}})
|
||||
- [Using secrets in components]({{<ref component-secrets>}})
|
||||
- [Secret stores quickstart](https://github.com/dapr/quickstarts/tree/master/secretstore)
|
||||
- [Secret stores quickstart](https://github.com/dapr/quickstarts/tree/master/secretstore)
|
||||
|
|
|
@ -183,9 +183,15 @@ Below are code examples that leverage Dapr SDKs for service invocation.
|
|||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
|
||||
//dependencies
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Net.Http;
|
||||
using System.Net.Http.Headers;
|
||||
using System.Threading.Tasks;
|
||||
using Dapr.Client;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using System.Threading;
|
||||
|
||||
//code
|
||||
namespace EventService
|
||||
|
@ -194,113 +200,164 @@ namespace EventService
|
|||
{
|
||||
static async Task Main(string[] args)
|
||||
{
|
||||
int orderId = 100;
|
||||
CancellationTokenSource source = new CancellationTokenSource();
|
||||
CancellationToken cancellationToken = source.Token;
|
||||
//Using Dapr SDK to invoke a method
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
var result = client.CreateInvokeMethodRequest(HttpMethod.Get, "checkout", "checkout/" + orderId, cancellationToken);
|
||||
await client.InvokeMethodAsync(result);
|
||||
while(true) {
|
||||
System.Threading.Thread.Sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.Next(1,1000);
|
||||
CancellationTokenSource source = new CancellationTokenSource();
|
||||
CancellationToken cancellationToken = source.Token;
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
//Using Dapr SDK to invoke a method
|
||||
var result = client.CreateInvokeMethodRequest(HttpMethod.Get, "checkout", "checkout/" + orderId, cancellationToken);
|
||||
await client.InvokeMethodAsync(result);
|
||||
Console.WriteLine("Order requested: " + orderId);
|
||||
Console.WriteLine("Result: " + result);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
```java
|
||||
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.domain.HttpExtension;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import java.util.Random;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
public class OrderProcessingServiceApplication {
|
||||
public static void main(String[] args) throws InterruptedException {
|
||||
int orderId = 100;
|
||||
//Using Dapr SDK to invoke a method
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
var result = client.invokeMethod(
|
||||
"checkout",
|
||||
"checkout/" + orderId,
|
||||
null,
|
||||
HttpExtension.GET,
|
||||
String.class
|
||||
);
|
||||
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
|
||||
public static void main(String[] args) throws InterruptedException{
|
||||
while(true) {
|
||||
TimeUnit.MILLISECONDS.sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.nextInt(1000-1) + 1;
|
||||
DaprClient daprClient = new DaprClientBuilder().build();
|
||||
//Using Dapr SDK to invoke a method
|
||||
var result = daprClient.invokeMethod(
|
||||
"checkout",
|
||||
"checkout/" + orderId,
|
||||
null,
|
||||
HttpExtension.GET,
|
||||
String.class
|
||||
);
|
||||
log.info("Order requested: " + orderId);
|
||||
log.info("Result: " + result);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
|
||||
#dependencies
|
||||
import random
|
||||
from time import sleep
|
||||
import logging
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
#code
|
||||
orderId = 100
|
||||
#Using Dapr SDK to invoke a method
|
||||
with DaprClient() as client:
|
||||
result = client.invoke_method(
|
||||
"checkout",
|
||||
f"checkout/{orderId}",
|
||||
data=b'',
|
||||
http_verb="GET"
|
||||
)
|
||||
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
while True:
|
||||
sleep(random.randrange(50, 5000) / 1000)
|
||||
orderId = random.randint(1, 1000)
|
||||
with DaprClient() as daprClient:
|
||||
#Using Dapr SDK to invoke a method
|
||||
result = daprClient.invoke_method(
|
||||
"checkout",
|
||||
f"checkout/{orderId}",
|
||||
data=b'',
|
||||
http_verb="GET"
|
||||
)
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
logging.info('Order requested: ' + str(orderId))
|
||||
logging.info('Result: ' + str(result))
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```go
|
||||
|
||||
//dependencies
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"math/rand"
|
||||
"time"
|
||||
"strconv"
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
|
||||
)
|
||||
|
||||
//code
|
||||
func main() {
|
||||
orderId := 100
|
||||
//Using Dapr SDK to invoke a method
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
result, err := client.InvokeMethod(ctx, "checkout", "checkout/" + strconv.Itoa(orderId), "get")
|
||||
type Order struct {
|
||||
orderName string
|
||||
orderNum string
|
||||
}
|
||||
|
||||
func main() {
|
||||
for i := 0; i < 10; i++ {
|
||||
time.Sleep(5000)
|
||||
orderId := rand.Intn(1000-1) + 1
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
//Using Dapr SDK to invoke a method
|
||||
result, err := client.InvokeMethod(ctx, "checkout", "checkout/" + strconv.Itoa(orderId), "get")
|
||||
log.Println("Order requested: " + strconv.Itoa(orderId))
|
||||
log.Println("Result: ")
|
||||
log.Println(result)
|
||||
}
|
||||
}
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```javascript
|
||||
|
||||
//dependencies
|
||||
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = function() {
|
||||
var orderId = 100;
|
||||
//Using Dapr SDK to invoke a method
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const result = await client.invoker.invoke('checkout' , "checkout/" + orderId , HttpMethod.GET);
|
||||
for(var i=0;i<10;i++) {
|
||||
sleep(5000);
|
||||
var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
|
||||
start(orderId).catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
async function start(orderId) {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
//Using Dapr SDK to invoke a method
|
||||
const result = await client.invoker.invoke('checkoutservice' , "checkout/" + orderId , HttpMethod.GET);
|
||||
console.log("Order requested: " + orderId);
|
||||
console.log("Result: " + result);
|
||||
}
|
||||
|
||||
function sleep(ms) {
|
||||
return new Promise(resolve => setTimeout(resolve, ms));
|
||||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
|
|
@ -81,7 +81,15 @@ Below are code examples that leverage Dapr SDKs for saving and retrieving a sing
|
|||
```csharp
|
||||
|
||||
//dependencies
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Net.Http;
|
||||
using System.Net.Http.Headers;
|
||||
using System.Threading.Tasks;
|
||||
using Dapr.Client;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using System.Threading;
|
||||
using System.Text.Json;
|
||||
|
||||
//code
|
||||
namespace EventService
|
||||
|
@ -91,17 +99,20 @@ namespace EventService
|
|||
static async Task Main(string[] args)
|
||||
{
|
||||
string DAPR_STORE_NAME = "statestore";
|
||||
|
||||
int orderId = 100;
|
||||
//Using Dapr SDK to save and get state
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
await client.SaveStateAsync(DAPR_STORE_NAME, "order_1", orderId.ToString());
|
||||
await client.SaveStateAsync(DAPR_STORE_NAME, "order_2", orderId.ToString());
|
||||
var result = await client.GetStateAsync<string>(DAPR_STORE_NAME, orderId.ToString());
|
||||
while(true) {
|
||||
System.Threading.Thread.Sleep(5000);
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
Random random = new Random();
|
||||
int orderId = random.Next(1,1000);
|
||||
//Using Dapr SDK to save and get state
|
||||
await client.SaveStateAsync(DAPR_STORE_NAME, "order_1", orderId.ToString());
|
||||
await client.SaveStateAsync(DAPR_STORE_NAME, "order_2", orderId.ToString());
|
||||
var result = await client.GetStateAsync<string>(DAPR_STORE_NAME, "order_1");
|
||||
Console.WriteLine("Result after get: " + result);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -116,12 +127,17 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.domain.State;
|
||||
import io.dapr.client.domain.TransactionalStateOperation;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import reactor.core.publisher.Mono;
|
||||
import java.util.Random;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
|
@ -129,18 +145,23 @@ public class OrderProcessingServiceApplication {
|
|||
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
|
||||
public static void main(String[] args) throws InterruptedException {
|
||||
String STATE_STORE_NAME = "statestore";
|
||||
private static final String STATE_STORE_NAME = "statestore";
|
||||
|
||||
int orderId = 100;
|
||||
//Using Dapr SDK to save and get state
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
client.saveState(STATE_STORE_NAME, "order_1", Integer.toString(orderId)).block();
|
||||
client.saveState(STATE_STORE_NAME, "order_2", Integer.toString(orderId)).block();
|
||||
Mono<State<String>> result = client.getState(STATE_STORE_NAME, "order_1", String.class);
|
||||
public static void main(String[] args) throws InterruptedException{
|
||||
while(true) {
|
||||
TimeUnit.MILLISECONDS.sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.nextInt(1000-1) + 1;
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
//Using Dapr SDK to save and get state
|
||||
client.saveState(STATE_STORE_NAME, "order_1", Integer.toString(orderId)).block();
|
||||
client.saveState(STATE_STORE_NAME, "order_2", Integer.toString(orderId)).block();
|
||||
Mono<State<String>> result = client.getState(STATE_STORE_NAME, "order_1", String.class);
|
||||
log.info("Result after get" + result);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -155,22 +176,26 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
|
||||
#dependencies
|
||||
import random
|
||||
from time import sleep
|
||||
import requests
|
||||
import logging
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
|
||||
DAPR_STORE_NAME = "statestore"
|
||||
|
||||
orderId = 100
|
||||
#Using Dapr SDK to save and get state
|
||||
with DaprClient() as client:
|
||||
client.save_state(DAPR_STORE_NAME, "order_1", str(orderId))
|
||||
result = client.get_state(DAPR_STORE_NAME, "order_1")
|
||||
logging.info('Result after get: ' + str(result))
|
||||
|
||||
while True:
|
||||
sleep(random.randrange(50, 5000) / 1000)
|
||||
orderId = random.randint(1, 1000)
|
||||
with DaprClient() as client:
|
||||
#Using Dapr SDK to save and get state
|
||||
client.save_state(DAPR_STORE_NAME, "order_1", str(orderId))
|
||||
result = client.get_state(DAPR_STORE_NAME, "order_1")
|
||||
logging.info('Result after get: ' + result.data.decode('utf-8'))
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -185,7 +210,6 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
|
||||
//dependencies
|
||||
import (
|
||||
"context"
|
||||
|
@ -194,34 +218,32 @@ import (
|
|||
"time"
|
||||
"strconv"
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
|
||||
)
|
||||
|
||||
//code
|
||||
func main() {
|
||||
|
||||
STATE_STORE_NAME := "statestore"
|
||||
|
||||
orderId := 100
|
||||
|
||||
//Using Dapr SDK to save and get state
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
|
||||
if err := client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId))); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
result, err := client.GetState(ctx, STATE_STORE_NAME, "order_1")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
for i := 0; i < 10; i++ {
|
||||
time.Sleep(5000)
|
||||
orderId := rand.Intn(1000-1) + 1
|
||||
client, err := dapr.NewClient()
|
||||
STATE_STORE_NAME := "statestore"
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer client.Close()
|
||||
ctx := context.Background()
|
||||
//Using Dapr SDK to save and get state
|
||||
if err := client.SaveState(ctx, STATE_STORE_NAME, "order_1", []byte(strconv.Itoa(orderId))); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
result, err := client.GetState(ctx, STATE_STORE_NAME, "order_1")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
log.Println("Result after get: ")
|
||||
log.Println(result)
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -236,19 +258,26 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
|
||||
//dependencies
|
||||
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = function() {
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
for(var i=0;i<10;i++) {
|
||||
sleep(5000);
|
||||
var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
|
||||
start(orderId).catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
var orderId = 100;
|
||||
//Using Dapr SDK to save and get state
|
||||
async function start(orderId) {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
//Using Dapr SDK to save and get state
|
||||
await client.state.save(STATE_STORE_NAME, [
|
||||
{
|
||||
key: "order_1",
|
||||
|
@ -260,10 +289,14 @@ var main = function() {
|
|||
}
|
||||
]);
|
||||
var result = await client.state.get(STATE_STORE_NAME, "order_1");
|
||||
console.log("Result after get: " + result);
|
||||
}
|
||||
|
||||
function sleep(ms) {
|
||||
return new Promise(resolve => setTimeout(resolve, ms));
|
||||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -329,7 +362,6 @@ Below are code examples that leverage Dapr SDKs for deleting the state.
|
|||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
|
||||
//dependencies
|
||||
using Dapr.Client;
|
||||
|
||||
|
@ -347,7 +379,6 @@ namespace EventService
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -362,10 +393,10 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
|
@ -379,7 +410,6 @@ public class OrderProcessingServiceApplication {
|
|||
client.deleteState(STATE_STORE_NAME, "order_1", storedEtag, null).block();
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -394,19 +424,16 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
|
||||
#dependencies
|
||||
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
|
||||
DAPR_STORE_NAME = "statestore"
|
||||
|
||||
#Using Dapr SDK to delete the state
|
||||
with DaprClient() as client:
|
||||
client.delete_state(store_name=DAPR_STORE_NAME, key="order_1")
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -421,7 +448,6 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
|
||||
//dependencies
|
||||
import (
|
||||
"context"
|
||||
|
@ -431,9 +457,7 @@ import (
|
|||
|
||||
//code
|
||||
func main() {
|
||||
|
||||
STATE_STORE_NAME := "statestore"
|
||||
|
||||
//Using Dapr SDK to delete the state
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
|
@ -446,7 +470,6 @@ func main() {
|
|||
panic(err)
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -461,23 +484,19 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
|
||||
//dependencies
|
||||
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = async function() {
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
|
||||
//Using Dapr SDK to delete the state
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
await client.state.delete(STATE_STORE_NAME, "order_1");
|
||||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -515,11 +534,11 @@ Below are code examples that leverage Dapr SDKs for saving and retrieving multip
|
|||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.domain.State;
|
||||
import java.util.Arrays;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
|
@ -529,15 +548,12 @@ public class OrderProcessingServiceApplication {
|
|||
|
||||
public static void main(String[] args) throws InterruptedException{
|
||||
String STATE_STORE_NAME = "statestore";
|
||||
|
||||
int orderId = 100;
|
||||
//Using Dapr SDK to retrieve multiple states
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
Mono<List<State<String>>> resultBulk = client.getBulkState(STATE_STORE_NAME,
|
||||
Arrays.asList("order_1", "order_2"), String.class);
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -548,27 +564,22 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
|
||||
#dependencies
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
|
||||
DAPR_STORE_NAME = "statestore"
|
||||
|
||||
orderId = 100
|
||||
#Using Dapr SDK to save and retrieve multiple states
|
||||
with DaprClient() as client:
|
||||
client.save_bulk_state(store_name=DAPR_STORE_NAME, states=[StateItem(key="order_2", value=str(orderId))])
|
||||
result = client.get_bulk_state(store_name=DAPR_STORE_NAME, keys=["order_1", "order_2"], states_metadata={"metakey": "metavalue"}).items
|
||||
logging.info('Result after get bulk: ' + str(result))
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -579,21 +590,16 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
|
||||
//dependencies
|
||||
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = function() {
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
|
||||
var orderId = 100;
|
||||
//Using Dapr SDK to save and retrieve multiple states
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
|
@ -611,7 +617,6 @@ var main = function() {
|
|||
}
|
||||
|
||||
main();
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -662,9 +667,16 @@ Below are code examples that leverage Dapr SDKs for performing state transaction
|
|||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
|
||||
//dependencies
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using System.Net.Http;
|
||||
using System.Net.Http.Headers;
|
||||
using System.Threading.Tasks;
|
||||
using Dapr.Client;
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using System.Threading;
|
||||
using System.Text.Json;
|
||||
|
||||
//code
|
||||
namespace EventService
|
||||
|
@ -674,22 +686,26 @@ namespace EventService
|
|||
static async Task Main(string[] args)
|
||||
{
|
||||
string DAPR_STORE_NAME = "statestore";
|
||||
|
||||
int orderId = 100;
|
||||
//Using Dapr SDK to perform the state transactions
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
var requests = new List<StateTransactionRequest>()
|
||||
{
|
||||
new StateTransactionRequest("order_3", JsonSerializer.SerializeToUtf8Bytes(orderId.ToString()), StateOperationType.Upsert),
|
||||
new StateTransactionRequest("order_2", null, StateOperationType.Delete)
|
||||
};
|
||||
CancellationTokenSource source = new CancellationTokenSource();
|
||||
CancellationToken cancellationToken = source.Token;
|
||||
await client.ExecuteStateTransactionAsync(DAPR_STORE_NAME, requests, cancellationToken: cancellationToken);
|
||||
while(true) {
|
||||
System.Threading.Thread.Sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.Next(1,1000);
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
var requests = new List<StateTransactionRequest>()
|
||||
{
|
||||
new StateTransactionRequest("order_3", JsonSerializer.SerializeToUtf8Bytes(orderId.ToString()), StateOperationType.Upsert),
|
||||
new StateTransactionRequest("order_2", null, StateOperationType.Delete)
|
||||
};
|
||||
CancellationTokenSource source = new CancellationTokenSource();
|
||||
CancellationToken cancellationToken = source.Token;
|
||||
//Using Dapr SDK to perform the state transactions
|
||||
await client.ExecuteStateTransactionAsync(DAPR_STORE_NAME, requests, cancellationToken: cancellationToken);
|
||||
Console.WriteLine("Order requested: " + orderId);
|
||||
Console.WriteLine("Result: " + result);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -704,13 +720,19 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
|
||||
//dependencies
|
||||
import io.dapr.client.DaprClient;
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.domain.State;
|
||||
import io.dapr.client.domain.TransactionalStateOperation;
|
||||
|
||||
import org.springframework.boot.autoconfigure.SpringBootApplication;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import reactor.core.publisher.Mono;
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.util.Random;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
//code
|
||||
@SpringBootApplication
|
||||
|
@ -718,21 +740,26 @@ public class OrderProcessingServiceApplication {
|
|||
|
||||
private static final Logger log = LoggerFactory.getLogger(OrderProcessingServiceApplication.class);
|
||||
|
||||
private static final String STATE_STORE_NAME = "statestore";
|
||||
|
||||
public static void main(String[] args) throws InterruptedException{
|
||||
String STATE_STORE_NAME = "statestore";
|
||||
|
||||
int orderId = 100;
|
||||
//Using Dapr SDK to perform the state transactions
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
List<TransactionalStateOperation<?>> operationList = new ArrayList<>();
|
||||
operationList.add(new TransactionalStateOperation<>(TransactionalStateOperation.OperationType.UPSERT,
|
||||
new State<>("order_3", Integer.toString(orderId), "")));
|
||||
operationList.add(new TransactionalStateOperation<>(TransactionalStateOperation.OperationType.DELETE,
|
||||
new State<>("order_2")));
|
||||
client.executeStateTransaction(STATE_STORE_NAME, operationList).block();
|
||||
while(true) {
|
||||
TimeUnit.MILLISECONDS.sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.nextInt(1000-1) + 1;
|
||||
DaprClient client = new DaprClientBuilder().build();
|
||||
List<TransactionalStateOperation<?>> operationList = new ArrayList<>();
|
||||
operationList.add(new TransactionalStateOperation<>(TransactionalStateOperation.OperationType.UPSERT,
|
||||
new State<>("order_3", Integer.toString(orderId), "")));
|
||||
operationList.add(new TransactionalStateOperation<>(TransactionalStateOperation.OperationType.DELETE,
|
||||
new State<>("order_2")));
|
||||
//Using Dapr SDK to perform the state transactions
|
||||
client.executeStateTransaction(STATE_STORE_NAME, operationList).block();
|
||||
log.info("Order requested: " + orderId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -743,37 +770,42 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
|
||||
#dependencies
|
||||
import random
|
||||
from time import sleep
|
||||
import requests
|
||||
import logging
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._state import StateItem
|
||||
from dapr.clients.grpc._request import TransactionalStateOperation, TransactionOperationType
|
||||
|
||||
#code
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
DAPR_STORE_NAME = "statestore"
|
||||
while True:
|
||||
sleep(random.randrange(50, 5000) / 1000)
|
||||
orderId = random.randint(1, 1000)
|
||||
with DaprClient() as client:
|
||||
#Using Dapr SDK to perform the state transactions
|
||||
client.execute_state_transaction(store_name=DAPR_STORE_NAME, operations=[
|
||||
TransactionalStateOperation(
|
||||
operation_type=TransactionOperationType.upsert,
|
||||
key="order_3",
|
||||
data=str(orderId)),
|
||||
TransactionalStateOperation(key="order_3", data=str(orderId)),
|
||||
TransactionalStateOperation(
|
||||
operation_type=TransactionOperationType.delete,
|
||||
key="order_2",
|
||||
data=str(orderId)),
|
||||
TransactionalStateOperation(key="order_2", data=str(orderId))
|
||||
])
|
||||
|
||||
orderId = 100
|
||||
#Using Dapr SDK to perform the state transactions
|
||||
with DaprClient() as client:
|
||||
client.execute_state_transaction(store_name=DAPR_STORE_NAME, operations=[
|
||||
TransactionalStateOperation(
|
||||
operation_type=TransactionOperationType.upsert,
|
||||
key="order_3",
|
||||
data=str(orderId)),
|
||||
TransactionalStateOperation(key="order_3", data=str(orderId)),
|
||||
TransactionalStateOperation(
|
||||
operation_type=TransactionOperationType.delete,
|
||||
key="order_2",
|
||||
data=str(orderId)),
|
||||
TransactionalStateOperation(key="order_2", data=str(orderId))
|
||||
])
|
||||
|
||||
client.delete_state(store_name=DAPR_STORE_NAME, key="order_1")
|
||||
logging.basicConfig(level = logging.INFO)
|
||||
logging.info('Order requested: ' + str(orderId))
|
||||
logging.info('State transaction executed')
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
@ -788,38 +820,48 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
{{% codetab %}}
|
||||
|
||||
```javascript
|
||||
|
||||
//dependencies
|
||||
import { DaprClient, HttpMethod, CommunicationProtocolEnum } from 'dapr-client';
|
||||
|
||||
//code
|
||||
const daprHost = "127.0.0.1";
|
||||
|
||||
var main = function() {
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
for(var i=0;i<10;i++) {
|
||||
sleep(5000);
|
||||
var orderId = Math.floor(Math.random() * (1000 - 1) + 1);
|
||||
start(orderId).catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
var orderId = 100;
|
||||
//Using Dapr SDK to save and retrieve multiple states
|
||||
async function start(orderId) {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
//Using Dapr SDK to save and retrieve multiple states
|
||||
await client.state.transaction(STATE_STORE_NAME, [
|
||||
{
|
||||
operation: "upsert",
|
||||
request: {
|
||||
key: "order_3",
|
||||
value: orderId.toString()
|
||||
}
|
||||
operation: "upsert",
|
||||
request: {
|
||||
key: "order_3",
|
||||
value: orderId.toString()
|
||||
}
|
||||
},
|
||||
{
|
||||
operation: "delete",
|
||||
request: {
|
||||
key: "order_2"
|
||||
}
|
||||
operation: "delete",
|
||||
request: {
|
||||
key: "order_2"
|
||||
}
|
||||
}
|
||||
]);
|
||||
}
|
||||
|
||||
main();
|
||||
function sleep(ms) {
|
||||
return new Promise(resolve => setTimeout(resolve, ms));
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
|
||||
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
|
||||
|
|
|
@ -25,7 +25,7 @@ You can find additional information in the [related links]({{< ref "#related-lin
|
|||
## Querying the state
|
||||
|
||||
You submit query requests via HTTP POST/PUT or gRPC.
|
||||
The body of the request is the JSON map with 3 entries: `filter`, `sort`, and `pagination`.
|
||||
The body of the request is a JSON map with 3 entries: `filter`, `sort`, and `page`.
|
||||
|
||||
The `filter` is an optional section. It specifies the query conditions in the form of a tree of key/value operations, where the key is the operator and the value contains the operands.
|
||||
|
||||
|
@ -42,7 +42,7 @@ If `filter` section is omitted, the query returns all entries.
|
|||
|
||||
The `sort` is an optional section and is an ordered array of `key:order` pairs, where `key` is a key in the state store, and the `order` is an optional string indicating sorting order: `"ASC"` for ascending and `"DESC"` for descending. If omitted, ascending order is the default.
|
||||
|
||||
The `pagination` is an optional section containing `limit` and `token` parameters. `limit` sets the page size. `token` is an iteration token returned by the component, and is used in subsequent queries.
|
||||
The `page` is an optional section containing `limit` and `token` parameters. `limit` sets the page size. `token` is an iteration token returned by the component, and is used in subsequent queries.
|
||||
|
||||
For some background understanding, this query request is translated into the native query language and executed by the state store component.
|
||||
|
||||
|
@ -87,17 +87,15 @@ First, let's find all employees in the state of California and sort them by thei
|
|||
This is the [query](../query-api-examples/query1.json):
|
||||
```json
|
||||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"EQ": { "value.state": "CA" }
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.person.id",
|
||||
"order": "DESC"
|
||||
}
|
||||
]
|
||||
}
|
||||
"filter": {
|
||||
"EQ": { "value.state": "CA" }
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.person.id",
|
||||
"order": "DESC"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -186,10 +184,8 @@ Let's now find all employees from the "Dev Ops" and "Hardware" organizations.
|
|||
This is the [query](../query-api-examples/query2.json):
|
||||
```json
|
||||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"IN": { "value.person.org": [ "Dev Ops", "Hardware" ] }
|
||||
}
|
||||
"filter": {
|
||||
"IN": { "value.person.org": [ "Dev Ops", "Hardware" ] }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
@ -228,36 +224,34 @@ This is the [query](../query-api-examples/query3.json):
|
|||
|
||||
```json
|
||||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"limit": 3
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
}
|
||||
],
|
||||
"page": {
|
||||
"limit": 3
|
||||
}
|
||||
}
|
||||
```
|
||||
|
@ -336,40 +330,52 @@ The pagination token is used "as is" in the [subsequent query](../query-api-exam
|
|||
|
||||
```json
|
||||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"limit": 3,
|
||||
"token": "3"
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
}
|
||||
],
|
||||
"page": {
|
||||
"limit": 3,
|
||||
"token": "3"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -s -X POST -H "Content-Type: application/json" -d @query-api-examples/query3-token.json http://localhost:3500/v1.0-alpha1/state/statestore/query | jq .
|
||||
```
|
||||
{{% /codetab %}}
|
||||
{{% codetab %}}
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -InFile query-api-examples/query3-token.json -Uri 'http://localhost:3500/v1.0-alpha1/state/statestore/query'
|
||||
```
|
||||
{{% /codetab %}}
|
||||
{{< /tabs >}}
|
||||
|
||||
And the result of this query is:
|
||||
```json
|
||||
{
|
||||
|
|
|
@ -1,13 +1,11 @@
|
|||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"EQ": { "value.state": "CA" }
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.person.id",
|
||||
"order": "DESC"
|
||||
}
|
||||
]
|
||||
}
|
||||
"filter": {
|
||||
"EQ": { "value.state": "CA" }
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.person.id",
|
||||
"order": "DESC"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
|
|
@ -1,7 +1,5 @@
|
|||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"IN": { "value.person.org": [ "Dev Ops", "Hardware" ] }
|
||||
}
|
||||
"filter": {
|
||||
"IN": { "value.person.org": [ "Dev Ops", "Hardware" ] }
|
||||
}
|
||||
}
|
||||
|
|
|
@ -1,34 +1,32 @@
|
|||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"limit": 3,
|
||||
"token": "3"
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
}
|
||||
],
|
||||
"page": {
|
||||
"limit": 3,
|
||||
"token": "3"
|
||||
}
|
||||
}
|
||||
|
|
|
@ -1,33 +1,31 @@
|
|||
{
|
||||
"query": {
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
"filter": {
|
||||
"OR": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
"EQ": { "value.person.org": "Dev Ops" }
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
"AND": [
|
||||
{
|
||||
"EQ": { "value.person.org": "Finance" }
|
||||
},
|
||||
{
|
||||
"IN": { "value.state": [ "CA", "WA" ] }
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"limit": 3
|
||||
]
|
||||
},
|
||||
"sort": [
|
||||
{
|
||||
"key": "value.state",
|
||||
"order": "DESC"
|
||||
},
|
||||
{
|
||||
"key": "value.person.id"
|
||||
}
|
||||
],
|
||||
"page": {
|
||||
"limit": 3
|
||||
}
|
||||
}
|
||||
|
|
|
@ -6,7 +6,7 @@ description: "How to configure your Dapr application to autoscale using KEDA"
|
|||
weight: 2000
|
||||
---
|
||||
|
||||
Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), make it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge) the autoscaling of Dapr applications is managed by the hosting later.
|
||||
Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge), the autoscaling of Dapr applications is managed by the hosting layer.
|
||||
|
||||
For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda), so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.
|
||||
|
||||
|
|
|
@@ -6,7 +6,7 @@ weight: 60
 description: "Tutorials with code samples that are aimed to get you started quickly with Dapr"
 ---
 
-The [Dapr Quickstarts](https://github.com/dapr/quickstarts/tree/v1.0.0) are a collection of tutorials with code samples that are aimed to get you started quickly with Dapr, each highlighting a different Dapr capability.
+The [Dapr Quickstarts](https://github.com/dapr/quickstarts/tree/v1.5.0) are a collection of tutorials with code samples that are aimed to get you started quickly with Dapr, each highlighting a different Dapr capability.
 
 - A good place to start is the hello-world quickstart, it demonstrates how to run Dapr in standalone mode locally on your machine and demonstrates state management and service invocation in a simple application.
 - Next, if you are familiar with Kubernetes and want to see how to run the same application in a Kubernetes environment, look for the hello-kubernetes quickstart. Other quickstarts such as pub-sub, bindings and the distributed-calculator quickstart explore different Dapr capabilities include instructions for running both locally and on Kubernetes and can be completed in any order. A full list of the quickstarts can be found below.
@@ -17,12 +17,12 @@ The [Dapr Quickstarts](https://github.com/dapr/quickstarts/tree/v1.0.0) are a co
 
 | Quickstart | Description |
 |--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Hello World](https://github.com/dapr/quickstarts/tree/v1.4.0/hello-world) | Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
-| [Hello Kubernetes](https://github.com/dapr/quickstarts/tree/v1.4.0/hello-kubernetes) | Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
-| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/v1.4.0/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
-| [Pub/Sub](https://github.com/dapr/quickstarts/tree/v1.4.0/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
-| [Bindings](https://github.com/dapr/quickstarts/tree/v1.4.0/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
-| [Middleware](https://github.com/dapr/quickstarts/tree/v1.4.0/middleware) | Demonstrates use of Dapr middleware to enable OAuth 2.0 authorization. |
-| [Observability](https://github.com/dapr/quickstarts/tree/v1.4.0/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
-| [Secret Store](https://github.com/dapr/quickstarts/tree/v1.4.0/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |
+| [Hello World](https://github.com/dapr/quickstarts/tree/v1.5.0/hello-world) | Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
+| [Hello Kubernetes](https://github.com/dapr/quickstarts/tree/v1.5.0/hello-kubernetes) | Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
+| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/v1.5.0/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
+| [Pub/Sub](https://github.com/dapr/quickstarts/tree/v1.5.0/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
+| [Bindings](https://github.com/dapr/quickstarts/tree/v1.5.0/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
+| [Middleware](https://github.com/dapr/quickstarts/tree/v1.5.0/middleware) | Demonstrates use of Dapr middleware to enable OAuth 2.0 authorization. |
+| [Observability](https://github.com/dapr/quickstarts/tree/v1.5.0/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
+| [Secret Store](https://github.com/dapr/quickstarts/tree/v1.5.0/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |
 
@@ -75,4 +75,4 @@ By default, tailing is set to /var/log/containers/*.log. To change this setting,
 * [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
 * [New Relic Logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging)
 * [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
-* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)
+* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)

@@ -40,4 +40,4 @@ This document explains how to install it in your cluster, either using a Helm ch
 * [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
 * [New Relic Prometheus OpenMetrics Integration](https://github.com/newrelic/helm-charts/tree/master/charts/nri-prometheus)
 * [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
-* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)
+* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)

@@ -83,7 +83,7 @@ spec:
 
 #### Production
 
-Jaeger uses Elasticsearch as the backend storage, and you can create a secret in k8s cluster to access Elasticsearch server with access control. See [Configuring and Deploying Jaeger](https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhbjaeger-deploying.html)
+Jaeger uses Elasticsearch as the backend storage, and you can create a secret in k8s cluster to access Elasticsearch server with access control. See [Configuring and Deploying Jaeger](https://docs.openshift.com/container-platform/4.9/distr_tracing/distr_tracing_install/distr-tracing-deploying.html)
 
 ```shell
 kubectl create secret generic jaeger-secret --from-literal=ES_PASSWORD='xxx' --from-literal=ES_USERNAME='xxx' -n ${NAMESPACE}

@@ -53,7 +53,7 @@ Similarly to the OpenTelemetry instrumentation, you can also leverage a New Reli
 
 In case Dapr and your applications run in the context of a Kubernetes environment, you can enable additional metrics and logs.
 
-The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/send-prometheus-metric-data-new-relic/), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs).
+The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/send-prometheus-metric-data-new-relic/), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs/ui-data/use-logs-ui/).
 
 ### New Relic Kubernetes Cluster Explorer
 
@@ -101,7 +101,7 @@ And the exact same dashboard templates from Dapr can be imported to visualize Da
 
 ## New Relic Alerts
 
-All the data that is collected from Dapr, Kubernetes or any services that run on top of can be used to set-up alerts and notifications into the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence).
+All the data that is collected from Dapr, Kubernetes or any services that run on top of can be used to set-up alerts and notifications into the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/).
 
 ## Related Links/References
 
@@ -111,4 +111,4 @@ All the data that is collected from Dapr, Kubernetes or any services that run on
 * [New Relic Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/introduction-trace-api/)
 * [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
 * [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
-* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)
+* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)

@@ -34,12 +34,14 @@ To figure the Dapr OAuth middleware, you'll need to collect the following inform
 
 Authorization/Token URLs of some of the popular authorization servers:
 
+<!-- IGNORE_LINKS -->
 | Server | Authorization URL | Token URL |
 |---------|-------------------|-----------|
 |Azure AAD|<https://login.microsoftonline.com/{tenant}/oauth2/authorize>|<https://login.microsoftonline.com/{tenant}/oauth2/token>|
 |GitHub|<https://github.com/login/oauth/authorize>|<https://github.com/login/oauth/access_token>|
 |Google|<https://accounts.google.com/o/oauth2/v2/auth>|<https://accounts.google.com/o/oauth2/token> <https://www.googleapis.com/oauth2/v4/token>|
 |Twitter|<https://api.twitter.com/oauth/authorize>|<https://api.twitter.com/oauth2/token>|
+<!-- END_IGNORE -->
 
 ## Define the middleware component definition
 
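For orientation, the component definition that the section above introduces usually pairs one of those Authorization/Token URL pairs with your client credentials. The sketch below follows the field names of the Dapr OAuth2 HTTP middleware as the docs describe them; every value is a placeholder, not something taken from this change.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
spec:
  type: middleware.http.oauth2
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "https://www.googleapis.com/auth/userinfo.email"
  - name: authURL                       # Authorization URL from the table above
    value: "https://accounts.google.com/o/oauth2/v2/auth"
  - name: tokenURL                      # Token URL from the table above
    value: "https://accounts.google.com/o/oauth2/token"
  - name: redirectURL
    value: "http://localhost:8080"
  - name: authHeaderName
    value: "authorization"
```
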
@@ -45,7 +45,9 @@ The table below shows the versions of Dapr releases that have been tested togeth
 | Sep 22nd 2021 | 1.4.1</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Supported
 | Sep 24th 2021 | 1.4.2</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Supported |
 | Oct 7th 2021 | 1.4.3</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Supported |
-| Nov 11th 2021 | 1.5.0</br> | 1.5.0 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.1.0 </br>Python 1.4.0 </br>.NET 1.5.0 </br>JS 1.0.2 | 0.9.0 | Supported (current) |
+| Dev 6th 2021 | 1.4.4</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Supported |
+| Nov 11th 2021 | 1.5.0</br> | 1.5.0 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.1.0 </br>Python 1.4.0 </br>.NET 1.5.0 </br>JS 1.0.2 | 0.9.0 | Supported (current) |
+| Dec 6th 2021 | 1.5.1</br> | 1.5.1 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.1.0 </br>Python 1.4.0 </br>.NET 1.5.0 </br>JS 1.0.2 | 0.9.0 | Supported (current) |
 
 ## Upgrade paths
 After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example an upgrade from v1.0 to v1.2 may need go pass through v1.1
@@ -59,23 +61,22 @@ General guidance on upgrading can be found for [self hosted mode]({{<ref self-ho
 | 1.0.0 or 1.0.1 | N/A | 1.1.2 |
 | | 1.1.2 | 1.2.2 |
 | | 1.2.2 | 1.3.1 |
-| | 1.3.1 | 1.4.3 |
-| | 1.4.3 | 1.5.0 |
+| | 1.3.1 | 1.4.4 |
+| | 1.4.4 | 1.5.1 |
 | 1.1.0 to 1.1.2 | N/A | 1.2.2 |
 | | 1.2.2 | 1.3.1 |
-| | 1.3.1 | 1.4.3 |
-| | 1.4.3 | 1.5.0 |
+| | 1.3.1 | 1.4.4 |
+| | 1.4.4 | 1.5.1 |
 | 1.2.0 to 1.2.2 | N/A | 1.3.1 |
-| | 1.3.1 | 1.4.3 |
-| | 1.4.3 | 1.5.0 |
+| | 1.3.1 | 1.4.4 |
+| | 1.4.4 | 1.5.1 |
 | 1.3.0 | N/A | 1.3.1 |
-| | 1.3.1 | 1.4.3 |
-| | 1.4.3 | 1.5.0 |
-| 1.3.1 | N/A | 1.4.3 |
-| | 1.4.3 | 1.5.0 |
-| 1.4.0 to 1.4.2 | N/A | 1.4.3 |
-| | 1.4.3 | 1.5.0 |
-| 1.4.3 | N/A | 1.5.0 |
+| | 1.3.1 | 1.4.4 |
+| | 1.4.4 | 1.5.1 |
+| 1.3.1 | N/A | 1.4.4 |
+| | 1.4.4 | 1.5.0 |
+| 1.4.0 to 1.4.2 | N/A | 1.4.4 |
+| | 1.4.4 | 1.5.1 |
 
 ## Feature and deprecations
 There is a process for announcing feature deprecations. Deprecations are applied two (2) releases after the release in which they were announced. For example Feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.
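To illustrate the stepwise upgrade path the table above describes, upgrading a Kubernetes cluster with the Dapr CLI could look like the following sketch. The target versions come from the table; the exact flag set depends on your CLI version, so treat this as an assumption and check `dapr upgrade --help`.

```shell
# Example: moving a cluster from 1.3.1 to 1.5.1 via the intermediate 1.4.4 release
dapr upgrade -k --runtime-version 1.4.4
dapr upgrade -k --runtime-version 1.5.1
```
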
@@ -145,9 +145,17 @@ Dapr runs the following system pods:
 
 ```Bash
 kubectl logs -l app=dapr-operator -n dapr-system
-time="2019-09-05T19:03:43Z" level=info msg="log level set to: info"
-time="2019-09-05T19:03:43Z" level=info msg="starting Dapr Operator -- version 0.3.0-alpha -- commit b6f2810-dirty"
-time="2019-09-05T19:03:43Z" level=info msg="Dapr Operator is started"
+
+I1207 06:01:02.891031 1 leaderelection.go:243] attempting to acquire leader lease dapr-system/operator.dapr.io...
+I1207 06:01:02.913696 1 leaderelection.go:253] successfully acquired lease dapr-system/operator.dapr.io
+time="2021-12-07T06:01:03.092529085Z" level=info msg="getting tls certificates" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
+time="2021-12-07T06:01:03.092703283Z" level=info msg="tls certificates loaded successfully" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
+time="2021-12-07T06:01:03.093062379Z" level=info msg="starting gRPC server" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator.api type=log ver=unknown
+time="2021-12-07T06:01:03.093123778Z" level=info msg="Healthz server is listening on :8080" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
+time="2021-12-07T06:01:03.497889776Z" level=info msg="starting webhooks" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
+I1207 06:01:03.497944 1 leaderelection.go:243] attempting to acquire leader lease dapr-system/webhooks.dapr.io...
+I1207 06:01:03.516641 1 leaderelection.go:253] successfully acquired lease dapr-system/webhooks.dapr.io
+time="2021-12-07T06:01:03.526202227Z" level=info msg="Successfully patched webhook in CRD "subscriptions.dapr.io"" instance=dapr-operator-84bb47f895-dvbsj scope=dapr.operator type=log ver=unknown
 ```
 
 *Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace to the desired one in the command above*
@@ -156,9 +164,12 @@ time="2019-09-05T19:03:43Z" level=info msg="Dapr Operator is started"
 
 ```Bash
 kubectl logs -l app=dapr-sidecar-injector -n dapr-system
-time="2019-09-03T21:01:12Z" level=info msg="log level set to: info"
-time="2019-09-03T21:01:12Z" level=info msg="starting Dapr Sidecar Injector -- version 0.3.0-alpha -- commit b6f2810-dirty"
-time="2019-09-03T21:01:12Z" level=info msg="Sidecar injector is listening on :4000, patching Dapr-enabled pods"
+
+time="2021-12-07T06:01:01.554859058Z" level=info msg="log level set to: info" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
+time="2021-12-07T06:01:01.555114755Z" level=info msg="metrics server started on :9090/" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.metrics type=log ver=unknown
+time="2021-12-07T06:01:01.555233253Z" level=info msg="starting Dapr Sidecar Injector -- version 1.5.1 -- commit c6daae8e9b11b3e241a9cb84c33e5aa740d74368" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
+time="2021-12-07T06:01:01.557646524Z" level=info msg="Healthz server is listening on :8080" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
+time="2021-12-07T06:01:01.621291968Z" level=info msg="Sidecar injector is listening on :4000, patching Dapr-enabled pods" instance=dapr-sidecar-injector-5d88fcfcf5-2gmvv scope=dapr.injector type=log ver=unknown
 ```
 
 *Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace to the desired one in the command above*
@@ -166,11 +177,18 @@ time="2019-09-03T21:01:12Z" level=info msg="Sidecar injector is listening on :40
 #### Viewing Placement Service Logs
 
 ```Bash
-kubectl logs -l app=dapr-placement -n dapr-system
-time="2019-09-03T21:01:12Z" level=info msg="log level set to: info"
-time="2019-09-03T21:01:12Z" level=info msg="starting Dapr Placement Service -- version 0.3.0-alpha -- commit b6f2810-dirty"
-time="2019-09-03T21:01:12Z" level=info msg="placement Service started on port 50005"
-time="2019-09-04T00:21:57Z" level=info msg="host added: 10.244.1.89"
+kubectl logs -l app=dapr-placement-server -n dapr-system
+
+time="2021-12-04T05:08:05.733416791Z" level=info msg="starting Dapr Placement Service -- version 1.5.0 -- commit 83fe579f5dc93bef1ce3b464d3167a225a3aff3a" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=unknown
+time="2021-12-04T05:08:05.733469491Z" level=info msg="log level set to: info" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
+time="2021-12-04T05:08:05.733512692Z" level=info msg="metrics server started on :9090/" instance=dapr-placement-server-0 scope=dapr.metrics type=log ver=1.5.0
+time="2021-12-04T05:08:05.735207095Z" level=info msg="Raft server is starting on 127.0.0.1:8201..." instance=dapr-placement-server-0 scope=dapr.placement.raft type=log ver=1.5.0
+time="2021-12-04T05:08:05.735221195Z" level=info msg="mTLS enabled, getting tls certificates" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
+time="2021-12-04T05:08:05.735265696Z" level=info msg="tls certificates loaded successfully" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
+time="2021-12-04T05:08:05.735276396Z" level=info msg="placement service started on port 50005" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
+time="2021-12-04T05:08:05.735553696Z" level=info msg="Healthz server is listening on :8080" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
+time="2021-12-04T05:08:07.036850257Z" level=info msg="cluster leadership acquired" instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
+time="2021-12-04T05:08:07.036909357Z" level=info msg="leader is established." instance=dapr-placement-server-0 scope=dapr.placement type=log ver=1.5.0
 ```
 
 *Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace to the desired one in the command above*
@@ -181,4 +199,4 @@ The examples above are specific specific to Kubernetes, but the principal is the
 
 ## References
 
-* [How to setup loggings for Dapr sidecar, and your application]({{< ref "logging.md" >}})
+* [How to setup logging in Dapr]({{< ref "logging.md" >}})

@@ -16,7 +16,7 @@ This endpoint lets you invoke a method in another Dapr enabled app.
 ### HTTP Request
 
 ```
-POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<appId>/method/<method-name>
+PATCH/POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<appId>/method/<method-name>
 ```
 
 ### HTTP Response codes
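As a quick illustration of the invoke endpoint shown above, a call from the command line might look like the sketch below; the app ID `myapp`, method `neworder`, port `3500`, and payload are placeholders, not values from this page.

```shell
curl -X POST http://localhost:3500/v1.0/invoke/myapp/method/neworder \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42"}'
```
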
@@ -333,36 +333,34 @@ An array of JSON-encoded values
 curl http://localhost:3500/v1.0-alpha1/state/myStore/query \
   -H "Content-Type: application/json" \
   -d '{
-  "query": {
-    "filter": {
-      "OR": [
-        {
-          "EQ": { "value.person.org": "Dev Ops" }
-        },
-        {
-          "AND": [
-            {
-              "EQ": { "value.person.org": "Finance" }
-            },
-            {
-              "IN": { "value.state": [ "CA", "WA" ] }
-            }
-          ]
-        }
-      ]
-    },
-    "sort": [
-      {
-        "key": "value.state",
-        "order": "DESC"
-      },
-      {
-        "key": "value.person.id"
-      }
-    ],
-    "pagination": {
-      "limit": 3
-    }
-  }
+  "filter": {
+    "OR": [
+      {
+        "EQ": { "value.person.org": "Dev Ops" }
+      },
+      {
+        "AND": [
+          {
+            "EQ": { "value.person.org": "Finance" }
+          },
+          {
+            "IN": { "value.state": [ "CA", "WA" ] }
+          }
+        ]
+      }
+    ]
+  },
+  "sort": [
+    {
+      "key": "value.state",
+      "order": "DESC"
+    },
+    {
+      "key": "value.person.id"
+    }
+  ],
+  "page": {
+    "limit": 3
+  }
 }'
 ```

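For orientation, a response to a query like the one above typically carries the matching items plus a pagination token. The shape below (a `results` array with `key`, `data`, and `etag`, and a top-level `token`) is a sketch from memory of the alpha query API, and the data values are invented.

```json
{
  "results": [
    {
      "key": "1",
      "data": { "person": { "org": "Dev Ops", "id": 1036 }, "state": "WA" },
      "etag": "6f54ad94-dfb9-46f0-a371-e42d550adb7d"
    }
  ],
  "token": "3"
}
```
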
@@ -21,7 +21,7 @@ This table is meant to help users understand the equivalent options for running
 | `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
 | `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
 | `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
-|` --dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB |
+| `--dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB |
 | not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is `daprio/daprd:latest` |
 | `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
 | `--enable-metrics` | not supported | | configuration spec | Enable prometheus metric (default true) |
@@ -38,6 +38,7 @@ This table is meant to help users understand the equivalent options for running
 | `--app-protocol` | `--app-protocol` | `-P` | `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http` |
 | `--sentry-address` | `--sentry-address` | | not supported | Address for the Sentry CA service |
 | `--version` | `--version` | `-v` | not supported | Prints the runtime version |
+| `--dapr-graceful-shutdown-seconds` | not supported | | `dapr.io/graceful-shutdown-seconds` | Graceful shutdown duration in seconds for Dapr, the maximum duration before forced shutdown when waiting for all in-progress requests to complete. Defaults to `5`. If you are running in Kubernetes mode, this value should not be larger than the Kubernetes termination grace period, who's default value is `30`.|
 | not supported | not supported | | `dapr.io/enabled` | Setting this paramater to true injects the Dapr sidecar into the pod |
 | not supported | not supported | | `dapr.io/api-token-secret` | Tells Dapr which Kubernetes secret to use for token based API authentication. By default this is not set |
 | `--dapr-listen-addresses` | not supported | | `dapr.io/sidecar-listen-addresses` | Comma separated list of IP addresses that sidecar will listen to. Defaults to all in standalone mode. Defaults to `[::1],127.0.0.1` in Kubernetes. To listen to all IPv4 addresses, use `0.0.0.0`. To listen to all IPv6 addresses, use `[::]`.

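To make the Kubernetes annotation column above concrete, a few of those annotations might appear on a Deployment's pod template as in the sketch below. The application name, image, and annotation values are illustrative assumptions; `dapr.io/app-id` is the standard app identifier annotation even though it is outside the rows shown here.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                                    # hypothetical application
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"                  # inject the Dapr sidecar
        dapr.io/app-id: "myapp"
        dapr.io/app-protocol: "http"
        dapr.io/http-max-request-size: "16"      # MB, per the table above
        dapr.io/graceful-shutdown-seconds: "10"
        dapr.io/sidecar-listen-addresses: "0.0.0.0"
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest           # placeholder image
```
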
@@ -27,6 +27,8 @@ spec:
     value: mybucket
+  - name: region
+    value: us-west-2
   - name: endpoint
     value: s3-us-west-2.amazonaws.com
   - name: accessKey
     value: *****************
   - name: secretKey
@@ -37,6 +39,8 @@ spec:
     value: <bool>
   - name: encodeBase64
     value: <bool>
+  - name: forcePathStyle
+    value: <bool>
 ```
 
 {{% alert title="Warning" color="warning" %}}
@@ -49,9 +53,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 |--------------------|:--------:|------------|-----|---------|
 | bucket | Y | Output | The name of the S3 bucket to write to | `"bucket"` |
+| region | Y | Output | The specific AWS region | `"us-east-1"` |
 | endpoint | N | Output | The specific AWS endpoint | `"s3-us-east-1.amazonaws.com"` |
 | accessKey | Y | Output | The AWS Access Key to access this resource | `"key"` |
 | secretKey | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
 | sessionToken | N | Output | The AWS session token to use | `"sessionToken"` |
+| forcePathStyle | N | Output | Currently Amazon S3 SDK supports virtual hosted-style and path-style access. `true` is path-style format like `https://<endpoint>/<your bucket>/<key>`. `false` is hosted-style format like `https://<your bucket>.<endpoint>/<key>`. Defaults to `false` | `true`, `false` |
 | decodeBase64 | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
 | encodeBase64 | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
 
@@ -136,6 +142,8 @@ spec:
     value: mybucket
+  - name: region
+    value: us-west-2
   - name: endpoint
     value: s3-us-west-2.amazonaws.com
   - name: accessKey
     value: *****************
   - name: secretKey
@@ -144,6 +152,8 @@ spec:
     value: mysession
   - name: decodeBase64
     value: <bool>
+  - name: forcePathStyle
+    value: <bool>
 ```
 
 Then you can upload it as you would normally:
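As a usage sketch for the S3 output binding configured above, invoking it through the Dapr HTTP bindings API could look like the following; the binding name `myS3Binding`, port, file key, and payload are placeholders rather than values from this change.

```shell
curl -X POST http://localhost:3500/v1.0/bindings/myS3Binding \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "data": "Hello from Dapr",
        "metadata": { "key": "my-test-file.txt" }
      }'
```
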
@@ -29,14 +29,14 @@ Table captions:
 | [In Memory]({{< ref setup-inmemory.md >}}) | Alpha | v1 | 1.4 |
 | [JetStream]({{< ref setup-jetstream.md >}}) | Alpha | v1 | 1.4 |
 | [Pulsar]({{< ref setup-pulsar.md >}}) | Alpha | v1 | 1.0 |
-| [RabbitMQ]({{< ref setup-rabbitmq.md >}}) | Alpha | v1 | 1.0 |
+| [RabbitMQ]({{< ref setup-rabbitmq.md >}}) | Beta | v1 | 1.6 |
 | [Redis Streams]({{< ref setup-redis-pubsub.md >}}) | Stable | v1 | 1.0 |
 
 ### Amazon Web Services (AWS)
 
 | Name | Status | Component version | Since |
-|---------------------------------------------------|--------| ---- |---------------|
-| [AWS SNS/SQS]({{< ref setup-aws-snssqs.md >}}) | Alpha | v1 | 1.0 |
+|---------------------------------------------------|--------| ---- | --------------|
+| [AWS SNS/SQS]({{< ref setup-aws-snssqs.md >}}) | Beta | v1 | 1.6 |
 
 ### Google Cloud Platform (GCP)
 
@@ -21,34 +21,36 @@ spec:
   metadata:
   - name: host
     value: "amqp://localhost:5672"
+  - name: consumerID
+    value: myapp
   - name: durable
-    value: "false"
+    value: false
   - name: deletedWhenUnused
-    value: "false"
+    value: false
   - name: autoAck
-    value: "false"
+    value: false
   - name: deliveryMode
-    value: "0"
+    value: 0
   - name: requeueInFailure
-    value: "false"
+    value: false
   - name: prefetchCount
-    value: "0"
+    value: 0
   - name: reconnectWait
-    value: "0"
+    value: 0
   - name: concurrencyMode
    value: parallel
   - name: backOffPolicy
-    value: "exponential"
+    value: exponential
   - name: backOffInitialInterval
-    value: "100"
+    value: 100
   - name: backOffMaxRetries
-    value: "16"
+    value: 16
   - name: enableDeadLetter # Optional enable dead Letter or not
-    value: "true"
+    value: true
   - name: maxLen # Optional max message count in a queue
-    value: "3000"
+    value: 3000
   - name: maxLenBytes # Optional maximum length in bytes of a queue.
-    value: "10485760"
+    value: 10485760
 ```
 {{% alert title="Warning" color="warning" %}}
 The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@@ -59,6 +61,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 | Field | Required | Details | Example |
 |--------------------|:--------:|---------|---------|
 | host | Y | Connection-string for the rabbitmq host | `amqp://user:pass@localhost:5672`
+| consumerID | N | Consumer ID a.k.a consumer tag organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer, i.e. a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the dapr runtime will set it to the dapr application ID. |
 | durable | N | Whether or not to use [durable](https://www.rabbitmq.com/queues.html#durability) queues. Defaults to `"false"` | `"true"`, `"false"`
 | deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html) Defaults to `"true"` | `"true"`, `"false"`
 | autoAck | N | Whether or not the queue consumer should [auto-ack](https://www.rabbitmq.com/confirms.html) messages. Defaults to `"false"` | `"true"`, `"false"`
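To round out the RabbitMQ pub/sub component above, publishing to it through the Dapr HTTP API might look like the sketch below; the component name `rabbitmq-pubsub`, topic `orders`, port, and payload are placeholders, not values from this change.

```shell
curl -X POST http://localhost:3500/v1.0/publish/rabbitmq-pubsub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42", "state": "WA"}'
```
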
@@ -1 +1 @@
-{{- if .Get "short" }}1.5{{ else if .Get "long" }}1.5.0{{ else if .Get "cli" }}1.5.0{{ else }}1.5.0{{ end -}}
+{{- if .Get "short" }}1.5{{ else if .Get "long" }}1.5.1{{ else if .Get "cli" }}1.5.1{{ else }}1.5.1{{ end -}}

Binary file not shown (image added; 86 KiB).
Binary file not shown (image added; 93 KiB).
Binary file not shown (image added; 167 KiB).

@@ -1 +1 @@
-Subproject commit fadb6c2654255864438c387a1a2b862eaf3af240
+Subproject commit d60eaf1c0e3bb75af480eda34307926be9865570