mirror of https://github.com/dapr/docs.git
Pub/Sub updates
This commit is contained in:
parent
946c88df49
commit
06e2d17bed
@ -1,56 +0,0 @@
---
type: docs
title: "How-To: Publish message to a topic"
linkTitle: "How-To: Publish"
weight: 2000
description: "Send messages to subscribers through topics"
---

Pub/Sub is a common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.

Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.

## Set up the Pub/Sub component

The first step is to set up the Pub/Sub component.
For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`.

*Note: When running Dapr locally, a pub/sub component YAML is automatically created for you. To override it, create a `components` directory containing the file and use the flag `--components-path` with the `dapr run` CLI command.*

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub-name
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "deathStarStatus"
```

Using `allowedTopics` you can specify that only the `deathStarStatus` topic should be supported.

To deploy this into a Kubernetes cluster, fill in the `metadata` connection details in the yaml, and run `kubectl apply -f pubsub.yaml`.

## Publish to a topic

To publish a message to a topic, invoke the following endpoint on a Dapr instance:

```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub-name/deathStarStatus \
  -H "Content-Type: application/json" \
  -d '{
        "status": "completed"
      }'
```

The above example publishes a JSON payload to the `deathStarStatus` topic.
Dapr wraps the user payload in a Cloud Events v1.0 compliant envelope.
@ -0,0 +1,225 @@
---
type: docs
title: "How-To: Publish message and subscribe to a topic"
linkTitle: "How-To: Publish & subscribe"
weight: 2000
description: "Learn how to send messages to a topic with one service and subscribe to that topic in another service"
---

## Introduction

Pub/Sub is a common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.

Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.

## Step 1: Set up the Pub/Sub component

The first step is to set up the Pub/Sub component:

{{< tabs "Self-Hosted (CLI)" Kubernetes >}}

{{% codetab %}}
Redis Streams is installed by default on a local machine when running `dapr init`.

Verify by opening your components file under `%UserProfile%\.dapr\components\pubsub.yaml` on Windows or `~/.dapr/components/pubsub.yaml` on Linux/MacOS:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

You can override this file with another Redis instance or another [pubsub component]({{< ref setup-pubsub >}}) by creating a `components` directory containing the file and using the flag `--components-path` with the `dapr run` CLI command.
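
As a sketch only (the app id `myapp` is a placeholder, not part of this guide), a run command pointing the sidecar at a custom components folder could look like this:

```bash
# Load components (including a custom pubsub.yaml) from ./components instead of the default folder
dapr run --app-id myapp --components-path ./components
```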
{{% /codetab %}}

{{% codetab %}}
To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired pubsub component]({{< ref setup-pubsub >}}) in the yaml below, save as `pubsub.yaml`, and run `kubectl apply -f pubsub.yaml`.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```
{{% /codetab %}}

{{< /tabs >}}

## Step 2: Publish to a topic

To publish a message to a topic, invoke the following endpoint on a Dapr instance:

{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" >}}

{{% codetab %}}
Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```
Then publish a message to the `deathStarStatus` topic:
```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/json" -d '{"status": "completed"}'
```
{{% /codetab %}}

{{% codetab %}}
Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```
Then publish a message to the `deathStarStatus` topic:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status": "completed"}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus'
```
{{% /codetab %}}

{{< /tabs >}}

Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope.
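
For illustration only, a subscriber would receive the payload above wrapped in an envelope along these lines; the `id` value is a generated placeholder, and the Dapr-specific `pubsubname`/`topic` fields assume the component and topic used in this guide:

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "myapp",
  "id": "3f0f6c8e-9f3a-4a3e-b2a1-illustrative",
  "datacontenttype": "application/json",
  "pubsubname": "pubsub",
  "topic": "deathStarStatus",
  "data": {
    "status": "completed"
  }
}
```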

## Step 3: Subscribe to topics

Dapr allows two methods by which you can subscribe to topics:
- **Declaratively**, where subscriptions are defined in an external file.
- **Programmatically**, where subscriptions are defined in user code.

### Declarative subscriptions

You can subscribe to a topic using the following Custom Resource Definition (CRD):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: myevent-subscription
spec:
  topic: deathStarStatus
  route: /dsstatus
  pubsubname: pubsub
scopes:
- app1
- app2
```

The example above shows an event subscription to topic `deathStarStatus`, for the pubsub component `pubsub`.
The `route` field tells Dapr to send all topic messages to the `/dsstatus` endpoint in the app.

The `scopes` field enables this subscription for apps with IDs `app1` and `app2`.

Set the component with:

{{< tabs "Self-Hosted (CLI)" Kubernetes >}}

{{% codetab %}}
Place the CRD in your `./components` directory. When Dapr starts up, it will load subscriptions along with components.

You can also override the default directory by pointing the Dapr CLI to a components path:

```bash
dapr run --app-id myapp --components-path ./myComponents -- python3 myapp.py
```

*Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows. If you place the subscription in a custom components path, make sure the Pub/Sub component is present also.*
{{% /codetab %}}

{{% codetab %}}
In Kubernetes, save the CRD to a file and apply it to the cluster:

```bash
kubectl apply -f subscription.yaml
```
{{% /codetab %}}

{{< /tabs >}}

#### Example

After setting up the subscription above, download this JavaScript into an `app1.js` file:

```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json())

const port = 3000

app.post('/dsstatus', (req, res) => {
    res.sendStatus(200);
});

app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```

Run this app with:

```bash
dapr run --app-id app1 --app-port 3000 node app1.js
```

### Programmatic subscriptions

To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
The Dapr instance will call into your app at startup and expect a JSON response for the topic subscriptions with:
- `pubsubname`: Which pub/sub component Dapr should use
- `topic`: Which topic to subscribe to
- `route`: Which endpoint Dapr should call when a message arrives on that topic

#### Example

*Note: The following example is written in Node.js, but can be in any programming language*

```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json())

const port = 3000

app.get('/dapr/subscribe', (req, res) => {
    res.json([
        {
            pubsubname: "pubsub",
            topic: "deathStarStatus",
            route: "dsstatus"
        }
    ]);
})

app.post('/dsstatus', (req, res) => {
    res.sendStatus(200);
});

app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```

The `/dsstatus` endpoint matches the `route` defined in the subscription, and this is where Dapr sends all topic messages.

## Step 4: ACK-ing a message

In order to tell Dapr that a message was processed successfully, return a `200 OK` response. If Dapr receives any other return status code than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics.

#### Example

*Note: The following example is written in Node.js, but can be in any programming language*

```javascript
app.post('/dsstatus', (req, res) => {
    res.sendStatus(200);
});
```
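
As a sketch of the failure path (the `handleStatus` function is hypothetical, not part of this guide), any non-`200` response is enough to make Dapr redeliver the message later:

```javascript
app.post('/dsstatus', (req, res) => {
    try {
        // Process the event; the original published payload is under req.body.data
        handleStatus(req.body.data);   // hypothetical application logic
        res.sendStatus(200);           // ACK: message processed successfully
    } catch (err) {
        res.sendStatus(500);           // non-200 status: Dapr will attempt redelivery
    }
});
```
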
@ -1,152 +0,0 @@
---
type: docs
title: "How-To: Subscribe to a topic"
linkTitle: "How-To: Subscribe"
weight: 3000
description: "Consume messages from topics"
---

Pub/Sub is a very common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.

Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.

Watch this [video](https://www.youtube.com/watch?v=NLWukkHEwGA&feature=youtu.be&t=1052) on how to consume messages from topics.

## Set up the Pub/Sub component

The first step is to set up the Pub/Sub component.
For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`.

*Note: When running Dapr locally, a pub/sub component YAML is automatically created for you. To override it, create a `components` directory containing the file and use the flag `--components-path` with the `dapr run` CLI command.*

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

To deploy this into a Kubernetes cluster, fill in the `metadata` connection details in the yaml, and run `kubectl apply -f pubsub.yaml`.

## Subscribe to topics

Dapr allows two methods by which you can subscribe to topics: programmatically, where subscriptions are defined in user code, and declaratively, where subscriptions are defined in an external file.

### Declarative subscriptions

You can subscribe to a topic using the following Custom Resource Definition (CRD):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: myevent-subscription
spec:
  topic: newOrder
  route: /orders
  pubsubname: kafka
scopes:
- app1
- app2
```

The example above shows an event subscription to topic `newOrder`, for the pubsub component `kafka`.
The `route` field tells Dapr to send all topic messages to the `/orders` endpoint in the app.

The `scopes` field enables this subscription for apps with IDs `app1` and `app2`.

An example of a node.js app that receives events from the subscription:

```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json())

const port = 3000

app.post('/orders', (req, res) => {
    res.sendStatus(200);
});

app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```

#### Subscribing on Kubernetes

In Kubernetes, save the CRD to a file and apply it to the cluster:

```bash
kubectl apply -f subscription.yaml
```

#### Subscribing in Self-Hosted

When running Dapr in self-hosted mode, either locally or on a VM, put the CRD in your `./components` directory.
When Dapr starts up, it will load subscriptions along with components.

The following example shows how to point the Dapr CLI to a components path:

```bash
dapr run --app-id myapp --components-path ./myComponents -- python3 myapp.py
```

*Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows. If you place the subscription in a custom components path, make sure the Pub/Sub component is present also.*

### Programmatic subscriptions

To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
The Dapr instance will call into your app, and expect a JSON response for the topic subscriptions.

*Note: The following example is written in Node.js, but can be in any programming language*

```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json())

const port = 3000

app.get('/dapr/subscribe', (req, res) => {
    res.json([
        {
            pubsubname: "pubsub",
            topic: "newOrder",
            route: "orders"
        }
    ]);
})

app.post('/orders', (req, res) => {
    res.sendStatus(200);
});

app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```

In the payload returned to Dapr, `topic` tells Dapr which topic to subscribe to, `route` tells Dapr which endpoint to call when a message arrives on that topic, and `pubsubname` tells Dapr which pub/sub component it should use. In this example this is `pubsub`, as this is the name of the component we outlined above.

The `/orders` endpoint matches the `route` defined in the subscription, and this is where Dapr will send all topic messages.

### ACK-ing a message

In order to tell Dapr that a message was processed successfully, return a `200 OK` response:

```javascript
res.status(200).send()
```

### Schedule a message for redelivery

If Dapr receives any other return status code than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics.
@ -1,52 +1,50 @@
---
type: docs
-title: "Pub/Sub overview"
+title: "Publish/Subscribe overview"
-linkTitle: "Pub/Sub overview"
+linkTitle: "Overview"
weight: 1000
description: "Overview of the Dapr Pub/Sub building block"
---

+## Introduction

The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows your microservices to communicate with each other purely by sending messages. In this system, the **producer** of a message sends it to a **topic**, with no knowledge of what service will receive the message. A message can even be sent if there's no consumer for it.

Similarly, a **consumer** will receive messages from a topic without knowledge of what producer sent it. This pattern is especially useful when you need to decouple microservices from one another.

Dapr provides a publish/subscribe API that provides at-least-once guarantees and integrates with various message broker implementations. These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub).

-## Publish/Subscribe API
-The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub_api.md).
-## Behavior and guarantees
+## Features
+### Publish/Subscribe API
+The API for Publish/Subscribe can be found in the [spec repo]({{< ref pubsub_api.md >}}).
+### At-Least-Once guarantee

Dapr guarantees At-Least-Once semantics for message delivery.
That means that when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.

+### Consumer groups and multiple instances

The burden of dealing with concepts like consumer groups and multiple instances inside consumer groups is all catered for by Dapr.

-### App ID
-Dapr has the concept of an `id`. This is specified in Kubernetes using the `dapr.io/app-id` annotation and with the `app-id` flag using the Dapr CLI. Dapr requires an ID to be assigned to every application.

When multiple instances of the same application ID subscribe to a topic, Dapr will make sure to deliver the message to only one instance. If two different applications with different IDs subscribe to a topic, at least one instance in each application receives a copy of the same message.

-## Cloud events
+### Cloud events

Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope.

The following fields from the Cloud Events spec are implemented with Dapr:
- `id`
- `source`
- `specversion`
- `type`
- `datacontenttype` (Optional)

> Starting with Dapr v0.9, Dapr no longer wraps published content into CloudEvent if the published payload itself is already in CloudEvent format.

The following example shows an XML content in CloudEvent v1.0 serialized as JSON:

```json
{
  "specversion" : "1.0",
@ -59,3 +57,12 @@ The following example shows an XML content in CloudEvent v1.0 serialized as JSON
  "data" : "<note><to>User1</to><from>user2</from><message>hi</message></note>"
}
```

+### Topic scoping
+
+Limit which topics applications are able to publish/subscribe to in order to limit access to potentially sensitive data streams. Read [Pub/Sub scoping]({{< ref pubsub-scopes.md >}}) for more information.
+
+## Next steps
+
+- Read the How-To guide on [publishing and subscribing]({{< ref howto-publish-subscribe.md >}})
+- Learn about [Pub/Sub scopes]({{< ref pubsub-scopes.md >}})
@ -1,32 +1,44 @@
---
type: docs
-title: "How To: Scope Pub/Sub topics"
+title: "Scope Pub/Sub topic access"
-linkTitle: "How To: Scope topics"
+linkTitle: "Scope topic access"
weight: 5000
description: "Use scopes to limit Pub/Sub topics to specific applications"
---

-[Namespaces or component scopes](../components-scopes/README.md) can be used to limit component access to particular applications. These application scopes added to a component limit only the applications with specific IDs to be able to use the component.
+## Introduction
+
+[Namespaces or component scopes]({{< ref component-scopes.md >}}) can be used to limit component access to particular applications. These application scopes added to a component limit only the applications with specific IDs to be able to use the component.

In addition to this general component scope, the following can be limited for pub/sub components:
-- the topics which can be used (published or subscribed)
-- which applications are allowed to publish to specific topics
-- which applications are allowed to subscribe to specific topics
+- Which topics can be used (published or subscribed)
+- Which applications are allowed to publish to specific topics
+- Which applications are allowed to subscribe to specific topics

-This is called pub/sub topic scoping.
+This is called **pub/sub topic scoping**.

-Watch this [video](https://www.youtube.com/watch?v=7VdWBBGcbHQ&feature=youtu.be&t=513) on how to use pub/sub topic scoping.
+Pub/sub scopes are defined for each pub/sub component. You may have a pub/sub component named `pubsub` that has one set of scopes, and another `pubsub2` with a different set.

To use this topic scoping, three metadata properties can be set for a pub/sub component (see the sketch after this list):
-- ```spec.metadata.publishingScopes```: the list of applications to topic scopes to allow publishing, separated by semicolons. If an app is not specified in ```publishingScopes```, it is allowed to publish to all topics.
-- ```spec.metadata.subscriptionScopes```: the list of applications to topic scopes to allow subscription, separated by semicolons. If an app is not specified in ```subscriptionScopes```, it is allowed to subscribe to all topics.
-- ```spec.metadata.allowedTopics```: a comma-separated list of allowed topics for all applications. ```publishingScopes``` or ```subscriptionScopes``` can be used in addition to add granular limitations. If ```allowedTopics``` is not set, all topics are valid and then ```subscriptionScopes``` and ```publishingScopes``` take place if present.
+- `spec.metadata.publishingScopes`
+  - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to publish to that list of topics
+  - If nothing is specified in `publishingScopes` (default behavior), all apps can publish to all topics
+  - To deny an app the ability to publish to any topic, leave the topics list blank (`app1=;app2=topic2`)
+  - For example, `app1=topic1;app2=topic2,topic3;app3=` will allow app1 to publish to topic1 and nothing else, app2 to publish to topic2 and topic3 only, and app3 to publish to nothing.
+- `spec.metadata.subscriptionScopes`
+  - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to subscribe to that list of topics
+  - If nothing is specified in `subscriptionScopes` (default behavior), all apps can subscribe to all topics
+  - For example, `app1=topic1;app2=topic2,topic3` will allow app1 to subscribe to topic1 only and app2 to subscribe to topic2 and topic3
+- `spec.metadata.allowedTopics`
+  - A comma-separated list of allowed topics for all applications.
+  - If `allowedTopics` is not set (default behavior), all topics are valid. `subscriptionScopes` and `publishingScopes` still take place if present.
+  - `publishingScopes` or `subscriptionScopes` can be used in conjunction with `allowedTopics` to add granular limitations
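
As an illustration only, a Redis pub/sub component combining all three properties might look like the following sketch, reusing the example values from the list above (the app IDs and topic names are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: publishingScopes            # app1 -> topic1; app2 -> topic2,topic3; app3 -> nothing
    value: "app1=topic1;app2=topic2,topic3;app3="
  - name: subscriptionScopes          # app2 -> nothing; app3 -> topic1; unlisted apps -> all topics
    value: "app2=;app3=topic1"
  - name: allowedTopics               # only these topics exist for any application
    value: "topic1,topic2,topic3"
```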

These metadata properties can be used for all pub/sub components. The following examples use Redis as pub/sub component.

-## Scenario 1: Limit which application can publish or subscribe to topics
+## Example 1: Scope topic access

-This can be useful, if you have topics which contain sensitive information and only a subset of your applications are allowed to publish or subscribe to these.
+Limiting which applications can publish/subscribe to topics can be useful if you have topics which contain sensitive information and only a subset of your applications are allowed to publish or subscribe to these.

It can also be used for all topics to always have a "ground truth" for which applications are using which topics as publishers/subscribers.
@ -50,29 +62,31 @@ spec:
    value: "app2=;app3=topic1"
```

-The table below shows which application is allowed to publish into the topics:
-| Publishing | app1 | app2 | app3 |
-|------------|------|------|------|
-| topic1     | X    |      |      |
-| topic2     |      | X    |      |
-| topic3     |      | X    |      |
+The table below shows which applications are allowed to publish into the topics:
+|      | topic1 | topic2 | topic3 |
+|------|--------|--------|--------|
+| app1 | X      |        |        |
+| app2 |        | X      | X      |
+| app3 |        |        |        |

-The table below shows which application is allowed to subscribe to the topics:
-| Subscription | app1 | app2 | app3 |
-|--------------|------|------|------|
-| topic1       | X    |      | X    |
-| topic2       | X    |      |      |
-| topic3       | X    |      |      |
-> Note: If an application is not listed (e.g. app1 in subscriptionScopes), it is allowed to subscribe to all topics. Because ```allowedTopics``` (see below of examples) is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above.
+The table below shows which applications are allowed to subscribe to the topics:
+|      | topic1 | topic2 | topic3 |
+|------|--------|--------|--------|
+| app1 | X      | X      | X      |
+| app2 |        |        |        |
+| app3 | X      |        |        |
+> Note: If an application is not listed (e.g. app1 in subscriptionScopes) it is allowed to subscribe to all topics. Because `allowedTopics` is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above.

-## Scenario 2: Limit which topics can be used by all applications without granular limitations
+## Example 2: Limit allowed topics

-A topic is created if a Dapr application sends a message to it. In some scenarios this topic creation should be governed. For example;
-- a bug in a Dapr application on generating the topic name can lead to an unlimited amount of topics created
-- streamline the topics names and total count and prevent an unlimited growth of topics
-In these situations, ```allowedTopics``` can be used.
+A topic is created if a Dapr application sends a message to it. In some scenarios this topic creation should be governed. For example:
+- A bug in a Dapr application on generating the topic name can lead to an unlimited amount of topics created
+- Streamline the topics names and total count and prevent an unlimited growth of topics
+In these situations `allowedTopics` can be used.

Here is an example of three allowed topics:
```yaml
@ -94,7 +108,7 @@ spec:

All applications can use these topics, but only those topics, no others are allowed.

-## Scenario 3: Combine both allowed topics allowed applications that can publish and subscribe
+## Example 3: Combine `allowedTopics` and scopes

Sometimes you want to combine both scopes, thus only having a fixed set of allowed topics and specify scoping to certain applications.
@ -123,17 +137,22 @@ spec:

> Note: The third application is not listed, because if an app is not specified inside the scopes, it is allowed to use all topics.

The table below shows which application is allowed to publish into the topics:

-| Publishing | app1 | app2 | app3 |
-|------------|------|------|------|
-| A          | X    | X    | X    |
-| B          |      | X    | X    |
+|      | A | B | C |
+|------|---|---|---|
+| app1 | X |   |   |
+| app2 | X | X |   |
+| app3 | X | X |   |

The table below shows which application is allowed to subscribe to the topics:

-| Subscription | app1 | app2 | app3 |
-|--------------|------|------|------|
-| A            |      | X    | X    |
-| B            |      |      | X    |
-No other topics can be used, only A and B.
-Pub/sub scopes are per pub/sub. You may have pub/sub component named `pubsub` that has one set of scopes, and another `pubsub2` with a different set. The name is the `metadata.name` field in the yaml.
+|      | A | B | C |
+|------|---|---|---|
+| app1 |   |   |   |
+| app2 | X |   |   |
+| app3 | X | X |   |
+
+## Demo
+
+<iframe width="560" height="315" src="https://www.youtube.com/embed/7VdWBBGcbHQ?start=513" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
@ -6,6 +6,8 @@ weight: 200
description: "Use key value pairs to persist a state"
---

+## Introduction

State management is one of the most common needs of any application: new or legacy, monolith or microservice.
Dealing with different database libraries, testing them, handling retries and faults can be time consuming and hard.
@ -42,6 +44,10 @@ Begin by ensuring a Dapr sidecar is running:
```bash
dapr run --app-id myapp --port 3500
```

+{{% alert title="Note" color="info" %}}
+It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it, one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
+{{% /alert %}}

Then in a separate terminal run:
```bash
@ -55,6 +61,11 @@ Begin by ensuring a Dapr sidecar is running:
dapr run --app-id myapp --port 3500
```

+{{% alert title="Note" color="info" %}}
+It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it, one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
+{{% /alert %}}

Then in a separate terminal run:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
@ -77,6 +88,12 @@ with DaprClient() as d:
```

Run with `dapr run --app-id myapp -- python state.py`

+{{% alert title="Note" color="info" %}}
+It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it, one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
+{{% /alert %}}

{{% /codetab %}}

{{< /tabs >}}
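
To illustrate the note about key prefixes, here is a sketch assuming the default Redis state store and the app id `myapp`; the resulting key name is indicative only:

```bash
# Save a key through the sidecar that was started with --app-id myapp
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "key1", "value": "value1"}]'

# In the underlying store the entry is keyed by app id (e.g. "myapp||key1"),
# so running later with a different app-id means previously saved state is not found
```
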
@ -1,7 +1,7 @@
---
type: docs
title: "Pub/Sub and namespaces"
-linkTitle: "Pub/Sub and namespaces"
+linkTitle: "Kubernetes Namespaces"
weight: 4000
description: "Use Dapr Pub/Sub with multiple namespaces"
---
@ -11,14 +11,15 @@ In some scenarios, applications can be spread across namespaces and share a queu

In this example, we will use the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). Redis installation and the subscribers will be in `namespace-a` while the publisher UI will be on `namespace-b`. This solution should also work if Redis was installed on another namespace or if we used a managed cloud service like Azure ServiceBus.

The table below shows which resources are deployed to which namespaces:

| Resource                 | namespace-a | namespace-b |
|--------------------------|-------------|-------------|
| Redis master             | X           |             |
| Redis slave              | X           |             |
| Dapr's PubSub component  | X           | X           |
| Node subscriber          | X           |             |
| Python subscriber        | X           |             |
| React UI publisher       |             | X           |
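
For instance, deploying the same pub/sub component definition into both namespaces could look like this sketch (the file name `pubsub.yaml` is a placeholder for your component definition):

```bash
# The pub/sub component must exist in every namespace that publishes or subscribes
kubectl apply -f pubsub.yaml --namespace namespace-a
kubectl apply -f pubsub.yaml --namespace namespace-b
```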

## Pre-requisites