Merge branch 'v1.14' into issue_4219-2

Hannah Hunter 2024-11-05 12:21:21 -05:00 committed by GitHub
commit 680bcd2521
16 changed files with 66 additions and 38 deletions

View File

@ -18,7 +18,7 @@ See the [Dapr community repository](https://github.com/dapr/community) for more
1. **Docs**: This [repository](https://github.com/dapr/docs) contains the documentation for Dapr. You can contribute by updating existing documentation, fixing errors, or adding new content to improve user experience and clarity. Please see the specific guidelines for [docs contributions]({{< ref contributing-docs >}}).
2. **Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. Contributions in this repository involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.
2. **Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. [Contributions in this repository](https://github.com/dapr/quickstarts/blob/master/CONTRIBUTING.md) involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.
3. **Runtime**: The Dapr runtime [repository](https://github.com/dapr/dapr) houses the core runtime components. Here, you can contribute by fixing bugs, optimizing performance, implementing new features, or enhancing existing ones.

View File

@ -2,7 +2,7 @@
type: docs
title: "Dapr bot reference"
linkTitle: "Dapr bot"
weight: 15
weight: 70
description: "List of Dapr bot capabilities."
---

View File

@ -41,15 +41,18 @@ Style and tone conventions should be followed throughout all Dapr documentation
## Diagrams and images
Diagrams and images are invaluable visual aids for documentation pages. Diagrams are kept in a [Dapr Diagrams Deck](https://github.com/dapr/docs/tree/v1.11/daprdocs/static/presentations), which includes guidance on style and icons.
Diagrams and images are invaluable visual aids for documentation pages. Use the diagram style and icons in the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations).
As you create diagrams for your documentation:
The process for creating diagrams for your documentation:
- Save them as high-res PNG files into the [images folder](https://github.com/dapr/docs/tree/v1.11/daprdocs/static/images).
- Name your PNG files using the convention of a concept or building block so that they are grouped.
1. Download the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations) to use the icons and colors.
1. Add a new slide and create your diagram.
1. Screen capture the diagram as a high-res PNG file and save it in the [images folder](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/images).
1. Name your PNG files after the concept or building block they illustrate so that they are grouped.
- For example: `service-invocation-overview.png`.
- For more information on calling out images using shortcode, see the [Images guidance](#images) section below.
- Add the diagram to the correct section in the `Dapr-Diagrams.pptx` deck so that they can be amended and updated during routine refresh.
1. Add the diagram to the appropriate section in your documentation using the HTML `<img>` tag.
1. In your PR, include the diagram slide (not just the screen capture) as a comment so maintainers can review it and add it to the diagram deck.
## Contributing a new docs page

View File

@ -37,17 +37,16 @@ metadata:
spec:
topic: orders
routes:
default: /checkout
default: /orders
pubsubname: pubsub
scopes:
- orderprocessing
- checkout
```
Here, the subscription called `order`:
- Uses the pub/sub component called `pubsub` to subscribe to the topic called `orders`.
- Sets the `route` field to send all topic messages to the `/checkout` endpoint in the app.
- Sets `scopes` field to scope this subscription for access only by apps with IDs `orderprocessing` and `checkout`.
- Sets the `route` field to send all topic messages to the `/orders` endpoint in the app.
- Sets the `scopes` field to scope this subscription for access only by apps with the ID `orderprocessing`.
When running Dapr, set the path to the YAML component file so that Dapr can find and load the component.
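To exercise the subscription once Dapr is running, an app or script can publish messages to the `orders` topic. A minimal sketch using the Dapr Python SDK, assuming the `dapr` package is installed and the pub/sub component is named `pubsub` as above:

```python
import json

from dapr.clients import DaprClient

# Publish a test message to the "orders" topic on the "pubsub" component.
# Dapr delivers it to every subscribed app at the route declared above.
with DaprClient() as client:
    client.publish_event(
        pubsub_name="pubsub",
        topic_name="orders",
        data=json.dumps({"orderId": 1}),
        data_content_type="application/json",
    )
```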
@ -113,7 +112,7 @@ In your application code, subscribe to the topic specified in the Dapr pub/sub c
```csharp
//Subscribe to a topic
[HttpPost("checkout")]
[HttpPost("orders")]
public void getCheckout([FromBody] int orderId)
{
Console.WriteLine("Subscriber received : " + orderId);
@ -128,7 +127,7 @@ public void getCheckout([FromBody] int orderId)
import io.dapr.client.domain.CloudEvent;
//Subscribe to a topic
@PostMapping(path = "/checkout")
@PostMapping(path = "/orders")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
@ -146,7 +145,7 @@ public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String>
from cloudevents.sdk.event import v1
#Subscribe to a topic
@app.route('/checkout', methods=['POST'])
@app.route('/orders', methods=['POST'])
def checkout(event: v1.Event) -> None:
data = json.loads(event.Data())
logging.info('Subscriber received: ' + str(data))
@ -163,7 +162,7 @@ const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
// listen to the declarative route
app.post('/checkout', (req, res) => {
app.post('/orders', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
@ -178,7 +177,7 @@ app.post('/checkout', (req, res) => {
var sub = &common.Subscription{
PubsubName: "pubsub",
Topic: "orders",
Route: "/checkout",
Route: "/orders",
}
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
@ -191,7 +190,7 @@ func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err er
{{< /tabs >}}
The `/checkout` endpoint matches the `route` defined in the subscriptions and this is where Dapr sends all topic messages to.
The `/orders` endpoint matches the `route` defined in the subscription, and this is where Dapr sends all topic messages.
### Streaming subscriptions
@ -325,7 +324,7 @@ In the example below, you define the values found in the [declarative YAML subsc
```csharp
[Topic("pubsub", "orders")]
[HttpPost("/checkout")]
[HttpPost("/orders")]
public async Task<ActionResult<Order>> Checkout(Order order, [FromServices] DaprClient daprClient)
{
// Logic
@ -337,7 +336,7 @@ or
```csharp
// Dapr subscription in [Topic] routes orders topic to this route
app.MapPost("/checkout", [Topic("pubsub", "orders")] (Order order) => {
app.MapPost("/orders", [Topic("pubsub", "orders")] (Order order) => {
Console.WriteLine("Subscriber received : " + order);
return Results.Ok(order);
});
@ -359,7 +358,7 @@ app.UseEndpoints(endpoints =>
```java
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@Topic(name = "checkout", pubsubName = "pubsub")
@Topic(name = "orders", pubsubName = "pubsub")
@PostMapping(path = "/orders")
public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
@ -370,6 +369,7 @@ public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<String
throw new RuntimeException(e);
}
});
}
```
{{% /codetab %}}
@ -382,7 +382,7 @@ def subscribe():
subscriptions = [
{
'pubsubname': 'pubsub',
'topic': 'checkout',
'topic': 'orders',
'routes': {
'rules': [
{
@ -418,7 +418,7 @@ app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "checkout",
topic: "orders",
routes: {
rules: [
{
@ -480,7 +480,7 @@ func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
t := []subscription{
{
PubsubName: "pubsub",
Topic: "checkout",
Topic: "orders",
Routes: routes{
Rules: []rule{
{

View File

@ -135,7 +135,7 @@ Because workflow retry policies are configured in code, the exact developer expe
| --- | --- |
| **Maximum number of attempts** | The maximum number of times to execute the activity or child workflow. |
| **First retry interval** | The amount of time to wait before the first retry. |
| **Backoff coefficient** | The amount of time to wait before each subsequent retry. |
| **Backoff coefficient** | The coefficient used to determine the rate at which the back-off interval increases. For example, a coefficient of 2 doubles the wait before each subsequent retry. |
| **Maximum retry interval** | The maximum amount of time to wait before each subsequent retry. |
| **Retry timeout** | The overall timeout for retries, regardless of any configured max number of attempts. |
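To make the interaction between these settings concrete, the sketch below computes the wait before each retry for an illustrative policy (1 second first retry interval, backoff coefficient of 2, 10 second maximum retry interval, 5 attempts). It is plain Python arithmetic, not a Dapr API:

```python
# Illustrative only: how a backoff coefficient shapes retry delays.
# Assumed policy: 5 attempts, 1s first interval, coefficient 2, 10s max interval.
first_retry_interval = 1.0   # seconds
backoff_coefficient = 2.0
max_retry_interval = 10.0    # seconds
max_attempts = 5

delay = first_retry_interval
for attempt in range(2, max_attempts + 1):  # attempt 1 is the original call
    print(f"wait {delay:.0f}s before attempt {attempt}")
    delay = min(delay * backoff_coefficient, max_retry_interval)
# Prints: wait 1s, 2s, 4s, 8s before attempts 2 through 5.
```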

View File

@ -10,6 +10,10 @@ description: Get started with the Dapr Workflow building block
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
{{% alert title="Note" color="primary" %}}
Redis is currently used as the state store component for Workflows in the Quickstarts. However, Redis does not support transaction rollbacks and should not be used in production as an actor state store.
{{% /alert %}}
Let's take a look at the Dapr [Workflow building block]({{< ref workflow-overview.md >}}). In this Quickstart, you'll create a simple console application to demonstrate Dapr's workflow programming model and the workflow management APIs.
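As a preview of the programming model, a workflow is an orchestrator function that schedules activities through the workflow context. A minimal sketch using the Dapr Python workflow SDK, assuming the `dapr-ext-workflow` package; the names `order_processing` and `notify` are illustrative, not part of the Quickstart:

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.workflow(name="order_processing")
def order_processing(ctx: wf.DaprWorkflowContext, order_id: str):
    # Orchestrator: schedules an activity and yields until it completes.
    result = yield ctx.call_activity(notify, input=f"Received order {order_id}")
    return result

@wfr.activity(name="notify")
def notify(ctx: wf.WorkflowActivityContext, message: str):
    # Activity: performs the actual work.
    print(message)
    return message
```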
In this guide, you'll:
@ -1356,4 +1360,4 @@ Join the discussion in our [discord channel](https://discord.com/channels/778680
- Walk through a more in-depth [.NET SDK example workflow](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- Learn more about [Workflow as a Dapr building block]({{< ref workflow-overview >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -16,15 +16,17 @@ This table is meant to help users understand the equivalent options for running
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` |
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration resource to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
| `--dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB |
| `--dapr-http-read-buffer-size` | --dapr-http-read-buffer-size | | `dapr.io/http-read-buffer-size` | Increasing max size of http header read buffer in KB to handle when sending multi-KB headers. The default 4 KB. When sending bigger than default 4KB http headers, you should set this to a larger value, for example 16 (for 16KB) |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | `dapr.io/grpc-port` | Sets the Dapr API gRPC port (default `50001`); all cluster services must use the same port for communication |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | HTTP port for the Dapr API to listen on (default `3500`) |
| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | **Deprecated** in favor of `--max-body-size`. Increases the maximum request body size, in MB, to handle large file uploads over the HTTP and gRPC protocols. Default is `4` MB |
| `--max-body-size` | not supported | | `dapr.io/max-body-size` | Increases the maximum request body size to handle large file uploads over the HTTP and gRPC protocols. Set the value using size units (for example, `16Mi` for 16 MB). The default is `4Mi` |
| `--dapr-http-read-buffer-size` | `--dapr-http-read-buffer-size` | | `dapr.io/http-read-buffer-size` | **Deprecated** in favor of `--read-buffer-size`. Increases the maximum size of the HTTP header read buffer, in KB, to support larger header values; for example, `16` supports headers up to 16 KB. Default is `16` for 16 KB |
| `--read-buffer-size` | not supported | | `dapr.io/read-buffer-size` | Increases the maximum size of the HTTP header read buffer to support larger header values. Set the value using size units; for example, `32Ki` supports headers up to 32 KB. Default is `4` for 4 KB |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is `daprio/daprd:latest`. The Dapr sidecar uses this image instead of the latest default image. Use this when building your own custom image of Dapr or [using an alternative stable Dapr image]({{< ref "support-release-policy.md#build-variations" >}}) |
| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
| `--internal-grpc-port` | not supported | | `dapr.io/internal-grpc-port` | Sets the internal Dapr gRPC port (default `50002`); all cluster services must use the same port for communication |
| `--enable-metrics` | not supported | | configuration spec | Enable [prometheus metric]({{< ref prometheus >}}) (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | [Enable profiling]({{< ref profiling-debugging >}}) |

View File

@ -63,6 +63,10 @@ This component supports **output binding** with the following operations:
- `delete` : [Delete blob](#delete-blob)
- `list`: [List blobs](#list-blobs)
The Blob storage component's **input binding** triggers and pushes events using [Azure Event Grid]({{< ref eventgrid.md >}}).
Refer to the [Reacting to Blob storage events](https://learn.microsoft.com/azure/storage/blobs/storage-blob-event-overview) guide for setup instructions and more information.
### Create blob
To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
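The same create operation can also be invoked from application code through the Dapr SDK rather than a raw HTTP request. A minimal Python sketch, assuming the `dapr` package and a binding component named `myblob` (a hypothetical name):

```python
import json

from dapr.clients import DaprClient

# Invoke the Azure Blob Storage output binding's "create" operation.
# "myblob" is a placeholder for the binding component's name; "blobName"
# is the metadata key the component uses to name the created blob.
with DaprClient() as client:
    client.invoke_binding(
        binding_name="myblob",
        operation="create",
        data=json.dumps({"hello": "world"}),
        binding_metadata={"blobName": "hello.json"},
    )
```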

View File

@ -90,6 +90,21 @@ This component supports **output binding** with the following operations:
- `create`: publishes a message on the Event Grid topic
## Receiving events
You can use the Event Grid binding to receive events from a variety of sources and actions. [Learn more about all of the available event sources and handlers that work with Event Grid.](https://learn.microsoft.com/azure/event-grid/overview)
The following table lists the Dapr components that can raise events.
| Event sources | Dapr components |
| ------------- | --------------- |
| [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/) | [Azure Blob Storage binding]({{< ref blobstorage.md >}}) <br/>[Azure Blob Storage state store]({{< ref setup-azure-blobstorage.md >}}) |
| [Azure Cache for Redis](https://learn.microsoft.com/azure/azure-cache-for-redis/cache-overview) | [Redis binding]({{< ref redis.md >}}) <br/>[Redis pub/sub]({{< ref setup-redis-pubsub.md >}}) |
| [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/event-hubs-about) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
| [Azure IoT Hub](https://learn.microsoft.com/azure/iot-hub/iot-concepts-and-iot-hub) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
| [Azure Service Bus](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) | [Azure Service Bus binding]({{< ref servicebusqueues.md >}}) <br/>[Azure Service Bus pub/sub topics]({{< ref setup-azure-servicebus-topics.md >}}) and [queues]({{< ref setup-azure-servicebus-queues.md >}}) |
| [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) | [SignalR binding]({{< ref signalr.md >}}) |
## Microsoft Entra ID credentials
The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:
@ -142,7 +157,7 @@ Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
### Testing locally
## Testing locally
- Install [ngrok](https://ngrok.com/download)
- Run locally using a custom port, for example `9000`, for handshakes
@ -160,7 +175,7 @@ ngrok http --host-header=localhost 9000
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```
### Testing on Kubernetes
## Testing on Kubernetes
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. To enable traffic from the public internet to your app's Dapr sidecar, you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).

@ -1 +1 @@
Subproject commit b8e276728935c66b0a335b5aa2ca4102c560dd3d
Subproject commit 03038fa519670b583eabcef1417eacd55c3e44c8

@ -1 +1 @@
Subproject commit 7c03c7ce58d100a559ac1881bc0c80d6dedc5ab9
Subproject commit dd9a2d5a3c4481b8a6bda032df8f44f5eaedb370

@ -1 +1 @@
Subproject commit a98327e7d9a81611b0d7e91e59ea23ad48271948
Subproject commit 0b7a051b79c7a394e9bd4f57bd40778fb5f29897

@ -1 +1 @@
Subproject commit 7350742b6869cc166633d1f4d17d76fbdbb12921
Subproject commit 76866c878a6e79bb889c83f3930172ddb20f1624

@ -1 +1 @@
Subproject commit 64a4f2f6658e9023e8ea080eefdb019645cae802
Subproject commit 6e90e84b166ac7ea603b78894e9e1b92dc456014