Merge branch 'v1.15' into workflow-stable-api

See the [Dapr community repository](https://github.com/dapr/community) for more information.

1. **Docs**: This [repository](https://github.com/dapr/docs) contains the documentation for Dapr. You can contribute by updating existing documentation, fixing errors, or adding new content to improve user experience and clarity. Please see the specific guidelines for [docs contributions]({{< ref contributing-docs >}}).

2. **Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. [Contributions in this repository](https://github.com/dapr/quickstarts/blob/master/CONTRIBUTING.md) involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.

3. **Runtime**: The Dapr runtime [repository](https://github.com/dapr/dapr) houses the core runtime components. Here, you can contribute by fixing bugs, optimizing performance, implementing new features, or enhancing existing ones.
---
type: docs
title: "Dapr bot reference"
linkTitle: "Dapr bot"
weight: 70
description: "List of Dapr bot capabilities."
---
Style and tone conventions should be followed throughout all Dapr documentation.

## Diagrams and images

Diagrams and images are invaluable visual aids for documentation pages. Use the diagram style and icons in the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations).

The process for creating diagrams for your documentation:

1. Download the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations) to use the icons and colors.
1. Add a new slide and create your diagram.
1. Screen capture the diagram as a high-res PNG file and save it in the [images folder](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/images).
1. Name your PNG files using the convention of a concept or building block so that they are grouped.
   - For example: `service-invocation-overview.png`.
   - For more information on calling out images using shortcode, see the [Images guidance](#images) section below.
1. Add the diagram to the appropriate section in your documentation using the HTML `<image>` tag.
1. In your PR, comment the diagram slide (not the screen capture) so it can be reviewed and added to the diagram deck by maintainers.

## Contributing a new docs page
Status | Description
--- | ---
`RETRY` | Message to be retried by Dapr
`DROP` | Warning is logged and message is dropped

Refer to [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on the response.
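The expected response is a JSON body listing a status for each received entry. As a sketch (hedged: the `statuses`/`entryId` field names follow the pub/sub API reference linked above, and the helper below is a hypothetical illustration, not part of any Dapr SDK):

```python
import json

# Allowed per-entry statuses for a bulk subscribe response.
VALID_STATUSES = {"SUCCESS", "RETRY", "DROP"}

def build_bulk_response(results):
    """Build the JSON body an app returns for a bulk subscribe delivery.

    `results` maps each entry ID to "SUCCESS", "RETRY", or "DROP".
    """
    statuses = []
    for entry_id, status in results.items():
        if status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {status}")
        statuses.append({"entryId": entry_id, "status": status})
    return json.dumps({"statuses": statuses})

# Report the first entry processed and the second to be retried.
body = build_bulk_response({"e1": "SUCCESS", "e2": "RETRY"})
```

Returning `RETRY` only for the failed entries lets Dapr redeliver those entries without reprocessing the whole batch.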
### Example

The following code examples demonstrate how to use Bulk Subscribe.

{{< tabs "Java" "JavaScript" ".NET" "Python" >}}
{{% codetab %}}

Currently, you can only bulk subscribe in Python using an HTTP client.

```python
import json
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
    # Define the bulk subscribe configuration
    subscriptions = [{
        "pubsubname": "pubsub",
        "topic": "TOPIC_A",
        "route": "/checkout",
        "bulkSubscribe": {
            "enabled": True,
            "maxMessagesCount": 3,
            "maxAwaitDurationMs": 40
        }
    }]
    print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions))
    return jsonify(subscriptions)


# Define the endpoint to handle incoming messages
@app.route('/checkout', methods=['POST'])
def checkout():
    messages = request.json
    for message in messages:
        print(f"Received message: {message}")
    return json.dumps({'success': True}), 200, {'Content-Type': 'application/json'}

if __name__ == '__main__':
    app.run(port=5000)
```

{{% /codetab %}}
{{< /tabs >}}

## How components handle publishing and subscribing to bulk messages

For event publish/subscribe, two kinds of network transfers are involved.
```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order
spec:
  topic: orders
  routes:
    default: /orders
  pubsubname: pubsub
scopes:
- orderprocessing
```
Here the subscription called `order`:
- Uses the pub/sub component called `pubsub` to subscribe to the topic called `orders`.
- Sets the `route` field to send all topic messages to the `/orders` endpoint in the app.
- Sets the `scopes` field to scope this subscription for access only by apps with the ID `orderprocessing`.
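The effect of the `scopes` field can be sketched as a simple membership check (a hypothetical helper for illustration only, not part of Dapr):

```python
def can_use_subscription(app_id, scopes):
    # An empty scopes list leaves the subscription open to every app;
    # otherwise only the listed app IDs may use it.
    return not scopes or app_id in scopes

# Only `orderprocessing` may use the subscription above.
print(can_use_subscription("orderprocessing", ["orderprocessing"]))  # True
print(can_use_subscription("checkout", ["orderprocessing"]))         # False
```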
When running Dapr, set the YAML component file path to point Dapr to the component.

In your application code, subscribe to the topic specified in the Dapr pub/sub component.

```csharp
//Subscribe to a topic
[HttpPost("orders")]
public void getCheckout([FromBody] int orderId)
{
    Console.WriteLine("Subscriber received : " + orderId);
```

```java
import io.dapr.client.domain.CloudEvent;

//Subscribe to a topic
@PostMapping(path = "/orders")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
    return Mono.fromRunnable(() -> {
        try {
```

```python
from cloudevents.sdk.event import v1

#Subscribe to a topic
@app.route('/orders', methods=['POST'])
def checkout(event: v1.Event) -> None:
    data = json.loads(event.Data())
    logging.info('Subscriber received: ' + str(data))
```

```javascript
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));

// listen to the declarative route
app.post('/orders', (req, res) => {
    console.log(req.body);
    res.sendStatus(200);
});
```

```go
var sub = &common.Subscription{
    PubsubName: "pubsub",
    Topic:      "orders",
    Route:      "/orders",
}

func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
```
{{< /tabs >}}

The `/orders` endpoint matches the `route` defined in the subscription, which is where Dapr sends all topic messages.

### Streaming subscriptions
In the example below, you define the values found in the declarative YAML subscription in your application code.

```csharp
[Topic("pubsub", "orders")]
[HttpPost("/orders")]
public async Task<ActionResult<Order>> Checkout(Order order, [FromServices] DaprClient daprClient)
{
    // Logic
```

or
```csharp
// Dapr subscription in [Topic] routes orders topic to this route
app.MapPost("/orders", [Topic("pubsub", "orders")] (Order order) => {
    Console.WriteLine("Subscriber received : " + order);
    return Results.Ok(order);
});
```
```java
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

@Topic(name = "orders", pubsubName = "pubsub")
@PostMapping(path = "/orders")
public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
    return Mono.fromRunnable(() -> {
        try {
            // ...
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    });
}
```

{{% /codetab %}}
```python
def subscribe():
    subscriptions = [
        {
            'pubsubname': 'pubsub',
            'topic': 'orders',
            'routes': {
                'rules': [
                    {
```

```javascript
app.get('/dapr/subscribe', (req, res) => {
    res.json([
        {
            pubsubname: "pubsub",
            topic: "orders",
            routes: {
                rules: [
                    {
```

```go
func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
    t := []subscription{
        {
            PubsubName: "pubsub",
            Topic:      "orders",
            Routes: routes{
                Rules: []rule{
                    {
```
State management is one of the most common needs of any new, legacy, monolith, or microservice application.

In this guide, you'll learn the basics of using the key/value state API to allow an application to save, get, and delete state.

## Example

The code example below _loosely_ describes an application that processes orders with an order processing service which has a Dapr sidecar. The order processing service uses Dapr to store state in a Redis state store.

<img src="/images/building-block-state-management-example.png" width=1000 alt="Diagram showing state management of example service">
namespace EventService
{
            string DAPR_STORE_NAME = "statestore";
            //Using Dapr SDK to retrieve multiple states
            using var client = new DaprClientBuilder().Build();
            IReadOnlyList<BulkStateItem> multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1);
        }
    }
}
Because workflow retry policies are configured in code, the exact developer experience varies by SDK. The retry policy parameters are:

| Parameter | Description |
| --- | --- |
| **Maximum number of attempts** | The maximum number of times to execute the activity or child workflow. |
| **First retry interval** | The amount of time to wait before the first retry. |
| **Backoff coefficient** | The coefficient used to determine the rate of increase of back-off. For example, a coefficient of 2 doubles the wait of each subsequent retry. |
| **Maximum retry interval** | The maximum amount of time to wait before each subsequent retry. |
| **Retry timeout** | The overall timeout for retries, regardless of any configured max number of attempts. |
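As a sketch of how these parameters interact (an illustration of standard exponential back-off semantics, not SDK code), the wait before each retry can be computed as:

```python
def retry_delay(attempt, first_retry_interval, backoff_coefficient, max_retry_interval):
    """Seconds to wait before retry number `attempt` (1-based): the first
    interval grown by the backoff coefficient, capped at the maximum."""
    delay = first_retry_interval * backoff_coefficient ** (attempt - 1)
    return min(delay, max_retry_interval)

# First retry waits 5s; a coefficient of 2 doubles each wait, capped at 60s.
print([retry_delay(n, 5, 2, 60) for n in range(1, 6)])  # [5, 10, 20, 40, 60]
```

Retries stop once the maximum number of attempts is reached or the overall retry timeout elapses, whichever comes first.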
Currently, you can experience this actors quickstart using the .NET SDK.

As a quick overview of the .NET actors quickstart:

1. Using a `SmartDevice.Service` microservice, you host:
   - Two `SmokeDetectorActor` smoke alarm objects
   - A `ControllerActor` object that commands and controls the smart devices
1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
If you have Zipkin configured for Dapr locally on your machine, you can view the traces.

When you ran the client app, a few things happened:

1. Two `SmokeDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
   - `ActorProxy.Create<ISmartDevice>(actorId, actorType)`
   - `proxySmartDevice.SetDataAsync(data)`
   Console.WriteLine($"Device 2 state: {storedDeviceData2}");
   ```

1. The [`DetectSmokeAsync` method of `SmokeDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).

   ```csharp
   public async Task DetectSmokeAsync()
   await proxySmartDevice1.DetectSmokeAsync();
   ```

1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmokeDetectorActor 1` and `2` are called.

   ```csharp
   storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
For full context of the sample, take a look at the following code:

- [`SmokeDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
- [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmokeDetectorActor`
- [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor`

{{% /codetab %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
Redis is currently used as the state store component for Workflows in the Quickstarts. However, Redis does not support transaction rollbacks and should not be used in production as an actor state store.
{{% /alert %}}

Let's take a look at the Dapr [Workflow building block]({{< ref workflow-overview.md >}}). In this Quickstart, you'll create a simple console application to demonstrate Dapr's workflow programming model and the workflow management APIs.

In this guide, you'll:
Join the discussion in our discord channel.

- Walk through a more in-depth [.NET SDK example workflow](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- Learn more about [Workflow as a Dapr building block]({{< ref workflow-overview >}})

{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
---
type: docs
title: "How-To: Configure Environment Variables from Secrets for Dapr sidecar"
linkTitle: "Environment Variables from Secrets"
weight: 7500
description: "Inject Environment Variables from Kubernetes Secrets into Dapr sidecar"
---

In special cases, the Dapr sidecar needs an environment variable injected into it. This use case may be required by a component, a 3rd party library, or a module that uses environment variables to configure that component or customize its behavior. This can be useful for both production and non-production environments.

## Overview

In Dapr 1.15, the new `dapr.io/env-from-secret` annotation was introduced, [similar to `dapr.io/env`]({{< ref arguments-annotations-overview >}}).
With this annotation, you can inject an environment variable into the Dapr sidecar, with a value from a secret.

### Annotation format

The values of this annotation are formatted like so:

- Single key secret: `<ENV_VAR_NAME>=<SECRET_NAME>`
- Multi key/value secret: `<ENV_VAR_NAME>=<SECRET_NAME>:<SECRET_KEY>`

`<ENV_VAR_NAME>` is required to follow the `C_IDENTIFIER` format, captured by the `[A-Za-z_][A-Za-z0-9_]*` regex:
- Must start with a letter or underscore
- The rest of the identifier contains letters, digits, or underscores

The `name` field is required due to the restriction of the `secretKeyRef`, so both `name` and `key` must be set. [Learn more from the "env.valueFrom.secretKeyRef.name" section in this Kubernetes documentation.](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables)
In this case, Dapr sets both to the same value.
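The two formats can be sketched as a small parser (a hypothetical helper mirroring how the annotation value maps to `secretKeyRef`; when no key is given, the key defaults to the secret name):

```python
import re

# C_IDENTIFIER format required for the environment variable name.
C_IDENTIFIER = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')

def parse_env_from_secret(value):
    """Parse 'ENV=secret' or 'ENV=secret:key' into (env_name, secret_name, secret_key)."""
    env_name, _, ref = value.partition('=')
    if not C_IDENTIFIER.match(env_name):
        raise ValueError(f"invalid environment variable name: {env_name!r}")
    secret_name, sep, secret_key = ref.partition(':')
    if not sep:  # single-key secret: the key defaults to the secret name
        secret_key = secret_name
    return env_name, secret_name, secret_key

print(parse_env_from_secret("AUTH_TOKEN=auth-headers-secret"))
# ('AUTH_TOKEN', 'auth-headers-secret', 'auth-headers-secret')
print(parse_env_from_secret("AUTH_TOKEN=auth-headers-secret:auth-header-value"))
# ('AUTH_TOKEN', 'auth-headers-secret', 'auth-header-value')
```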
## Configuring single key secret environment variable

In the following example, the `dapr.io/env-from-secret` annotation is added to the Deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret"
    spec:
      containers:
        - name: node
          image: dapriosamples/hello-k8s-node:latest
          ports:
            - containerPort: 3000
          imagePullPolicy: Always
```
The `dapr.io/env-from-secret` annotation with a value of `"AUTH_TOKEN=auth-headers-secret"` is injected as:

```yaml
env:
- name: AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: auth-headers-secret
      key: auth-headers-secret
```

This requires the secret to have both `name` and `key` fields with the same value, "auth-headers-secret".

**Example secret**

> **Note:** The following example is for demo purposes only. It's not recommended to store secrets in plain text.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: auth-headers-secret
type: Opaque
stringData:
  auth-headers-secret: "AUTH=mykey"
```
## Configuring multi-key secret environment variable

In the following example, the `dapr.io/env-from-secret` annotation is added to the Deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret:auth-header-value"
    spec:
      containers:
        - name: node
          image: dapriosamples/hello-k8s-node:latest
          ports:
            - containerPort: 3000
          imagePullPolicy: Always
```

The `dapr.io/env-from-secret` annotation with a value of `"AUTH_TOKEN=auth-headers-secret:auth-header-value"` is injected as:

```yaml
env:
- name: AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: auth-headers-secret
      key: auth-header-value
```

**Example secret**

> **Note:** The following example is for demo purposes only. It's not recommended to store secrets in plain text.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: auth-headers-secret
type: Opaque
stringData:
  auth-header-value: "AUTH=mykey"
```
title: "How-To: Limit the secrets that can be read from secret stores"
linkTitle: "Limit secret store access"
weight: 3000
description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions."
---

In addition to [scoping which applications can access a given component]({{< ref "component-scopes.md">}}), you can also scope a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets.

For more information about configuring a Configuration resource:
- [Configuration overview]({{< ref configuration-overview.md >}})
- [Configuration schema]({{< ref configuration-schema.md >}})
The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess` policy.
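This precedence can be sketched as a decision function (a hypothetical illustration of the documented rules, not Dapr source):

```python
def secret_allowed(secret, default_access="allow", allowed=None, denied=None):
    """allowedSecrets/deniedSecrets take priority over defaultAccess."""
    if allowed:                      # an allow list admits only its members
        return secret in allowed
    if denied and secret in denied:  # a deny list blocks its members
        return False
    return default_access == "allow"

print(secret_allowed("secret1", "deny", allowed=["secret1", "secret2"]))  # True
print(secret_allowed("other", "deny", allowed=["secret1", "secret2"]))    # False
print(secret_allowed("secret1", "allow", denied=["secret1"]))             # False
```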
### Scenario 1: Deny access to all secrets for a secret store

In a Kubernetes cluster, the native Kubernetes secret store is added to your Dapr application by default. In some scenarios, it may be necessary to deny access to Dapr secrets for a given application. To add this configuration:

1. Define the following `appconfig.yaml`.
   ```yaml
   apiVersion: dapr.io/v1alpha1
   kind: Configuration
   metadata:
     name: appconfig
   spec:
     secrets:
       scopes:
         - storeName: kubernetes
           defaultAccess: deny
   ```
1. Apply it to the Kubernetes cluster using the following command:

   ```bash
   kubectl apply -f appconfig.yaml
   ```

For applications that you need to deny access to the Kubernetes secret store, follow [the Kubernetes instructions]({{< ref kubernetes-overview >}}), adding the following annotation to the application pod.
```yaml
dapr.io/config: appconfig
```

With this defined, the application no longer has access to the Kubernetes secret store.

### Scenario 2: Allow access to only certain secrets in a secret store
To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`:
```yaml
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: deny
        allowedSecrets: ["secret1", "secret2"]
```

This example defines configuration for a secret store named `vault`. The default access to the secret store is `deny`. Meanwhile, some secrets are accessible by the application based on the `allowedSecrets` list. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.
### Scenario 3: Deny access to certain sensitive secrets in a secret store

This configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault`, while allowing access to all other secrets. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.

## Next steps

{{< button text="Service invocation access control" page="invoke-allowlist" >}}
This guide walks you through installing an Elastic Kubernetes Service (EKS) cluster. Prerequisites:

- [AWS CLI](https://aws.amazon.com/cli/)
- [eksctl](https://eksctl.io/)
- [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
- [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/)

## Deploy an EKS cluster
```bash
aws configure
```

1. Create a new file called `cluster-config.yaml` and add the content below to it, replacing `[your_cluster_name]`, `[your_cluster_region]`, and `[your_k8s_version]` with the appropriate values:

   ```yaml
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig

   metadata:
     name: [your_cluster_name]
     region: [your_cluster_region]
     version: [your_k8s_version]
     tags:
       karpenter.sh/discovery: [your_cluster_name]

   iam:
     withOIDC: true

   managedNodeGroups:
     - name: mng-od-4vcpu-8gb
       desiredCapacity: 2
       minSize: 1
       maxSize: 5
       instanceType: c5.xlarge
       privateNetworking: true

   addons:
     - name: vpc-cni
       attachPolicyARNs:
         - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
     - name: coredns
       version: latest
     - name: kube-proxy
       version: latest
     - name: aws-ebs-csi-driver
       wellKnownPolicies:
         ebsCSIController: true
   ```

1. Create the cluster by running the following command:

   ```bash
   eksctl create cluster -f cluster-config.yaml
   ```
1. Verify the kubectl context:

   ```bash
   kubectl config current-context
   ```

## Add Dapr requirements for sidecar access and default storage class
1. Update the security group rule to allow the EKS cluster to communicate with the Dapr sidecar by creating an inbound rule for port 4000.

   ```bash
   --source-group [your_security_group]
   ```
2. Add a default storage class if you don't have one:
|
||||
|
||||
```bash
|
||||
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
|
||||
```
|
||||
|
||||
## Install Dapr
|
||||
|
||||
Install Dapr on your cluster by running:
|
||||
|
||||
```bash
|
||||
dapr init -k
|
||||
```
|
||||
|
||||
You should see the following response:
|
||||
|
||||
```bash
|
||||
⌛ Making the jump to hyperspace...
|
||||
ℹ️ Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced
|
||||
|
||||
ℹ️ Container images will be pulled from Docker Hub
|
||||
✅ Deploying the Dapr control plane with latest version to your cluster...
|
||||
✅ Deploying the Dapr dashboard with latest version to your cluster...
|
||||
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Access permissions
|
||||
|
||||
|
||||
If you face any access permission errors, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile. More information is available [here](https://repost.aws/knowledge-center/eks-api-server-unauthorized-error):
|
||||
|
||||
```bash
|
||||
aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]
|
||||
|
|
|
@ -6,7 +6,7 @@ weight: 60
|
|||
description: See and measure the message calls to components and between networked services
|
||||
---
|
||||
|
||||
|
||||
[The following overview video and demo](https://www.youtube.com/watch?v=0y7ne6teHT4&t=12652s) demonstrates how observability in Dapr works.
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/0y7ne6teHT4?si=iURnLk57t2zN-7zP&start=12653" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
|
||||
|
|
|
@ -49,6 +49,15 @@ The following retry options are configurable:
|
|||
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.<br/>Valid values are of the form `200ms`, `15s`, `2m`, etc.<br/> Defaults to `5s`.|
|
||||
| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow.<br/>Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
|
||||
| `maxRetries` | The maximum number of retries to attempt. <br/>`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).<br/>Defaults to `-1`. |
|
||||
| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "429,501-503"<br/>Default: empty string `""` or field is not set. Retries on all HTTP errors. |
|
||||
| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "1,501-503"<br/>Default: empty string `""` or field is not set. Retries on all gRPC errors. |
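As a sketch (not Dapr's actual implementation), the comma-separated code/range format accepted by the `matching` fields can be parsed like this:

```python
def parse_status_codes(spec: str) -> set[int]:
    """Parse a comma-separated list of codes and ranges, e.g. "429,501-503",
    into the set of status codes a policy would retry on."""
    codes: set[int] = set()
    if not spec:  # empty string: the field is unset, so all errors are retried
        return codes
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            codes.update(range(int(start), int(end) + 1))
        else:
            codes.add(int(part))
    return codes
```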
|
||||
|
||||
|
||||
{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
|
||||
The field values should follow the format specified in the field description or in "Example 2" below.
|
||||
An incorrectly formatted value produces an error log ("Could not read resiliency policy"), and the `daprd` startup sequence proceeds.
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
The exponential back-off window uses the following formula:
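As a sketch of how the capped exponential window grows — the 500ms initial interval and 1.5 growth factor are assumed defaults here, and the per-retry randomization is omitted for clarity:

```python
def backoff_intervals(retries: int, initial: float = 0.5,
                      factor: float = 1.5, max_interval: float = 60.0) -> list[float]:
    """Return a deterministic sequence of back-off intervals in seconds,
    growing by `factor` each retry and capped at `max_interval`."""
    intervals, current = [], initial
    for _ in range(retries):
        intervals.append(current)
        current = min(current * factor, max_interval)
    return intervals
```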
|
||||
|
||||
|
@ -77,7 +86,20 @@ spec:
|
|||
maxRetries: -1 # Retry indefinitely
|
||||
```
|
||||
|
||||
Example 2:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
retry5xxOnly:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 3
|
||||
matches:
|
||||
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
|
||||
gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
|
||||
```
|
||||
|
||||
## Circuit Breakers
|
||||
|
||||
|
|
|
@ -68,6 +68,7 @@ After announcing a future breaking change, the change will happen in 2 releases
|
|||
| Hazelcast PubSub Component | 1.9.0 | 1.11.0 |
|
||||
| Twitter Binding Component | 1.10.0 | 1.11.0 |
|
||||
| NATS Streaming PubSub Component | 1.11.0 | 1.13.0 |
|
||||
| Workflows API Alpha1 `/v1.0-alpha1/workflows` being deprecated in favor of Workflow Client | 1.15.0 | 1.17.0 |
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
@ -302,7 +302,7 @@ other | warning is logged and all messages to be retried
|
|||
|
||||
## Message envelope
|
||||
|
||||
|
||||
Dapr pub/sub adheres to [version 1.0 of CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/spec.md).
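For illustration, a published message wrapped in a CloudEvents 1.0 envelope looks roughly like the following — the field values are illustrative, and the `topic`/`pubsubname` extension attributes are assumptions based on how Dapr annotates events:

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "checkout-service",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "topic": "orders",
  "pubsubname": "pubsub",
  "data": {
    "orderId": "100"
  }
}
```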
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
@ -16,15 +16,17 @@ This table is meant to help users understand the equivalent options for running
|
|||
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
|
||||
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
|
||||
| `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` |
|
||||
|
||||
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded |
|
||||
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration resource to use |
|
||||
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
|
||||
||||
| `--dapr-grpc-port` | `--dapr-grpc-port` | | `dapr.io/grpc-port` | Sets the Dapr API gRPC port (default `50001`); all cluster services must use the same port for communication |
|
||||
| `--dapr-http-port` | `--dapr-http-port` | | not supported | HTTP port for the Dapr API to listen on (default `3500`) |
|
||||
| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | **Deprecated** in favor of `--max-body-size`. Increases the request max body size to handle large file uploads using HTTP and gRPC protocols. Default is `4` MB |
|
||||
| `--max-body-size` | not supported | | `dapr.io/max-body-size` | Increases the request max body size to handle large file uploads using HTTP and gRPC protocols. Set the value using size units (e.g., `16Mi` for 16MB). The default is `4Mi` |
|
||||
| `--dapr-http-read-buffer-size` | `--dapr-http-read-buffer-size` | | `dapr.io/http-read-buffer-size` | **Deprecated** in favor of `--read-buffer-size`. Increases the max size of the HTTP header read buffer in KB to support larger header values, for example `16` to support headers up to 16KB. Default is `16` for 16KB |
|
||||
| `--read-buffer-size` | not supported | | `dapr.io/read-buffer-size` | Increases the max size of the HTTP header read buffer in KB to support larger header values. Set the value using size units, for example `32Ki` to support headers up to 32KB. Default is `4` for 4KB |
|
||||
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is `daprio/daprd:latest`. The Dapr sidecar uses this image instead of the latest default image. Use this when building your own custom image of Dapr and/or [using an alternative stable Dapr image]({{< ref "support-release-policy.md#build-variations" >}}) |
|
||||
|
||||
| `--internal-grpc-port` | not supported | | `dapr.io/internal-grpc-port` | Sets the internal Dapr gRPC port (default `50002`); all cluster services must use the same port for communication |
|
||||
| `--enable-metrics` | not supported | | configuration spec | Enable [prometheus metric]({{< ref prometheus >}}) (default true) |
|
||||
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
|
||||
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | [Enable profiling]({{< ref profiling-debugging >}}) |
|
||||
|
@ -67,6 +69,7 @@ This table is meant to help users understand the equivalent options for running
|
|||
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`|
|
||||
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`|
|
||||
| not supported | not supported | | `dapr.io/env` | List of environment variables to be injected into the sidecar. Strings consisting of key=value pairs separated by a comma.|
|
||||
| not supported | not supported | | `dapr.io/env-from-secret` | List of environment variables to be injected into the sidecar from secret. Strings consisting of `"key=secret-name:secret-key"` pairs are separated by a comma. |
|
||||
| not supported | not supported | | `dapr.io/volume-mounts` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-only mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
|
||||
| not supported | not supported | | `dapr.io/volume-mounts-rw` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-write mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
|
||||
| `--disable-builtin-k8s-secret-store` | not supported | | `dapr.io/disable-builtin-k8s-secret-store` | Disables BuiltIn Kubernetes secret store. Default value is false. See [Kubernetes secret store component]({{< ref "kubernetes-secret-store.md" >}}) for details. |
|
||||
|
|
|
@ -63,6 +63,10 @@ This component supports **output binding** with the following operations:
|
|||
- `delete` : [Delete blob](#delete-blob)
|
||||
- `list`: [List blobs](#list-blobs)
|
||||
|
||||
The Blob storage component's **input binding** triggers and pushes events using [Azure Event Grid]({{< ref eventgrid.md >}}).
|
||||
|
||||
Refer to the [Reacting to Blob storage events](https://learn.microsoft.com/azure/storage/blobs/storage-blob-event-overview) guide for more set up and more information.
|
||||
|
||||
### Create blob
|
||||
|
||||
To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
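A minimal body, mirroring the create operations in the other binding references — the `blobName` metadata key shown here is an assumption (when omitted, a name is typically auto-generated):

```json
{
  "operation": "create",
  "data": "YOUR_CONTENT",
  "metadata": {
    "blobName": "myblob.txt"
  }
}
```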
|
||||
|
|
|
@ -90,6 +90,21 @@ This component supports **output binding** with the following operations:
|
|||
|
||||
- `create`: publishes a message on the Event Grid topic
|
||||
|
||||
## Receiving events
|
||||
|
||||
You can use the Event Grid binding to receive events from a variety of sources and actions. [Learn more about all of the available event sources and handlers that work with Event Grid.](https://learn.microsoft.com/azure/event-grid/overview)
|
||||
|
||||
In the following table, you can find the list of Dapr components that can raise events.
|
||||
|
||||
| Event sources | Dapr components |
|
||||
| ------------- | --------------- |
|
||||
| [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/) | [Azure Blob Storage binding]({{< ref blobstorage.md >}}) <br/>[Azure Blob Storage state store]({{< ref setup-azure-blobstorage.md >}}) |
|
||||
| [Azure Cache for Redis](https://learn.microsoft.com/azure/azure-cache-for-redis/cache-overview) | [Redis binding]({{< ref redis.md >}}) <br/>[Redis pub/sub]({{< ref setup-redis-pubsub.md >}}) |
|
||||
| [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/event-hubs-about) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
|
||||
| [Azure IoT Hub](https://learn.microsoft.com/azure/iot-hub/iot-concepts-and-iot-hub) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
|
||||
| [Azure Service Bus](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) | [Azure Service Bus binding]({{< ref servicebusqueues.md >}}) <br/>[Azure Service Bus pub/sub topics]({{< ref setup-azure-servicebus-topics.md >}}) and [queues]({{< ref setup-azure-servicebus-queues.md >}}) |
|
||||
| [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) | [SignalR binding]({{< ref signalr.md >}}) |
|
||||
|
||||
## Microsoft Entra ID credentials
|
||||
|
||||
The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:
|
||||
|
@ -142,7 +157,7 @@ Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
|
|||
|
||||
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
|
||||
|
||||
|
||||
## Testing locally
|
||||
|
||||
- Install [ngrok](https://ngrok.com/download)
|
||||
- Run locally using a custom port, for example `9000`, for handshakes
|
||||
|
@ -160,7 +175,7 @@ ngrok http --host-header=localhost 9000
|
|||
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
|
||||
```
|
||||
|
||||
|
||||
## Testing on Kubernetes
|
||||
|
||||
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
|
||||
|
||||
|
|
|
@ -36,6 +36,8 @@ spec:
|
|||
value: "namespace"
|
||||
- name: enableEntityManagement
|
||||
value: "false"
|
||||
- name: enableInOrderMessageDelivery
|
||||
value: "false"
|
||||
# The following four properties are needed only if enableEntityManagement is set to true
|
||||
- name: resourceGroupName
|
||||
value: "test-rg"
|
||||
|
@ -71,7 +73,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| `eventHub` | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Microsoft Entra ID authentication or if the connection string doesn't contain an `EntityPath` value | `mytopic` |
|
||||
| `connectionString` | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
|
||||
| `eventHubNamespace` | Y* | Input/Output | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
|
||||
|
||||
| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"`
|
||||
| `enableInOrderMessageDelivery` | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
|
||||
| `resourceGroupName` | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
|
||||
| `subscriptionID` | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
|
||||
| `partitionCount` | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
|
||||
|
|
|
@ -0,0 +1,231 @@
|
|||
---
|
||||
type: docs
|
||||
title: "SFTP binding spec"
|
||||
linkTitle: "SFTP"
|
||||
description: "Detailed documentation on the Secure File Transfer Protocol (SFTP) binding component"
|
||||
aliases:
|
||||
- "/operations/components/setup-bindings/supported-bindings/sftp/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{< ref bindings-overview.md >}}) on how to create and apply a binding configuration.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: bindings.sftp
|
||||
version: v1
|
||||
metadata:
|
||||
- name: rootPath
|
||||
value: "<string>"
|
||||
- name: address
|
||||
value: "<string>"
|
||||
- name: username
|
||||
value: "<string>"
|
||||
- name: password
|
||||
value: "*****************"
|
||||
- name: privateKey
|
||||
value: "*****************"
|
||||
- name: privateKeyPassphrase
|
||||
value: "*****************"
|
||||
- name: hostPublicKey
|
||||
value: "*****************"
|
||||
- name: knownHostsFile
|
||||
value: "<string>"
|
||||
- name: insecureIgnoreHostKey
|
||||
value: "<bool>"
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|------------|-----|---------|
|
||||
| `rootPath` | Y | Output | Root path for default working directory | `"/path"` |
|
||||
| `address` | Y | Output | Address of SFTP server | `"localhost:22"` |
|
||||
| `username` | Y | Output | Username for authentication | `"username"` |
|
||||
| `password` | N | Output | Password for username/password authentication | `"password"` |
|
||||
| `privateKey` | N | Output | Private key for public key authentication | <pre>"\|-<br>-----BEGIN OPENSSH PRIVATE KEY-----<br>*****************<br>-----END OPENSSH PRIVATE KEY-----"</pre> |
|
||||
| `privateKeyPassphrase` | N | Output | Private key passphrase for public key authentication | `"passphrase"` |
|
||||
| `hostPublicKey` | N | Output | Host public key for host validation | `"ecdsa-sha2-nistp256 *** root@openssh-server"` |
|
||||
| `knownHostsFile` | N | Output | Known hosts file for host validation | `"/path/file"` |
|
||||
| `insecureIgnoreHostKey` | N | Output | Allows skipping host validation. Defaults to `"false"` | `"true"`, `"false"` |
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `create` : [Create file](#create-file)
|
||||
- `get` : [Get file](#get-file)
|
||||
- `list` : [List files](#list-files)
|
||||
- `delete` : [Delete file](#delete-file)
|
||||
|
||||
### Create file
|
||||
|
||||
To perform a create file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "create",
|
||||
"data": "<YOUR_BASE_64_CONTENT>",
|
||||
"metadata": {
|
||||
"fileName": "<filename>"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Windows Linux >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
|
||||
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
#### Response
|
||||
|
||||
The response body contains the following JSON:
|
||||
|
||||
```json
|
||||
{
|
||||
"fileName": "<filename>"
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
### Get file
|
||||
|
||||
To perform a get file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "get",
|
||||
"metadata": {
|
||||
"fileName": "<filename>"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Windows Linux >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d "{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
|
||||
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
#### Response
|
||||
|
||||
The response body contains the value stored in the file.
|
||||
|
||||
### List files
|
||||
|
||||
To perform a list files operation, invoke the SFTP binding with a `POST` method and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "list"
|
||||
}
|
||||
```
|
||||
|
||||
If you only want to list the files beneath a particular directory below the `rootPath`, specify the relative directory name as the `fileName` in the metadata.
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "list",
|
||||
"metadata": {
|
||||
"fileName": "my/cool/directory"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Windows Linux >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d "{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
|
||||
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
#### Response
|
||||
|
||||
The response is a JSON array of file names.
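For example (illustrative file names):

```json
[
  "file1.txt",
  "file2.txt"
]
```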
|
||||
|
||||
### Delete file
|
||||
|
||||
To perform a delete file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "delete",
|
||||
"metadata": {
|
||||
"fileName": "myfile"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Windows Linux >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
|
||||
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
#### Response
|
||||
|
||||
An HTTP 204 (No Content) and an empty body are returned if successful.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Bindings building block]({{< ref bindings >}})
|
||||
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
|
||||
- [Bindings API reference]({{< ref bindings_api.md >}})
|
|
@ -53,6 +53,12 @@ spec:
|
|||
value: 2.0.0
|
||||
- name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
|
||||
value: "true"
|
||||
- name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
|
||||
value: 1
|
||||
- name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
|
||||
value: 2097152
|
||||
- name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
|
||||
value: 512
|
||||
- name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
|
||||
value: http://localhost:8081
|
||||
- name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
|
||||
|
@ -111,7 +117,9 @@ spec:
|
|||
| schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | `5m` |
|
||||
| clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to `9m`. | `"4m"` |
|
||||
| clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | `"4m"` |
|
||||
| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is `1`, as `0` causes the consumer to spin when no messages are available. Equivalent to the JVM's `fetch.min.bytes`. | `"2"` |
|
||||
| consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` |
|
||||
| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to `256`. | `"512"` |
|
||||
| heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` |
|
||||
| sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` |
|
||||
| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
|
||||
|
@ -460,7 +468,7 @@ Apache Kafka supports the following bulk metadata options:
|
|||
|
||||
When invoking the Kafka pub/sub, it's possible to provide an optional partition key by using the `metadata` query parameter in the request URL.
|
||||
|
||||
|
||||
The param name can be either `partitionKey` or `__key`.
|
||||
|
||||
Example:
|
||||
|
||||
|
@ -476,7 +484,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti
|
|||
|
||||
### Message headers
|
||||
|
||||
|
||||
All other metadata key/value pairs (that are not `partitionKey` or `__key`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.
|
||||
|
||||
```shell
|
||||
curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1 \
|
||||
|
@ -487,7 +495,51 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla
|
|||
}
|
||||
}'
|
||||
```
|
||||
### Kafka pub/sub special message headers received on the consumer side
|
||||
|
||||
When consuming messages, special message metadata is automatically passed as headers. These are:
|
||||
- `__key`: the message key if available
|
||||
- `__topic`: the topic for the message
|
||||
- `__partition`: the partition number for the message
|
||||
- `__offset`: the offset of the message in the partition
|
||||
- `__timestamp`: the timestamp for the message
|
||||
|
||||
You can access them within the consumer endpoint as follows:
|
||||
{{< tabs "Python (FastAPI)" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
from typing import Annotated
from fastapi import APIRouter, Body, FastAPI, Header, Response, status
|
||||
|
||||
app = FastAPI()
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
|
||||
@router.get('/dapr/subscribe')
|
||||
def subscribe():
|
||||
subscriptions = [{'pubsubname': 'pubsub',
|
||||
'topic': 'my-topic',
|
||||
'route': 'my_topic_subscriber',
|
||||
}]
|
||||
return subscriptions
|
||||
|
||||
@router.post('/my_topic_subscriber')
|
||||
def my_topic_subscriber(
|
||||
key: Annotated[str, Header(alias="__key")],
|
||||
offset: Annotated[int, Header(alias="__offset")],
|
||||
event_data=Body()):
|
||||
print(f"key={key} - offset={offset} - data={event_data}", flush=True)
|
||||
return Response(status_code=status.HTTP_200_OK)
|
||||
|
||||
app.include_router(router)
|
||||
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
## Receiving message headers with special characters

The consumer application may need to receive message headers that include special characters, which can cause HTTP protocol validation errors.
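If such header values arrive percent-encoded (URL-escaped) to keep them HTTP-safe, the consumer can decode them before use. A minimal sketch, assuming the producer side escaped the value:

```python
from urllib.parse import unquote

def decode_header(value: str) -> str:
    """Percent-decode a header value that was URL-escaped on the producer side."""
    return unquote(value)

# A raw header value containing characters that are invalid in HTTP headers
escaped = "hello%20%2F%20%22world%22"
print(decode_header(escaped))  # hello / "world"
```

Whether escaping is applied at all depends on the component configuration, so treat this decoding step as conditional on your setup.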
@ -33,6 +33,8 @@ spec:
      value: "channel1"
    - name: enableEntityManagement
      value: "false"
    - name: enableInOrderMessageDelivery
      value: "false"
    # The following four properties are needed only if enableEntityManagement is set to true
    - name: resourceGroupName
      value: "test-rg"
@ -65,11 +67,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`). [See all template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"`
| `enableInOrderMessageDelivery` | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `storageAccountName` | Y | Storage account name to use for the checkpoint store. | `"myeventhubstorage"`
| `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`
| `storageContainerName` | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
| `resourceGroupName` | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
@ -134,6 +134,14 @@
  features:
    input: true
    output: false
- component: SFTP
  link: sftp
  state: Alpha
  version: v1
  since: "1.15"
  features:
    input: false
    output: true
- component: SMTP
  link: smtp
  state: Alpha
@ -1,8 +1,8 @@
- component: AWS Secrets Manager
  link: aws-secret-manager
  state: Beta
  version: v1
  since: "1.15"
- component: AWS SSM Parameter Store
  link: aws-parameter-store
  state: Alpha
@ -1 +1 @@
Subproject commit b8e276728935c66b0a335b5aa2ca4102c560dd3d
Subproject commit 03038fa519670b583eabcef1417eacd55c3e44c8

@ -1 +1 @@
Subproject commit 7c03c7ce58d100a559ac1881bc0c80d6dedc5ab9
Subproject commit dd9a2d5a3c4481b8a6bda032df8f44f5eaedb370

@ -1 +1 @@
Subproject commit a98327e7d9a81611b0d7e91e59ea23ad48271948
Subproject commit 0b7a051b79c7a394e9bd4f57bd40778fb5f29897

@ -1 +1 @@
Subproject commit 7350742b6869cc166633d1f4d17d76fbdbb12921
Subproject commit 76866c878a6e79bb889c83f3930172ddb20f1624

@ -1 +1 @@
Subproject commit 64a4f2f6658e9023e8ea080eefdb019645cae802
Subproject commit 6e90e84b166ac7ea603b78894e9e1b92dc456014