mirror of https://github.com/dapr/docs.git
Merge branch 'v1.11' into workflow-review
commit da873b5f2e
@ -28,7 +28,7 @@ In this guide, you'll:
Currently, you can experience the Dapr Workflow using the .NET SDK.

{{< tabs ".NET" >}}
{{< tabs ".NET" "Python" >}}

<!-- .NET -->
{{% codetab %}}
@ -254,8 +254,234 @@ The `Activities` directory holds the four workflow activities used by the workfl
- `ProcessPaymentActivity.cs`
- `UpdateInventoryActivity.cs`

{{% /codetab %}}

<!-- Python -->
{{% codetab %}}

### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows).

```bash
git clone https://github.com/dapr/quickstarts.git
```

In a new terminal window, navigate to the `order-processor` directory:

```bash
cd workflows/python/sdk/order-processor
```

Install the Dapr Python SDK package:

```bash
pip3 install -r requirements.txt
```

### Step 3: Run the order processor app

In the terminal, start the order processor app alongside a Dapr sidecar:

```bash
dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py
```

> **Note:** Since `python3.exe` is not defined on Windows, you may need to use `python app.py` instead of `python3 app.py`.

This starts the `order-processor` app with a unique workflow ID and runs the workflow activities.

Expected output:

```bash
== APP == Starting order workflow, purchasing 10 of cars
== APP == 2023-06-06 09:35:52.945 durabletask-worker INFO: Successfully connected to 127.0.0.1:65406. Waiting for work items...
== APP == INFO:NotifyActivity:Received order f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars at $165000 !
== APP == INFO:VerifyInventoryActivity:Verifying inventory for order f4e1926e-3721-478d-be8a-f5bebd1995da of 10 cars
== APP == INFO:VerifyInventoryActivity:There are 100 Cars available for purchase
== APP == INFO:RequestApprovalActivity:Requesting approval for payment of 165000 USD for 10 cars
== APP == 2023-06-06 09:36:05.969 durabletask-worker INFO: f4e1926e-3721-478d-be8a-f5bebd1995da Event raised: manager_approval
== APP == INFO:NotifyActivity:Payment for order f4e1926e-3721-478d-be8a-f5bebd1995da has been approved!
== APP == INFO:ProcessPaymentActivity:Processing payment: f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars at 165000 USD
== APP == INFO:ProcessPaymentActivity:Payment for request ID f4e1926e-3721-478d-be8a-f5bebd1995da processed successfully
== APP == INFO:UpdateInventoryActivity:Checking inventory for order f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars
== APP == INFO:UpdateInventoryActivity:There are now 90 cars left in stock
== APP == INFO:NotifyActivity:Order f4e1926e-3721-478d-be8a-f5bebd1995da has completed!
== APP == 2023-06-06 09:36:06.106 durabletask-worker INFO: f4e1926e-3721-478d-be8a-f5bebd1995da: Orchestration completed with status: COMPLETED
== APP == Workflow completed! Result: Completed
== APP == Purchase of item is Completed
```

### (Optional) Step 4: View in Zipkin

If you have Zipkin configured for Dapr locally on your machine, you can view the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
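
If tracing isn't already enabled in your environment, a Dapr Configuration along these lines turns it on. This is a minimal sketch, assuming Zipkin is listening on its default local port `9411`; `dapr init` typically creates an equivalent default configuration for you:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig   # illustrative name
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"
```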

<img src="/images/workflow-trace-spans-zipkin-python.png" width=900 style="padding-bottom:15px;">

### What happened?

When you ran `dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py`:

1. A unique order ID for the workflow is generated (in the above example, `f4e1926e-3721-478d-be8a-f5bebd1995da`) and the workflow is scheduled.
1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
1. The `VerifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
1. Your workflow starts and notifies you of its status.
1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `f4e1926e-3721-478d-be8a-f5bebd1995da` and confirms if successful.
1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
1. The `NotifyActivity` workflow activity sends a notification saying that order `f4e1926e-3721-478d-be8a-f5bebd1995da` has completed.
1. The workflow terminates as completed.

#### `order-processor/app.py`

In the application's program file:
- The unique workflow order ID is generated
- The workflow is scheduled
- The workflow status is retrieved
- The workflow and the workflow activities it invokes are registered

```python
class WorkflowConsoleApp:
    def main(self):
        # Register workflow and activities
        workflowRuntime = WorkflowRuntime(settings.DAPR_RUNTIME_HOST, settings.DAPR_GRPC_PORT)
        workflowRuntime.register_workflow(order_processing_workflow)
        workflowRuntime.register_activity(notify_activity)
        workflowRuntime.register_activity(requst_approval_activity)
        workflowRuntime.register_activity(verify_inventory_activity)
        workflowRuntime.register_activity(process_payment_activity)
        workflowRuntime.register_activity(update_inventory_activity)
        workflowRuntime.start()

        print("==========Begin the purchase of item:==========", flush=True)
        item_name = default_item_name
        order_quantity = 10

        total_cost = int(order_quantity) * baseInventory[item_name].per_item_cost
        order = OrderPayload(item_name=item_name, quantity=int(order_quantity), total_cost=total_cost)

        # Start Workflow
        print(f'Starting order workflow, purchasing {order_quantity} of {item_name}', flush=True)
        start_resp = daprClient.start_workflow(workflow_component=workflow_component,
                                               workflow_name=workflow_name,
                                               input=order)
        _id = start_resp.instance_id

        def prompt_for_approval(daprClient: DaprClient):
            daprClient.raise_workflow_event(instance_id=_id, workflow_component=workflow_component,
                                            event_name="manager_approval", event_data={'approval': True})

        approval_seeked = False
        start_time = datetime.now()
        while True:
            time_delta = datetime.now() - start_time
            state = daprClient.get_workflow(instance_id=_id, workflow_component=workflow_component)
            if not state:
                print("Workflow not found!")  # not expected
            elif state.runtime_status == "Completed" or\
                    state.runtime_status == "Failed" or\
                    state.runtime_status == "Terminated":
                print(f'Workflow completed! Result: {state.runtime_status}', flush=True)
                break
            if time_delta.total_seconds() >= 10:
                state = daprClient.get_workflow(instance_id=_id, workflow_component=workflow_component)
                if total_cost > 50000 and (
                    state.runtime_status != "Completed" or
                    state.runtime_status != "Failed" or
                    state.runtime_status != "Terminated"
                ) and not approval_seeked:
                    approval_seeked = True
                    threading.Thread(target=prompt_for_approval(daprClient), daemon=True).start()

        print("Purchase of item is ", state.runtime_status, flush=True)

    def restock_inventory(self, daprClient: DaprClient, baseInventory):
        for key, item in baseInventory.items():
            print(f'item: {item}')
            item_str = f'{{"name": "{item.item_name}", "quantity": {item.quantity},\
                        "per_item_cost": {item.per_item_cost}}}'
            daprClient.save_state("statestore-actors", key, item_str)

if __name__ == '__main__':
    app = WorkflowConsoleApp()
    app.main()
```

#### `order-processor/workflow.py`

In `workflow.py`, the workflow is defined as a function, along with all of its associated tasks (the workflow activities it calls).

```python
def order_processing_workflow(ctx: DaprWorkflowContext, order_payload_str: OrderPayload):
    """Defines the order processing workflow.
    When the order is received, the inventory is checked to see if there is enough inventory to
    fulfill the order. If there is enough inventory, the payment is processed and the inventory is
    updated. If there is not enough inventory, the order is rejected.
    If the total order is greater than $50,000, the order is sent to a manager for approval.
    """
    order_id = ctx.instance_id
    order_payload = json.loads(order_payload_str)
    yield ctx.call_activity(notify_activity,
                            input=Notification(message=('Received order ' + order_id + ' for '
                                                        + f'{order_payload["quantity"]}' + ' ' + f'{order_payload["item_name"]}'
                                                        + ' at $' + f'{order_payload["total_cost"]}' + ' !')))
    result = yield ctx.call_activity(verify_inventory_activity,
                                     input=InventoryRequest(request_id=order_id,
                                                            item_name=order_payload["item_name"],
                                                            quantity=order_payload["quantity"]))
    if not result.success:
        yield ctx.call_activity(notify_activity,
                                input=Notification(message='Insufficient inventory for '
                                                           + f'{order_payload["item_name"]}' + '!'))
        return OrderResult(processed=False)

    if order_payload["total_cost"] > 50000:
        yield ctx.call_activity(requst_approval_activity, input=order_payload)
        approval_task = ctx.wait_for_external_event("manager_approval")
        timeout_event = ctx.create_timer(timedelta(seconds=200))
        winner = yield when_any([approval_task, timeout_event])
        if winner == timeout_event:
            yield ctx.call_activity(notify_activity,
                                    input=Notification(message='Payment for order ' + order_id
                                                               + ' has been cancelled due to timeout!'))
            return OrderResult(processed=False)
        approval_result = yield approval_task
        if approval_result["approval"]:
            yield ctx.call_activity(notify_activity, input=Notification(
                message=f'Payment for order {order_id} has been approved!'))
        else:
            yield ctx.call_activity(notify_activity, input=Notification(
                message=f'Payment for order {order_id} has been rejected!'))
            return OrderResult(processed=False)

    yield ctx.call_activity(process_payment_activity, input=PaymentRequest(
        request_id=order_id, item_being_purchased=order_payload["item_name"],
        amount=order_payload["total_cost"], quantity=order_payload["quantity"]))

    try:
        yield ctx.call_activity(update_inventory_activity,
                                input=PaymentRequest(request_id=order_id,
                                                     item_being_purchased=order_payload["item_name"],
                                                     amount=order_payload["total_cost"],
                                                     quantity=order_payload["quantity"]))
    except Exception:
        yield ctx.call_activity(notify_activity,
                                input=Notification(message=f'Order {order_id} Failed!'))
        return OrderResult(processed=False)

    yield ctx.call_activity(notify_activity, input=Notification(
        message=f'Order {order_id} has completed!'))
    return OrderResult(processed=True)
```
{{% /codetab %}}
@ -6,14 +6,18 @@ weight: 4500
description: "Choose which Dapr sidecar APIs are available to the app"
---

In certain scenarios such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.
In certain scenarios, such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.

Dapr allows developers to control which APIs are accessible to the application by setting an API allow list using a [Dapr Configuration]({{<ref "configuration-overview.md">}}).
Dapr allows developers to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{<ref "configuration-overview.md">}}).

### Default behavior

If an API allow list section is not specified, the default behavior is to allow access to all Dapr APIs.
Once an allow list is set, only the specified APIs are accessible.
If no API allowlist or denylist is specified, the default behavior is to allow access to all Dapr APIs.

- If only a denylist is defined, all Dapr APIs are allowed except those defined in the denylist
- If only an allowlist is defined, only the Dapr APIs listed in the allowlist are allowed
- If both an allowlist and a denylist are defined, the allowed APIs are those defined in the allowlist, unless they are also included in the denylist. In other words, the denylist overrides the allowlist for APIs that are defined in both.
- If neither is defined, all APIs are allowed.

For example, the following configuration enables all APIs for both HTTP and gRPC:
@ -28,9 +32,11 @@ spec:
    samplingRate: "1"
```

### Enabling specific HTTP APIs
### Using an allowlist

The following example enables the state `v1.0` HTTP API and block all the rest:
#### Enabling specific HTTP APIs

The following example enables the state `v1.0` HTTP API and blocks all other HTTP APIs:

```yaml
apiVersion: dapr.io/v1alpha1
@ -41,14 +47,14 @@ metadata:
spec:
  api:
    allowed:
      - name: state
        version: v1.0
        protocol: http
```

### Enabling specific gRPC APIs
#### Enabling specific gRPC APIs

The following example enables the state `v1` gRPC API and block all the rest:
The following example enables the state `v1` gRPC API and blocks all other gRPC APIs:

```yaml
apiVersion: dapr.io/v1alpha1
@ -59,9 +65,47 @@ metadata:
spec:
  api:
    allowed:
      - name: state
        version: v1
        protocol: grpc
```

### Using a denylist

#### Disabling specific HTTP APIs

The following example disables the state `v1.0` HTTP API, allowing all other HTTP APIs:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    denied:
      - name: state
        version: v1.0
        protocol: http
```

#### Disabling specific gRPC APIs

The following example disables the state `v1` gRPC API, allowing all other gRPC APIs:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    denied:
      - name: state
        version: v1
        protocol: grpc
```
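
#### Combining an allowlist and a denylist

Both sections can appear in the same configuration; per the rules above, the denylist overrides the allowlist for any API listed in both. The following sketch (an illustration, not one of the examples above) allows the state and pub/sub HTTP APIs but also denies the state API, so only the pub/sub HTTP API remains accessible:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  api:
    allowed:
      - name: state
        version: v1.0
        protocol: http
      - name: publish
        version: v1.0
        protocol: http
    denied:
      # overrides the allowlist entry for state, per the precedence rules
      - name: state
        version: v1.0
        protocol: http
```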

### List of Dapr APIs
@ -70,12 +114,18 @@ The `name` field takes the name of the Dapr API you would like to enable.

See this list of values corresponding to the different Dapr APIs:

| Name | Dapr API |
| ------------- | ------------- |
| state | [State]({{< ref state_api.md>}})|
| invoke | [Service Invocation]({{< ref service_invocation_api.md >}}) |
| secrets | [Secrets]({{< ref secrets_api.md >}})|
| bindings | [Output Bindings]({{< ref bindings_api.md >}}) |
| publish | [Pub/Sub]({{< ref pubsub.md >}}) |
| actors | [Actors]({{< ref actors_api.md >}}) |
| metadata | [Metadata]({{< ref metadata_api.md >}}) |

| API group | HTTP API | [gRPC API](https://github.com/dapr/dapr/blob/master/pkg/grpc/endpoints.go) |
| ----- | ----- | ----- |
| [Service Invocation]({{< ref service_invocation_api.md >}}) | `invoke` (`v1.0`) | `invoke` (`v1`) |
| [State]({{< ref state_api.md>}})| `state` (`v1.0` and `v1.0-alpha1`) | `state` (`v1` and `v1alpha1`) |
| [Pub/Sub]({{< ref pubsub.md >}}) | `publish` (`v1.0` and `v1.0-alpha1`) | `publish` (`v1` and `v1alpha1`) |
| [(Output) Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) |
| [Secrets]({{< ref secrets_api.md >}})| `secrets` (`v1.0`) | `secrets` (`v1`) |
| [Actors]({{< ref actors_api.md >}}) | `actors` (`v1.0`) |`actors` (`v1`) |
| [Metadata]({{< ref metadata_api.md >}}) | `metadata` (`v1.0`) |`metadata` (`v1`) |
| [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) |
| [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)<br/>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br/>`unlock` (`v1alpha1`) |
| Cryptography | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0-alpha1`) |`workflows` (`v1alpha1`) |
| [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a |
| Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) |
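
Each `allowed` or `denied` entry names a single version, so enabling both the stable and alpha endpoints of an API takes one entry per version. A sketch (illustrative only) for the HTTP state API:

```yaml
spec:
  api:
    allowed:
      - name: state
        version: v1.0         # stable state endpoints
        protocol: http
      - name: state
        version: v1.0-alpha1  # alpha state endpoints
        protocol: http
```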
@ -75,4 +75,4 @@ By default, tailing is set to /var/log/containers/*.log. To change this setting,
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [New Relic Logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/)
@ -40,4 +40,4 @@ This document explains how to install it in your cluster, either using a Helm ch
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [New Relic Prometheus OpenMetrics Integration](https://github.com/newrelic/helm-charts/tree/master/charts/nri-prometheus)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/)
@ -101,7 +101,7 @@ And the exact same dashboard templates from Dapr can be imported to visualize Da

## New Relic Alerts

All the data collected from Dapr, Kubernetes, or any services that run on top of them can be used to set up alerts and notifications in the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/).
All the data collected from Dapr, Kubernetes, or any services that run on top of them can be used to set up alerts and notifications in the preferred channel of your choice. See [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/).

## Related Links/References
@ -111,4 +111,4 @@ All the data that is collected from Dapr, Kubernetes or any services that run on
* [New Relic Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/introduction-trace-api/)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/alerts-ai-transition-guide-2022/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence/overview/)
@ -67,6 +67,10 @@ spec:
| oidcClientID | N | Input/Output | The OAuth2 client ID that has been provisioned in the identity provider. Required when `authType` is set to `oidc` | `dapr-kafka` |
| oidcClientSecret | N | Input/Output | The OAuth2 client secret that has been provisioned in the identity provider. Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Input/Output | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| version | N | Input/Output | Kafka cluster version. Defaults to 2.0.0. Note that this must be set to `1.0.0` when using Azure EventHubs with Kafka. | `1.0.0` |

#### Note
The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka.
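
For example, a Kafka binding component pointed at an Azure EventHubs Kafka endpoint carries the setting as a `metadata` entry. This is a sketch only; the component name, broker address, topic, and consumer group values are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-binding   # illustrative name
spec:
  type: bindings.kafka
  version: v1
  metadata:
    - name: brokers
      value: "my-namespace.servicebus.windows.net:9093"   # illustrative EventHubs Kafka endpoint
    - name: topics
      value: "topic1"
    - name: consumerGroup
      value: "group1"
    - name: version   # must be 1.0.0 for Azure EventHubs with Kafka
      value: "1.0.0"
```
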
## Binding support
@ -65,7 +65,7 @@ spec:
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048`
| consumeRetryInterval | N | The interval between retries when attempting to consume topics. Treats numbers without suffix as milliseconds. Defaults to 100ms. | `200ms` |
| consumeRetryEnabled | N | Disable consume retry by setting `"false"` | `"true"`, `"false"` |
| version | N | Kafka cluster version. Defaults to 2.0.0.0 | `0.10.2.0` |
| version | N | Kafka cluster version. Defaults to 2.0.0. Note that this must be set to `1.0.0` if you are using Azure EventHubs with Kafka. | `0.10.2.0` |
| caCert | N | Certificate authority certificate, required for using TLS. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | N | Client certificate, required for `authType` `mtls`. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | N | Client key, required for `authType` `mtls`. Can be `secretKeyRef` to use a secret reference | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
@ -78,6 +78,9 @@ spec:

The `secretKeyRef` above references a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the TLS information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component.

#### Note
The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka.
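
In a `pubsub.kafka` component, this is a single `metadata` entry; a sketch (the broker address is illustrative):

```yaml
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: "my-namespace.servicebus.windows.net:9093"   # illustrative EventHubs Kafka endpoint
    - name: version   # must be 1.0.0 for Azure EventHubs with Kafka
      value: "1.0.0"
```
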
### Authentication

Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the `authRequired` field has

Binary file not shown.
After Width: | Height: | Size: 181 KiB