Merge branch 'resiliency_docs' of github.com:greenie-msft/docs into resiliency_docs

@@ -19,7 +19,6 @@ Fork the [docs repository](https://github.com/dapr/docs) to work on any changes

Follow the instructions in the repository [README.md](https://github.com/dapr/docs/blob/master/README.md#environment-setup) to install Hugo locally and build the docs website.

## Branch guidance

The Dapr docs repository handles branching differently than most code repositories. Instead of having a `master` or `main` branch, every branch is labeled to match the major and minor version of a runtime release. For the full list, visit the [Docs repo](https://github.com/dapr/docs#branch-guidance).

Overall, all updates should go into the docs branch for the latest release of Dapr. You can find this directly at [https://github.com/dapr/docs](https://github.com/dapr/docs), as the latest release will be the default branch. For any docs changes that are applicable to a release candidate or a pre-release version of the docs, make your changes in that particular branch.
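
For example, a hypothetical workflow (the fork URL and the `v1.7` branch name are illustrative; check the repo for the current default branch):

```bash
# Clone your fork, then branch off the latest release branch
git clone https://github.com/<your-username>/docs.git
cd docs
git checkout -b my-docs-fix origin/v1.7
```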

@@ -36,6 +35,16 @@ These conventions should be followed throughout all Dapr documentation to ensure

- **Assume a new developer audience** - Some obvious steps can seem hard. For example: "Now set an environment variable Dapr to a value X." It is better to give the reader the explicit command to do this, rather than having them figure it out (see the example after this list).
- **Use present tense** - Avoid sentences like "this command will install redis", which implies the action is in the future. Instead use "This command installs redis", which is in the present tense.
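
For instance, a sketch of that convention (the variable name is illustrative, not a real Dapr setting):

```bash
# Give the explicit, copy-paste-able command rather than a prose instruction
export DAPR_EXAMPLE_SETTING="X"
```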

## Diagrams and images

You are strongly encouraged to create diagrams and images wherever possible for documentation pages. All diagrams are kept in a Dapr Diagrams Deck, which has guidance on style and icons. The diagram images are saved as PNG files in the [images folder](/images).

Diagrams should be:

- Saved as PNG files with a high resolution.
- Named using the convention of a concept or building block so that they are grouped. For example, `service-invocation-overview.png`. Also see the Images guidance section below.
- Added to the correct section in the `Dapr-Diagrams.pptx` deck so that they can be amended and updated.

{{< button text="Download the Dapr Diagrams Deck" link="/presentations/Dapr-Diagrams.zip" >}}

## Contributing a new docs page

- Make sure the documentation you are writing is in the correct place in the hierarchy.
- Avoid creating new sections where possible; there is a good chance a proper place in the docs hierarchy already exists.

@@ -50,7 +59,6 @@ These conventions should be followed throughout all Dapr documentation to ensure

- Where possible, reference a practical How-To doc.

### Contributing a new How-To guide

- `How To` articles are meant to provide step-by-step practical guidance to readers who wish to enable a feature, integrate a technology, or use Dapr in a specific scenario.
- Sub-directory naming - the directory name should be descriptive and, if referring to a specific component or concept, should begin with the relevant name. Example: *pubsub-namespaces*.
- Do not assume the reader is using a specific environment unless the article itself is specific to an environment. This includes OS (Windows/Linux/MacOS), deployment target (Kubernetes, IoT, etc.) or programming language. If instructions vary between operating systems, provide guidance for all.

@@ -212,6 +212,40 @@ message SubscribeConfigurationRequest {

Using this method, you can subscribe to changes in specific keys for a given configuration store. gRPC streaming varies widely based on language - see the [gRPC examples here](https://grpc.io/docs/languages/) for usage.

Below are examples in the SDKs:

{{< tabs Python>}}

{{% codetab %}}
```python
#dependencies
import asyncio
from dapr.clients import DaprClient

#code
async def executeConfiguration():
    with DaprClient() as d:
        CONFIG_STORE_NAME = 'configstore'
        key = 'orderId'
        # Subscribe to configuration by key.
        configuration = await d.subscribe_configuration(store_name=CONFIG_STORE_NAME, keys=[key], config_metadata={})
        if configuration is not None:
            items = configuration.get_items()
            for item in items:
                print(f"Subscribe key={item.key} value={item.value} version={item.version}", flush=True)
        else:
            print("Nothing yet")

asyncio.run(executeConfiguration())
```

```bash
dapr run --app-id orderprocessing --components-path components/ -- python3 OrderProcessingService.py
```

{{% /codetab %}}

{{< /tabs >}}

##### Stop watching configuration items

After you have subscribed to watch configuration items, the gRPC-server stream starts. This stream thread does not close itself; you have to do so by explicitly calling the `UnsubscribeConfigurationRequest` API. This method accepts the following request object:

@@ -42,7 +42,7 @@ In this example, RabbitMQ is used for publish and subscribe. Replace `pubsub.yam

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order-pub-sub
spec:
  type: pubsub.rabbitmq
  version: v1
```

@@ -74,7 +74,7 @@ To deploy this into a Kubernetes cluster, fill in the `metadata` connection deta

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: order-pub-sub
  namespace: default
spec:
  type: pubsub.rabbitmq
```

@@ -121,17 +121,17 @@ You can subscribe to a topic using the following Custom Resources Definition (CR

```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  route: /checkout
  pubsubname: order-pub-sub
scopes:
- orderprocessing
- checkout
```

The example above shows an event subscription to topic `orders`, for the pubsub component `order-pub-sub`.
- The `route` field tells Dapr to send all topic messages to the `/checkout` endpoint in the app.
- The `scopes` field enables this subscription for apps with IDs `orderprocessing` and `checkout`.

@@ -216,7 +216,7 @@ namespace CheckoutService.controller

```csharp
public class CheckoutServiceController : Controller
{
    //Subscribe to a topic
    [Topic("order-pub-sub", "orders")]
    [HttpPost("checkout")]
    public void getCheckout([FromBody] int orderId)
    {
```

@@ -252,7 +252,7 @@ public class CheckoutServiceController {

```java
private static final Logger log = LoggerFactory.getLogger(CheckoutServiceController.class);
//Subscribe to a topic
@Topic(name = "orders", pubsubName = "order-pub-sub")
@PostMapping(path = "/checkout")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
    return Mono.fromRunnable(() -> {
```

@@ -287,7 +287,7 @@ import json

```python
app = App()
logging.basicConfig(level = logging.INFO)
#Subscribe to a topic
@app.subscribe(pubsub_name='order-pub-sub', topic='orders')
def mytopic(event: v1.Event) -> None:
    data = json.loads(event.Data())
    logging.info('Subscriber received: ' + str(data))
```

@@ -318,7 +318,7 @@ import (

```go
//code
var sub = &common.Subscription{
	PubsubName: "order-pub-sub",
	Topic:      "orders",
	Route:      "/checkout",
}
```

@@ -373,7 +373,7 @@ async function start(orderId) {

```javascript
    CommunicationProtocolEnum.HTTP
  );
  //Subscribe to a topic
  await server.pubsub.subscribe("order-pub-sub", "orders", async (orderId) => {
    console.log(`Subscriber received: ${JSON.stringify(orderId)}`)
  });
  await server.startServer();
```

@@ -406,21 +406,21 @@ dapr run --app-id orderprocessing --dapr-http-port 3601

Then publish a message to the `orders` topic:

```bash
dapr publish --publish-app-id orderprocessing --pubsub order-pub-sub --topic orders --data '{"orderId": "100"}'
```
{{% /codetab %}}

{{% codetab %}}
Then publish a message to the `orders` topic:
```bash
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders -H "Content-Type: application/json" -d '{"orderId": "100"}'
```
{{% /codetab %}}

{{% codetab %}}
Then publish a message to the `orders` topic:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"orderId": "100"}' -Uri 'http://localhost:3601/v1.0/publish/order-pub-sub/orders'
```
{{% /codetab %}}

@@ -452,7 +452,7 @@ namespace EventService

```csharp
{
    static async Task Main(string[] args)
    {
        string PUBSUB_NAME = "order-pub-sub";
        string TOPIC_NAME = "orders";
        while(true) {
            System.Threading.Thread.Sleep(5000);
```

@@ -501,7 +501,7 @@ public class OrderProcessingServiceApplication {

```java
public static void main(String[] args) throws InterruptedException{
    String MESSAGE_TTL_IN_SECONDS = "1000";
    String TOPIC_NAME = "orders";
    String PUBSUB_NAME = "order-pub-sub";

    while(true) {
        TimeUnit.MILLISECONDS.sleep(5000);
```

@@ -544,7 +544,7 @@ logging.basicConfig(level = logging.INFO)

```python
while True:
    sleep(random.randrange(50, 5000) / 1000)
    orderId = random.randint(1, 1000)
    PUBSUB_NAME = 'order-pub-sub'
    TOPIC_NAME = 'orders'
    with DaprClient() as client:
        #Using Dapr SDK to publish a topic
```

@@ -580,7 +580,7 @@ import (

```go
//code
var (
	PUBSUB_NAME = "order-pub-sub"
	TOPIC_NAME  = "orders"
)
```

@@ -633,7 +633,7 @@ var main = function() {

```javascript
}

async function start(orderId) {
    const PUBSUB_NAME = "order-pub-sub"
    const TOPIC_NAME = "orders"
    const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
    console.log("Published data:" + orderId)
```

@@ -676,21 +676,21 @@ Read about content types [here](#content-types), and about the [Cloud Events mes

{{% codetab %}}
Publish a custom CloudEvent to the `orders` topic:
```bash
dapr publish --publish-app-id orderprocessing --pubsub order-pub-sub --topic orders --data '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}'
```
{{% /codetab %}}

{{% codetab %}}
Publish a custom CloudEvent to the `orders` topic:
```bash
curl -X POST http://localhost:3601/v1.0/publish/order-pub-sub/orders -H "Content-Type: application/cloudevents+json" -d '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}'
```
{{% /codetab %}}

{{% codetab %}}
Publish a custom CloudEvent to the `orders` topic:
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/cloudevents+json' -Body '{"specversion" : "1.0", "type" : "com.dapr.cloudevent.sent", "source" : "testcloudeventspubsub", "subject" : "Cloud Events Test", "id" : "someCloudEventId", "time" : "2021-08-02T09:00:00Z", "datacontenttype" : "application/cloudevents+json", "data" : {"orderId": "100"}}' -Uri 'http://localhost:3601/v1.0/publish/order-pub-sub/orders'
```
{{% /codetab %}}

@@ -0,0 +1,29 @@

---
type: docs
title: "Updating components"
linkTitle: "Updating components"
weight: 250
description: "Updating deployed components used by applications"
---

When making an update to an existing deployed component used by an application, Dapr does not update the component automatically. The Dapr sidecar needs to be restarted in order to pick up the latest version of the component. How this is done depends on the hosting environment.

## Kubernetes

When running in Kubernetes, the process of updating a component involves two steps (see the sketch after this list):

1. Applying the new component YAML to the desired namespace
2. Performing a [rollout restart operation](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources) on your deployments to pick up the latest component
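
As a minimal sketch (the component file and deployment names are illustrative):

```bash
# 1. Apply the updated component YAML to the desired namespace
kubectl apply -f ./my-component.yaml -n default

# 2. Restart the deployment so its Dapr sidecar reloads the component
kubectl rollout restart deployment/myapp -n default
```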

## Self hosted

When running in self hosted mode, the process of updating a component involves a single step of stopping the `daprd` process and starting it again to pick up the latest component.
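
For example, with an app managed through the Dapr CLI (the app ID and run command are illustrative):

```bash
# Stop the app and its daprd sidecar, then start it again
dapr stop --app-id myapp
dapr run --app-id myapp --components-path ./components -- node app.js
```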

## Further reading
- [Components concept]({{< ref components-concept.md >}})
- [Reference secrets in component definitions]({{< ref component-secrets.md >}})
- [Supported state stores]({{< ref supported-state-stores >}})
- [Supported pub/sub brokers]({{< ref supported-pubsub >}})
- [Supported secret stores]({{< ref supported-secret-stores >}})
- [Supported bindings]({{< ref supported-bindings >}})
- [Set component scopes]({{< ref component-scopes.md >}})

@@ -0,0 +1,76 @@

---
type: docs
title: "How-To: Run Dapr in an offline or airgap environment"
linkTitle: "Run in offline or airgap"
weight: 30000
description: "How to deploy and run Dapr in self-hosted mode in an airgap environment"
---

## Overview

By default, Dapr initialization downloads binaries and pulls images from the network to set up the development environment. However, Dapr also supports offline or airgap installation using pre-downloaded artifacts, either with a Docker or slim environment. The artifacts for each Dapr release are built into a [Dapr Installer Bundle](https://github.com/dapr/installer-bundle) which can be downloaded. By using this installer bundle with the Dapr CLI `init` command, you can install Dapr into environments that do not have any network access.

## Setup

Before airgap initialization, download a Dapr Installer Bundle, which contains the CLI, runtime, and dashboard packaged together. This eliminates the need to download binaries, as well as Docker images, when initializing Dapr locally.

1. Download the [Dapr Installer Bundle](https://github.com/dapr/installer-bundle/releases) for the specific release version. For example, `daprbundle_linux_amd64.tar.gz` or `daprbundle_windows_amd64.zip`.
2. Unpack it.
3. To install the Dapr CLI, copy the `daprbundle/dapr` (`dapr.exe` for Windows) binary to the desired location:
   * For Linux/MacOS - `/usr/local/bin`
   * For Windows, create a directory and add it to your System PATH. For example, create a directory called `c:\dapr` and add this directory to your path by editing your system environment variable.

> Note: If the Dapr CLI is not moved to the desired location, you can use the local `dapr` CLI binary in the bundle. The steps above move it to the usual location and add it to the path.
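
For example, on Linux (the version and asset name are illustrative; pick the bundle matching your release and platform):

```bash
# Download and unpack the bundle, then put the CLI on the PATH
wget https://github.com/dapr/installer-bundle/releases/download/v1.7.0/daprbundle_linux_amd64.tar.gz
tar -xzf daprbundle_linux_amd64.tar.gz
sudo cp daprbundle/dapr /usr/local/bin/
```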

## Initialize Dapr environment

Dapr can be initialized in an airgap environment with or without Docker containers.

### Initialize Dapr with Docker

([Prerequisite](#Prerequisites): Docker is available in the environment)

Move to the bundle directory and run the following command:

```bash
dapr init --from-dir .
```

> For Linux users, if you run your Docker commands with sudo, you need to use "**sudo dapr init**".

> If you are not running the above command from the bundle directory, provide the full path to the bundle directory as input. For example, assuming the bundle directory path is `$HOME/daprbundle`, run `dapr init --from-dir $HOME/daprbundle` to have the same behavior.

The output should look similar to the following:
```bash
⌛ Making the jump to hyperspace...
ℹ️ Installing runtime version latest
↘ Extracting binaries and setting up components... Loaded image: daprio/dapr:$version
✅ Extracting binaries and setting up components...
✅ Extracted binaries and completed components set up.
ℹ️ daprd binary has been installed to $HOME/.dapr/bin.
ℹ️ dapr_placement container is running.
ℹ️ Use `docker ps` to check running containers.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
```

> Note: To emulate *online* Dapr initialization using `dapr init`, you can also run Redis and Zipkin containers as follows:

```bash
docker run --name "dapr_zipkin" --restart always -d -p 9411:9411 openzipkin/zipkin
docker run --name "dapr_redis" --restart always -d -p 6379:6379 redislabs/rejson
```

### Initialize Dapr without Docker

Alternatively, to have the CLI not install any default configuration files or run any Docker containers, use the `--slim` flag with the `init` command. Only the Dapr binaries will be installed.

```bash
dapr init --slim --from-dir .
```

The output should look similar to the following:
```bash
⌛ Making the jump to hyperspace...
ℹ️ Installing runtime version latest
↙ Extracting binaries and setting up components...
✅ Extracting binaries and setting up components...
✅ Extracted binaries and completed components set up.
ℹ️ daprd binary has been installed to $HOME/.dapr/bin.
ℹ️ placement binary has been installed to $HOME/.dapr/bin.
✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started
```

@@ -12,7 +12,7 @@ Dapr can be configured to run in self-hosted mode on your local developer machin

## Initialization

Dapr can be initialized [with Docker]({{< ref self-hosted-with-docker.md >}}) (default) or in [slim-init mode]({{< ref self-hosted-no-docker.md >}}). It can also be initialized and run in [offline or airgap environments]({{< ref self-hosted-airgap.md >}}). The default Docker setup provides out of the box functionality with the following containers and configuration:
- A Redis container configured to serve as the default component for both state management and publish/subscribe.
- A Zipkin container for diagnostics and tracing.
- A default Dapr configuration and components installed in `$HOME/.dapr/` (Mac/Linux) or `%USERPROFILE%\.dapr\` (Windows).

@@ -84,6 +84,10 @@ spec:

...
```

## API Logging

API logging enables you to see the API calls from your application to the Dapr sidecar, to debug issues. You can combine Dapr API logging with Dapr log events. See [configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}}) and [configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}}) for more information.

## Log collectors

If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON-formatted logs. This [how-to]({{< ref fluentd.md >}}) shows how to configure Fluentd in your cluster.

@@ -100,3 +104,5 @@ If you are using the Azure Kubernetes Service, you can use [Azure monitor for co

- [How-to: Set up Fluentd, Elasticsearch, and Kibana]({{< ref fluentd.md >}})
- [How-to: Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}})
- [Configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}})
- [Configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}})

@@ -6,16 +6,21 @@ weight: 4500

description: "Configure resiliency policies for timeouts, retries/backoffs and circuit breakers"
---

> Resiliency is currently a preview feature. Before you can utilize resiliency policies, you must first enable the resiliency preview feature.

### Policies

You define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the `targets` section in the resiliency spec.

#### Timeouts

Timeouts can be used to early-terminate long-running operations. If a timeout is exceeded:

- The operation in progress is terminated (if possible).
- An error is returned.

Valid values are of the form `15s`, `2m`, `1h30m`, etc. For example:

```yaml
spec:
  policies:
```

@@ -28,14 +33,17 @@ spec:

#### Retries

With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:

| Retry option | Description |
| ------------ | ----------- |
| `policy` | Determines the back-off and retry interval strategy. Valid values are `constant` and `exponential`. Defaults to `constant`. |
| `duration` | Determines the time interval between retries. Default: `5s`. Only applies to the `constant` `policy`. Valid values are of the form `200ms`, `15s`, `2m`, etc. |
| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off `policy` can grow. Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc. |
| `maxRetries` | The maximum number of retries to attempt. `-1` denotes an indefinite number of retries. Defaults to `-1`. |

The exponential back-off window uses the following formula:

```
BackOffDuration = PreviousBackOffDuration * (Random value from 0.5 to 1.5) * 1.5
if BackOffDuration > maxInterval {
  BackOffDuration = maxInterval
}
```
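
To make the growth concrete, here is a small sketch of the formula in Python (not Dapr source code; the jitter is a uniform random factor between 0.5 and 1.5):

```python
import random

def next_backoff(prev_seconds: float, max_interval: float = 60.0) -> float:
    # BackOffDuration = PreviousBackOffDuration * (random 0.5..1.5) * 1.5, capped at maxInterval
    backoff = prev_seconds * random.uniform(0.5, 1.5) * 1.5
    return min(backoff, max_interval)

duration = 5.0  # e.g. an initial 5s duration
for attempt in range(1, 5):
    duration = next_backoff(duration)
    print(f"retry {attempt}: ~{duration:.1f}s")
```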

Example definitions:

```yaml
spec:
  policies:
```

@@ -62,12 +71,14 @@ spec:

##### Circuit Breakers

Circuit breaker (CB) policies are used when other applications/services/components are experiencing elevated failure rates. CBs monitor the requests and shut off all traffic to the impacted service when a certain criterion is met. By doing this, CBs give the service time to recover from its outage instead of flooding it with events. The CB can also allow partial traffic through to see if the system has healed (half-open state). Once successful requests start to occur, the CB can close and allow traffic to resume.

| Circuit breaker option | Description |
| ---------------------- | ----------- |
| `maxRequests` | The maximum number of requests allowed to pass through when the CB is half-open (recovering from failure). Defaults to `1`. |
| `interval` | The cyclical period of time used by the CB to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
| `timeout` | The period of the open state (directly after failure) until the CB switches to half-open. Defaults to `60s`. |
| `trip` | A Common Expression Language (CEL) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Default is `consecutiveFailures > 5`. |

Example:
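
The concrete YAML falls outside this hunk; as a sketch of what such a definition might look like (the policy name and thresholds are illustrative):

```yaml
spec:
  policies:
    circuitBreakers:
      pubsubCB:
        maxRequests: 1
        interval: 8s
        timeout: 45s
        trip: consecutiveFailures > 8
```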

@@ -6,7 +6,7 @@ weight: 4500

description: "Configure Dapr error retries, timeouts, and circuit breakers"
---

> Resiliency is currently a preview feature. Before you can utilize resiliency policies, you must first [enable the resiliency preview feature]({{< ref preview-features >}}).

## Introduction

@@ -6,16 +6,21 @@ weight: 4500

description: "Apply resiliency policies for apps, components and actors"
---

> Resiliency is currently a preview feature. Before you can utilize resiliency policies, you must first enable the resiliency preview feature.

### Targets

Named policies are applied to targets. Dapr supports 3 target types that cover all Dapr building blocks, except observability:
- `apps`
- `components`
- `actors`

Resilient behaviors might differ between target types, as some targets may already include resilient capabilities; for example, service invocation with built-in retries.
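
Putting policies and targets together, a skeleton of a resiliency spec might look like the following sketch (the policy and target names are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    # named timeouts, retries and circuitBreakers go here
    timeouts:
      general: 5s
  targets:
    apps:
      appB:
        timeout: general
```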

#### Apps

<img src="/images/resiliency_svc_invocation.png" width=1000 alt="Diagram showing service invocation resiliency" />

With the `apps` target, you can apply `retry`, `timeout`, and `circuitBreaker` policies to service invocation calls between Dapr apps. Under `targets/apps`, each key is the target service's `app-id`; the policies listed there are applied when a network failure occurs in sidecar-to-sidecar communication (as pictured in the diagram above).

Example of policies applied to a target app with the `app-id` "appB":

@@ -29,17 +34,26 @@ specs:

```yaml
      circuitBreaker: general
```

> Dapr provides [built-in service invocation retries]({{< ref "service-invocation-overview.md#retries" >}}), so any applied `retry` policies are additional.

#### Components

With the `components` target, you can apply `retry`, `timeout`, and `circuitBreaker` policies to component operations. Policy assignments are optional.

Policies can be applied for `outbound` operations (calls to the Dapr sidecar) and/or `inbound` (the sidecar calling your app). At this time, *inbound* only applies to PubSub and InputBinding components.

##### Outbound

`outbound` operations are calls from the sidecar to a component, such as:

- Persisting or retrieving state.
- Publishing a message.
- Invoking an output binding.

<img src="/images/resiliency_outbound.png" width=1000 alt="Diagram showing service invocation resiliency">

Some components have built-in `retry` capabilities and are configured on a per-component basis.

```yaml
spec:
  targets:
```

@@ -50,12 +64,17 @@ spec:

```yaml
      circuitBreaker: pubsubCB
```

##### Inbound

`inbound` operations are calls from the sidecar to your application, such as:

- Subscribing to a topic.
- Inbound bindings.

<img src="/images/resiliency_inbound.png" width=1000 alt="Diagram showing service invocation resiliency" />

Some components have built-in `retry` capabilities and are configured on a per-component basis.

```yaml
spec:
  targets:
```

@@ -68,9 +87,11 @@ spec:

##### PubSub

In a PubSub `target/component`, you can specify both `inbound` and `outbound` operations.

<img src="/images/resiliency_pubsub.png" width=1000 alt="Diagram showing service invocation resiliency">

Example:
```yaml
spec:
  targets:
```

@@ -87,13 +108,20 @@ spec:

#### Actors

With the `actors` target, you can apply `retry`, `timeout`, and `circuitBreaker` policies to actor operations. Policy assignments are optional.

When using a `circuitBreaker` policy, you can specify whether circuit breaking state should be scoped to:

- An individual actor ID.
- All actors across the actor type.
- Both.

Specify `circuitBreakerScope` with values `id`, `type`, or `both`.

You can specify a cache size for the number of circuit breakers to keep in memory. Do this by specifying `circuitBreakerCacheSize` and providing an integer value, e.g. `5000`.

Example:
```yaml
spec:
  targets:
```

@@ -5,13 +5,19 @@ linkTitle: "Preview features"

weight: 4000
description: "List of current preview features"
---
Preview features in Dapr are considered experimental when they are first released.

Runtime preview features require explicit opt-in in order to be used. The runtime opt-in is specified in a preview setting in Dapr's application configuration. See [How-To: Enable preview features]({{<ref preview-features>}}) for more information.

For CLI features, there is no explicit opt-in; the table below lists the version in which each feature was first made available.

## Current preview features

| Feature | Description | Setting | Documentation | Version introduced |
| ---------- |-------------|---------|---------------|-----------------|
| **Partition actor reminders** | Allows actor reminders to be partitioned across multiple keys in the underlying statestore in order to improve scale and performance. | `Actor.TypeMetadata` | [How-To: Partition Actor Reminders]({{< ref "howto-actors.md#partitioning-reminders" >}}) | v1.4 |
| **Pub/Sub routing** | Allow the use of expressions to route cloud events to different URIs/paths and event handlers in your application. | `PubSub.Routing` | [How-To: Publish a message and subscribe to a topic]({{<ref howto-route-messages>}}) | v1.7 |
| **ARM64 Mac Support** | Dapr CLI, sidecar, and Dashboard are now natively compiled for ARM64 Macs, along with Dapr CLI installation via Homebrew. | N/A | [Install the Dapr CLI]({{<ref install-dapr-cli>}}) | v1.5 |
| **--image-registry** flag with Dapr CLI | In self hosted mode you can set this flag to specify any private registry to pull the container images required to install Dapr. | N/A | [init CLI command reference]({{<ref "dapr-init.md#self-hosted-environment" >}}) | v1.7 |
| **Resiliency** | Allows configuring of fine-grained policies for retries, timeouts and circuit breaking. | `Resiliency` | [Configure Resiliency Policies]({{<ref "resiliency-overview">}}) | v1.7 |
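
For instance, a sketch of the runtime opt-in for one of these features through the application configuration (the configuration name is illustrative; the `Resiliency` setting comes from the table above):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig
spec:
  features:
    - name: Resiliency
      enabled: true
```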

@@ -6,42 +6,36 @@ weight: 3000

description: "Understand how API logging works in Dapr and how to view logs"
---

API logging enables you to see the API calls from your application to the Dapr sidecar and debug issues as a result when the flag is set. You can also combine Dapr API logging with Dapr log events (see [configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}})) in the output, if you want to use the logging capabilities together.

## Overview

The default value of the flag is `false`.

To enable API logging, use the `--enable-api-logging` command-line option. For example:

```bash
./daprd --enable-api-logging
```

This starts the Dapr runtime with API logging.

## Configuring API logging in self hosted mode

To enable API logging when running your app with the Dapr CLI, pass the `enable-api-logging` flag:

```bash
dapr run --enable-api-logging node myapp.js
```

### Viewing API logs in self hosted mode

When running Dapr with the Dapr CLI, both your app's log output and the Dapr runtime log output are redirected to the same session, for easy debugging.

The example below shows some API logs:

```bash
dapr run --enable-api-logging -- node myapp.js

ℹ️ Starting Dapr with id order-processor on port 56730
✅ You are up and running! Both Dapr and your app logs will appear here.
```

@@ -55,36 +49,13 @@ INFO[0000] HTTP API Called: DELETE /v1.0/state/statestore app_id=order-processo

```bash
INFO[0000] HTTP API Called: PUT /v1.0/metadata/cliPID app_id=order-processor instance=QTM-SWATHIKIL-1.redmond.corp.microsoft.com scope=dapr.runtime.http type=log ver=edge
```

## Configuring API logging in Kubernetes

You can enable API logs for every sidecar by providing the following annotation in your pod spec template:

```yml
annotations:
  dapr.io/enable-api-logging: true
```

### Viewing API logs on Kubernetes

@@ -106,13 +77,3 @@ time="2022-03-16T18:32:02.917629403Z" level=info msg="HTTP API Called: GET /v1.0

```bash
time="2022-03-16T18:32:03.137830112Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
time="2022-03-16T18:32:03.359097916Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
```

@@ -30,7 +30,7 @@ This table is meant to help users understand the equivalent options for running

| `--unix-domain-socket` | `--unix-domain-socket` | `-u` | not supported | On Linux, when communicating with the Dapr sidecar, use unix domain sockets for lower latency and greater throughput compared to TCP ports. Not available on Windows OS |
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | Enables API logging for the Dapr sidecar |
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0` |
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime mode for Dapr (default "standalone") |

@@ -21,14 +21,28 @@ dapr components [flags]

### Flags

| Name | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| `--kubernetes`, `-k` | | `false` | List all Dapr components in a Kubernetes cluster (required) |
| `--all-namespaces`, `-A` | | `true` | If true, list all Dapr components in all namespaces |
| `--help`, `-h` | | | Print this help message |
| `--name`, `-n` | | | The component name to be printed (optional) |
| `--namespace` | | | List all components in the specified namespace |
| `--output`, `-o` | | `list` | Output format (options: json or yaml or list) |

### Examples

```bash
# List Dapr components in all namespaces in Kubernetes mode
dapr components -k

# List Dapr components in a specific namespace in Kubernetes mode
dapr components -k --namespace default

# Print a specific Dapr component in Kubernetes mode
dapr components -k -n mycomponent

# List Dapr components in all namespaces in Kubernetes mode
dapr components -k --all-namespaces
```

@@ -21,16 +21,28 @@ dapr configurations [flags]

### Flags

| Name | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| `--kubernetes`, `-k` | | `false` | List all Dapr configurations in a Kubernetes cluster (required) |
| `--all-namespaces`, `-A` | | `true` | If true, list all Dapr configurations in all namespaces (optional) |
| `--namespace` | | | List Dapr configurations in a specific namespace |
| `--name`, `-n` | | | Print a specific Dapr configuration (optional) |
| `--output`, `-o` | | `list` | Output format (options: json or yaml or list) |
| `--help`, `-h` | | | Print this help message |

### Examples

```bash
# List Dapr configurations in all namespaces in Kubernetes mode
dapr configurations -k

# List Dapr configurations in a specific namespace in Kubernetes mode
dapr configurations -k --namespace default

# Print a specific Dapr configuration in Kubernetes mode
dapr configurations -k -n appconfig

# List Dapr configurations in all namespaces in Kubernetes mode
dapr configurations -k --all-namespaces
```

@@ -33,10 +33,15 @@ dapr init [flags]

| `--namespace`, `-n` | | `dapr-system` | The Kubernetes namespace to install Dapr in |
| `--runtime-version` | | `latest` | The version of the Dapr runtime to install, for example: `1.0.0` |
| `--slim`, `-s` | | `false` | Exclude placement service, Redis and Zipkin containers from self-hosted installation |
| `--from-dir` | | | Path to a local directory containing a downloaded "Dapr Installer Bundle" release which is used to `init` the airgap environment |
| `--image-registry` | | | Pulls container images required by Dapr from the given image registry |
| N/A | DAPR_DEFAULT_IMAGE_REGISTRY | | In self hosted mode, specifies the default container registry to pull images from. When its value is set to `GHCR` or `ghcr`, the required images are pulled from the GitHub container registry. To default back to Docker Hub, unset this environment variable. |

### Examples

#### Self hosted environment

Install Dapr by pulling container images for Placement, Redis and Zipkin. By default these images are pulled from Docker Hub. To switch to the Dapr GitHub container registry as the default registry, set the `DAPR_DEFAULT_IMAGE_REGISTRY` environment variable value to `GHCR`. To switch back to Docker Hub as the default registry, unset this environment variable.

```bash
dapr init
```

@@ -54,6 +59,40 @@ Dapr can also run [Slim self-hosted mode]({{< ref self-hosted-no-docker.md >}})

```bash
dapr init -s
```

In an offline or airgap environment, you can [download a Dapr Installer Bundle](https://github.com/dapr/installer-bundle/releases) and use it to install Dapr instead of pulling images from the network.

```bash
dapr init --from-dir <path-to-installer-bundle-directory>
```

Dapr can also run in slim self-hosted mode without Docker in an airgap environment.

```bash
dapr init -s --from-dir <path-to-installer-bundle-directory>
```
You can also specify a private registry to pull container images from. These images need to be published to private registries as shown below to enable the Dapr CLI to pull them successfully via the `dapr init` command (a publishing sketch follows this list):

1. Dapr runtime container image (`dapr`), used to run Placement: `dapr/dapr:<version>`
2. Redis container image (`rejson`): `dapr/3rdparty/rejson`
3. Zipkin container image (`zipkin`): `dapr/3rdparty/zipkin`
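
For example, publishing the images into that layout might look like the following sketch (the registry username and version tag are illustrative):

```bash
docker tag daprio/dapr:1.7.0 docker.io/username/dapr/dapr:1.7.0
docker push docker.io/username/dapr/dapr:1.7.0

docker tag redislabs/rejson docker.io/username/dapr/3rdparty/rejson
docker push docker.io/username/dapr/3rdparty/rejson

docker tag openzipkin/zipkin docker.io/username/dapr/3rdparty/zipkin
docker push docker.io/username/dapr/3rdparty/zipkin
```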

> All the required images used by Dapr need to be under the `dapr` path.

> The third-party images have to be published under the `dapr/3rdparty` path.

> The `image-registry` URI follows this format: `docker.io/<username>`

```bash
dapr init --image-registry docker.io/username
```

This command resolves the complete image URIs as shown below:
1. Placement container image (`dapr`): `docker.io/username/dapr/dapr:<version>`
2. Redis container image (`rejson`): `docker.io/username/dapr/3rdparty/rejson`
3. Zipkin container image (`zipkin`): `docker.io/username/dapr/3rdparty/zipkin`

#### Kubernetes environment

```bash

@@ -22,11 +22,14 @@ dapr list [flags]

### Flags

| Name | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| `--all-namespaces`, `-A` | | `false` | List all Dapr pods in all namespaces (optional) |
| `--help`, `-h` | | | Print this help message |
| `--kubernetes`, `-k` | | `false` | List all Dapr pods in a Kubernetes cluster (optional) |
| `--namespace`, `-n` | | `default` | List the Dapr pods in the defined namespace in Kubernetes. Only with `-k` flag (optional) |
| `--output`, `-o` | | `table` | The output format of the list. Valid values are: `json`, `yaml`, or `table` |

### Examples

@@ -34,9 +37,15 @@ dapr list [flags]

```bash
# List Dapr instances in self-hosted mode
dapr list

# List Dapr instances in all namespaces in Kubernetes mode
dapr list -k

# List Dapr instances in JSON format
dapr list -o json

# List Dapr instances in a specific namespace in Kubernetes mode
dapr list -k --namespace default

# List Dapr instances in all namespaces in Kubernetes mode
dapr list -k --all-namespaces
```
@ -36,7 +36,7 @@ dapr run [flags] [command]
| `--help`, `-h` | | | Print this help message |
| `--image` | | | The image to build the code in. Input is: `repository/image` |
| `--log-level` | | `info` | The log verbosity. Valid values are: `debug`, `info`, `warn`, `error`, `fatal`, or `panic` |
| `--api-log-level` | | `debug` | The API log verbosity. Valid values are: `debug` or `info` |
| `--enable-api-logging` | | `false` | Enable the logging of all API calls from application to Dapr |
| `--metrics-port` | `DAPR_METRICS_PORT` | `9090` | The port that Dapr sends its metrics information to |
| `--profile-port` | | `7777` | The port for the profile server to listen on |
| `--unix-domain-socket`, `-u` | | | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows OS |

@ -65,4 +65,7 @@ dapr run --app-id myapp
# Run a gRPC application written in Go (listening on port 5000)
dapr run --app-id myapp --app-port 5000 --app-protocol grpc -- go run main.go

# Run a NodeJs application that listens to port 3000 with API logging enabled
dapr run --app-id myapp --app-port 3000 --enable-api-logging -- node myapp.js
```

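The `--unix-domain-socket` flag described above can be combined with any of these commands; a minimal sketch (the socket directory `/tmp` is an arbitrary choice):

```bash
# Communicate with the Dapr sidecar over Unix domain sockets in /tmp
# instead of TCP ports (not available on Windows)
dapr run --app-id myapp --app-port 3000 --unix-domain-socket /tmp -- node myapp.js
```
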
@ -1,15 +1,15 @@
---
type: docs
title: "Azure Cosmos DB binding spec"
linkTitle: "Azure Cosmos DB"
description: "Detailed documentation on the Azure Cosmos DB binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/cosmosdb/"
---

## Component format

To set up an Azure Cosmos DB binding, create a component of type `bindings.azure.cosmosdb`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.

```yaml

@ -42,16 +42,19 @@ The above example uses secrets as plain strings. It is recommended to use a secr

| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|--------|---------|---------|
| url | Y | Output | The Cosmos DB url | `"https://******.documents.azure.com:443/"` |
| masterKey | Y | Output | The Cosmos DB account master key | `"master-key"` |
| database | Y | Output | The name of the Cosmos DB database | `"OrderDb"` |
| collection | Y | Output | The name of the container inside the database. | `"Orders"` |
| partitionKey | Y | Output | The name of the key to extract from the payload (document to be created) that is used as the partition key. This name must match the partition key specified upon creation of the Cosmos DB container. | `"OrderId"`, `"message"` |

For more information see [Azure Cosmos DB resource model](https://docs.microsoft.com/azure/cosmos-db/account-databases-containers-items).

### Azure Active Directory (Azure AD) authentication

The Azure Cosmos DB binding component supports authentication using all Azure Active Directory mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Azure AD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).

You can read additional information for setting up Cosmos DB with Azure AD authentication in the [section below](#setting-up-cosmos-db-for-authenticating-with-azure-ad).

## Binding support
@ -61,14 +64,14 @@ This component supports **output binding** with the following operations:

## Best Practices for Production Use

Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the [Cosmos DB documentation](https://docs.microsoft.com/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#recommended-solution-3))

Therefore, several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:

- Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by [scoping your components to specific applications]({{< ref component-scopes.md >}}#application-access-to-components-with-scopes).
- Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
- Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
- Increase the `initTimeout` value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is `5s` and should be increased. When using Kubernetes, increasing this value may also require an update to your [Readiness and Liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).

```yaml
spec:

@ -81,8 +84,47 @@ spec:

## Data format

The **output binding** `create` operation requires the following keys to exist in the payload of every document to be created:

- `id`: a unique ID for the document to be created
- `<partitionKey>`: the name of the partition key specified via `spec.partitionKey` in the component definition. This must also match the partition key specified upon creation of the Cosmos DB container.

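For example, with `partitionKey` set to `OrderId`, a request to the binding could look like this (a sketch; the binding name `cosmosdb`, the port `3500`, and the payload values are illustrative):

```sh
curl -X POST http://localhost:3500/v1.0/bindings/cosmosdb \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "data": {
          "id": "order-123",
          "OrderId": "order-123",
          "value": "Hello Cosmos DB"
        }
      }'
```
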
## Setting up Cosmos DB for authenticating with Azure AD

When using the Dapr Cosmos DB binding and authenticating with Azure AD, you need to perform a few additional steps to set up your environment.

Prerequisites:

- You need a Service Principal created as per the instructions in the [authenticating to Azure]({{< ref authenticating-azure.md >}}) page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for `azureClientId` in the metadata).
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- The scripts below are optimized for a bash or zsh shell

> When using the Cosmos DB binding, you **don't** need to create stored procedures as you do in the case of the Cosmos DB state store.

### Granting your Azure AD application access to Cosmos DB

> You can find more information on the [official documentation](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac), including instructions to assign more granular permissions.

In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you're going to use a built-in role, "Cosmos DB Built-in Data Contributor", which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.

```sh
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"
```

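To confirm the assignment was created, you can list the role assignments on the account (a sketch using the same variables as above):

```sh
az cosmosdb sql role assignment list \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP"
```
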
## Related links
@ -43,7 +43,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server is version 6 or above and that you have created the ACL rule correctly. | `"username"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| failover | N | Output | Property to enable failover configuration. Requires `sentinelMasterName` to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | Output | The interval between checking for pending messages to redeliver. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | Output | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | Output | The type of Redis. There are two valid values: `"node"` for single-node mode and `"cluster"` for Redis Cluster mode. Defaults to `"node"`. | `"cluster"`

@ -52,8 +52,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enable failover configuration. Requires `sentinelMasterName` to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`

## Setup Redis

@ -63,7 +63,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| failover | N | Property to enable failover configuration. Requires `sentinelMasterName` to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| maxLenApprox | N | Maximum number of items inside a stream. The old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. Defaults to unlimited. | `"10000"`

## Create a Redis instance

@ -1,15 +1,15 @@
---
type: docs
title: "Azure Cosmos DB"
linkTitle: "Azure Cosmos DB"
description: Detailed information on the Azure Cosmos DB state store component
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-azure-cosmosdb/"
---

## Component format

To set up the Azure Cosmos DB state store, create a component of type `state.azure.cosmosdb`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.

```yaml
apiVersion: dapr.io/v1alpha1

@ -35,7 +35,7 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

If you wish to use Cosmos DB as an actor store, append the following to the YAML.

```yaml
- name: actorStateStore

@ -46,37 +46,41 @@ If you wish to use CosmosDb as an actor store, append the following to the yaml.

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| url | Y | The Cosmos DB url | `"https://******.documents.azure.com:443/"` |
| masterKey | Y | The key to authenticate to the Cosmos DB account | `"key"` |
| database | Y | The name of the database | `"db"` |
| collection | Y | The name of the collection (container) | `"collection"` |
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` |

### Azure Active Directory (Azure AD) authentication

The Azure Cosmos DB state store component supports authentication using all Azure Active Directory mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of Azure AD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).

You can read additional information for setting up Cosmos DB with Azure AD authentication in the [section below](#setting-up-cosmos-db-for-authenticating-with-azure-ad).

## Setup Azure Cosmos DB

[Follow the instructions](https://docs.microsoft.com/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure Cosmos DB account. The database and collection must be created in Cosmos DB before Dapr can use it.

**Important: The partition key for the collection must be named `/partitionKey` (note: this is case-sensitive).**

In order to set up Cosmos DB as a state store, you need the following properties:

- **URL**: the Cosmos DB URL. For example: `https://******.documents.azure.com:443/`
- **Master Key**: The key to authenticate to the Cosmos DB account
- **Database**: The name of the database
- **Collection**: The name of the collection (or container)

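As an illustrative sketch (using the Azure CLI; the account, resource group, database, and collection names below are placeholders), you could create the database and a container with the required partition key like this:

```sh
# Create the database and a container whose partition key is /partitionKey
az cosmosdb sql database create \
  --account-name "myCosmosAccount" \
  --resource-group "myResourceGroup" \
  --name "db"
az cosmosdb sql container create \
  --account-name "myCosmosAccount" \
  --resource-group "myResourceGroup" \
  --database-name "db" \
  --name "collection" \
  --partition-key-path "/partitionKey"
```
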
## Best Practices for Production Use

Azure Cosmos DB shares a strict metadata request rate limit across all databases in a single Azure Cosmos DB account. New connections to Azure Cosmos DB assume a large percentage of the allowable request rate limit. (See the [Cosmos DB documentation](https://docs.microsoft.com/azure/cosmos-db/sql/troubleshoot-request-rate-too-large#recommended-solution-3))

Therefore, several strategies must be applied to avoid simultaneous new connections to Azure Cosmos DB:

- Ensure sidecars of applications only load the Azure Cosmos DB component when they require it to avoid unnecessary database connections. This can be done by [scoping your components to specific applications]({{< ref component-scopes.md >}}#application-access-to-components-with-scopes).
- Choose deployment strategies that sequentially deploy or start your applications to minimize bursts in new connections to your Azure Cosmos DB accounts.
- Avoid reusing the same Azure Cosmos DB account for unrelated databases or systems (even outside of Dapr). Distinct Azure Cosmos DB accounts have distinct rate limits.
- Increase the `initTimeout` value to allow the component to retry connecting to Azure Cosmos DB during sidecar initialization for up to 5 minutes. The default value is `5s` and should be increased. When using Kubernetes, increasing this value may also require an update to your [Readiness and Liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).

```yaml
spec:

@ -88,17 +92,17 @@ spec:

## Data format

To use the Cosmos DB state store, your data must be sent to Dapr in JSON-serialized format. Having it just JSON *serializable* will not work.

If you are using the Dapr SDKs (for example the [.NET SDK](https://github.com/dapr/dotnet-sdk)), the SDK automatically serializes your data to JSON.

If you want to invoke Dapr's HTTP endpoint directly, take a look at the examples (using curl) in the [Partition keys](#partition-keys) section below.

## Partition keys

For **non-actor state** operations, the Azure Cosmos DB state store will use the `key` property provided in the requests to the Dapr API to determine the Cosmos DB partition key. This can be overridden by specifying a metadata field in the request with a key of `partitionKey` and a value of the desired partition.

The following operation uses `nihilus` as the partition key value sent to Cosmos DB:

```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \

@ -111,7 +115,7 @@ curl -X POST http://localhost:3500/v1.0/state/<store_name> \
]'
```

For **non-actor** state operations, if you want to control the Cosmos DB partition, you can specify it in metadata. Reusing the example above, here's how to put it under the `mypartition` partition:

```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \

@ -127,10 +131,102 @@ curl -X POST http://localhost:3500/v1.0/state/<store_name> \
]'
```

For **actor** state operations, the partition key is generated by Dapr using the `appId`, the actor type, and the actor id, such that data for the same actor always ends up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in Cosmos DB the items in a transaction must be on the same partition.

## Setting up Cosmos DB for authenticating with Azure AD

When using the Dapr Cosmos DB state store and authenticating with Azure AD, you need to perform a few additional steps to set up your environment.

Prerequisites:

- You need a Service Principal created as per the instructions in the [authenticating to Azure]({{< ref authenticating-azure.md >}}) page. You need the ID of the Service Principal for the commands below (note that this is different from the client ID of your application, or the value you use for `azureClientId` in the metadata).
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- The scripts below are optimized for a bash or zsh shell

### Granting your Azure AD application access to Cosmos DB

> You can find more information on the [official documentation](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac), including instructions to assign more granular permissions.

In order to grant your application permissions to access data stored in Cosmos DB, you need to assign it a custom role for the Cosmos DB data plane. In this example you're going to use a built-in role, "Cosmos DB Built-in Data Contributor", which grants your application full read-write access to the data; you can optionally create custom, fine-tuned roles following the instructions in the official docs.

```sh
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# ID of your Service Principal object
PRINCIPAL_ID="..."
# ID of the "Cosmos DB Built-in Data Contributor" role
# You can also use the ID of a custom role
ROLE_ID="00000000-0000-0000-0000-000000000002"

az cosmosdb sql role assignment create \
  --account-name "$ACCOUNT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --scope "/" \
  --principal-id "$PRINCIPAL_ID" \
  --role-definition-id "$ROLE_ID"
```

### Creating the stored procedures for Dapr

When using Cosmos DB as a state store for Dapr, you need to create two stored procedures in your collection. When you configure the state store using a "master key", Dapr creates those for you automatically. However, when your state store authenticates with Cosmos DB using Azure AD, because of limitations in the platform Dapr is not able to do that automatically.

If you are using Azure AD to authenticate your Cosmos DB state store and have not created the stored procedures (or if you are using an outdated version of them), your Dapr sidecar will fail to start and you will see an error similar to this in your logs:

```text
Dapr requires stored procedures created in Cosmos DB before it can be used as state store. Those stored procedures are currently not existing or are using a different version than expected. When you authenticate using Azure AD we cannot automatically create them for you: please start this state store with a Cosmos DB master key just once so we can create the stored procedures for you; otherwise, you can check our docs to learn how to create them yourself: https://aka.ms/dapr/cosmosdb-aad
```

To fix this issue, you have two options:

1. Configure your component to authenticate with the "master key" just once, to have Dapr automatically initialize the stored procedures for you. While you need to use a "master key" the first time you launch your application, you should be able to remove it and use Azure AD credentials (including Managed Identities) afterwards.
2. Alternatively, you can follow the steps below to create the stored procedures manually. These steps must be performed before you can start your application for the first time.

To create the stored procedures manually, you can use the commands below.

First, download the code of the stored procedures for the version of Dapr that you're using. This creates two `.js` files in your working directory:

```sh
# Set this to the version of Dapr that you're using
DAPR_VERSION="v1.7.0"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/state/azure/cosmosdb/storedprocedures/__daprver__.js"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/state/azure/cosmosdb/storedprocedures/__dapr_v2__.js"
```

> You won't need to update the code for the stored procedures every time you update Dapr. Although the code for the stored procedures doesn't change often, sometimes we may make updates to it: when that happens, if you're using Azure AD authentication your Dapr sidecar will fail to launch until you update the stored procedures by re-running the commands above.

Then, using the Azure CLI, create the stored procedures in Cosmos DB for your account, database, and collection (or container):

```sh
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# Name of your database in the Cosmos DB account
DATABASE_NAME="..."
# Name of the container (collection) in your database
CONTAINER_NAME="..."

az cosmosdb sql stored-procedure create \
  --resource-group "$RESOURCE_GROUP" \
  --account-name "$ACCOUNT_NAME" \
  --database-name "$DATABASE_NAME" \
  --container-name "$CONTAINER_NAME" \
  --name "__daprver__" \
  --body @__daprver__.js
az cosmosdb sql stored-procedure create \
  --resource-group "$RESOURCE_GROUP" \
  --account-name "$ACCOUNT_NAME" \
  --database-name "$DATABASE_NAME" \
  --container-name "$CONTAINER_NAME" \
  --name "__dapr_v2__" \
  --body @__dapr_v2__.js
```

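You can verify that both stored procedures exist before starting your application for the first time (a sketch using the same variables as above):

```sh
az cosmosdb sql stored-procedure list \
  --resource-group "$RESOURCE_GROUP" \
  --account-name "$ACCOUNT_NAME" \
  --database-name "$DATABASE_NAME" \
  --container-name "$CONTAINER_NAME" \
  --query "[].id"
```
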
## Related links

- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})

@ -64,8 +64,8 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enable failover configuration. Requires `sentinelMasterName` to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | The interval between checking for pending messages to redeliver. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | The type of Redis. There are two valid values: `"node"` for single-node mode and `"cluster"` for Redis Cluster mode. Defaults to `"node"`. | `"cluster"`

@ -17,4 +17,6 @@ The following table lists the environment variables used by the Dapr runtime, CL
| DAPR_GRPC_PORT | Your application | The gRPC port that the Dapr sidecar is listening on. Your application should use this variable to connect to the Dapr sidecar instead of hardcoding the port value. Set by the Dapr CLI run command for self-hosted or injected by the dapr-sidecar-injector into all the containers in the pod. |
| DAPR_METRICS_PORT | Your application | The HTTP [Prometheus]({{< ref prometheus >}}) port that Dapr sends its metrics information to. Your application can use this variable to send its application-specific metrics to have both Dapr metrics and application metrics together. See [metrics-port]({{< ref arguments-annotations-overview>}}) for more information |
| DAPR_API_TOKEN | Dapr sidecar | The token used for Dapr API authentication for requests from the application. Read [enable API token authentication in Dapr]({{< ref api-token >}}) for more information. |
| NAMESPACE | Dapr sidecar | Used to specify a component's [namespace in self-hosted mode]({{< ref component-scopes >}}) |
| DAPR_DEFAULT_IMAGE_REGISTRY | Dapr CLI | In self-hosted mode, specifies the default container registry to pull images from. When its value is set to `GHCR` or `ghcr`, the required images are pulled from the GitHub Container Registry. To default back to Docker Hub, unset this environment variable. |

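For example, an application started with `dapr run` can read these variables from its environment (a minimal sketch; the shell one-liner standing in for the app command is illustrative):

```bash
# Print the ports injected by the Dapr CLI into the app's environment
dapr run --app-id myapp -- sh -c 'echo "gRPC: $DAPR_GRPC_PORT, metrics: $DAPR_METRICS_PORT"'
```
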
(Binary diff: nine diagram PNG files were replaced with larger, higher-resolution versions.)