diff --git a/.github/workflows/azure-static-web-apps-green-hill-0d7377310.yml b/.github/workflows/azure-static-web-apps-green-hill-0d7377310.yml
new file mode 100644
index 000000000..8cf7f2b88
--- /dev/null
+++ b/.github/workflows/azure-static-web-apps-green-hill-0d7377310.yml
@@ -0,0 +1,51 @@
+name: Azure Static Web Apps CI/CD
+
+on:
+  push:
+    branches:
+      - website
+  pull_request:
+    types: [opened, synchronize, reopened, closed]
+    branches:
+      - website
+
+jobs:
+  build_and_deploy_job:
+    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
+    runs-on: ubuntu-latest
+    name: Build and Deploy Job
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          submodules: recursive
+      - name: Setup Docsy
+        run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
+      - name: Build And Deploy
+        id: builddeploy
+        uses: Azure/static-web-apps-deploy@v0.0.1-preview
+        env:
+          HUGO_ENV: production
+          HUGO_VERSION: "0.74.3"
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
+          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (e.g. PR comments)
+          action: "upload"
+          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
+          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
+          app_location: "daprdocs" # App source code path
+          api_location: "api" # Api source code path - optional
+          app_artifact_location: 'public' # Built app content directory - optional
+          app_build_command: "hugo"
+          ###### End of Repository/Build Configurations ######
+
+  close_pull_request_job:
+    if: github.event_name == 'pull_request' && github.event.action == 'closed'
+    runs-on: ubuntu-latest
+    name: Close Pull Request Job
+    steps:
+      - name: Close Pull Request
+        id: closepullrequest
+        uses: Azure/static-web-apps-deploy@v0.0.1-preview
+        with:
+          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GREEN_HILL_0D7377310 }}
+          action: "close"
diff --git a/.gitignore b/.gitignore
index f7991ec7f..7c399e910 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,5 @@
 # Visual Studio 2015/2017/2019 cache/options directory
 .vs/
+node_modules/
+daprdocs/public
+daprdocs/resources/_gen
diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 000000000..9d1226e9f
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "daprdocs/themes/docsy"]
+	path = daprdocs/themes/docsy
+	url = https://github.com/google/docsy.git
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..259662622
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,3 @@
+# Contributing to Dapr docs
+
+Please see [this docs section](https://docs.dapr.io/contributing/) for general guidance on contributions to the Dapr project as well as specific guidelines on contributions to the docs repo.
\ No newline at end of file
diff --git a/presentations/Dapr Presentation Deck.pptx b/OLD/presentations/Dapr Presentation Deck.pptx
similarity index 100%
rename from presentations/Dapr Presentation Deck.pptx
rename to OLD/presentations/Dapr Presentation Deck.pptx
diff --git a/presentations/PastPresentations/2019IgniteCloudNativeApps.pdf b/OLD/presentations/PastPresentations/2019IgniteCloudNativeApps.pdf
similarity index 100%
rename from presentations/PastPresentations/2019IgniteCloudNativeApps.pdf
rename to OLD/presentations/PastPresentations/2019IgniteCloudNativeApps.pdf
diff --git a/presentations/PastPresentations/2020ReadyCloudNativeApps.pdf b/OLD/presentations/PastPresentations/2020ReadyCloudNativeApps.pdf
similarity index 100%
rename from presentations/PastPresentations/2020ReadyCloudNativeApps.pdf
rename to OLD/presentations/PastPresentations/2020ReadyCloudNativeApps.pdf
diff --git a/presentations/README.md b/OLD/presentations/README.md
similarity index 100%
rename from presentations/README.md
rename to OLD/presentations/README.md
diff --git a/README.md b/README.md
index 1a2091881..bf537e213 100644
--- a/README.md
+++ b/README.md
@@ -1,49 +1,63 @@
-# 📖 Dapr documentation
+# Dapr documentation
-Welcome to the Dapr documentation repository. You can learn more about Dapr from the links below.
+If you are looking to explore the Dapr documentation, please go to the documentation website:
+[**https://docs.dapr.io**](https://docs.dapr.io)
-## Document versions
+This repo contains the markdown files which generate the above website. See below for guidance on running with a local environment to contribute to the docs.
-Dapr is currently under community development in preview phase and master branch could include breaking changes. Therefore, please ensure that you refer to the right version of the documents for your Dapr runtime version.
+## Contribution guidelines
-| Version | Repo |
-|:-------:|:----:|
-| v0.11.0 | [Docs](https://github.com/dapr/docs/tree/v0.11.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.11.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.11.0) - [CLI](https://github.com/dapr/cli/tree/release-0.11)
-| v0.10.0 | [Docs](https://github.com/dapr/docs/tree/v0.10.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.10.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.10.0) - [CLI](https://github.com/dapr/cli/tree/release-0.10)
-| v0.9.0 | [Docs](https://github.com/dapr/docs/tree/v0.9.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.9.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.9.0) - [CLI](https://github.com/dapr/cli/tree/release-0.9)
-| v0.8.0 | [Docs](https://github.com/dapr/docs/tree/v0.8.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.8.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.8.0) - [CLI](https://github.com/dapr/cli/tree/release-0.8)
-| v0.7.0 | [Docs](https://github.com/dapr/docs/tree/v0.7.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.7.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.7.0) - [CLI](https://github.com/dapr/cli/tree/release-0.7)
-| v0.6.0 | [Docs](https://github.com/dapr/docs/tree/v0.6.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.6.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.6.0) - [CLI](https://github.com/dapr/cli/tree/release-0.6)
-| v0.5.0 | [Docs](https://github.com/dapr/docs/tree/v0.5.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.5.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.5.0) - [CLI](https://github.com/dapr/cli/tree/release-0.5)
-| v0.4.0 | [Docs](https://github.com/dapr/docs/tree/v0.4.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.4.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.4.0) - [CLI](https://github.com/dapr/cli/tree/release-0.4)
-| v0.3.0 | [Docs](https://github.com/dapr/docs/tree/v0.3.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.3.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.3.0) - [CLI](https://github.com/dapr/cli/tree/release-0.3)
-| v0.2.0 | [Docs](https://github.com/dapr/docs/tree/v0.2.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.2.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.2.0) - [CLI](https://github.com/dapr/cli/tree/release-0.2)
-| v0.1.0 | [Docs](https://github.com/dapr/docs/tree/v0.1.0) - [Quickstarts](https://github.com/dapr/quickstarts/tree/v0.1.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.1.0) - [CLI](https://github.com/dapr/cli/tree/release-0.1)
+Before making your first contribution, make sure to review the [contributing section](https://docs.dapr.io/contributing/) in the docs.
-## Contents
+## Overview
-| Topic | Description |
-|-------|-------------|
-|**[Overview](./overview)** | An overview of Dapr and how it enables you to build event driven, distributed applications
-|**[Getting Started](./getting-started)** | Set up your development environment
-|**[Concepts](./concepts)** | Dapr concepts explained
-|**[How-Tos](./howto)** | Guides explaining how to accomplish specific tasks
-|**[Best Practices](./best-practices)** | Guides explaining best practices when using Dapr
-|**[Reference](./reference)** | API and bindings reference documentation
-|**[FAQ](FAQ.md)** | Frequently asked questions
+The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme, hosted on an [Azure Static Web App](https://docs.microsoft.com/en-us/azure/static-web-apps/overview).
-## Further documentation
+The [daprdocs](./daprdocs) directory contains the Hugo project, markdown files, and theme configurations.
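The local setup this README describes can be condensed into a single script. The following is a hedged sketch, assuming Hugo extended and Node.js are already installed; the function name is illustrative.

```sh
# Hedged sketch: one-shot local build/serve of the docs site, assuming
# Hugo extended and Node.js are installed. Function name is illustrative.
build_dapr_docs() {
  git clone https://github.com/dapr/docs.git &&
  cd docs/daprdocs &&
  git submodule update --init --recursive &&   # fetch the Docsy theme
  npm install &&                               # PostCSS/autoprefixer deps for Docsy
  hugo server --disableFastRender              # serves at http://localhost:1313/docs
}
```

Defining the steps as a function keeps them copy-pasteable without executing anything on load.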
-| Area | Description |
-|------|-------------|
-| **[Command Line Interface (CLI)](https://github.com/dapr/cli)** | The Dapr CLI allows you to setup Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, launches and manages Dapr instances.
-| **[Dapr Runtime](https://github.com/dapr/dapr)** | Dapr runtime code and overview documentation.
-| **[Components Contribution](https://github.com/dapr/components-contrib)** | Open, community driven reusable components for building distributed applications.
-| **SDKs** | - [Go SDK](https://github.com/dapr/go-sdk)<br>- [Java SDK](https://github.com/dapr/java-sdk)<br>- [Javascript SDK](https://github.com/dapr/js-sdk)<br>- [Python SDK](https://github.com/dapr/python-sdk)<br>- [.NET SDK](https://github.com/dapr/dotnet-sdk)<br>- [C++ SDK](https://github.com/dapr/cpp-sdk)<br>- [RUST SDK](https://github.com/dapr/rust-sdk)<br><br>**Note:** Dapr is language agnostic and provides a [RESTful HTTP API](./reference/api/README.md) in addition to the protobuf clients.
-| **Frameworks** | - [Workflows](https://github.com/dapr/workflows)<br>- [Azure Functions extension](https://github.com/dapr/azure-functions-extension)<br>
-|**[Dapr Presentations](./presentations)** | Previous Dapr presentations and information on how to give your own Dapr presentation.
+## Pre-requisites
+
+- [Hugo extended version](https://gohugo.io/getting-started/installing)
+- [Node.js](https://nodejs.org/en/)
+
+## Environment setup
+
+1. Ensure pre-requisites are installed
+2. Clone this repository
+```sh
+git clone https://github.com/dapr/docs.git
+```
+3. Change to the daprdocs directory:
+```sh
+cd daprdocs
+```
+4. Initialize the Docsy theme submodule (already registered in `.gitmodules`):
+```sh
+git submodule update --init --recursive
+```
+5. Install npm packages:
+```sh
+npm install
+```
+
+## Run local server
+1. Make sure you're still in the `daprdocs` directory
+2. Run
+```sh
+hugo server --disableFastRender
+```
+3. Navigate to `http://localhost:1313/docs`
+
+## Update docs
+1. Create a new branch
+1. Commit and push changes to content
+1. Submit a pull request to the `website` branch
+1. A staging site is automatically created and linked to the PR for review and testing
 
 ## Code of Conduct
-
-Please refer to our [Dapr Community Code of Conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md)
+Please refer to our [Dapr community code of conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md).
diff --git a/best-practices/security/README.md b/best-practices/security/README.md
deleted file mode 100644
index 80ad73079..000000000
--- a/best-practices/security/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# documentation
-
-Content for this file to be added
diff --git a/best-practices/troubleshooting/README.md b/best-practices/troubleshooting/README.md
deleted file mode 100644
index c7edbe5ba..000000000
--- a/best-practices/troubleshooting/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Debugging and Troubleshooting
-
-This section describes different tools, techniques and common problems to help users debug and diagnose issues with Dapr.
-
-1. [Logs](logs.md)
-2. [Tracing and diagnostics](tracing.md)
-3. [Profiling and debugging](profiling_debugging.md)
-4. [Common issues](common_issues.md)
-
-Please open a new Bug or Feature Request item on our [issues section](https://github.com/dapr/dapr/issues) if you've encountered a problem running Dapr.
-If a security vulnerability has been found, contact the [Dapr team](mailto:daprct@microsoft.com).
diff --git a/concepts/README.md b/concepts/README.md
deleted file mode 100644
index 868a0606b..000000000
--- a/concepts/README.md
+++ /dev/null
@@ -1,73 +0,0 @@
-# Dapr concepts
-
-The goal of these topics is to provide an understanding of the key concepts used in the Dapr documentation.
-
-## Contents
-
- - [Building blocks](#building-blocks)
- - [Components](#components)
- - [Configuration](#configuration)
- - [Secrets](#secrets)
- - [Hosting environments](#hosting-environments)
-
-## Building blocks
-
-A [building block](./architecture/building_blocks.md) is an HTTP or gRPC API that can be called from user code and uses one or more Dapr components. Dapr consists of a set of building blocks, with extensibility to add new building blocks.
-
-The diagram below shows how building blocks expose a public API that is called from your code, using components to implement the building blocks' capability.
-
-The following are the building blocks provided by Dapr:
-
-| Building Block | Endpoint | Description |
-|----------------|----------|-------------|
-| [**Service-to-Service Invocation**](./service-invocation/README.md) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
-| [**State Management**](./state-management/README.md) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state API with pluggable state stores for persistence. -| [**Publish and Subscribe**](./publish-subscribe-messaging/README.md) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publishes messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications. -| [**Resource Bindings**](./bindings/README.md)| `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service. -| [**Actors**](./actors/README.md) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use. See * [Actor Overview](./actors#understanding-actors) -| [**Observability**](./observability/README.md) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications. -| [**Secrets**](./secrets/README.md) | `/v1.0/secrets` | Dapr offers a secrets building block API and integrates with secret stores such as Azure Key Vault and Kubernetes to store the secrets. Service code can call the secrets API to retrieve secrets out of the Dapr supported secret stores. - -## Components - -Dapr uses a modular design where functionality is delivered as a component. Each component has an interface definition. 
All of the components are pluggable so that you can swap out one component with the same interface for another. The [components contrib repo](https://github.com/dapr/components-contrib) is where you can contribute implementations for the component interfaces and extends Dapr's capabilities. - - A building block can use any combination of components. For example the [actors](./actors) building block and the state management building block both use state components. As another example, the pub/sub building block uses [pub/sub](./publish-subscribe-messaging/README.md) components. - - You can get a list of current components available in the current hosting environment using the `dapr components` CLI command. - - The following are the component types provided by Dapr: - -* [Service discovery](https://github.com/dapr/components-contrib/tree/master/nameresolution) -* [State](https://github.com/dapr/components-contrib/tree/master/state) -* [Pub/sub](https://github.com/dapr/components-contrib/tree/master/pubsub) -* [Bindings](https://github.com/dapr/components-contrib/tree/master/bindings) -* [Middleware](https://github.com/dapr/components-contrib/tree/master/middleware) -* [Secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores) -* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters) - -### Service invocation and service discovery components -Service discovery components are used with the [Service Invocation](./service-invocation/README.md) building block to integrate with the hosting environment to provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the kubernetes DNS service and self hosted uses mDNS. - -### Service invocation and middleware components -Dapr allows custom [**middleware**](./middleware/README.md) to be plugged into the request processing pipeline. 
Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [Service Invocation](./service-invocation/README.md) building block. - -### Secret store components -In Dapr, a [**secret**](./secrets/README.md) is any piece of private information that you want to guard against unwanted users. Secretstores, used to store secrets, are Dapr components and can be used by any of the building blocks. - -## Configuration - -Dapr [Configuration](./configuration/README.md) defines a policy that affects how any Dapr sidecar instance behaves, such as using [distributed tracing](./observability/traces.md) or a [middleware component](./middleware/README.md). Configuration can be applied to Dapr sidecar instances dynamically. - - You can get a list of current configurations available in the current the hosting environment using the `dapr configurations` CLI command. - -## Hosting environments - -Dapr can run on multiple hosting platforms. The supported hosting platforms are: - -* [**Self hosted**](./hosting/README.md#running-dapr-on-a-local-developer-machine-in-standalone-mode). Dapr runs on a single machine either as a process or in a container. Used for local development or running on a single machine execution -* [**Kubernetes**](./hosting/README.md#running-dapr-in-kubernetes-mode). Dapr runs on any Kubernetes cluster either from a cloud provider or on-premises. 
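The component model described above is declared through small YAML definitions. The following is a hedged, minimal sketch of a component definition for a pluggable state store; the component name, type, and connection value are illustrative and not taken from this PR.

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore        # illustrative component name
spec:
  type: state.redis       # one of the pluggable component types listed above
  metadata:
  - name: redisHost
    value: localhost:6379 # illustrative connection value
```

A building block such as state management or actors can then be backed by this component, and swapping implementations means swapping the YAML, not the application code.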
diff --git a/concepts/architecture/README.md b/concepts/architecture/README.md deleted file mode 100644 index 9dfdf1579..000000000 --- a/concepts/architecture/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Dapr architecture - -- [Building Blocks](./building_blocks.md) diff --git a/concepts/architecture/building_blocks.md b/concepts/architecture/building_blocks.md deleted file mode 100644 index f163bb18c..000000000 --- a/concepts/architecture/building_blocks.md +++ /dev/null @@ -1,15 +0,0 @@ -# Building blocks - -Dapr consists of a set of building blocks that can be called from any programming language through Dapr HTTP or gRPC APIs. These building blocks address common challenges in building resilient, microservices applications. And they capture and share best practices and patterns that empower distributed application developers. - -![Dapr building blocks](../../images/overview.png) - -## Anatomy of a building block - -Both Dapr spec and Dapr runtime are designed to be extensible -to include new building blocks. A building block is comprised of the following artifacts: - -* Dapr spec API definition. A newly proposed building block shall have its API design incorporated into the Dapr spec. -* Components. A building block may reuse existing [Dapr components](../README.md#components), or introduce new components. -* Test suites. A new building block implementation should come with associated unit tests and end-to-end scenario tests. -* Documents and samples. diff --git a/concepts/bindings/README.md b/concepts/bindings/README.md deleted file mode 100644 index ea0b04fb5..000000000 --- a/concepts/bindings/README.md +++ /dev/null @@ -1,99 +0,0 @@ -# Bindings - -Using bindings, you can trigger your app with events coming in from external systems, or invoke external systems. 
This building block provides several benefits for you and your code: - -* Remove the complexities of connecting to, and polling from, messaging systems such as queues and message buses -* Focus on business logic and not implementation details of how to interact with a system -* Keep your code free from SDKs or libraries -* Handle retries and failure recovery -* Switch between bindings at run time -* Build portable applications where environment-specific bindings are set-up and no code changes are required - -For a specific example, bindings would allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, worrying about polling from Twilio (or using websockets, etc.). - -Bindings are developed independently of Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings). - -## Supported bindings and specs - -Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding. - -### Generic - -| Name | Input
Binding | Output
Binding | Status | -|------|:----------------:|:-----------------:|--------| -| [APNs](../../reference/specs/bindings/apns.md) | | ✅ | Experimental | -| [Cron (Scheduler)](../../reference/specs/bindings/cron.md) | ✅ | ✅ | Experimental | -| [HTTP](../../reference/specs/bindings/http.md) | | ✅ | Experimental | -| [InfluxDB](../../reference/specs/bindings/influxdb.md) | | ✅ | Experimental | -| [Kafka](../../reference/specs/bindings/kafka.md) | ✅ | ✅ | Experimental | -| [Kubernetes Events](../../reference/specs/bindings/kubernetes.md) | ✅ | | Experimental | -| [MQTT](../../reference/specs/bindings/mqtt.md) | ✅ | ✅ | Experimental | -| [PostgreSql](../../reference/specs/bindings/postgres.md) | | ✅ | Experimental | -| [RabbitMQ](../../reference/specs/bindings/rabbitmq.md) | ✅ | ✅ | Experimental | -| [Redis](../../reference/specs/bindings/redis.md) | | ✅ | Experimental | -| [Twilio](../../reference/specs/bindings/twilio.md) | | ✅ | Experimental | -| [Twitter](../../reference/specs/bindings/twitter.md) | ✅ | ✅ | Experimental | -| [SendGrid](../../reference/specs/bindings/sendgrid.md) | | ✅ | Experimental | - - -### Amazon Web Service (AWS) - -| Name | Input
Binding | Output
Binding | Status | -|------|:----------------:|:-----------------:|--------| -| [AWS DynamoDB](../../reference/specs/bindings/dynamodb.md) | | ✅ | Experimental | -| [AWS S3](../../reference/specs/bindings/s3.md) | | ✅ | Experimental | -| [AWS SNS](../../reference/specs/bindings/sns.md) | | ✅ | Experimental | -| [AWS SQS](../../reference/specs/bindings/sqs.md) | ✅ | ✅ | Experimental | -| [AWS Kinesis](../../reference/specs/bindings/kinesis.md) | ✅ | ✅ | Experimental | - - -### Google Cloud Platform (GCP) - -| Name | Input
Binding | Output
Binding | Status | -|------|:----------------:|:-----------------:|--------| -| [GCP Cloud Pub/Sub](../../reference/specs/bindings/gcppubsub.md) | ✅ | ✅ | Experimental | -| [GCP Storage Bucket](../../reference/specs/bindings/gcpbucket.md) | | ✅ | Experimental | - -### Microsoft Azure - -| Name | Input
Binding | Output
Binding | Status | -|------|:----------------:|:-----------------:|--------| -| [Azure Blob Storage](../../reference/specs/bindings/blobstorage.md) | | ✅ | Experimental | -| [Azure EventHubs](../../reference/specs/bindings/eventhubs.md) | ✅ | ✅ | Experimental | -| [Azure CosmosDB](../../reference/specs/bindings/cosmosdb.md) | | ✅ | Experimental | -| [Azure Service Bus Queues](../../reference/specs/bindings/servicebusqueues.md) | ✅ | ✅ | Experimental | -| [Azure SignalR](../../reference/specs/bindings/signalr.md) | | ✅ | Experimental | -| [Azure Storage Queues](../../reference/specs/bindings/storagequeues.md) | ✅ | ✅ | Experimental | -| [Azure Event Grid](../../reference/specs/bindings/eventgrid.md) | ✅ | ✅ | Experimental | - -## Input bindings - -Input bindings are used to trigger your application when an event from an external resource has occurred. -An optional payload and metadata might be sent with the request. - -In order to receive events from an input binding: - -1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.) -2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events - -> On startup Dapr sends a ```OPTIONS``` request for all defined input bindings to the application and expects a different status code as ```NOT FOUND (404)``` if this application wants to subscribe to the binding. - -Read the [Create an event-driven app using input bindings](../../howto/trigger-app-with-input-binding) section to get started with input bindings. - -## Output bindings - -Output bindings allow users to invoke external resources. -An optional payload and metadata can be sent with the invocation request. - -In order to invoke an output binding: - -1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.) -2. 
Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload - - Read the [Send events to external systems using Output Bindings](../../howto/send-events-with-output-bindings) section to get started with output bindings. - - ## Related Topics -* [Implementing a new binding](https://github.com/dapr/docs/tree/master/reference/specs/bindings) -* [Trigger a service from different resources with input bindings](../../howto/trigger-app-with-input-binding) -* [Invoke different resources using output bindings](../../howto/send-events-with-output-bindings) - diff --git a/concepts/configuration/README.md b/concepts/configuration/README.md deleted file mode 100644 index 1bd9383bd..000000000 --- a/concepts/configuration/README.md +++ /dev/null @@ -1,263 +0,0 @@ -# Configurations -Dapr configurations are settings that enable you to change the behavior of individual Dapr application sidecars or globally on the system services in the Dapr control plane. -An example of a per Dapr application sidecar setting is configuring trace settings. An example of a Dapr control plane setting is mutual TLS which is a global setting on the Sentry system service. - -- [Setting self hosted sidecar configuration](#setting-self-hosted-sidecar-configuration) -- [Setting Kubernetes sidecar configuration](#setting-kubernetes-sidecar-configuration) -- [Sidecar configuration settings](#sidecar-configuration-settings) -- [Setting Kubernetes control plane configuration](#kubernetes-control-plane-configuration) -- [Control plane configuration settings](#control-plane-configuration-settings) - -## Setting self hosted sidecar configuration -In self hosted mode the Dapr configuration is a configuration file, for example `config.yaml`. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration eg: `$HOME/.dapr/config.yaml` in Linux/MacOS and `%USERPROFILE%\.dapr\config.yaml` in Windows. 
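The self-hosted configuration flow described here can be sketched as follows — a hedged example that writes a minimal configuration file and shows, commented out, how it would be passed to `dapr run` (the invocation requires the Dapr CLI; paths and the app ID are illustrative).

```sh
# Hedged sketch: write a minimal self-hosted Dapr configuration and
# verify it. The dapr run line is commented because it requires the
# Dapr CLI; paths and app ID are illustrative.
cfg_dir=$(mktemp -d)
cat > "$cfg_dir/config.yaml" <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
spec:
  tracing:
    samplingRate: "1"
EOF
grep -c 'samplingRate' "$cfg_dir/config.yaml"   # prints 1
# dapr run --app-id myapp --config "$cfg_dir/config.yaml" -- node app.js
```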
- -A Dapr sidecar can also apply a configuration by using a ```--config``` flag to the file path with ```dapr run``` CLI command. - -## Setting Kubernetes sidecar configuration -In Kubernetes mode the Dapr configuration is a Configuration CRD, that is applied to the cluster. For example; - -```cli -kubectl apply -f myappconfig.yaml -``` - -You can use the Dapr CLI to list the Configuration CRDs - -```cli -dapr configurations -k -``` - -A Dapr sidecar can apply a specific configuration by using a ```dapr.io/config``` annotation. For example: - -```yml - annotations: - dapr.io/enabled: "true" - dapr.io/app-id: "nodeapp" - dapr.io/app-port: "3000" - dapr.io/config: "myappconfig" -``` -Note: There are more [Kubernetes annotations](../../howto/configure-k8s/README.md) available to configure the Dapr sidecar on activation by sidecar Injector system service. - -## Sidecar configuration settings - -The following configuration settings can be applied to Dapr application sidecars; -- [Tracing](#tracing) -- [Middleware](#middleware) -- [Scoping secrets for secret stores](#scoping-secrets-for-secret-stores) -- [Access control allow lists for service invocation](#access-control-allow-lists-for-service-invocation) -- [Example application sidecar configuration](#example-application-sidecar-configuration) - -### Tracing - -Tracing configuration turns on tracing for an application. - -The `tracing` section under the `Configuration` spec contains the following properties: - -```yml -tracing: - samplingRate: "1" -``` - -The following table lists the properties for tracing: - -Property | Type | Description ----- | ------- | ----------- -samplingRate | string | Set sampling rate for tracing to be enabled or disabled. - - -`samplingRate` is used to enable or disable the tracing. To disable the sampling rate , -set `samplingRate : "0"` in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. 
The sampling rate determines whether a trace span should be sampled or not based on value. `samplingRate : "1"` samples all traces. By default, the sampling rate is (0.0001) or 1 in 10,000 traces. - -See [Observability distributed tracing](../observability/traces.md) for more information - -### Middleware - -Middleware configuration set named Http pipeline middleware handlers -The `httpPipeline` section under the `Configuration` spec contains the following properties: - -```yml -httpPipeline: - handlers: - - name: oauth2 - type: middleware.http.oauth2 - - name: uppercase - type: middleware.http.uppercase -``` - -The following table lists the properties for HTTP handlers: - -Property | Type | Description ----- | ------- | ----------- -name | string | name of the middleware component -type | string | type of middleware component - -See [Middleware pipelines](../middleware/README.md) for more information - -### Scoping secrets for secret stores - -In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components](../../howto/components-scopes)), a named secret store component itself can be scoped to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to access only specific secrets. - -The `secrets` section under the `Configuration` spec contains the following properties: - -```yml -secrets: - scopes: - - storeName: kubernetes - defaultAccess: allow - allowedSecrets: ["redis-password"] - - storeName: localstore - defaultAccess: allow - deniedSecrets: ["redis-password"] -``` - -The following table lists the properties for secret scopes: - -Property | Type | Description ----- | ------- | ----------- -storeName | string | name of the secret store component. storeName must be unique within the list -defaultAccess | string | access modifier. 
Accepted values "allow" (default) or "deny" -allowedSecrets | list | list of secret keys that can be accessed -deniedSecrets | list | list of secret keys that cannot be accessed - -When an `allowedSecrets` list is present with at least one element, only those secrets defined in the list can be accessed by the application. - -See the [Scoping secrets](../../howto/secrets-scopes/README.md) HowTo for examples on how to scope secrets to an application. - -### Access Control allow lists for service invocation -Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application. -An access control policy is specified in configuration and be applied to Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applicatons to access to the called app. - -## Concepts -**TrustDomain** - A "trust domain" is a logical group to manage trust relationships. Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert. - -**App Identity** - Dapr generates a [SPIFFE](https://spiffe.io/) id for all applications which is attached in the TLS cert. The SPIFFE id is of the format: **spiffe://\/ns/\/\**. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. 
If all three of these match, then more specific policies are further matched. - -```yml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: appconfig -spec: - accessControl: - defaultAction: deny # global default action in case no other policy is matched - trustDomain: "public" # the called application is assigned a trust domain, used to generate the identity of this app in the TLS certificate - policies: - - appId: app1 # AppId of the calling app to allow/deny service invocation from - defaultAction: deny # app-level default action in case the app is found but no specific operation is matched - trustDomain: 'public' # trust domain of the calling app is matched against this value - namespace: "default" # namespace of the calling app is matched against this value - operations: - - name: /op1 # operation name on the called app - httpVerb: ['POST', 'GET'] # specific HTTP verbs; unused for gRPC invocation - action: deny # allow/deny access - - name: /op2/* # operation name with a postfix - httpVerb: ["*"] # wildcards can be used to match any HTTP verb - action: allow - - appId: app2 - defaultAction: allow - trustDomain: "public" - namespace: "default" - operations: - - name: /op3 - httpVerb: ['POST', 'PUT'] - action: deny -``` - -The following tables list the different properties for access control, policies and operations: - -Access Control -Property | Type | Description ---- | ------- | ----------- -defaultAction | string | Global default action when no other policy is matched -trustDomain | string | Trust domain assigned to the application. Default is "public".
-policies | list | Policies to determine what operations the calling app can do on the called app - -Policies -Property | Type | Description ---- | ------- | ----------- -appId | string | AppId of the calling app to allow/deny service invocation from -namespace | string | Namespace value that needs to be matched with the namespace of the calling app -trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public" -defaultAction | string | App-level default action in case the app is found but no specific operation is matched -operations | list | Operations that are allowed from the calling app - -Operations -Property | Type | Description ---- | ------- | ----------- -name | string | Path name of the operations allowed on the called app. The wildcard "\*" can be used under a path to match -httpVerb | list | List of specific HTTP verbs that can be used by the calling app. The wildcard "\*" can be used to match any HTTP verb. Unused for gRPC invocation -action | string | Access modifier. Accepted values "allow" (default) or "deny" - -See the [Allow lists for service invocation](../../howto/allowlists-serviceinvocation/README.md) HowTo for examples on how to set allow lists. - -### Example application sidecar configuration -The following yaml shows an example configuration file that can be applied to an application's Dapr sidecar.
- -```yml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: myappconfig - namespace: default -spec: - tracing: - samplingRate: "1" - httpPipeline: - handlers: - - name: oauth2 - type: middleware.http.oauth2 - secrets: - scopes: - - storeName: localstore - defaultAccess: allow - deniedSecrets: ["redis-password"] - accessControl: - defaultAction: deny - trustDomain: "public" - policies: - - appId: app1 - defaultAction: deny - trustDomain: 'public' - namespace: "default" - operations: - - name: /op1 - httpVerb: ['POST', 'GET'] - action: deny - - name: /op2/* - httpVerb: ["*"] - action: allow -``` - -## Setting Kubernetes control plane configuration -There is a single configuration file called `default` installed with the Dapr control plane system services that applies global settings. This is set up when Dapr is deployed to Kubernetes. - -## Control plane configuration settings -A Dapr control plane configuration can configure the following settings: - -Property | Type | Description ---- | ------- | ----------- -enabled | bool | Set mTLS to be enabled or disabled -allowedClockSkew | string | The extra time to allow for certificate expiry based on possible clock skew on a machine. Default is 15 minutes. -workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours - -See the [Mutual TLS](../../howto/configure-mtls/README.md) HowTo and [security concepts](../security/README.md) for more information.
- -### Example control plane configuration - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: default - namespace: default -spec: - mtls: - enabled: true - allowedClockSkew: 15m - workloadCertTTL: 24h -``` - -## References -* [Distributed tracing](../observability/traces.md) -* [Middleware pipelines](../middleware/README.md) -* [Security](../security/README.md) -* [How-To: Configuring the Dapr sidecar on Kubernetes](../../howto/configure-k8s/README.md) diff --git a/concepts/hosting/README.md b/concepts/hosting/README.md deleted file mode 100644 index 883da3971..000000000 --- a/concepts/hosting/README.md +++ /dev/null @@ -1,41 +0,0 @@ - -# Hosting environments - -Dapr can run on multiple hosting platforms. - -## Contents -- [Running Dapr on a local developer machine in self hosted mode](#running-dapr-on-a-local-developer-machine-in-self-hosted-mode) -- [Running Dapr in Kubernetes mode](#running-dapr-in-kubernetes-mode) - -## Running Dapr on a local developer machine in self hosted mode - -Dapr can be configured to run on your local developer machine in [self hosted mode](../../getting-started). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks. - -In self hosted mode, Redis is running locally in a container and is configured to serve as both the default component for state store and for pub/sub. A Zipkin container is also configured for diagnostics and tracing. After running `dapr init`, see the `$HOME/.dapr/components` directory (Mac/Linux) or `%USERPROFILE%\.dapr\components` on Windows. - -The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview](../actors). 
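For reference, the default Redis state store definition that `dapr init` writes into that components directory looks roughly like this — a sketch only, as the exact file name and field values vary between Dapr versions:

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "true"
```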
- - - You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr-enabled application on your local machine. - -## Running Dapr in Kubernetes mode - -Dapr can be configured to run on any [Kubernetes cluster](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes). In Kubernetes, the `dapr-sidecar-injector` and `dapr-operator` services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster. Additionally, the `dapr-sidecar-injector` injects the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` into **all** the containers in the pod so that user-defined applications can easily communicate with Dapr without hardcoding Dapr port values. - -The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service, read the [security overview](../security/README.md#dapr-to-dapr-communication). - - - -Deploying and running a Dapr-enabled application in your Kubernetes cluster is as simple as adding a few annotations to the deployment manifests. To give your service an `id` and `port` known to Dapr, turn on tracing through configuration, and launch the Dapr sidecar container, annotate your Kubernetes deployment like this: - -```yml - annotations: - dapr.io/enabled: "true" - dapr.io/app-id: "nodeapp" - dapr.io/app-port: "3000" - dapr.io/config: "tracing" -``` -You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. - -Read the [Kubernetes how-to topics](https://github.com/dapr/docs/tree/master/howto#kubernetes-configuration) for more information about setting up Kubernetes and Dapr.
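To see where these annotations sit in context, here is a minimal sketch of a Deployment manifest — the image and label names are illustrative. Note that the annotations go on the pod template, not on the Deployment itself, since the sidecar injector acts on pods:

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "nodeapp"
        dapr.io/app-port: "3000"
        dapr.io/config: "tracing"
    spec:
      containers:
      - name: node
        image: dapriosamples/hello-k8s-node
        ports:
        - containerPort: 3000
```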
diff --git a/concepts/observability/README.md b/concepts/observability/README.md deleted file mode 100644 index 5b48a278e..000000000 --- a/concepts/observability/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Observability - -Observability is a term from control theory. Observability means you can answer questions about what's happening on the inside of a system by observing the outside of the system, without having to ship new code to answer new questions. Observability is critical in production environments and services to debug, operate and monitor Dapr system services, components and user applications. - -The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas: - -* **[Metrics](./metrics.md)**: are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps. For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc. Dapr system services metrics show sidecar injection failures and the health of the system services, including CPU usage, number of actor placements made, etc. -* **[Logs](./logs.md)**: are records of events that occur and can be used to determine failures or other status. Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, ip address, etc. -* **[Distributed tracing](./traces.md)**: is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance.
Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices. - - You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, node, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies. - - Dapr uses the [W3C tracing context for distributed tracing](./W3C-traces.md). - - It is generally recommended to run Dapr in production with tracing. - -* **[Health](./health.md)**: Dapr provides a way for a hosting platform to determine its health using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness, and action taken accordingly. - -## OpenTelemetry -Dapr integrates with OpenTelemetry for metrics, logs and tracing. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises. - -## Monitoring tools - -The observability tools listed below are ones that have been tested to work with Dapr.
- -### Metrics - -* [How-To: Set up Prometheus and Grafana](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md) -* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md) - -### Logs - -* [How-To: Set up Fluentd, Elasticsearch and Kibana in Kubernetes](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) - -* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md) - -### Distributed Tracing - -* [How-To: Set up Zipkin](../../howto/diagnose-with-tracing/zipkin.md) -* [How-To: Set up Application Insights with Open Telemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md) diff --git a/contributing/README.md b/contributing/README.md deleted file mode 100644 index 3db8846f5..000000000 --- a/contributing/README.md +++ /dev/null @@ -1,42 +0,0 @@ -# Contributing to Dapr documentation - -High quality documentation is a core tenet of the Dapr project. Some contribution guidelines are below. - -## Style and tone - -- Use sentence-casing for headers. -- When referring to product names and technologies use capitalization (e.g. Kubernetes, Helm, Visual Studio Code, Azure Key Vault and of course Dapr). -- Check the spelling and grammar in your articles. -- Use a casual and friendly tone, as if you're talking to another person one-on-one. -- Use simple sentences. Easy-to-read sentences mean the reader can quickly use the guidance you share. -- Use “you” rather than “users” or “developers” to convey friendliness. -- Avoid the word “will”; write in the present tense rather than the future where possible. E.g. avoid “Next we will open the file and build”. Just say “Now open the file and build”. -- Avoid the word “we”. We is generally not meaningful. We who? -- Avoid the word “please”. This is not a letter asking for help; this is technical documentation. -- Assume a new developer audience. Some obvious steps can seem hard. E.g.
Now set an environment variable DAPR to a value X. It is better to give the reader the explicit command to do this, rather than having them figure it out. -- Where possible give the reader links to the next document/article for next steps or related topics (this can be a relevant "how-to", samples for reference or concepts). - -# Contributing to `Concepts` - -- Ensure the reader can understand why they should care about this feature. What problems does it help them solve? -- Ensure the doc references the spec for examples of using the API. -- Ensure the spec is consistent with the concept in terms of names, parameters and terminology. Update both the concept and the spec as needed. -- Avoid just repeating the spec. The idea is to give the reader more information and background on the capability so that they can try it out. Hence provide more information and implementation details where possible. -- Provide a link to the spec in the [Reference](/reference) section. -- Where possible reference a practical [How-To](/howto) doc. - -# Contributing to `How-Tos` - -See [this template](./howto-template.md) for `How To` articles. - -- `How To` articles are meant to provide step-by-step practical guidance to readers who wish to enable a feature, integrate a technology or use Dapr in a specific scenario. -- Location - `How To` articles should all be under the [howto](../howto) directory in relevant sub directories - make sure to see if the article you are contributing should be included in an existing sub directory. -- Sub directory naming - the directory name should be descriptive and, if referring to a specific component or concept, should begin with the relevant name. Example *pubsub-namespaces*. -- When adding a new article make sure to add a link in the main [How To README.md](../howto/README.md) as well as in other articles or samples that may be relevant. -- Do not assume the reader is using a specific environment unless the article itself is specific to an environment.
This includes OS (Windows/Linux/MacOS), deployment target (Kubernetes, IoT etc.) or programming language. If instructions vary between operating systems, provide guidance for all. -- `How To` articles should include the following sub sections: - - **Pre-requisites** - - **\<Step header\>** times X as needed - - **Cleanup** - - **Related links** -- Include code/sample/config snippets that can be easily copied and pasted. diff --git a/contributing/howto-template.md b/contributing/howto-template.md deleted file mode 100644 index d43cdb084..000000000 --- a/contributing/howto-template.md +++ /dev/null @@ -1,82 +0,0 @@ -# [Title] - ->Title should be descriptive of what this article helps achieve. Imagine it continues a sentence that starts with ***How to...*** so it should start with a word such as "Setup", "Configure", "Implement" etc. -> ->Does not need to include the word *Dapr* in it (as it is in the context of the Dapr docs repo) -> ->If it is specific to an environment (e.g. Kubernetes), should call out the environment. -> ->Capital letters only for the first word and product/technology names. -> ->Example: -># Set up Zipkin for distributed tracing in Kubernetes - -[Intro paragraph] - -> Intro paragraph should be a short description of what this article covers to set expectations of the reader. Include links if they can provide context and clarity to the reader. -> -> Example: -> -> This article provides guidance on how to enable Dapr distributed tracing capabilities on Kubernetes using [Zipkin](https://zipkin.io/) as a tracing broker. - -## Pre-requisites - ->List the required setup this article assumes with links on how to achieve each prerequisite.
-> ->Example: -> -> - [Setup Dapr on a Kubernetes cluster](https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md#installing-dapr-on-a-kubernetes-cluster) -> - [Install Helm](https://helm.sh/docs/intro/install/) -> - [Install Python](https://www.python.org/downloads/) version >= 3.7 - -## [Step header] - (multiple) - -> ->Name each step section in a clear descriptive way which allows readers to understand what this section covers. Example: **Create a configuration file** -> ->If using terminal commands, make sure to allow easy copy/paste by having each terminal command in a separate line with the markdown (indented as needed when appearing in bullets or numbered lists). If Windows/Linux/MacOS instructions differ, make sure to include instructions for each. -> ->Example (note the indentation of the commands): -> ->- Clone the Dapr samples repository: -> ```bash -> git clone https://github.com/dapr/samples.git -> ``` ->- Go to the hello world sample: -> ``` -> cd 1.hello-world -> ``` -> ->Add sections as needed for multiple steps. -> - -## Cleanup - -> -> If possible, provide steps that undo the steps above. These should bring the user environment back to the pre-requisites stage. If using terminal commands, make sure to allow easy copy/paste by having each terminal command in a separate line with the markdown (indented as needed when appearing in bullets or numbered lists). If Windows/Linux/MacOS instructions differ, make sure to include instructions for each. -> ->Example: -> ->1. Delete the deployments from the cluster -> ``` -> kubectl delete -f file.yaml -> ``` ->2. Delete the Helm chart from the cluster -> ``` -> helm del --purge dapr-kafka -> ``` -> - -## Related links - -> -> Reference other documentation that may be relevant to a user interested in this How To. Include any of the following: -> ->- Other **How To** articles in related topics or alternative technology integrations. ->- **Concept** articles that are relevant. 
->- **Reference** and **API** documentation that can be helpful ->- **Samples** that provide code reference relevant to this guidance. ->- Any other documentation link that may be a logical next step for a reader interested in this guidance (for example, if this is a how to on publishing to a pub/sub topic, a logical next step would be a how to which describes consuming from a topic). -> - - diff --git a/daprdocs/archetypes/default.md b/daprdocs/archetypes/default.md new file mode 100644 index 000000000..00e77bd79 --- /dev/null +++ b/daprdocs/archetypes/default.md @@ -0,0 +1,6 @@ +--- +title: "{{ replace .Name "-" " " | title }}" +date: {{ .Date }} +draft: true +--- + diff --git a/daprdocs/assets/icons/logo.png b/daprdocs/assets/icons/logo.png new file mode 100644 index 000000000..b77e0bba6 Binary files /dev/null and b/daprdocs/assets/icons/logo.png differ diff --git a/daprdocs/assets/icons/logo.svg b/daprdocs/assets/icons/logo.svg new file mode 100644 index 000000000..5f8ea5a2f --- /dev/null +++ b/daprdocs/assets/icons/logo.svg @@ -0,0 +1,15 @@ + + + + white on dark + Created with Sketch. + + + + + + + + + + \ No newline at end of file diff --git a/daprdocs/assets/scss/_code.scss b/daprdocs/assets/scss/_code.scss new file mode 100644 index 000000000..e986bca01 --- /dev/null +++ b/daprdocs/assets/scss/_code.scss @@ -0,0 +1,57 @@ +// Code formatting. + +.td-content { + // Highlighted code. 
+ .highlight { + @extend .card; + + margin: 2rem 0; + padding: 0; + + max-width: 80%; + + pre { + margin: 0; + padding: 1rem; + } + } + + // Inline code + p code, li > code, table code { + color: inherit; + padding: 0.2em 0.4em; + margin: 0; + font-size: 85%; + word-break: normal; + background-color: rgba($black, 0.05); + border-radius: $border-radius; + + br { + display: none; + } + } + + + // Code blocks + pre { + word-wrap: normal; + background-color: $gray-100; + padding: $spacer; + + + > code { + background-color: inherit !important; + padding: 0; + margin: 0; + font-size: 100%; + word-break: normal; + white-space: pre; + border: 0; + } + } + + pre.mermaid { + background-color: inherit; + font-size: 0; + } +} diff --git a/daprdocs/assets/scss/_content.scss b/daprdocs/assets/scss/_content.scss new file mode 100644 index 000000000..c5c697d23 --- /dev/null +++ b/daprdocs/assets/scss/_content.scss @@ -0,0 +1,85 @@ +// +// Style Markdown content +// + +.td-content { + order: 1; + + p, li, td { + font-weight: $font-weight-body-text; + } + + > h1 { + font-weight: $font-weight-bold; + margin-bottom: .5rem; + } + + > h2 { + font-weight: $font-weight-bold; + margin-bottom: 1rem; + } + + > h2:not(:first-child) { + margin-top: 3rem; + } + + > h2 + h3 { + margin-top: 1rem; + } + + > h3, > h4, > h5, > h6 { + margin-bottom: 1rem; + margin-top: 2rem; + font-weight: $font-weight-bold; + } + + img { + @extend .img-fluid; + } + + > table { + @extend .table-striped; + + @extend .table-responsive; + + @extend .table; + } + + > blockquote { + padding: 0 0 0 1rem; + margin-bottom: $spacer; + color: $gray-600; + border-left: 6px solid $secondary; + } + + > ul li, > ol li { + margin-bottom: .25rem; + } + + strong { + font-weight: $font-weight-bold; + } + + > pre, > .highlight, > .lead, > h1, > h2, > ul, > ol, > p, > blockquote, > dl dd, .footnotes, > .alert { + @extend .td-max-width-on-larger-screens; + } + + .alert:not(:first-child) { + margin-top: 2 * $spacer; + margin-bottom: 2 * 
$spacer; + } + + .lead { + margin-bottom: 1.5rem; + font-weight: $font-weight-bold; + } +} + +.td-title { + margin-top: 1rem; + margin-bottom: .5rem; + + @include media-breakpoint-up(sm) { + font-size: 3rem; + } +} \ No newline at end of file diff --git a/daprdocs/assets/scss/_nav.scss b/daprdocs/assets/scss/_nav.scss new file mode 100644 index 000000000..ae6c470e4 --- /dev/null +++ b/daprdocs/assets/scss/_nav.scss @@ -0,0 +1,102 @@ +// +// Main navbar +// + +.td-navbar-cover { + background: $primary; + + @include media-breakpoint-up(md) { + background: transparent !important; + + .nav-link { + text-shadow: 1px 1px 2px $dark; + } + + } + + &.navbar-bg-onscroll .nav-link { + text-shadow: none; + } +} + +.navbar-bg-onscroll { + background: $primary !important; + opacity: inherit; +} + +.td-navbar { + background: $primary; + min-height: 4rem; + margin: 0; + z-index: 32; + + @include media-breakpoint-up(md) { + position: fixed; + top: 0; + width: 100%; + } + + + .navbar-brand { + text-transform: none !important; + text-align: middle; + + .nav-link { + display: inline-block; + margin-right: -30px; + } + + svg { + display: inline-block; + margin-right: 5px; + margin-left: 5px; + margin-top: 0px; + margin-bottom: 0px; + height: 40px; + width: 40px; + } + } + + .nav-link { + text-transform: none !important; + font-weight: $font-weight-bold; + } + + .td-search-input { + border: none; + + @include placeholder { + color: $navbar-dark-color; + } + } + + .dropdown { + min-width: 100px; + } + + @include media-breakpoint-down(md) { + padding-right: .5rem; + padding-left: .75rem; + + .td-navbar-nav-scroll { + max-width: 100%; + height: 2.5rem; + margin-top: .25rem; + overflow: hidden; + font-size: .875rem; + + .nav-link { + padding-right: .25rem; + padding-left: 0; + } + + .navbar-nav { + padding-bottom: 2rem; + overflow-x: auto; + white-space: nowrap; + -webkit-overflow-scrolling: touch; + + } + } + } +} \ No newline at end of file diff --git 
a/daprdocs/assets/scss/_sidebar-toc.scss b/daprdocs/assets/scss/_sidebar-toc.scss new file mode 100644 index 000000000..0e34a4ad8 --- /dev/null +++ b/daprdocs/assets/scss/_sidebar-toc.scss @@ -0,0 +1,58 @@ +// +// Right side toc +// +.td-toc { + border-left: 1px solid $border-color; + + @supports (position: sticky) { + position: sticky; + top: 4rem; + height: calc(100vh - 10rem); + overflow-y: auto; + } + + order: 2; + padding-top: 2.75rem; + padding-bottom: 1.5rem; + vertical-align: top; + + a { + display: block; + font-weight: $font-weight-medium; + padding-bottom: .25rem; + } + + li { + list-style: none; + display: block; + font-size: 1.1rem; + } + + li li { + margin-left: 1.5rem; + font-size: 1.1rem; + } + + .td-page-meta { + a { + font-weight: $font-weight-medium; + } + } + + #TableOfContents { + // Hugo's ToC is a mouthful, this can be used to style the top level h2 entries. + > ul > li > ul > li > a {} + + a { + color: rgb(68, 68, 68); + &:hover { + color: $blue; + text-decoration: none; + } + } + } + + ul { + padding-left: 0; + } +} \ No newline at end of file diff --git a/daprdocs/assets/scss/_sidebar-tree.scss b/daprdocs/assets/scss/_sidebar-tree.scss new file mode 100644 index 000000000..a080eee13 --- /dev/null +++ b/daprdocs/assets/scss/_sidebar-tree.scss @@ -0,0 +1,147 @@ +// +// Left side navigation +// +.td-sidebar-nav { + padding-right: 0.5rem; + margin-right: -15px; + margin-left: -15px; + + @include media-breakpoint-up(md) { + @supports (position: sticky) { + max-height: calc(100vh - 10rem); + overflow-y: auto; + } + } + + + @include media-breakpoint-up(md) { + display: block !important; + } + + + &__section { + li { + list-style: none; + } + + ul { + padding: 0; + margin: 0; + } + + @include media-breakpoint-up(md) { + & > ul { + padding-left: .5rem; + } + } + + + padding-left: 0; + } + + &__section-title { + display: block; + font-weight: $font-weight-medium; + + .active { + font-weight: $font-weight-bold; + } + + a { + color: $gray-900; + } + } 
+ + .td-sidebar-link { + display: block; + padding-bottom: 0.375rem; + + &__page { + color: $gray-700; + font-weight: $font-weight-light; + } + } + + a { + &:hover { + color: $blue; + text-decoration: none; + } + + &.active { + font-weight: $font-weight-bold; + } + } + + .dropdown { + a { + color: $gray-700; + } + + .nav-link { + padding: 0 0 1rem; + } + } + + & > .td-sidebar-nav__section { + padding-top: .5rem; + padding-left: 0rem; + } +} + +.td-sidebar { + @include media-breakpoint-up(md) { + padding-top: 4rem; + background-color: $td-sidebar-bg-color; + padding-right: 1rem; + border-right: 1px solid $td-sidebar-border-color; + min-width: 18rem; + } + + + padding-bottom: 1rem; + + &__toggle { + line-height: 1; + color: $gray-900; + margin: 1rem; + } + + &__search { + padding: 1rem 15px; + margin-right: -15px; + margin-left: -15px; + } + + &__inner { + order: 0; + + @include media-breakpoint-up(md) { + @supports (position: sticky) { + position: sticky; + top: 4rem; + z-index: 10; + height: calc(100vh - 6rem); + } + } + + + @include media-breakpoint-up(xl) { + flex: 0 1 320px; + } + + + .td-search-box { + width: 100%; + } + } + + #content-desktop {display: block;} + #content-mobile {display: none;} + + @include media-breakpoint-down(md) { + + #content-desktop {display: none;} + #content-mobile {display: block;} + } +} \ No newline at end of file diff --git a/daprdocs/assets/scss/_variables_project.scss b/daprdocs/assets/scss/_variables_project.scss new file mode 100644 index 000000000..17463feed --- /dev/null +++ b/daprdocs/assets/scss/_variables_project.scss @@ -0,0 +1,12 @@ +$primary:#0D2192; +$secondary: #1F329A; + +.navbar-brand { + text-align: left; + + svg { + display: inline-block; + margin: 0 10px; + height: 60px; + } +} \ No newline at end of file diff --git a/daprdocs/config.toml b/daprdocs/config.toml new file mode 100644 index 000000000..a7bf2816a --- /dev/null +++ b/daprdocs/config.toml @@ -0,0 +1,119 @@ +# Site Configuration +baseURL = 
"https://docs.dapr.io/" +title = "Dapr Docs" +theme = "docsy" + +enableRobotsTXT = true +enableGitInfo = true + +# Language Configuration +languageCode = "en-us" +contentDir = "content/en" +defaultContentLanguage = "en" + +# Disable categories & tags +disableKinds = ["taxonomy", "term"] + +# Google Analytics +[services.googleAnalytics] +id = "UA-149338238-3" + +# Markdown Engine - Allow inline html +[markup] + [markup.goldmark] + [markup.goldmark.renderer] + unsafe = true + +# Top Nav Bar +[[menu.main]] + name = "Home" + weight = 40 + url = "https://dapr.io" +[[menu.main]] + name = "About" + weight = 50 + url = "https://dapr.io/#about" +[[menu.main]] + name = "Download" + weight = 60 + url = "https://dapr.io/#download" +[[menu.main]] + name = "Blog" + weight = 70 + url = "https://blog.dapr.io/blog" +[[menu.main]] + name = "Community" + weight = 80 + url = "https://dapr.io/#community" + +[params] +copyright = "Dapr" +#privacy_policy = "https://policies.google.com/privacy" + +# Algolia +algolia_docsearch = true +offlineSearch = false + +# GitHub Information +github_repo = "https://github.com/dapr/docs" +github_project_repo = "https://github.com/dapr/dapr" +github_subdir = "daprdocs" +github_branch = "website" + +# Versioning +version_menu = "Releases" +version = "v0.11" +archived_version = false + +[[params.versions]] + version = "v0.11" + url = "#" +[[params.versions]] + version = "v0.10" + url = "https://github.com/dapr/docs/tree/v0.10.0" +[[params.versions]] + version = "v0.9" + url = "https://github.com/dapr/docs/tree/v0.9.0" +[[params.versions]] + version = "v0.8" + url = "https://github.com/dapr/docs/tree/v0.8.0" + +# UI Customization +[params.ui] +sidebar_menu_compact = true +navbar_logo = true +sidebar_search_disable = true + +# Links +## End user relevant links. These will show up on left side of footer and in the community page if you have one. 
+[[params.links.user]] + name ="Twitter" + url = "https://twitter.com/daprdev" + icon = "fab fa-twitter" + desc = "Follow us on Twitter to get the latest updates!" +[[params.links.user]] + name = "YouTube" + url = "https://www.youtube.com/channel/UCtpSQ9BLB_3EXdWAUQYwnRA" + icon = "fab fa-youtube" + desc = "Community call recordings and other cool demos" +[[params.links.user]] + name = "Blog" + url = "https://blog.dapr.io/posts" + icon = "fas fa-blog" + desc = "Community call recordings and other cool demos" +## Developer relevant links. These will show up on right side of footer and in the community page if you have one. +[[params.links.developer]] + name = "GitHub" + url = "https://github.com/dapr/" + icon = "fab fa-github" + desc = "Development takes place here!" +[[params.links.developer]] + name = "Gitter" + url = "https://gitter.im/Dapr/community" + icon = "fab fa-gitter" + desc = "Conversations happen here!" +[[params.links.developer]] + name = "Zoom" + url = "https://aka.ms/dapr-community-call" + icon = "fas fa-video" + desc = "Meetings happen here!" \ No newline at end of file diff --git a/daprdocs/content/en/_index.md b/daprdocs/content/en/_index.md new file mode 100644 index 000000000..1915021a6 --- /dev/null +++ b/daprdocs/content/en/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +--- + +# Dapr Docs + +Welcome to the Dapr documentation site! 
\ No newline at end of file diff --git a/daprdocs/content/en/concepts/_index.md new file mode 100644 index 000000000..bcfdf00f6 --- /dev/null +++ b/daprdocs/content/en/concepts/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Dapr Concepts" +linkTitle: "Concepts" +weight: 10 +description: "Learn about Dapr including its main features and capabilities" +--- \ No newline at end of file diff --git a/daprdocs/content/en/concepts/building-blocks-concept.md b/daprdocs/content/en/concepts/building-blocks-concept.md new file mode 100644 index 000000000..6ac19979a --- /dev/null +++ b/daprdocs/content/en/concepts/building-blocks-concept.md @@ -0,0 +1,29 @@ +--- +type: docs +title: "Building blocks" +linkTitle: "Building blocks" +weight: 200 +description: "Modular best practices accessible over standard HTTP or gRPC APIs" +--- + +A [building block]({{< ref building-blocks >}}) is an HTTP or gRPC API that can be called from your code and uses one or more Dapr components. + +Building blocks address common challenges in building resilient microservices applications and codify best practices and patterns. Dapr consists of a set of building blocks, with extensibility to add new building blocks. + +The diagram below shows how building blocks expose a public API that is called from your code, using components to implement the building blocks' capability. + + + +The following are the building blocks provided by Dapr: + + + +| Building Block | Endpoint | Description | +|----------------|----------|-------------| +| [**Service-to-service invocation**]({{}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of HTTP or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
+| [**State management**]({{}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state API with pluggable state stores for persistence.
+| [**Publish and subscribe**]({{}}) | `/v1.0/publish` `/v1.0/subscribe` | Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
+| [**Resource bindings**]({{}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
+| [**Actors**]({{}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the Virtual Actor pattern, which provides a single-threaded programming model and where actors are garbage collected when not in use. See the [Actor Overview](./actors#understanding-actors)
+| [**Observability**]({{}}) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
+| [**Secrets**]({{}}) | `/v1.0/secrets` | Dapr offers a secrets building block API and integrates with secret stores such as Azure Key Vault and Kubernetes to store the secrets. Service code can call the secrets API to retrieve secrets out of the Dapr-supported secret stores.
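The endpoints in the table above all follow one URL scheme on the local Dapr sidecar. As a rough illustration, the helper below is hypothetical and only sketches the URL shape; the default HTTP port `3500` and the `/v1.0` prefix are taken from the table, and no request is actually made:

```python
# Hypothetical helper sketching the Dapr building-block URL scheme
# shown in the table above. Port 3500 is the sidecar's default HTTP
# port; only the URL shape is illustrated here.
DAPR_HTTP_PORT = 3500

def building_block_url(block: str, *segments: str) -> str:
    """Return the local sidecar URL for a building-block endpoint."""
    path = "/".join(segments)
    return f"http://localhost:{DAPR_HTTP_PORT}/v1.0/{block}/{path}"

# Invoke the "checkout" method on the service with app id "cart":
print(building_block_url("invoke", "cart", "method", "checkout"))
# Read the key "order-42" from the state store named "statestore":
print(building_block_url("state", "statestore", "order-42"))
```

In practice you would send these URLs with any HTTP client; the sidecar handles discovery, tracing and retries behind them.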
diff --git a/daprdocs/content/en/concepts/components-concept.md b/daprdocs/content/en/concepts/components-concept.md new file mode 100644 index 000000000..db1a2c0ba --- /dev/null +++ b/daprdocs/content/en/concepts/components-concept.md @@ -0,0 +1,32 @@ +--- +type: docs +title: "Components" +linkTitle: "Components" +weight: 300 +description: "Modular functionality used by building blocks and applications" +--- + +Dapr uses a modular design where functionality is delivered as a component. Each component has an interface definition. All of the components are pluggable so that you can swap out one component with the same interface for another. The [components contrib repo](https://github.com/dapr/components-contrib) is where you can contribute implementations for the component interfaces and extend Dapr's capabilities. + + A building block can use any combination of components. For example, the [actors]({{}}) building block and the [state management]({{}}) building block both use [state components](https://github.com/dapr/components-contrib/tree/master/state). As another example, the [Pub/Sub]({{}}) building block uses [Pub/Sub components](https://github.com/dapr/components-contrib/tree/master/pubsub). + + You can get a list of the components available in your current hosting environment using the `dapr components` CLI command.
+
+ The following are the component types provided by Dapr:
+
+* [Bindings](https://github.com/dapr/components-contrib/tree/master/bindings)
+* [Pub/sub](https://github.com/dapr/components-contrib/tree/master/pubsub)
+* [Middleware](https://github.com/dapr/components-contrib/tree/master/middleware)
+* [Service discovery name resolution](https://github.com/dapr/components-contrib/tree/master/nameresolution)
+* [Secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores)
+* [State](https://github.com/dapr/components-contrib/tree/master/state)
+* [Tracing exporters](https://github.com/dapr/components-contrib/tree/master/exporters)
+
+### Service invocation and service discovery components
+Service discovery components are used with the [service invocation]({{}}) building block to integrate with the hosting environment to provide service-to-service discovery. For example, the Kubernetes service discovery component integrates with the Kubernetes DNS service, and self-hosted mode uses mDNS.
+
+### Service invocation and middleware components
+Dapr allows custom [middleware]({{}}) to be plugged into the request processing pipeline. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client. The middleware components are used with the [service invocation]({{}}) building block.
+
+### Secret store components
+In Dapr, a [secret]({{}}) is any piece of private information that you want to guard against unwanted users. Secret stores, which hold these secrets, are Dapr components and can be used by any of the building blocks.
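Components are declared to Dapr with a small YAML manifest. A minimal sketch for a Redis-backed state store is shown below; the component name, host address and metadata values are illustrative, and the exact fields each component supports are listed in its own reference:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore          # name your application uses to address the component
spec:
  type: state.redis         # component interface + implementation
  metadata:
  - name: redisHost
    value: localhost:6379   # illustrative address of the backing Redis instance
```

Swapping the implementation, for example to another state store, means changing `spec.type` and its metadata while the application code stays the same.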
diff --git a/daprdocs/content/en/concepts/configuration-concept.md b/daprdocs/content/en/concepts/configuration-concept.md new file mode 100644 index 000000000..e9f3a407e --- /dev/null +++ b/daprdocs/content/en/concepts/configuration-concept.md @@ -0,0 +1,12 @@ +--- +type: docs +title: "Configuration" +linkTitle: "Configuration" +weight: 400 +description: "Change the behavior of Dapr sidecars or globally on Dapr system services" +--- + +Dapr configurations are settings that enable you to change the behavior of individual Dapr application sidecars or globally on the system services in the Dapr control plane. +An example of a per Dapr application sidecar setting is configuring trace settings. An example of a Dapr control plane setting is mutual TLS which is a global setting on the Sentry system service. + +Read [this page]({{}}) for a list of all configuration options. diff --git a/FAQ.md b/daprdocs/content/en/concepts/faq.md similarity index 94% rename from FAQ.md rename to daprdocs/content/en/concepts/faq.md index 21be4bda0..a494c2a01 100644 --- a/FAQ.md +++ b/daprdocs/content/en/concepts/faq.md @@ -1,65 +1,66 @@ -# FAQ - -- **[Networking and service meshes](#networking-and-service-meshes)** -- **[Performance Benchmarks](#performance-benchmarks)** -- **[Actors](#actors)** -- **[Developer language SDKs and frameworks](#developer-language-sdks-and-frameworks)** - -## Networking and service meshes - -### Understanding how Dapr works with service meshes - -Dapr is a distributed application runtime. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric versus service meshes being infrastructure-centric. - -Dapr can be used alongside any service mesh such as Istio and Linkerd. A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. 
A service mesh doesn’t introduce new functionality to an application. - -That is where Dapr comes in. Dapr is a language agnostic programming model built on http and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app’s runtime. Both service meshes and Dapr run as side-car services to your application, one giving network features and the other distributed application capabilities. - -Watch this [video](https://www.youtube.com/watch?v=xxU68ewRmz8&feature=youtu.be&t=140) on how Dapr and service meshes work together. - -### Understanding how Dapr interoperates with the service mesh interface (SMI) - -SMI is an abstraction layer that provides a common API surface across different service mesh technology. Dapr can leverage any service mesh technology including SMI. - -### Differences between Dapr, Istio and Linkerd - -Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on them. - -Istio is not a programming model and does not focus on application level features such as state management, pub-sub, bindings etc. That is where Dapr comes in. - -## Performance Benchmarks -The Dapr project is focused on performance due to the inherent discussion of Dapr being a sidecar to your application. This [performance benchmark video](https://youtu.be/4kV3SHs1j2k?t=783) discusses and demos the work that has been done so far. The performance benchmark data is planned to be published on a regular basis. You can also run the perf tests in your own environment to get perf numbers. 
- -## Actors - -### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors? - -The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments. -Also Dapr is about more than just actors. It provides you with a set of best practice building blocks to build into any microservices application. See [Dapr overview](https://github.com/dapr/docs/blob/master/overview/README.md). - -### Differences between Dapr from an actor framework - -Virtual actors capabilities are one of the building blocks that Dapr provides in its runtime. With Dapr because it is programming language agnostic with an http/gRPC API, the actors can be called from any language. This allows actors written in one language to invoke actors written in a different language. - -Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors///…`, for example `http://localhost:3500/v1.0/actors/myactor/50/method/getData` to call the `getData` method on the newly created `myactor` with id `50`. - -The Dapr runtime SDKs have language specific actor frameworks. The .NET SDK for example has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently .NET, Java and Python SDK have actor frameworks. - -## Developer language SDKs and frameworks - -### Does Dapr have any SDKs if I want to work with a particular programming language or framework? 
- -To make using Dapr more natural for different languages, it includes language specific SDKs for Go, Java, JavaScript, .NET, Python, RUST and C++. - -These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed, language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of their choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support. - -### What frameworks does Dapr integrated with? -Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services. - -Dapr is integrated with the following frameworks; - -- Logic Apps with Dapr [Workflows](https://github.com/dapr/workflows) -- Functions with Dapr [Azure Functions Extension](https://github.com/dapr/azure-functions-extension) -- Spring Boot Web apps in Java SDK -- ASP.NET Core in .NET SDK -- [Azure API Management](https://cloudblogs.microsoft.com/opensource/2020/09/22/announcing-dapr-integration-azure-api-management-service-apim/) +--- +type: docs +title: "Frequently asked questions and answers" +linkTitle: "FAQs" +weight: 1000 +description: "Common questions asked about Dapr" +--- + +## Networking and service meshes + +### Understanding how Dapr works with service meshes + +Dapr is a distributed application runtime. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric versus service meshes being infrastructure-centric. + +Dapr can be used alongside any service mesh such as Istio and Linkerd. 
A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. A service mesh doesn’t introduce new functionality to an application.
+
+That is where Dapr comes in. Dapr is a language-agnostic programming model built on HTTP and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app’s runtime. Both service meshes and Dapr run as side-car services to your application, one giving network features and the other distributed application capabilities.
+
+Watch this [video](https://www.youtube.com/watch?v=xxU68ewRmz8&feature=youtu.be&t=140) on how Dapr and service meshes work together.
+
+### Understanding how Dapr interoperates with the service mesh interface (SMI)
+
+SMI is an abstraction layer that provides a common API surface across different service mesh technologies. Dapr can leverage any service mesh technology including SMI.
+
+### Differences between Dapr, Istio and Linkerd
+
+Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer 7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on them.
+
+Istio is not a programming model and does not focus on application level features such as state management, pub-sub, bindings etc. That is where Dapr comes in.
+
+## Performance Benchmarks
+The Dapr project is focused on performance due to the inherent nature of Dapr being a sidecar to your application. This [performance benchmark video](https://youtu.be/4kV3SHs1j2k?t=783) discusses and demos the work that has been done so far.
The performance benchmark data is planned to be published on a regular basis. You can also run the perf tests in your own environment to get perf numbers.
+
+## Actors
+
+### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors?
+
+The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments.
+Also, Dapr is about more than just actors. It provides you with a set of best practice building blocks to build into any microservices application. See [Dapr overview](https://github.com/dapr/docs/blob/master/overview/README.md).
+
+### Differences between Dapr and an actor framework
+
+Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. Because Dapr is programming-language agnostic with an HTTP/gRPC API, actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.
+
+Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors///…`, for example `http://localhost:3500/v1.0/actors/myactor/50/method/getData` to call the `getData` method on the newly created `myactor` with id `50`.
+
+The Dapr runtime SDKs have language-specific actor frameworks. The .NET SDK, for example, has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently the .NET, Java and Python SDKs have actor frameworks.
+
+## Developer language SDKs and frameworks
+
+### Does Dapr have any SDKs if I want to work with a particular programming language or framework?
+
+To make using Dapr more natural for different languages, it includes language specific SDKs for Go, Java, JavaScript, .NET, Python, Rust and C++.
+
+These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
+
+### What frameworks does Dapr integrate with?
+Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.
+
+Dapr is integrated with the following frameworks:
+
+- Logic Apps with Dapr [Workflows](https://github.com/dapr/workflows)
+- Functions with Dapr [Azure Functions Extension](https://github.com/dapr/azure-functions-extension)
+- Spring Boot Web apps in Java SDK
+- ASP.NET Core in .NET SDK
+- [Azure API Management](https://cloudblogs.microsoft.com/opensource/2020/09/22/announcing-dapr-integration-azure-api-management-service-apim/) diff --git a/concepts/middleware/README.md b/daprdocs/content/en/concepts/middleware-concept.md similarity index 57% rename from concepts/middleware/README.md rename to daprdocs/content/en/concepts/middleware-concept.md index ff72de9c1..648f77114 100644 --- a/concepts/middleware/README.md +++ b/daprdocs/content/en/concepts/middleware-concept.md @@ -1,16 +1,22 @@ -# Middleware pipeline +--- +type: docs +title: "Middleware pipelines" +linkTitle: "Middleware" +weight: 400 +description: "Custom processing pipelines of chained middleware components" +--- Dapr allows
custom processing pipelines to be defined by chaining a series of middleware components. A request goes through all defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram. -![Middleware](../../images/middleware.png) + ## Customize processing pipeline -When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware](../observability/traces.md) and CORS middleware. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others. +When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware]({{< ref tracing.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others. > **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test/verify if your custom pipeline is in place. -The following configuration example defines a custom pipeline that uses a [OAuth 2.0 middleware](../../howto/authorization-with-oauth/README.md) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code. 
+The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware]({{< ref oauth.md >}}) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.

```yaml
apiVersion: dapr.io/v1alpha1 @@ -51,8 +57,10 @@ func GetHandler(metadata Metadata) fasthttp.RequestHandler { } ``` -## Submitting middleware components -Your middleware component can be contributed to the https://github.com/dapr/components-contrib repository, under the */middleware* folder. Then submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go to register your middleware using the **Register** method. +## Adding new middleware components +Your middleware component can be contributed to the [components-contrib repository](https://github.com/dapr/components-contrib/tree/master/middleware). +
+Then submit another pull request against the [Dapr runtime repository](https://github.com/dapr/dapr) to register the new middleware type. You'll need to modify the **Load()** method in [registry.go](https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go) to register your middleware using the **Register** method.
+
+## Next steps
+* [How-To: Configure API authorization with OAuth]({{< ref oauth.md >}}) diff --git a/daprdocs/content/en/concepts/observability-concept.md b/daprdocs/content/en/concepts/observability-concept.md new file mode 100644 index 000000000..71df0de23 --- /dev/null +++ b/daprdocs/content/en/concepts/observability-concept.md @@ -0,0 +1,63 @@ +--- +type: docs +title: "Observability" +linkTitle: "Observability" +weight: 500 +description: > + How to monitor applications through tracing, metrics, logs and health +--- +
+Observability is a term from control theory. Observability means you can answer questions about what's happening on the inside of a system by observing the outside of the system, without having to ship new code to answer new questions. Observability is critical in production environments and services to debug, operate and monitor Dapr system services, components and user applications.
+
+The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas:
+
+## Distributed tracing
+
+[Distributed tracing]({{}}) is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.
+
+You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, node, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.
+
+Dapr uses the [W3C tracing context for distributed tracing]({{}}).
+
+It is generally recommended to run Dapr in production with tracing.
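Tracing is enabled through the Dapr Configuration resource described in the Configuration concept. A minimal sketch is shown below; the resource name is illustrative, `samplingRate: "1"` records every trace, and field names may vary between Dapr versions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing           # illustrative name
spec:
  tracing:
    samplingRate: "1"     # "1" samples every trace; lower values sample less
```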
+
+### Open Telemetry
+
+Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for tracing, metrics and logs. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.
+
+#### Next steps
+
+- [How-To: Set up Zipkin]({{< ref zipkin.md >}})
+- [How-To: Set up Application Insights with Open Telemetry Collector]({{< ref open-telemetry-collector.md >}})
+
+## Metrics
+
+[Metrics]({{}}) are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps.
+
+For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc.
+
+Dapr system services metrics show sidecar injection failures, health of the system services including CPU usage, number of actor placements made, etc.
+
+#### Next steps
+
+- [How-To: Set up Prometheus and Grafana]({{< ref prometheus.md >}})
+- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
+
+## Logs
+
+[Logs]({{}}) are records of events that occur and can be used to determine failures or other status.
+
+Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
+
+#### Next steps
+
+- [How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes]({{< ref fluentd.md >}})
+- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
+
+## Health
+
+Dapr provides a way for a hosting platform to determine its [Health]({{}}) using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness and action taken accordingly.
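As a sketch of how a hosting platform might consume this endpoint, a Kubernetes liveness probe could target the sidecar's health API. The `/v1.0/healthz` path and port `3500` are the sidecar's default HTTP health endpoint; the probe timings below are illustrative:

```yaml
livenessProbe:
  httpGet:
    path: /v1.0/healthz   # Dapr sidecar health endpoint
    port: 3500            # default Dapr HTTP port
  initialDelaySeconds: 5  # illustrative timings
  periodSeconds: 10
```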
+ +#### Next steps + +- [Health API]({{< ref health_api.md >}}) \ No newline at end of file diff --git a/overview/README.md b/daprdocs/content/en/concepts/overview.md similarity index 59% rename from overview/README.md rename to daprdocs/content/en/concepts/overview.md index df5003f7f..46e9cad18 100644 --- a/overview/README.md +++ b/daprdocs/content/en/concepts/overview.md @@ -1,22 +1,17 @@ - -# Dapr overview +--- +type: docs +title: "Overview" +linkTitle: "Overview" +weight: 100 +description: > + Introduction to the Distributed Application Runtime +--- Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks. -## Contents: - -- [Any language, any framework, anywhere](#any-language-any-framework-anywhere) -- [Microservice building blocks for cloud and edge](#microservice-building-blocks-for-cloud-and-edge) -- [Sidecar architecture](#sidecar-architecture) -- [Developer language SDKs and frameworks](#developer-language-sdks-and-frameworks) -- [Designed for operations](#designed-for-operations) -- [Run anywhere](#Run-anywhere) - - [Running Dapr on a local developer machine in self hosted mode](#running-dapr-on-a-local-developer-machine-in-self-hosted-mode) - - [Running Dapr in Kubernetes mode](#running-dapr-in-kubernetes-mode) - ## Any language, any framework, anywhere - + Today we are experiencing a wave of cloud adoption. Developers are comfortable with web + database application architectures (for example classic 3-tier designs) but not with microservice application architectures which are inherently distributed. It’s hard to become a distributed systems expert, nor should you have to. 
Developers want to focus on business logic, while leaning on the platforms to imbue their applications with scale, resiliency, maintainability, elasticity and the other attributes of cloud-native architectures. @@ -28,7 +23,7 @@ Using Dapr you can easily build microservice applications using any language, an ## Microservice building blocks for cloud and edge - + There are many considerations when architecting microservices applications. Dapr provides best practices for common capabilities when building microservice applications that developers can use in a standard way and deploy to any environment. It does this by providing distributed system building blocks. @@ -36,13 +31,13 @@ Each of these building blocks is independent, meaning that you can use one, some | Building Block | Description | |----------------|-------------| -| **[Service Invocation](../concepts/service-invocation)** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment. -| **[State Management](../concepts/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others. -| **[Publish and Subscribe Messaging](../concepts/publish-subscribe-messaging)** | Publishing events and subscribing to topics | tween services enables event-driven architectures to simplify horizontal scalability and make them | silient to failure. Dapr provides at least once message delivery guarantee. -| **[Resource Bindings](../concepts/bindings)** | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc. 
-| **[Actors](../concepts/actors)** | A pattern for stateful and stateless objects that make concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime including concurrency, state, life-cycle management for actor activation/deactivation and timers and reminders to wake-up actors. -| **[Observability](../concepts/observability)** | Dapr emit metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools. -| **[Secrets](../concepts/secrets)** | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code. +| [**Service-to-service invocation**]({{}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment. +| [**State management**]({{}}) | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others. +| [**Publish and subscribe**]({{}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee. +| [**Resource bindings**]({{}}) | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
+| [**Actors**]({{}}) | A pattern for stateful and stateless objects that make concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime including concurrency, state, life-cycle management for actor activation/deactivation and timers and reminders to wake-up actors. +| [**Observability**]({{}}) | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools. +| [**Secrets**]({{}}) | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code. ## Sidecar architecture @@ -55,13 +50,13 @@ Dapr can be hosted in multiple environments, including self hosted for local dev In self hosted mode Dapr runs as a separate side-car process which your service code can call via HTTP or gRPC. In self hosted mode, you can also deploy Dapr onto a set of VMs. - + ### Kubernetes hosted In container hosting environments such as Kubernetes, Dapr runs as a side-car container with the application container in the same pod. - + ## Developer language SDKs and frameworks @@ -74,9 +69,10 @@ To make using Dapr more natural for different languages, it also includes langua - **[Java SDK](https://github.com/dapr/java-sdk)** - **[Javascript SDK](https://github.com/dapr/js-sdk)** - **[Python SDK](https://github.com/dapr/python-sdk)** +- **[Rust SDK](https://github.com/dapr/rust-sdk)** - **[.NET SDK](https://github.com/dapr/dotnet-sdk)** -> Note: Dapr is language agnostic and provides a [RESTful HTTP API](../reference/api/README.md) in addition to the protobuf clients. +> Note: Dapr is language agnostic and provides a [RESTful HTTP API]({{< ref api >}}) in addition to the protobuf clients.
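In the Kubernetes-hosted mode described above, the sidecar is typically injected into the pod based on annotations on the deployment's pod template. A hedged sketch follows; annotation names have varied across Dapr releases and the app id and port are illustrative:

```yaml
template:
  metadata:
    annotations:
      dapr.io/enabled: "true"   # ask the injector to add the Dapr sidecar
      dapr.io/app-id: "myapp"   # illustrative app id
      dapr.io/app-port: "3000"  # illustrative port your app listens on
```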
### Developer frameworks Dapr can be used from any developer framework. Here are some that have been integrated with Dapr. @@ -86,10 +82,10 @@ Dapr can be used from any developer framework. Here are some that have been int In the Dapr [Java SDK](https://github.com/dapr/java-sdk) you can find [Spring Boot](https://spring.io/) integration. -Dapr integrates easily with Python [Flask](https://pypi.org/project/Flask/) and node [Express](http://expressjs.com/), which you can find in the [getting started samples](https://github.com/dapr/docs/tree/master/getting-started) +Dapr integrates easily with Python [Flask](https://pypi.org/project/Flask/) and node [Express](http://expressjs.com/). See examples in the [Dapr quickstarts](https://github.com/dapr/quickstarts). #### Actors -Dapr SDKs support for [virtual actors](../concepts/actors) which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications. +Dapr SDKs support [virtual actors]({{< ref actors >}}), which are stateful objects that make concurrency simple, have method and state encapsulation, and are designed for scalable, distributed applications. #### Azure Functions Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information read @@ -100,26 +96,26 @@ To enable developers to easily build workflow applications that use Dapr’s cap [cloud-native workflows using Dapr and Logic Apps](https://cloudblogs.microsoft.com/opensource/2020/05/26/announcing-cloud-native-workflows-dapr-logic-apps/) and visit the [Dapr workflow](https://github.com/dapr/workflows) repo to try out the samples. ## Designed for Operations -Dapr is designed for operations.
The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars. +Dapr is designed for [operations](/operations/). The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars. -The [monitoring dashboard](../reference/dashboard/README.md) provides deeper visibility into the Dapr system services and side-cars and the [observability capabilities](../concepts/observability) of Dapr provide insights into your application such as tracing and metrics. +The [monitoring tools support](/operations/monitoring/) provides deeper visibility into the Dapr system services and side-cars and the [observability capabilities]({{}}) of Dapr provide insights into your application such as tracing and metrics. ## Run anywhere ### Running Dapr on a local developer machine in self hosted mode -Dapr can be configured to run on your local developer machine in [self hosted mode](../concepts/hosting/). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks. +Dapr can be configured to run on your local developer machine in [self-hosted mode]({{< ref self-hosted >}}). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks. -You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples](../getting-started). +You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}). 
- + ### Running Dapr in Kubernetes mode -Dapr can be configured to run on any [Kubernetes cluster](../concepts/hosting/). In Kubernetes the `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster. +Dapr can be configured to run on any [Kubernetes cluster]({{< ref kubernetes >}}). In Kubernetes the `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster. -The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview](../concepts/security/README.md#dapr-to-dapr-communication) +The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}}) - + -Deploying and running a Dapr enabled application into your Kubernetes cluster is a simple as adding a few annotations to the deployment schemes. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample. Try this out with the [Kubernetes getting started sample](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) +Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample.
Try this out with the [Kubernetes quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes). diff --git a/concepts/security/README.md b/daprdocs/content/en/concepts/security-concept.md similarity index 85% rename from concepts/security/README.md rename to daprdocs/content/en/concepts/security-concept.md index 0a495cb1d..328a57ebe 100644 --- a/concepts/security/README.md +++ b/daprdocs/content/en/concepts/security-concept.md @@ -1,18 +1,14 @@ -# Security +--- +type: docs +title: "Security" +linkTitle: "Security" +weight: 600 +description: > + How Dapr is designed with security in mind +--- This article addresses multiple security considerations when using Dapr in a distributed application including: -- [Sidecar-to-App Communication](#sidecar-to-app-communication) -- [Sidecar-to-Sidecar Communication](#sidecar-to-sidecar-communication) -- [Sidecar-to-system-services-communication](#sidecar-to-system-services-communication) -- [Component namespace scopes and secrets](#component-namespace-scopes-and-secrets) -- [Network Security](#network-security) -- [Bindings Security](#bindings-security) -- [State Store Security](#state-store-security) -- [Management Security](#management-security) -- [Threat Model](#threat-model) -- [Security Audit June 2020](#security-audit-june-2020) - Several of the areas above are addressed through encryption of data in transit. One of the security mechanisms that Dapr employs for encrypting data in transit is [mutual authentication TLS](https://en.wikipedia.org/wiki/Mutual_authentication) or mTLS. 
mTLS offers a few key features for network traffic inside your application: - Two way authentication - the client proving its identity to the server, and vice-versa @@ -22,11 +18,11 @@ Mutual TLS is useful in almost all scenarios, but especially so for systems subj Dapr enables mTLS and all the features described in this document in your application with little to no extra code or complex configuration inside your production systems -## Sidecar-to-App communication +## Sidecar-to-app communication The Dapr sidecar runs close to the application through **localhost**, and is recommended to run under the same network boundary as the app. While many cloud-native systems today consider the pod level (on Kubernetes, for example) as a trusted security boundary, Dapr provides users with API level authentication using tokens. This feature guarantees that even on localhost, only an authenticated caller may call into Dapr. -## Sidecar-to-Sidecar communication +## Sidecar-to-sidecar communication Dapr includes an "on by default", automatic mutual TLS that provides in-transit encryption for traffic between Dapr sidecars. To achieve this, Dapr leverages a system service named `Sentry` which acts as a Certificate Authority (CA) and signs workload (app) certificate requests originating from the Dapr sidecar. @@ -46,17 +42,17 @@ Dapr also supports strong identities when deployed on Kubernetes, relying on a p By default, a workload cert is valid for 24 hours and the clock skew is set to 15 minutes. Mutual TLS can be turned off/on by editing the default configuration that is deployed with Dapr via the `spec.mtls.enabled` field. -This can be done for both Kubernetes and self hosted modes. Details for how to do this can be found [here](../../howto/configure-mtls). +This can be done for both Kubernetes and self hosted modes. Details for how to do this can be found [here]({{< ref mtls.md >}}).
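For illustration, the mTLS section of that default configuration might look like the following sketch (the field path follows the `spec.mtls.enabled` setting described above; the TTL and clock-skew values mirror the defaults mentioned in the text, and the resource name is an assumption):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: true            # set to false to turn mutual TLS off
    workloadCertTTL: "24h"   # default workload certificate validity
    allowedClockSkew: "15m"  # default allowed clock skew
```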
### mTLS self hosted The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service as stored in a file -![mTLS self hosted](../../images/security-mTLS-sentry-selfhosted.png) + ### mTLS in Kubernetes The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service as stored as a Kubernetes secret -![mTLS Kubernetes](../../images/security-mTLS-sentry-kubernetes.png) + ## Sidecar to system services communication @@ -73,15 +69,15 @@ When the Dapr sidecar initializes, it authenticates with the system pods using t ### mTLS to system services in Kubernetes The diagram below shows secure communication between the Dapr sidecar and the Dapr Sentry (Certificate Authority), Placement (actor placement) and the Kubernetes Operator system services -![mTLS System Services on Kubernetes](../../images/security-mTLS-dapr-system-services.png) + ## Component namespace scopes and secrets -Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace. See the [components scope topic](../../howto/components-scopes) for more details. +Dapr components are namespaced. That means a Dapr runtime sidecar instance can only access the components that have been deployed to the same namespace. See the [components scope documentation]({{}}) for more details. -Dapr components uses Dapr's built-in secret management capability to manage secrets. See the [secret topic](../secrets/README.md) for more details. +Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secret store overview]({{}}) for more details.
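As an illustrative sketch, a component manifest that combines a namespace, a secret reference and application scopes might look like this (the Redis store choice, names and secret keys are hypothetical):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production       # only sidecars in this namespace can load it
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master:6379
  - name: redisPassword
    secretKeyRef:             # resolved from the configured secret store
      name: redis-secret
      key: redis-password
scopes:
- checkout-app                # only these app ids may use the component
- orders-app
```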
-In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components.For more information about application level scoping, see [here](../../howto/components-scopes#application-access-to-components-with-scopes) +In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application level scoping, see [here]({{}}). ## Network security @@ -107,12 +103,14 @@ When deploying on Kubernetes, you can use regular [Kubernetes RBAC]( https://kub When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management. -## Threat Model +## Threat model Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified, enumerated, and mitigations can be prioritized. The Dapr threat model is below. -![Threat Model](../../images/threat_model.png) +Dapr threat model -## Security Audit June 2020 +## Security audit + +### June 2020 In June 2020, Dapr underwent a security audit from Cure53, a CNCF approved cybersecurity firm. The test focused on the following: @@ -129,7 +127,7 @@ The test focused on the following: * DoS attacks * Penetration testing -The full report can be found [here](./audits/DAP-01-report.pdf). +The full report can be found [here](/docs/Dapr-july-2020-security-audit-report.pdf). Two issues, one critical and one high, were fixed during the test. As of July 21st 2020, Dapr has 0 criticals, 2 highs, 2 mediums, 1 low, 1 info.
diff --git a/daprdocs/content/en/contributing/_index.md b/daprdocs/content/en/contributing/_index.md new file mode 100644 index 000000000..90e65ff01 --- /dev/null +++ b/daprdocs/content/en/contributing/_index.md @@ -0,0 +1,8 @@ +--- +type: docs +title: "Contributing to Dapr" +linkTitle: "Contributing" +weight: 100 +description: > + How to contribute to the Dapr project +--- diff --git a/daprdocs/content/en/contributing/contributing-docs.md b/daprdocs/content/en/contributing/contributing-docs.md new file mode 100644 index 000000000..b4969ae22 --- /dev/null +++ b/daprdocs/content/en/contributing/contributing-docs.md @@ -0,0 +1,214 @@ +--- +type: docs +title: "Docs contributions" +linkTitle: "Docs" +weight: 2000 +description: > + Guidelines for contributing to the Dapr Docs +--- + +This guide contains information about contributions to the [Dapr docs repository](https://github.com/dapr/docs). Please review the guidelines below before making a contribution to the Dapr docs. This guide assumes you have already reviewed the [general guidance]({{< ref contributing-overview>}}) which applies to any Dapr project contributions. + +Dapr docs are published to [docs.dapr.io](https://docs.dapr.io). Therefore, any contribution must ensure docs can be compiled and published correctly. + +## Prerequisites +The Dapr docs are built using [Hugo](https://gohugo.io/) with the [Docsy](https://docsy.dev) theme. To verify docs are built correctly before submitting a contribution, you should set up your local environment to build and display the docs locally. + +Fork the [docs repository](https://github.com/dapr/docs) to work on any changes. + +Follow the instructions in the repository [README.md](https://github.com/dapr/docs/blob/master/README.md#environment-setup) to install Hugo locally and build the docs website. + +## Style and tone +These conventions should be followed throughout all Dapr documentation to ensure a consistent experience across all docs.
+ +- **Casing** - Use upper case only at the start of a sentence or for proper nouns including names of technologies (Dapr, Redis, Kubernetes etc.). +- **Headers and titles** - Headers and titles must be descriptive and clear; use sentence casing, i.e. apply the above casing guidance to headers and titles too. +- **Use simple sentences** - Easy-to-read sentences mean the reader can quickly use the guidance you share. +- **Avoid the first person** - Use 2nd person "you", "your" instead of "I", "we", "our". +- **Assume a new developer audience** - Some obvious steps can seem hard. E.g. "Now set an environment variable Dapr to a value X." It is better to give the reader the explicit command to do this, rather than having them figure this out. + +## Contributing a new docs page +- Make sure the documentation you are writing is in the correct place in the hierarchy. +- Avoid creating new sections where possible, there is a good chance a proper place in the docs hierarchy already exists. +- Make sure to include a complete [Hugo front-matter](front-matter). + +### Contributing a new concept doc +- Ensure the reader can understand why they should care about this feature. What problems does it help them solve? +- Ensure the doc references the spec for examples of using the API. +- Ensure the spec is consistent with the concept in terms of names, parameters and terminology. Update both the concept and the spec as needed. +- Avoid just repeating the spec. The idea is to give the reader more information and background on the capability so that they can try this out. Hence provide more information and implementation details where possible. +- Provide a link to the spec in the [Reference]({{}}) section. +- Where possible reference a practical How-To doc. + +### Contributing a new How-To guide + +- `How To` articles are meant to provide step-by-step practical guidance to readers who wish to enable a feature, integrate a technology or use Dapr in a specific scenario.
+- Sub directory naming - the directory name should be descriptive and if referring to a specific component or concept should begin with the relevant name. Example *pubsub-namespaces*. +- Do not assume the reader is using a specific environment unless the article itself is specific to an environment. This includes OS (Windows/Linux/MacOS), deployment target (Kubernetes, IoT etc.) or programming language. If instructions vary between operating systems, provide guidance for all. +- Include code/sample/config snippets that can be easily copied and pasted. +- At the end of the article, provide the reader with related links and next steps (this can be other relevant "how-to", samples for reference or related concepts). + +## Requirements for docs.dapr.io +Any contribution must ensure not to break the website build. The way Hugo builds the website requires following the below guidance. + +### Files and folder names +File and folder names should be globally unique. + - `\service-invocation` + - `service-invocation-overview.md` + +### Front-matter +[Front-matter](https://www.docsy.dev/docs/adding-content/content/#page-frontmatter) is what takes regular markdown files and upgrades them into Hugo compatible docs for rendering into the nav bars and ToCs. + +Every page needs a section at the top of the document like this: +```yaml +--- +type: docs +title: "TITLE FOR THE PAGE" +linkTitle: "SHORT TITLE FOR THE NAV BAR" +weight: (number) +description: "1+ SENTENCES DESCRIBING THE ARTICLE" +--- +``` + +#### Example +```yaml +--- +type: docs +title: "Service invocation overview" +linkTitle: "Overview" +weight: 10 +description: "A quick overview of Dapr service invocation and how to use it to invoke services within your application" +--- +``` + +> Weight determines the order of the pages in the left sidebar, with 0 being the top-most. + +Front-matter should be completed with all fields including type, title, linkTitle, weight, and description.
+- `title` should be 1 sentence, no period at the end +- `linkTitle` should be 1-3 words, with the exception of How-to at the front. +- `description` should be 1-2 sentences on what the reader will learn, accomplish, or do in this doc. + +As per the [styling conventions](#styling-conventions), titles should only capitalize the first word and proper nouns, with the exception of "How-To:" + - "Getting started with Dapr service invocation" + - "How-To: Setup a local Redis instance" + +### Referencing other pages +Hugo `ref` and `relref` [shortcodes](https://gohugo.io/content-management/cross-references/) are used to reference other pages and sections. It also allows the build to break if a page is incorrectly renamed or removed. + +This shortcode, written inline with the rest of the markdown page, will link to the _index.md of the section/folder name: +```md +{{}} +``` + +This shortcode will link to a specific page: +```md +{{}} +``` + +> Note that all pages and folders need to have globally unique names in order for the ref shortcode to work properly. + +### Images +The markdown spec used by Docsy and Hugo does not give an option to resize images using markdown notation. Instead, raw HTML is used. + +Begin by placing images under `/daprdocs/static/images` with the naming convention of `[page-name]-[image-name].[png|jpg|svg]`. + +Then link to the image using: +```md +Description of image +``` + +>Don't forget to set the alt attribute to keep the docs readable for our visually impaired users. + +#### Example + +This HTML will display the `dapr-overview.png` image on the `overview.md` page: +```md +Overview diagram of Dapr and its building blocks +``` + +### Tabbed Content +Tabs are made possible through [Hugo shortcodes](https://gohugo.io/content-management/shortcodes/).
+ +The overall format is: +``` +{{}} + +{{% codetab %}} +[Content for Tab1] +{{% /codetab %}} + +{{% codetab %}} +[Content for Tab2] +{{% /codetab %}} + +{{< /tabs */>}} +``` + +All content you author will be rendered to Markdown, so you can include images, code blocks, YouTube videos, and more. + +#### Example +```` +{{}} + +{{% codetab %}} +```powershell +powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex" +``` +{{% /codetab %}} + +{{% codetab %}} +```bash +wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash +``` +{{% /codetab %}} + +{{% codetab %}} +```bash +brew install dapr/tap/dapr-cli +``` +{{% /codetab %}} + +{{< /tabs */>}} +```` + +This example will render to this: + +{{< tabs Windows Linux MacOS>}} + +{{% codetab %}} +```powershell +powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex" +``` +{{% /codetab %}} + +{{% codetab %}} +```bash +wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash +``` +{{% /codetab %}} + +{{% codetab %}} +```bash +brew install dapr/tap/dapr-cli +``` +{{% /codetab %}} + +{{< /tabs >}} + +### YouTube Videos +Hugo can automatically embed YouTube videos using a shortcode: +``` +{{}} +``` + +#### Example + +Given the video https://youtu.be/dQw4w9WgXcQ + +The shortcode would be: +``` +{{}} +``` + +### References +- [Docsy authoring guide](https://www.docsy.dev/docs/adding-content/) diff --git a/daprdocs/content/en/contributing/contributing-overview.md b/daprdocs/content/en/contributing/contributing-overview.md new file mode 100644 index 000000000..cc9c81cd2 --- /dev/null +++ b/daprdocs/content/en/contributing/contributing-overview.md @@ -0,0 +1,69 @@ +--- +type: docs +title: "Contribution overview" +linkTitle: "Overview" +weight: 1000 +description: > + General guidance for contributing to any of the Dapr project repositories +--- + +Thank you 
for your interest in Dapr! +This document provides the guidelines for how to contribute to the [Dapr project](https://github.com/dapr) through issues and pull-requests. Contributions can also come in additional ways such as engaging with the community in community calls, commenting on issues or pull requests and more. + +See the [Dapr community repository](https://github.com/dapr/community) for more information on community engagement and community membership. + +> If you are looking to contribute to the Dapr docs, please also see the specific guidelines for [docs contributions]({{< ref contributing-docs >}}). + +## Issues + +### Issue types + +In most Dapr repositories there are usually 4 types of issues: + +- Issue/Bug: You've found a bug with the code, and want to report it, or create an issue to track the bug. +- Issue/Discussion: You have something on your mind, which requires input from others in a discussion, before it eventually manifests as a proposal. +- Issue/Proposal: Used for items that propose a new idea or functionality. This allows feedback from others before code is written. +- Issue/Question: Use this issue type if you need help or have a question. + +### Before submitting + +Before you submit an issue, make sure you've checked the following: + +1. Is it the right repository? + - The Dapr project is distributed across multiple repositories. Check the list of [repositories](https://github.com/dapr) if you aren't sure which repo is the correct one. +1. Check for existing issues + - Before you create a new issue, please do a search in [open issues](https://github.com/dapr/dapr/issues) to see if the issue or feature request has already been filed. + - If you find your issue already exists, make relevant comments and add your [reaction](https://github.com/blog/2119-add-reaction-to-pull-requests-issues-and-comments). Use a reaction: + - 👍 up-vote + - 👎 down-vote +1. For bugs + - Check it's not an environment issue.
For example, if running on Kubernetes, make sure prerequisites are in place. (state stores, bindings, etc.) + - You have as much data as possible. This usually comes in the form of logs and/or stacktrace. If running on Kubernetes or other environment, look at the logs of the Dapr services (runtime, operator, placement service). More details on how to get logs can be found [here](https://github.com/dapr/docs/tree/master/best-practices/troubleshooting/logs.md). +1. For proposals + - Many changes to the Dapr runtime may require changes to the API. In that case, the best place to discuss the potential feature is the main [Dapr repo](https://github.com/dapr/dapr). + - Other examples could include bindings, state stores or entirely new components. + +## Pull Requests + +All contributions come through pull requests. To submit a proposed change, follow this workflow: + +1. Make sure there's an issue (bug or proposal) raised, which sets the expectations for the contribution you are about to make. +1. Fork the relevant repo and create a new branch +1. Create your change + - Code changes require tests +1. Update relevant documentation for the change +1. Commit and open a PR +1. Wait for the CI process to finish and make sure all checks are green +1. A maintainer of the project will be assigned, and you can expect a review within a few days + +#### Use work-in-progress PRs for early feedback + +A good way to communicate before investing too much time is to create a "Work-in-progress" PR and share it with your reviewers. The standard way of doing this is to add a "[WIP]" prefix in your PR's title and assign the **do-not-merge** label. This will let people looking at your PR know that it is not well baked yet. + +### Use of Third-party code + +- Third-party code must include licenses. + +## Code of Conduct + +Please see the [Dapr community code of conduct](https://github.com/dapr/community/blob/master/CODE-OF-CONDUCT.md). 
diff --git a/daprdocs/content/en/developing-applications/_index.md b/daprdocs/content/en/developing-applications/_index.md new file mode 100644 index 000000000..6e08c928b --- /dev/null +++ b/daprdocs/content/en/developing-applications/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Developing applications with Dapr" +linkTitle: "Developing applications" +description: "Tools, tips, and information on how to build your application with Dapr" +weight: 30 +--- diff --git a/daprdocs/content/en/developing-applications/building-blocks/_index.md b/daprdocs/content/en/developing-applications/building-blocks/_index.md new file mode 100644 index 000000000..8117c1884 --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/_index.md @@ -0,0 +1,11 @@ +--- +type: docs +title: "Building blocks" +linkTitle: "Building blocks" +weight: 10 +description: "Dapr capabilities that solve common development challenges for distributed applications" +--- + +Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section. 
+ +Diagram showing the different Dapr building blocks \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/_index.md b/daprdocs/content/en/developing-applications/building-blocks/actors/_index.md new file mode 100644 index 000000000..d3ae80d32 --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Actors" +linkTitle: "Actors" +weight: 50 +description: Encapsulate code and data in reusable actor objects as a common microservices design pattern +--- diff --git a/concepts/actors/README.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-background.md similarity index 92% rename from concepts/actors/README.md rename to daprdocs/content/en/developing-applications/building-blocks/actors/actors-background.md index 77c04b935..23efc59d1 100644 --- a/concepts/actors/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-background.md @@ -1,4 +1,10 @@ -# Introduction to actors +--- +type: docs +title: "Introduction to actors" +linkTitle: "Actors background" +weight: 20 +description: Learn more about the actor pattern +--- The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes **actors** as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading. 
@@ -10,15 +16,8 @@ Dapr includes a runtime that specifically implements the [Virtual Actor pattern] ## Quick links -- [Dapr Actor Features](./actors_features.md) -- [Dapr Actor API Spec](../../reference/api/actors_api.md) - -## Contents - -- [Actors in Dapr](#actors-in-dapr) -- [Actor Lifetime](#actor-lifetime) -- [Distribution and Failover](#distribution-and-failover) -- [Actor Communication](#actor-communication) +- [Dapr Actor Features]({{< ref actors-overview.md >}}) +- [Dapr Actor API Spec]({{< ref actors_api.md >}} ) ### When to use actors @@ -34,7 +33,7 @@ The actor design pattern can be a good fit to a number of distributed systems pr Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID. - + ## Actor lifetime @@ -57,11 +56,11 @@ Actors are distributed across the instances of the actor service, and those inst ### Actor placement service The Dapr actor runtime manages distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.
-![Placement service registration](../../images/actors_placement_service_registration.png) + When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below. -![Actor ID creation and calling](../../images/actors_id_hashing_calling.png) + This simplifies some choices but also carries some considerations: @@ -80,7 +79,7 @@ POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors///}}) for more details. ### Concurrency @@ -90,7 +89,8 @@ A single actor instance cannot process more than one request at a time. An actor Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations. -!["Actor concurrency"](../../images/actors_communication.png) + + ### Turn-based access @@ -100,4 +100,5 @@ The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
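The turn-based concurrency described above can be sketched with a per-actor lock (this is illustrative pseudocode of the concept, not Dapr runtime code; the class and method names are made up):

```python
import threading

# Illustrative sketch: turn-based access means at most one method runs per
# actor instance at a time, modeled here with a per-actor lock.
class VirtualActor:
    def __init__(self, actor_id: str):
        self.actor_id = actor_id
        self._lock = threading.Lock()   # one "turn" at a time
        self.log = []

    def call(self, method: str):
        with self._lock:                # later callers wait for the current turn
            self.log.append(f"start:{method}")
            self.log.append(f"end:{method}")

actor = VirtualActor("ActorId1")
threads = [threading.Thread(target=actor.call, args=(m,))
           for m in ("Method1", "Method2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each method's start is immediately followed by its own end: turns never interleave.
assert all(actor.log[i].split(":")[1] == actor.log[i + 1].split(":")[1]
           for i in range(0, len(actor.log), 2))
```

Note that calls to *different* actor ids are not serialized against each other; only calls within a single actor instance take turns.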
-![""](../../images/actors_concurrency.png) + + diff --git a/concepts/actors/actors_features.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md similarity index 90% rename from concepts/actors/actors_features.md rename to daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md index 71e1e05e4..a342db3d3 100644 --- a/concepts/actors/actors_features.md +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md @@ -1,10 +1,12 @@ -# Dapr actors runtime +--- +type: docs +title: "Dapr actors overview" +linkTitle: "Overview" +weight: 10 +description: Overview of Dapr support for actors +--- -The Dapr actors runtime provides following capabilities: - -- [Method Invocation](#actor-method-invocation) -- [State Management](#actor-state-management) -- [Timers and Reminders](#actor-timers-and-reminders) +The Dapr actors runtime provides support for [virtual actors]({{< ref actors-background.md >}}) through following capabilities: ## Actor method invocation @@ -16,7 +18,7 @@ POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors///meth You can provide any data for the actor method in the request body and the response for the request is in response body which is data from actor method call. -Refer [api spec](../../reference/api/actors_api.md#invoke-actor-method) for more details. +Refer [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details. ## Actor state management @@ -78,7 +80,7 @@ You can remove the actor timer by calling DELETE http://localhost:3500/v1.0/actors///timers/ ``` -Refer [api spec](../../reference/api/actors_api.md#invoke-timer) for more details. +Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details. 
### Actor reminders @@ -116,7 +118,7 @@ The following request body configures a reminder with a `dueTime` 15 seconds and } ``` -#### Retrieve Actor Reminder +#### Retrieve actor reminder You can retrieve the actor reminder by calling @@ -124,7 +126,7 @@ You can retrieve the actor reminder by calling GET http://localhost:3500/v1.0/actors///reminders/ ``` -#### Remove the Actor Reminder +#### Remove the actor reminder You can remove the actor reminder by calling @@ -132,4 +134,4 @@ You can remove the actor reminder by calling DELETE http://localhost:3500/v1.0/actors///reminders/ ``` -Refer [api spec](../../reference/api/actors_api.md#invoke-reminder) for more details. +Refer [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details. diff --git a/daprdocs/content/en/developing-applications/building-blocks/bindings/_index.md b/daprdocs/content/en/developing-applications/building-blocks/bindings/_index.md new file mode 100644 index 000000000..78a0fe0eb --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/bindings/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Bindings" +linkTitle: "Bindings" +weight: 40 +description: Trigger code from and interface with a large array of external resources +--- diff --git a/daprdocs/content/en/developing-applications/building-blocks/bindings/bindings-overview.md b/daprdocs/content/en/developing-applications/building-blocks/bindings/bindings-overview.md new file mode 100644 index 000000000..e47047daf --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/bindings/bindings-overview.md @@ -0,0 +1,55 @@ +--- +type: docs +title: "Bindings overview" +linkTitle: "Overview" +weight: 100 +description: Overview of the Dapr bindings building block +--- + +## Introduction + +Using bindings, you can trigger your app with events coming in from external systems, or interface with external systems. 
This building block provides several benefits for you and your code:
+
+- Remove the complexities of connecting to, and polling from, messaging systems such as queues and message buses
+- Focus on business logic and not implementation details of how to interact with a system
+- Keep your code free from SDKs or libraries
+- Handle retries and failure recovery
+- Switch between bindings at run time
+- Build portable applications where environment-specific bindings are set up and no code changes are required
+
+For a specific example, bindings would allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, worrying about polling from Twilio (or using websockets, etc.).
+
+Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings).
+
+## Input bindings
+
+Input bindings are used to trigger your application when an event from an external resource has occurred.
+An optional payload and metadata may be sent with the request.
+
+In order to receive events from an input binding:
+
+1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
+2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events
+
+> On startup, Dapr sends an `OPTIONS` request for all defined input bindings to the application and expects a status code other than `NOT FOUND (404)` if this application wants to subscribe to the binding.
+
+Read the [Create an event-driven app using input bindings]({{< ref howto-triggers.md >}}) page to get started with input bindings.
+
+## Output bindings
+
+Output bindings allow users to invoke external resources.
+An optional payload and metadata can be sent with the invocation request.
+
+In order to invoke an output binding:
+
+1.
Define the component YAML that describes the type of binding and its metadata (connection info, etc.) +2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload + +Read the [Send events to external systems using output bindings]({{< ref howto-bindings.md >}}) page to get started with output bindings. + + + + ## Related Topics +- [Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}}) +- [Invoke different resources using output bindings]({{< ref howto-bindings.md >}}) + diff --git a/howto/send-events-with-output-bindings/README.md b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md similarity index 55% rename from howto/send-events-with-output-bindings/README.md rename to daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md index bd5f032b0..e4370564b 100644 --- a/howto/send-events-with-output-bindings/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md @@ -1,6 +1,12 @@ -# Send events to external systems using Output Bindings +--- +type: docs +title: "How-To: Use bindings to interface with external resources" +linkTitle: "How-To: Bindings" +description: "Invoke external systems with Dapr output bindings" +weight: 300 +--- -Using bindings, its possible to invoke external resources without tying in to special SDK or libraries. +Using bindings, it is possible to invoke external resources without tying in to special SDK or libraries. For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings). Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings. @@ -10,7 +16,7 @@ Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be& An output binding represents a resource that Dapr will use invoke and send messages to. 
-For the purpose of this guide, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).
+For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref bindings >}}).
 
 Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
 (Use the `--components-path` flag with `dapr run` to point to your custom components dir)
@@ -32,29 +38,29 @@ spec:
       value: topic1
 ```
 
-Here, we create a new binding component with the name of `myevent`.
+Here, create a new binding component with the name of `myevent`.
 
-Inside the `metadata` section, we configure Kafka related properties such as the topic to publish the message to and the broker.
+Inside the `metadata` section, configure Kafka-related properties such as the topic to publish the message to and the broker.
 
 ## 2. Send an event
 
 All that's left now is to invoke the bindings endpoint on a running Dapr instance.
 
-We can do so using HTTP:
+You can do so using HTTP:
 
 ```bash
 curl -X POST http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
 ```
 
-As seen above, we invoked the `/binding` endpoint with the name of the binding to invoke, in our case its `myevent`.
+As seen above, you invoked the `/binding` endpoint with the name of the binding to invoke, in this case it's `myevent`.
 The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
 
-You'll also notice that there's an `operation` field that tells the binding what we need it to do.
-You can check [here](../../reference/specs/bindings) which operations are supported for every output binding.
+You'll also notice that there's an `operation` field that tells the binding what you need it to do.
+You can check [here]({{< ref bindings >}}) which operations are supported for every output binding.
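As a minimal sketch, the same request can also be assembled programmatically, here in Python. This assumes the default sidecar HTTP port 3500 and the `myevent` binding defined above; actually sending the request (for example with `urllib.request`) is left out:

```python
import json

DAPR_HTTP_PORT = 3500  # default Dapr sidecar HTTP port (assumption)

def build_binding_request(name: str, data, operation: str = "create"):
    """Build the URL and JSON body for invoking a Dapr output binding."""
    url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0/bindings/{name}"
    # "data" holds the payload, "operation" tells the binding what to do
    body = json.dumps({"data": data, "operation": operation})
    return url, body

url, body = build_binding_request("myevent", {"message": "Hi!"})
print(url)  # http://localhost:3500/v1.0/bindings/myevent
```

The returned pair mirrors the curl invocation shown above, so it can be handed to any HTTP client.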
## References -* Binding [API](https://github.com/dapr/docs/blob/master/reference/api/bindings_api.md) -* Binding [Components](https://github.com/dapr/docs/tree/master/concepts/bindings) -* Binding [Detailed specifications](https://github.com/dapr/docs/tree/master/reference/specs/bindings) +- [Binding API]({{< ref bindings_api.md >}}) +- [Binding components]({{< ref bindings >}}) +- [Binding detailed specifications]({{< ref supported-bindings >}}) diff --git a/howto/trigger-app-with-input-binding/README.md b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md similarity index 86% rename from howto/trigger-app-with-input-binding/README.md rename to daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md index 6d36bbf6e..2adfc10b4 100644 --- a/howto/trigger-app-with-input-binding/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md @@ -1,4 +1,10 @@ -# Create an event-driven app using input bindings +--- +type: docs +title: "How-To: Trigger your application with input bindings" +linkTitle: "How-To: Triggers" +description: "Use Dapr input bindings to trigger event driven applications" +weight: 200 +--- Using bindings, your code can be triggered with incoming events from different resources which can be anything: a queue, messaging pipeline, cloud-service, filesystem etc. @@ -10,15 +16,15 @@ Dapr bindings allow you to: * Replace bindings without changing your code * Focus on business logic and not the event resource implementation -For more info on bindings, read [this](../../concepts/bindings/README.md) link. +For more info on bindings, read [this overview]({{}}). -For a complete sample showing bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings). +For a quickstart sample showing bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings). ## 1. 
Create a binding
 
 An input binding represents an event resource that Dapr uses to read events from and push to your application.
 
-For the purpose of this HowTo, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../reference/specs/bindings/README.md).
+For the purpose of this HowTo, you'll use a Kafka binding. You can find a list of the different binding specs [here]({{< ref supported-bindings >}}).
 
 Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
 (Use the `--components-path` flag with `dapr run` to point to your custom components dir)
diff --git a/daprdocs/content/en/developing-applications/building-blocks/observability/_index.md b/daprdocs/content/en/developing-applications/building-blocks/observability/_index.md
new file mode 100644
index 000000000..eb6891d61
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/building-blocks/observability/_index.md
@@ -0,0 +1,9 @@
+---
+type: docs
+title: "Observability"
+linkTitle: "Observability"
+weight: 60
+description: See and measure the message calls across components and networked services
+---
+
+This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept]({{< ref observability >}}) in Dapr and for [operations guidance on monitoring]({{< ref monitoring >}}).
diff --git a/concepts/observability/logs.md b/daprdocs/content/en/developing-applications/building-blocks/observability/logs.md
similarity index 87%
rename from concepts/observability/logs.md
rename to daprdocs/content/en/developing-applications/building-blocks/observability/logs.md
index f9848570d..90758ec97 100644
--- a/concepts/observability/logs.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/observability/logs.md
@@ -1,4 +1,10 @@
-# Logs
+---
+type: docs
+title: "Logs"
+linkTitle: "Logs"
+weight: 3000
+description: "Understand Dapr logging"
+---
 
 Dapr produces structured logs to stdout either as plain text or JSON formatted. By default, all Dapr processes (runtime and system services) write to console out in plain text. To enable JSON formatted logs, you need to add the `--log-as-json` command flag when running Dapr processes.
 
@@ -80,17 +86,17 @@
 
 ## Log collectors
 
-If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON formatted logs. This [how-to](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) shows how to configure the Fleuntd in your cluster.
+If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON formatted logs. This [how-to]({{< ref fluentd.md >}}) shows how to configure Fluentd in your cluster.
 
 If you are using the Azure Kubernetes Service, you can use the default OMS Agent to collect logs with Azure Monitor without needing to install Fluentd.
 
 ## Search engines
 
-If you use [Fluentd](https://www.fluentd.org/), we recommend to using Elastic Search and Kibana.
This [how-to](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md) shows how to set up Elastic Search and Kibana in your Kubernetes cluster.
+If you use [Fluentd](https://www.fluentd.org/), we recommend using Elasticsearch and Kibana. This [how-to]({{< ref fluentd.md >}}) shows how to set up Elasticsearch and Kibana in your Kubernetes cluster.
 
 If you are using the Azure Kubernetes Service, you can use [Azure monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-overview) without installing any additional monitoring tools. Also read [How to enable Azure Monitor for containers](https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-onboard)
 
 ## References
 
-- [How-to : Set up Fleuntd, Elastic search, and Kibana](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md)
-- [How-to : Set up Azure Monitor in Azure Kubernetes Service](../../howto/setup-monitoring-tools/setup-azure-monitor.md)
+- [How-to: Set up Fluentd, Elasticsearch, and Kibana]({{< ref fluentd.md >}})
+- [How-to: Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}})
diff --git a/concepts/observability/metrics.md b/daprdocs/content/en/developing-applications/building-blocks/observability/metrics.md
similarity index 78%
rename from concepts/observability/metrics.md
rename to daprdocs/content/en/developing-applications/building-blocks/observability/metrics.md
index c4ce411de..7e1d3a045 100644
--- a/concepts/observability/metrics.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/observability/metrics.md
@@ -1,4 +1,10 @@
-# Metrics
+---
+type: docs
+title: "Metrics"
+linkTitle: "Metrics"
+weight: 4000
+description: "Observing Dapr metrics"
+---
 
 Dapr exposes a [Prometheus](https://prometheus.io/) metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving and to set up alerts for specific conditions.
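As a sketch, a Prometheus scrape job for that metrics endpoint could look like the following. The job name and target are hypothetical, and the target assumes Dapr's default metrics port 9090 on the local machine:

```yaml
scrape_configs:
  - job_name: "dapr"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9090"]  # Dapr sidecar metrics endpoint (default port, assumed)
```

See the referenced how-tos for complete, production-ready setups.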
@@ -31,6 +37,6 @@ Each Dapr system process emits Go runtime/process metrics by default and have th ## References -* [Howto: Run Prometheus locally](../../howto/setup-monitoring-tools/observe-metrics-with-prometheus-locally.md) -* [Howto: Set up Prometheus and Grafana for metrics](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md) -* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr](../../howto/setup-monitoring-tools/setup-azure-monitor.md) +* [Howto: Run Prometheus locally]({{< ref prometheus.md >}}) +* [Howto: Set up Prometheus and Grafana for metrics]({{< ref grafana.md >}}) +* [Howto: Set up Azure monitor to search logs and collect metrics for Dapr]({{< ref azure-monitor.md >}}) diff --git a/concepts/observability/health.md b/daprdocs/content/en/developing-applications/building-blocks/observability/sidecar-health.md similarity index 91% rename from concepts/observability/health.md rename to daprdocs/content/en/developing-applications/building-blocks/observability/sidecar-health.md index 62cc25f5a..2bd7a1372 100644 --- a/concepts/observability/health.md +++ b/daprdocs/content/en/developing-applications/building-blocks/observability/sidecar-health.md @@ -1,13 +1,19 @@ -# Health +--- +type: docs +title: "Sidecar health" +linkTitle: "Sidecar health" +weight: 5000 +description: Dapr sidecar health checks. +--- Dapr provides a way to determine it's health using an HTTP /healthz endpoint. -With this endpoint, the Dapr process, or sidecar, can be probed for its health and hence determine its readiness and liveness. See [health API ](../../reference/api/health_api.md) +With this endpoint, the Dapr process, or sidecar, can be probed for its health and hence determine its readiness and liveness. See [health API ]({{< ref health_api.md >}}) The Dapr `/healthz` endpoint can be used by health probes from the application hosting platform. This topic describes how Dapr integrates with probes from different hosting platforms. 
As a user, when deploying Dapr to a hosting platform (for example Kubernetes), the Dapr health endpoint is automatically configured for you. There is nothing you need to configure. -Note: Dapr actors also have a health API endpoint where Dapr probes the application for a response to a signal from Dapr that the actor application is healthy and running. See [actor health API](../../reference/api/actors_api.md#health-check) +Note: Dapr actors also have a health API endpoint where Dapr probes the application for a response to a signal from Dapr that the actor application is healthy and running. See [actor health API]({{< ref "actors_api.md#health-check" >}}) ## Health endpoint: Integration with Kubernetes @@ -20,7 +26,7 @@ The kubelet uses readiness probes to know when a container is ready to start acc When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr healthz endpoint. This is done by the `Sidecar Injector` system service. The integration with the kubelet is shown in the diagram below. 
-![mTLS System Services on Kubernetes](../../images/security-mTLS-dapr-system-services.png)
+
 
 ### How to configure a liveness probe in Kubernetes
 
@@ -78,6 +84,6 @@ readinessProbe:
 
 For more information refer to;
 
-- [ Endpoint health API](../../reference/api/health_api.md)
-- [Actor health API](../../reference/api/actors_api.md#health-check)
+- [Endpoint health API]({{< ref health_api.md >}})
+- [Actor health API]({{< ref "actors_api.md#health-check" >}})
 - [Kubernetes probe configuration parameters](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
diff --git a/concepts/observability/traces.md b/daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md
similarity index 80%
rename from concepts/observability/traces.md
rename to daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md
index c510b3fa0..c31e15e28 100644
--- a/concepts/observability/traces.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md
@@ -1,17 +1,14 @@
-# Distributed Tracing
+---
+type: docs
+title: "Distributed tracing"
+linkTitle: "Distributed tracing"
+weight: 1000
+description: "Use Dapr tracing to get visibility for distributed applications"
+---
 
 Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces and metrics collection. OpenTelemetry supports various backends including [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), [SignalFX](https://www.signalfx.com/), [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io) and others.
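For illustration, a Dapr `Configuration` resource that exports traces to one such backend (Zipkin) might look like the following sketch. The resource name, sampling rate, and local endpoint address are assumptions; the doc's own Configuration section below shows the authoritative shape:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"  # sample every trace (development setting, assumed)
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"  # local Zipkin, assumed
```

Applying it with `dapr run --config` (self-hosted) or `kubectl apply` (Kubernetes) would enable trace export.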
-![Tracing](../../images/tracing.png) - -## Contents - -- [Distributed Tracing](#distributed-tracing) - - [Contents](#contents) - - [Tracing design](#tracing-design) - - [W3C Correlation ID](#w3c-correlation-id) - - [Configuration](#configuration) - - [References](#references) + ## Tracing design @@ -26,7 +23,7 @@ Dapr adds a HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts Dapr uses the standard W3C Trace Context headers. For HTTP requests, Dapr uses `traceparent` header. For gRPC requests, Dapr uses `grpc-trace-bin` header. When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it passes the trace ID along the call chain. -Read [W3C Tracing Context for distributed tracing](./W3C-traces.md) for more background on W3C Trace Context. +Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W3C Trace Context. ## Configuration @@ -68,6 +65,6 @@ spec: ## References -* [How-To: Set up Application Insights distributed tracing with OpenTelemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md) -* [How-To: Set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md) -* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md) +- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}}) +- [How-To: Set up Zipkin for distributed tracing]({{< ref zipkin.md >}}) +- [W3C distributed tracing]({{< ref w3c-tracing >}}) diff --git a/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/_index.md b/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/_index.md new file mode 100644 index 000000000..0c18e1f2a --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/_index.md @@ -0,0 +1,8 @@ +--- +type: docs +title: "W3C trace context" +linkTitle: 
"W3C trace context" +weight: 1000 +description: Background and scenarios for using W3C tracing with Dapr +type: docs +--- \ No newline at end of file diff --git a/howto/use-w3c-tracecontext/README.md b/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/w3c-tracing-howto.md similarity index 80% rename from howto/use-w3c-tracecontext/README.md rename to daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/w3c-tracing-howto.md index b19bb7741..2cf0026bf 100644 --- a/howto/use-w3c-tracecontext/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/w3c-tracing-howto.md @@ -1,20 +1,16 @@ +--- +type: docs +title: "How-To: Use W3C trace context with Dapr" +linkTitle: "How-To: Use W3C trace context" +weight: 20000 +description: Using W3C tracing standard with Dapr +type: docs +--- + # How to use trace context -Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C trace context for distributed tracing](../../concepts/observability/W3C-traces.md) article to understand whether you need to propagate or create a trace context. +Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information and there are very few cases where you need to either propagate or create a trace context. First read scenarios in the [W3C distributed tracing]({{< ref w3c-tracing >}}) article to understand whether you need to propagate or create a trace context. -To view traces, read the [how to diagnose with tracing](../diagnose-with-tracing) article. 
-
-## Contents
-- [How to retrieve trace context from a response](#how-to-retrieve-trace-context-from-a-response)
-- [How to propagate trace context in a request](#how-to-propagate-trace-context-in-a-request)
-- [How to create trace context](#how-to-create-trace-context)
-  - [Go](#create-trace-context-in-go)
-  - [Java](#create-trace-context-in-java)
-  - [Python](#create-trace-context-in-python)
-  - [NodeJS](#create-trace-context-in-nodejs)
-  - [C++](#create-trace-context-in-c++)
-  - [C#](#create-trace-context-in-c#)
-- [Putting it all together with a Go Sample](#putting-it-all-together-with-a-go-sample)
-- [Related Links](#related-links)
+To view traces, read the [how to diagnose with tracing]({{< ref tracing.md >}}) article.
 
 ## How to retrieve trace context from a response
 `Note: There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace context. You need to use http/gRPC clients to propagate and retrieve trace headers through http headers and gRPC metadata.`
@@ -102,8 +98,8 @@ f.SpanContextToRequest(traceContext, req)
 traceContext := span.SpanContext()
 traceContextBinary := propagation.Binary(traceContext)
 ```
-
-You can then pass the trace context through [gRPC metadata]("google.golang.org/grpc/metadata") through `grpc-trace-bin` header.
+
+You can then pass the trace context through [gRPC metadata](https://pkg.go.dev/google.golang.org/grpc/metadata) using the `grpc-trace-bin` header.
 
 ```go
 ctx = metadata.AppendToOutgoingContext(ctx, "grpc-trace-bin", string(traceContextBinary))
@@ -219,7 +215,7 @@ In Kubernetes, you can apply the configuration as below :
 kubectl apply -f appconfig.yaml
 ```
 
-You then set the following tracing annotation in your deployment YAML. You can add the following annotaion in sample [grpc app](../create-grpc-app) deployment yaml.
+You then set the following tracing annotation in your deployment YAML. You can add the following annotation in the sample [grpc app]({{< ref grpc.md >}}) deployment yaml.
```yaml
dapr.io/config: "appconfig"
@@ -227,13 +223,13 @@
 
 ### Invoking Dapr with trace context
 
-As mentioned in `Scenarios` section in [W3C Trace Context for distributed tracing](../../concepts/observability/W3C-traces.md) that Dapr covers generating trace context and you do not need to explicitly create trace context.
+Dapr covers generating trace context and you do not need to explicitly create trace context.
 However if you choose to pass the trace context explicitly, then Dapr will use the passed trace context and propagate it all across the HTTP/gRPC call.
 
-Using the [grpc app](../create-grpc-app) in the example and putting this all together, the following steps show you how to create a Dapr client and call the InvokeService method passing the trace context:
+Using the [grpc app]({{< ref grpc.md >}}) in the example and putting this all together, the following steps show you how to create a Dapr client and call the InvokeService method passing the trace context:
 
-The Rest code snippet and details, refer to the [grpc app](../create-grpc-app).
+For the rest of the code snippet and details, refer to the [grpc app]({{< ref grpc >}}).
 
 ### 1.
Import the package @@ -293,9 +289,9 @@ You can now correlate the calls in your app and across services with Dapr using ## Related Links -* [Observability concepts](../../concepts/observability/traces.md) -* [W3C Trace Context for distributed tracing](../../concepts/observability/W3C-traces.md) -* [How to set up Application Insights distributed tracing with OpenTelemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md) -* [How to set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md) -* [W3C trace context specification](https://www.w3.org/TR/trace-context/) -* [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability) +- [Observability concepts]({{< ref observability-concept.md >}}) +- [W3C Trace Context for distributed tracing]({{< ref w3c-tracing >}}) +- [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}}) +- [How to set up Zipkin for distributed tracing]({{< ref zipkin.md >}}) +- [W3C trace context specification](https://www.w3.org/TR/trace-context/) +- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability) diff --git a/concepts/observability/W3C-traces.md b/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/w3c-tracing-overview.md similarity index 92% rename from concepts/observability/W3C-traces.md rename to daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/w3c-tracing-overview.md index f365f0724..56ca62057 100644 --- a/concepts/observability/W3C-traces.md +++ b/daprdocs/content/en/developing-applications/building-blocks/observability/w3c-tracing/w3c-tracing-overview.md @@ -1,8 +1,11 @@ -# W3C trace context for distributed tracing - -- [Background](#background) -- [Trace scenarios](#scenarios) -- [W3C trace headers](#w3c-trace-headers) +--- +type: docs +title: "W3C trace context overview" +linkTitle: 
"Overview" +weight: 10000 +description: Background and scenarios for using W3C tracing with Dapr +type: docs +--- ## Introduction Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Largely Dapr does all the heavy lifting of generating and propogating the trace context information and this can be sent to many different diagnostics tools for visualization and querying. There are only a very few cases where you, as a developer, need to either propagate a trace header or generate one. @@ -28,7 +31,7 @@ This transformation of modern applications called for a distributed tracing cont A unified approach for propagating trace data improves visibility into the behavior of distributed applications, facilitating problem and performance analysis. ## Scenarios -There are two scenarios where you need to understand how tracing is used; +There are two scenarios where you need to understand how tracing is used: 1. Dapr generates and propagates the trace context between services. 2. Dapr generates the trace context and you need to propagate the trace context to another service **or** you generate the trace context and Dapr propagates the trace context to a service. @@ -66,7 +69,7 @@ In these scenarios Dapr does some of the work for you and you need to either cre In this case, when service A first calls service B, Dapr generates the trace headers in service A, and these trace headers are then propagated to service B. These trace headers are returned in the response from service B as part of response headers. However you need to propagate the returned trace context to the next services, service C and Service D, as Dapr does not know you want to reuse the same header. - To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context](../../howto/use-w3c-tracecontext/README.md) article. 
+ To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context]({{< ref w3c-tracing >}}) article. 2. You have chosen to generate your own trace context headers. This is much more unusual. There may be occasions where you specifically choose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done: @@ -102,8 +105,7 @@ The tracestate fields are detailed [here](https://www.w3.org/TR/trace-context/#t In the gRPC API calls, trace context is passed through the `grpc-trace-bin` header. ## Related Links -* [How To set up Application Insights using OpenTelemetry Collector](../../howto/diagnose-with-tracing/open-telemetry-collector.md) -* [How To set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md) -* [How to use Trace Context](../../howto/use-w3c-tracecontext) -* [W3C trace context specification](https://www.w3.org/TR/trace-context/) -* [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability) +- [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}}) +- [How To set up Zipkin for distributed tracing]({{< ref zipkin.md >}}) +- [W3C trace context specification](https://www.w3.org/TR/trace-context/) +- [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability) diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/_index.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/_index.md new file mode 100644 index 000000000..a6c894a5a --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Publish & subscribe 
messaging" +linkTitle: "Publish & subscribe" +weight: 30 +description: Secure, scalable messaging between services +--- diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md new file mode 100644 index 000000000..05a46f8fb --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md @@ -0,0 +1,341 @@ +--- +type: docs +title: "How-To: Publish a message and subscribe to a topic" +linkTitle: "How-To: Publish & subscribe" +weight: 2000 +description: "Learn how to send messages to a topic with one service and subscribe to that topic in another service" +--- + +## Introduction + +Pub/Sub is a common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging. +Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers. + +Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics. +Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc. + +## Step 1: Set up the Pub/Sub component + +The first step is to set up the Pub/Sub component: + +{{< tabs "Self-Hosted (CLI)" Kubernetes >}} + +{{% codetab %}} +Redis Streams is installed by default on a local machine when running `dapr init`. 
+ +Verify by opening your components file under `%UserProfile%\.dapr\components\pubsub.yaml` on Windows or `~/.dapr/components/pubsub.yaml` on Linux/MacOS: +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: localhost:6379 + - name: redisPassword + value: "" +``` + +You can override this file with another Redis instance or another [pubsub component]({{< ref setup-pubsub >}}) by creating a `components` directory containing the file and using the flag `--components-path` with the `dapr run` CLI command. +{{% /codetab %}} + +{{% codetab %}} +To deploy this into a Kubernetes cluster, fill in the `metadata` connection details of your [desired pubsub component]({{< ref setup-pubsub >}}) in the yaml below, save as `pubsub.yaml`, and run `kubectl apply -f pubsub.yaml`. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: localhost:6379 + - name: redisPassword + value: "" +``` +{{% /codetab %}} + +{{< /tabs >}} + + +## Step 2: Subscribe to topics + +Dapr allows two methods by which you can subscribe to topics: +- **Declaratively**, where subscriptions are defined in an external file. +- **Programmatically**, where subscriptions are defined in user code. + +### Declarative subscriptions + +You can subscribe to a topic using the following Custom Resource Definition (CRD). Create a file named `subscription.yaml` and paste the following: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Subscription +metadata: + name: myevent-subscription +spec: + topic: deathStarStatus + route: /dsstatus + pubsubname: pubsub +scopes: +- app1 +- app2 +``` + +The example above shows an event subscription to topic `deathStarStatus`, for the pubsub component `pubsub`. +- The `route` field tells Dapr to send all topic messages to the `/dsstatus` endpoint in the app. 
+- The `scopes` field enables this subscription for apps with IDs `app1` and `app2`. + +Set the component with: +{{< tabs "Self-Hosted (CLI)" Kubernetes>}} + +{{% codetab %}} +Place the CRD in your `./components` directory. When Dapr starts up, it will load subscriptions along with components. + +*Note: By default, Dapr loads components from `$HOME/.dapr/components` on MacOS/Linux and `%USERPROFILE%\.dapr\components` on Windows.* + +You can also override the default directory by pointing the Dapr CLI to a components path: + +```bash +dapr run --app-id myapp --components-path ./myComponents -- python3 app1.py +``` + +*Note: If you place the subscription in a custom components path, make sure the Pub/Sub component is present also.* + +{{% /codetab %}} + +{{% codetab %}} +In Kubernetes, save the CRD to a file and apply it to the cluster: + +```bash +kubectl apply -f subscription.yaml +``` +{{% /codetab %}} + +{{< /tabs >}} + +#### Example + +{{< tabs Python Node>}} + +{{% codetab %}} +Create a file named `app1.py` and paste in the following: +```python +import flask +from flask import request, jsonify +from flask_cors import CORS +import json +import sys + +app = flask.Flask(__name__) +CORS(app) + +@app.route('/dsstatus', methods=['POST']) +def ds_subscriber(): + print(request.json, flush=True) + return json.dumps({'success':True}), 200, {'ContentType':'application/json'} + +app.run() +``` +After creating `app1.py` ensure flask and flask_cors are installed: + +```bash +pip install flask +pip install flask_cors +``` + +Then run: + +```bash +dapr --app-id app1 --app-port 5000 run python app1.py +``` +{{% /codetab %}} + +{{% codetab %}} +After setting up the subscription above, download this JavaScript (Node > 4.16) into an `app2.js` file: + +```javascript +const express = require('express') +const bodyParser = require('body-parser') +const app = express() +app.use(bodyParser.json({ type: 'application/*+json' })); + +const port = 3000 + +app.post('/dsstatus', (req, res) 
=> { + console.log(req.body); + res.sendStatus(200); +}); + +app.listen(port, () => console.log(`consumer app listening on port ${port}!`)) +``` +Run this app with: + +```bash +dapr --app-id app2 --app-port 3000 run node app2.js +``` +{{% /codetab %}} + +{{< /tabs >}} + +### Programmatic subscriptions + +To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`. +The Dapr instance will call into your app at startup and expect a JSON response for the topic subscriptions with: +- `pubsubname`: Which pub/sub component Dapr should use +- `topic`: Which topic to subscribe to +- `route`: Which endpoint for Dapr to call on when a message comes to that topic + +#### Example + +{{< tabs Python Node>}} + +{{% codetab %}} +```python +import flask +from flask import request, jsonify +from flask_cors import CORS +import json +import sys + +app = flask.Flask(__name__) +CORS(app) + +@app.route('/dapr/subscribe', methods=['GET']) +def subscribe(): + subscriptions = [{'pubsubname': 'pubsub', + 'topic': 'deathStarStatus', + 'route': 'dsstatus'}] + return jsonify(subscriptions) + +@app.route('/dsstatus', methods=['POST']) +def ds_subscriber(): + print(request.json, flush=True) + return json.dumps({'success':True}), 200, {'ContentType':'application/json'} +app.run() +``` +After creating `app1.py` ensure flask and flask_cors are installed: + +```bash +pip install flask +pip install flask_cors +``` + +Then run: + +```bash +dapr --app-id app1 --app-port 5000 run python app1.py +``` +{{% /codetab %}} + +{{% codetab %}} +```javascript +const express = require('express') +const bodyParser = require('body-parser') +const app = express() +app.use(bodyParser.json({ type: 'application/*+json' })); + +const port = 3000 + +app.get('/dapr/subscribe', (req, res) => { + res.json([ + { + pubsubname: "pubsub", + topic: "deathStarStatus", + route: "dsstatus" + } + ]); +}) + +app.post('/dsstatus', (req, res) => { 
console.log(req.body); + res.sendStatus(200); +}); + +app.listen(port, () => console.log(`consumer app listening on port ${port}!`)) +``` +Run this app with: + +```bash +dapr --app-id app2 --app-port 3000 run node app2.js +``` +{{% /codetab %}} + +{{< /tabs >}} + +The `/dsstatus` endpoint matches the `route` defined in the subscriptions, and this is where Dapr sends all topic messages. + +## Step 3: Publish to a topic + +To publish a message to a topic, invoke the following endpoint on a Dapr instance: + +{{< tabs "Dapr CLI" "HTTP API (Bash)" "HTTP API (PowerShell)">}} + +{{% codetab %}} +```bash +dapr publish --pubsub pubsub --topic deathStarStatus --data '{"status": "completed"}' +``` +{{% /codetab %}} + +{{% codetab %}} +Begin by ensuring a Dapr sidecar is running: +```bash +dapr --app-id myapp --port 3500 run +``` +Then publish a message to the `deathStarStatus` topic: +```bash +curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus -H "Content-Type: application/json" -d '{"status": "completed"}' +``` +{{% /codetab %}} + +{{% codetab %}} +Begin by ensuring a Dapr sidecar is running: +```bash +dapr --app-id myapp --port 3500 run +``` +Then publish a message to the `deathStarStatus` topic: +```powershell +Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status": "completed"}' -Uri 'http://localhost:3500/v1.0/publish/pubsub/deathStarStatus' +``` +{{% /codetab %}} + +{{< /tabs >}} + +Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope. + +## Step 4: ACK-ing a message + +In order to tell Dapr that a message was processed successfully, return a `200 OK` response. If Dapr receives any return status code other than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics. 
+ +#### Example + +{{< tabs Python Node>}} + +{{% codetab %}} +```python +@app.route('/dsstatus', methods=['POST']) +def ds_subscriber(): + print(request.json, flush=True) + return json.dumps({'success':True}), 200, {'ContentType':'application/json'} +``` +{{% /codetab %}} + +{{% codetab %}} +```javascript +app.post('/dsstatus', (req, res) => { + res.sendStatus(200); +}); +``` +{{% /codetab %}} + +{{< /tabs >}} + +## Next steps +- [Scope access to your pub/sub topics]({{< ref pubsub-scopes.md >}}) +- [Pub/Sub quickstart](https://github.com/dapr/quickstarts/tree/master/pub-sub) +- [Pub/sub components]({{< ref setup-pubsub >}}) \ No newline at end of file diff --git a/concepts/publish-subscribe-messaging/README.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md similarity index 74% rename from concepts/publish-subscribe-messaging/README.md rename to daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md index 75874f7dd..78f98e01f 100644 --- a/concepts/publish-subscribe-messaging/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md @@ -1,4 +1,12 @@ -# Publish/Subscribe Messaging +--- +type: docs +title: "Publish and subscribe overview" +linkTitle: "Overview" +weight: 1000 +description: "Overview of the Dapr Pub/Sub building block" +--- + +## Introduction The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows your microservices to communicate with each other purely by sending messages. In this system, the **producer** of a message sends it to a **topic**, with no knowledge of what service will receive the message. A message can even be sent if there's no consumer for it. 
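The decoupling described above can be illustrated with a toy in-process sketch (illustrative only, no Dapr involved; all names are invented):

```python
# Toy illustration of publish/subscribe decoupling: the producer only knows
# the topic name, never the subscribers. (Illustrative only; not Dapr code.)
subscribers = {}  # topic -> list of callbacks

def subscribe(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, message):
    # A message can be published even if nobody subscribed to the topic yet.
    for callback in subscribers.get(topic, []):
        callback(message)

received = []
subscribe("deathStarStatus", received.append)
publish("deathStarStatus", {"status": "completed"})
print(received)  # [{'status': 'completed'}]
```

In a real Dapr system the broker (Redis Streams, Kafka, etc.) sits between publisher and subscribers, but the contract is the same: producers address topics, never consumers.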
@@ -6,41 +14,37 @@ Similarly, a **consumer** will receive messages from a topic without knowledge o Dapr provides a publish/subscribe API that provides at-least-once guarantees and integrates with various message brokers implementations. These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub). -## Publish/Subscribe API +## Features -The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub_api.md). +### Publish/Subscribe API -## Behavior and guarantees +The API for Publish/Subscribe can be found in the [spec repo]({{< ref pubsub_api.md >}}). + +### At-Least-Once guarantee Dapr guarantees At-Least-Once semantics for message delivery. That means that when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client. +### Consumer groups and multiple instances + The burden of dealing with concepts like consumer groups and multiple instances inside consumer groups is all catered for by Dapr. -### App ID - -Dapr has the concept of an `id`. This is specified in Kubernetes using the `dapr.io/app-id` annotation and with the `app-id` flag using the Dapr CLI. Dapr requires an ID to be assigned to every application. - When multiple instances of the same application ID subscribe to a topic, Dapr will make sure to deliver the message to only one instance. If two different applications with different IDs subscribe to a topic, at least one instance in each application receives a copy of the same message. -## Cloud events +### Cloud events Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope. 
The following fields from the Cloud Events spec are implemented with Dapr: - -* `id` -* `source` -* `specversion` -* `type` -* `datacontenttype` (Optional) - +- `id` +- `source` +- `specversion` +- `type` +- `datacontenttype` (Optional) > Starting with Dapr v0.9, Dapr no longer wraps published content into CloudEvent if the published payload itself is already in CloudEvent format. The following example shows an XML content in CloudEvent v1.0 serialized as JSON: - - ```json { "specversion" : "1.0", @@ -53,3 +57,12 @@ The following example shows an XML content in CloudEvent v1.0 serialized as JSON "data" : "User1user2hi" } ``` + +### Topic scoping + +Limit which topics applications are able to publish/subscribe to in order to limit access to potentially sensitive data streams. Read [Pub/Sub scoping]({{< ref pubsub-scopes.md >}}) for more information. + +## Next steps + +- Read the How-To guide on [publishing and subscribing]({{< ref howto-publish-subscribe.md >}}) +- Learn about [Pub/Sub scopes]({{< ref pubsub-scopes.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md new file mode 100644 index 000000000..40630dc67 --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md @@ -0,0 +1,158 @@ +--- +type: docs +title: "Scope Pub/Sub topic access" +linkTitle: "Scope topic access" +weight: 5000 +description: "Use scopes to limit Pub/Sub topics to specific applications" +--- + +## Introduction + +[Namespaces or component scopes]({{< ref component-scopes.md >}}) can be used to limit component access to particular applications. Adding these application scopes to a component limits its use to applications with the specified IDs. 
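The `publishingScopes`/`subscriptionScopes` values described in this article use a compact `app1=topic1,topic2;app2=` string format; a short sketch of how such a value maps to per-app topic lists (illustrative parsing only, not Dapr's implementation):

```python
def parse_scopes(value):
    """Parse a scope string like 'app1=topic1;app2=topic2,topic3;app3='
    into {app: [topics]}. Illustrative only; not Dapr's implementation."""
    scopes = {}
    for entry in value.split(";"):
        app, _, topics = entry.partition("=")
        # An empty topics list (e.g. 'app3=') means the app is scoped to nothing
        scopes[app] = [t for t in topics.split(",") if t]
    return scopes

print(parse_scopes("app1=topic1;app2=topic2,topic3;app3="))
# {'app1': ['topic1'], 'app2': ['topic2', 'topic3'], 'app3': []}
```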
+ +In addition to this general component scope, the following can be limited for pub/sub components: +- Which topics can be used (published or subscribed) +- Which applications are allowed to publish to specific topics +- Which applications are allowed to subscribe to specific topics + +This is called **pub/sub topic scoping**. + +Pub/sub scopes are defined for each pub/sub component. You may have a pub/sub component named `pubsub` that has one set of scopes, and another `pubsub2` with a different set. + +To use this topic scoping, three metadata properties can be set for a pub/sub component: +- `spec.metadata.publishingScopes` + - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to publish to that list of topics + - If nothing is specified in `publishingScopes` (default behavior), all apps can publish to all topics + - To deny an app the ability to publish to any topic, leave the topics list blank (`app1=;app2=topic2`) + - For example, `app1=topic1;app2=topic2,topic3;app3=` will allow app1 to publish to topic1 and nothing else, app2 to publish to topic2 and topic3 only, and app3 to publish to nothing. +- `spec.metadata.subscriptionScopes` + - A semicolon-separated list of applications & comma-separated topic lists, allowing that app to subscribe to that list of topics + - If nothing is specified in `subscriptionScopes` (default behavior), all apps can subscribe to all topics + - For example, `app1=topic1;app2=topic2,topic3` will allow app1 to subscribe to topic1 only and app2 to subscribe to topic2 and topic3 +- `spec.metadata.allowedTopics` + - A comma-separated list of allowed topics for all applications. + - If `allowedTopics` is not set (default behavior), all topics are valid. `subscriptionScopes` and `publishingScopes` still apply if present. 
+ - `publishingScopes` or `subscriptionScopes` can be used in conjunction with `allowedTopics` to add granular limitations + +These metadata properties can be used for all pub/sub components. The following examples use Redis as the pub/sub component. + +## Example 1: Scope topic access + +Limiting which applications can publish/subscribe to topics can be useful if you have topics which contain sensitive information and only a subset of your applications are allowed to publish or subscribe to these. + +It can also be applied to all topics so that there is always a "ground truth" for which applications are using which topics as publishers/subscribers. + +Here is an example of three applications and three topics: +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: "localhost:6379" + - name: redisPassword + value: "" + - name: publishingScopes + value: "app1=topic1;app2=topic2,topic3;app3=" + - name: subscriptionScopes + value: "app2=;app3=topic1" +``` + +The table below shows which applications are allowed to publish into the topics: + +| | topic1 | topic2 | topic3 | +|------|--------|--------|--------| +| app1 | X | | | +| app2 | | X | X | +| app3 | | | | + +The table below shows which applications are allowed to subscribe to the topics: + +| | topic1 | topic2 | topic3 | +|------|--------|--------|--------| +| app1 | X | X | X | +| app2 | | | | +| app3 | X | | | + +> Note: If an application is not listed (e.g. app1 in subscriptionScopes) it is allowed to subscribe to all topics. Because `allowedTopics` is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above. + +## Example 2: Limit allowed topics + +A topic is created if a Dapr application sends a message to it. In some scenarios this topic creation should be governed. 
For example: +- A bug in a Dapr application on generating the topic name can lead to an unlimited amount of topics created +- Streamline the topics names and total count and prevent an unlimited growth of topics + +In these situations `allowedTopics` can be used. + +Here is an example of three allowed topics: +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: "localhost:6379" + - name: redisPassword + value: "" + - name: allowedTopics + value: "topic1,topic2,topic3" +``` + +All applications can use these topics, but only those topics, no others are allowed. + +## Example 3: Combine `allowedTopics` and scopes + +Sometimes you want to combine both scopes, thus only having a fixed set of allowed topics and specify scoping to certain applications. + +Here is an example of three applications and two topics: +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: "localhost:6379" + - name: redisPassword + value: "" + - name: allowedTopics + value: "A,B" + - name: publishingScopes + value: "app1=A" + - name: subscriptionScopes + value: "app1=;app2=A" +``` + +> Note: The third application is not listed, because if an app is not specified inside the scopes, it is allowed to use all topics. 
+ +The table below shows which application is allowed to publish into the topics: + +| | A | B | C | +|------|---|---|---| +| app1 | X | | | +| app2 | X | X | | +| app3 | X | X | | + +The table below shows which application is allowed to subscribe to the topics: + +| | A | B | C | +|------|---|---|---| +| app1 | | | | +| app2 | X | | | +| app3 | X | X | | + + +## Demo + + \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/secrets/_index.md b/daprdocs/content/en/developing-applications/building-blocks/secrets/_index.md new file mode 100644 index 000000000..65a64c82a --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/secrets/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Secrets building block" +linkTitle: "Secrets" +weight: 70 +description: Securely access secrets from your application +--- diff --git a/howto/get-secrets/README.md b/daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md similarity index 87% rename from howto/get-secrets/README.md rename to daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md index 4b0d6cf04..512704261 100644 --- a/howto/get-secrets/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md @@ -1,4 +1,10 @@ -# Access Application Secrets using the Secrets API +--- +type: docs +title: "How To: Retrieve a secret" +linkTitle: "How To: Retrieve a secret" +weight: 2000 +description: "Use the secret store building block to securely retrieve a secret" +--- It's common for applications to store sensitive information such as connection strings, keys and tokens that are used to authenticate with databases, services and external systems in secrets by using a dedicated secret store. @@ -16,7 +22,13 @@ The first step involves setting up a secret store, either in the cloud or in the The second step is to configure the secret store with Dapr. 
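The `aws_secret_manager.yaml` file referenced just below is not shown in this diff; as a hedged sketch only, an AWS Secrets Manager component definition might look like the following (the `secretstores.aws.secretmanager` type and metadata field names are assumptions based on the general component schema, and the credential values are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awssecretmanager
  namespace: default
spec:
  type: secretstores.aws.secretmanager   # assumed component type
  metadata:
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "<AWS_ACCESS_KEY_ID>"
  - name: secretKey
    value: "<AWS_SECRET_ACCESS_KEY>"
```

Check the secret store setup pages for the exact metadata keys of your chosen store.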
-Follow the instructions [here](../setup-secret-store) to set up the secret store of your choice. +To deploy in Kubernetes, save the file above to `aws_secret_manager.yaml` and then run: + +```bash +kubectl apply -f aws_secret_manager.yaml +``` + +To run locally, create a `components` directory containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&feature=youtu.be&t=1818) for an example on how to use the secrets API. Or watch this [video](https://www.youtube.com/watch?v=8W-iBDNvCUM&feature=youtu.be&t=1765) for an example of how to use component scopes with secret components and the secrets API. diff --git a/concepts/secrets/README.md b/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-overview.md similarity index 88% rename from concepts/secrets/README.md rename to daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-overview.md index b77f8b011..534241766 100644 --- a/concepts/secrets/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-overview.md @@ -1,4 +1,10 @@ -# Dapr secrets management +--- +type: docs +title: "Secrets stores overview" +linkTitle: "Secrets stores overview" +weight: 1000 +description: "Overview of Dapr secrets management building block" +--- Almost all non-trivial applications need to _securely_ store secret data like API keys, database passwords, and more. By nature, these secrets should not be checked into the version control system, but they also need to be accessible to code running in production. This is generally a hard problem, but it's critical to get it right. Otherwise, critical production systems can be compromised. 
@@ -17,7 +23,7 @@ See [Setup secret stores](https://github.com/dapr/docs/tree/master/howto/setup-s Instead of including credentials directly within a Dapr component file, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments. -For more information read [Referencing Secret Stores in Components](./component-secrets.md) +For more information read [Referencing Secret Stores in Components]({{< ref component-secrets.md >}}) ## Using secrets in your application @@ -27,15 +33,15 @@ Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=1818) for an ex For example, the diagram below shows an application requesting the secret called "mysecret" from a secret store called "vault" from a configured cloud secret store. - + Applications can use the secrets API to access secrets from a Kubernetes secret store. In the example below, the application retrieves the same secret "mysecret" from a Kubernetes secret store. - + In Azure, Dapr can be configured to use Managed Identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application. - + Notice that in all of the examples above the application code did not have to change to get the same secret. Dapr did all the heavy lifting here via the secrets building block API and using the secret components. 
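The flows in the diagrams above all go through the Dapr secrets HTTP API (`GET /v1.0/secrets/<store>/<name>`); a minimal sketch, assuming a sidecar on port 3500 and the example's `vault` store and `mysecret` names:

```python
def secret_url(store, name, port=3500):
    # Dapr secrets HTTP API route: GET /v1.0/secrets/<store>/<name>
    return f"http://localhost:{port}/v1.0/secrets/{store}/{name}"

print(secret_url("vault", "mysecret"))
# http://localhost:3500/v1.0/secrets/vault/mysecret

# With a sidecar running and a configured "vault" secret store:
# import json, urllib.request
# with urllib.request.urlopen(secret_url("vault", "mysecret")) as resp:
#     print(json.load(resp))  # e.g. {"mysecret": "..."}
```

Swapping `vault` for a Kubernetes or cloud secret store changes only the store name in the URL, which is the point the diagrams make.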
diff --git a/howto/secrets-scopes/README.md b/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md similarity index 75% rename from howto/secrets-scopes/README.md rename to daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md index ef1a307ef..465621fc0 100644 --- a/howto/secrets-scopes/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md @@ -1,10 +1,17 @@ -# Limit the secrets that can be read from secret stores +--- +type: docs +title: "How To: Use secret scoping" +linkTitle: "How To: Use secret scoping" +weight: 3000 +description: "Use scoping to limit the secrets that can be read from secret stores" +--- -Follow [these instructions](../setup-secret-store) to configure secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application. +Follow [these instructions]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application. To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting the existing configuration CRD with restrictive permissions. -Follow [these instructions](../../concepts/configuration/README.md) to define a configuration CRD. +Follow [these instructions]({{< ref configuration-concept.md >}}) to define a configuration CRD. ## Scenario 1: Deny access to all secrets for a secret store @@ -24,7 +31,7 @@ spec: defaultAccess: deny ``` -For applications that need to be deined access to the Kubernetes secret store, follow [these instructions](../configure-k8s/README.md), and add the following annotation to the application pod. 
+For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod. ```yaml dapr.io/config: appconfig @@ -49,7 +56,7 @@ spec: allowedSecrets: ["secret1", "secret2"] ``` -This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar. +This example defines configuration for the secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar. ## Scenario 3: Deny access to certain sensitive secrets in a secret store @@ -68,7 +75,7 @@ spec: deniedSecrets: ["secret1", "secret2"] ``` -The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions](../../concepts/configuration/README.md) to apply configuration to the sidecar. +The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar. 
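The interaction of `defaultAccess`, `allowedSecrets`, and `deniedSecrets` in the scenarios above can be sketched as a small decision function (an illustrative reading of the three scenarios, not Dapr's actual evaluation logic):

```python
def can_read(secret, default_access="allow", allowed=None, denied=None):
    """Illustrative sketch of secret-scope evaluation based on the scenarios
    above; not Dapr's actual implementation."""
    if allowed is not None:
        # An allowedSecrets list restricts access to exactly the listed secrets
        return secret in allowed
    if denied is not None and secret in denied:
        # deniedSecrets blocks the listed secrets
        return False
    return default_access == "allow"

# Scenario 2: default deny, allowedSecrets ["secret1", "secret2"]
print(can_read("secret1", "deny", allowed=["secret1", "secret2"]))  # True
print(can_read("secret3", "deny", allowed=["secret1", "secret2"]))  # False
# Scenario 3: default allow, deniedSecrets ["secret1", "secret2"]
print(can_read("secret1", denied=["secret1", "secret2"]))  # False
print(can_read("secret3", denied=["secret1", "secret2"]))  # True
```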
## Permission priority diff --git a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/_index.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/_index.md new file mode 100644 index 000000000..011c8b2e4 --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Service invocation" +linkTitle: "Service invocation" +weight: 10 +description: Perform direct, secure, service-to-service method calls +--- diff --git a/howto/invoke-and-discover-services/README.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-discover-services.md similarity index 59% rename from howto/invoke-and-discover-services/README.md rename to daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-discover-services.md index e0067cca4..8c81b689c 100644 --- a/howto/invoke-and-discover-services/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-discover-services.md @@ -1,15 +1,20 @@ -# Get started with HTTP service invocation +--- +type: docs +title: "How-To: Invoke and discover services" +linkTitle: "How-To: Invoke services" +description: "How-to guide on how to use Dapr service invocation in a distributed application" +weight: 2000 +--- This article describes how to deploy services, each with a unique application ID, so that other services can discover and call endpoints on them using the service invocation API. -## 1. Choose an ID for your service +## Step 1: Choose an ID for your service -Dapr allows you to assign a global, unique ID for your app. +Dapr allows you to assign a global, unique ID for your app. This ID encapsulates the state for your application, regardless of the number of instances it may have. 
- -### Setup an ID using the Dapr CLI +{{< tabs "Self-Hosted (CLI)" Kubernetes >}} +{{% codetab %}} In self hosted mode, set the `--app-id` flag: ```bash @@ -21,8 +26,9 @@ If your app uses an SSL connection, you can tell Dapr to invoke your app over an ```bash dapr run --app-id cart --app-port 5000 --app-ssl python app.py ``` +{{% /codetab %}} -*Note: the Kubernetes annotation can be found [here](../configure-k8s).* +{{% codetab %}} ### Setup an ID using Kubernetes @@ -51,14 +57,16 @@ spec: dapr.io/app-port: "5000" ... ``` +*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref kubernetes-annotations.md >}}))* -## 2. Invoke a service +{{% /codetab %}} -Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you can use the `invoke` API on any Dapr instance. +{{< /tabs >}} -The sidecar programming model encourages each applications to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another. -*Note: The following is a Python example of a cart app. It can be written in any programming language* +## Step 2: Setup a service + +The following is a Python example of a cart app. It can be written in any programming language. ```python from flask import Flask @@ -74,8 +82,16 @@ if __name__ == '__main__': This Python app exposes an `add()` method via the `/add` endpoint. -### Invoke with curl over HTTP +## Step 3: Invoke the service +Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you can use the `invoke` API on any Dapr instance. + +The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
+ +{{< tabs curl CLI >}} + +{{% codetab %}} +From a terminal or command prompt run: ```bash curl http://localhost:3500/v1.0/invoke/cart/method/add -X POST ``` @@ -95,30 +111,35 @@ curl http://localhost:3500/v1.0/invoke/cart/method/add -X DELETE ``` Dapr puts any payload returned by the called service in the HTTP response's body. +{{% /codetab %}} + +{{% codetab %}} +```bash +dapr invoke --app-id cart --method add +``` +{{% /codetab %}} + +{{< /tabs >}} ### Namespaces -When running on [namespace supported platforms](../../reference/api/service_invocation_api.md#namespace-supported-platforms), you include the namespace of the target app in the app ID: +When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID: `myApp.production` -``` -myApp.production -``` - -For example, invoking the example python service with a namespace would be; +For example, invoking the example python service with a namespace would be: ```bash curl http://localhost:3500/v1.0/invoke/cart.production/method/add -X POST ``` -See the [Cross namespace API spec](../../reference/api/service_invocation_api.md#cross-namespace-invocation) for more information on namespaces. +See the [Cross namespace API spec]({{< ref "service_invocation_api.md#cross-namespace-invocation" >}}) for more information on namespaces. -## 3. View traces and logs +## Step 4: View traces and logs The example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr outputs metrics, tracing and logging information allowing you to visualize a call graph between services, log errors and optionally log the payload body. -For more information on tracing and logs see the [observability](../../concepts/observability) article. +For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article. 
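To make the endpoint shape concrete, here is a small illustrative Python helper (an assumption for demonstration, not part of any Dapr SDK) that builds the invoke URL used in the curl examples above, including the optional namespace qualifier:

```python
# Illustrative helper (not a Dapr API): builds the sidecar's service
# invocation URL, optionally qualifying the app ID with a namespace.
def invoke_url(app_id, method, dapr_port=3500, namespace=None):
    # On namespace-supported platforms the target becomes "appID.namespace"
    target = f"{app_id}.{namespace}" if namespace else app_id
    return f"http://localhost:{dapr_port}/v1.0/invoke/{target}/method/{method}"

# Matches the curl examples in this guide:
print(invoke_url("cart", "add"))
print(invoke_url("cart", "add", namespace="production"))
```

Any HTTP client can POST to the resulting URL; the Dapr sidecar handles discovery and routing to the target app.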
## Related Links -* [Service invocation concepts](../../concepts/service-invocation/README.md) -* [Service invocation API specification](../../reference/api/service_invocation_api.md) +* [Service invocation overview]({{< ref service-invocation-overview.md >}}) +* [Service invocation API specification]({{< ref service_invocation_api.md >}}) diff --git a/concepts/service-invocation/README.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/service-invocation-overview.md similarity index 52% rename from concepts/service-invocation/README.md rename to daprdocs/content/en/developing-applications/building-blocks/service-invocation/service-invocation-overview.md index c9e0b72b7..1c5eed66d 100644 --- a/concepts/service-invocation/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/service-invocation-overview.md @@ -1,12 +1,15 @@ -# Service Invocation +--- +type: docs +title: "Service invocation overview" +linkTitle: "Overview" +weight: 1000 +description: "Overview of the service invocation building block" +--- + +## Introduction Using service invocation, your application can discover and reliably and securely communicate with other applications using the standard protocols of [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/). -- [Overview](#overview) -- [Features](#features) -- [Next steps](#next-steps) - -## Overview In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions: * How do I discover and invoke methods on different services? @@ -18,47 +21,32 @@ Dapr allows you to overcome these challenges by providing an endpoint that acts Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each application to talk to its own instance of Dapr.
The Dapr instances discover and communicate with one another. +### Invoke logic + The diagram below is an overview of how Dapr's service invocation works. -![Service Invocation Diagram](../../images/service-invocation.png) +Diagram showing the steps of service invocation -1. Service A makes an http/gRPC call meant for Service B. The call goes to the local Dapr sidecar. -2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) installed for the given hosting platform. +1. Service A makes an http/gRPC call targeting Service B. The call goes to the local Dapr sidecar. +2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) which is running on the given [hosting platform]({{< ref "hosting" >}}). 3. Dapr forwards the message to Service B's Dapr sidecar - * Note: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars are either HTTP or gRPC + + **Note**: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars can be either HTTP or gRPC + 4. Service B's Dapr sidecar forwards the request to the specified endpoint (or method) on Service B. Service B then runs its business logic code. 5. Service B sends a response to Service A. The response goes to Service B's sidecar. 6. Dapr forwards the response to Service A's Dapr sidecar. 7. Service A receives the response. -### Example -As an example for the above call sequence, suppose you have the applications as described in the [hello world sample](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a python app invokes a node.js app. - -In such a scenario, the python app would be "Service A" above, and the Node.js app would be "Service B". 
- -The diagram below shows sequence 1-7 again on a local machine showing the API call: - -![Service Invocation Diagram](../../images/service-invocation-example.png) - -1. Suppose the Node.js app has a Dapr app ID of `nodeapp`, as in the sample. The python app invokes the Node.js app's `neworder` method by posting `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar. -2. Dapr discovers the Node.js app's location using multicast DNS component which runs on your local machine. -3. Dapr forwards the request to the Node.js app's sidecar. -4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, which, as described in the sample, is to log the incoming message and then persist the order ID into Redis (not shown in the diagram above). - - Steps 5-7 are the same as above. - ## Features Service invocation provides several features to make it easy for you to call methods on remote applications. -- [Namespaces scoping](#namespaces-scoping) -- [Retries](#Retries) -- [Service-to-service security](#service-to-service-security) -- [Service access security](#service-access-security) -- [Observability: Tracing, logging and metrics](#observability) -- [Pluggable service discovery](#pluggable-service-discovery) +### Service invocation API +The API for service invocation can be found in the [spec repo]({{< ref service_invocation_api.md >}}). ### Namespaces scoping + Service invocation supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace. For example, the following string contains the app ID `nodeapp` in addition to the namespace the app runs in `production`.
@@ -67,10 +55,14 @@ For example, the following string contains the app ID `nodeapp` in addition to t localhost:3500/v1.0/invoke/nodeapp.production/method/neworder ``` -This is especially useful in cross namespace calls in a Kubernetes cluster. Watch this [video](https://youtu.be/LYYV_jouEuA?t=495) for a demo on how to use namespaces with service invocation. +This is especially useful in cross-namespace calls in a Kubernetes cluster. Watch this video for a demo on how to use namespaces with service invocation. + + ### Retries + Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors. + Errors that cause retries are: * Network errors including endpoint unavailability and refused connections @@ -80,29 +72,48 @@ Per call retries are performed with a backoff interval of 1 second up to a thres Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds. ### Service-to-service security + All calls between Dapr applications can be made secure with mutual (mTLS) authentication on hosted platforms, including automatic certificate rollover, via the Dapr Sentry service. The diagram below shows this for self-hosted applications. -For more information read the [service-to-service security](../security#mtls-self-hosted) article. +For more information read the [service-to-service security]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}) article. -![Self Hosted service to service security](../../images/security-mTLS-sentry-selfhosted.png) + ### Service access security + Applications can control which other applications are allowed to call them and what they are authorized to do via access policies. This enables you to restrict sensitive applications, that, for example, hold personnel information, from being accessed by unauthorized applications, and combined with service-to-service secure communication, provides for soft multi-tenancy deployments.
-For more information read the [access control allow lists for service invocation](../configuration#access-control-allow-lists-for-service-invocation) article. +For more information read the [access control allow lists for service invocation]({{< ref invoke-allowlist.md >}}) article. ### Observability + By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios. -For more information read the [observability](../concepts/observability) article. +For more information read the [observability]({{< ref observability-concept.md >}}) article. ### Pluggable service discovery -Dapr can run on any [hosting platform](../hosting). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. + +Dapr can run on any [hosting platform]({{< ref hosting >}}). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. + +## Example +Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a python app invokes a node.js app. In such a scenario, the python app would be "Service A", and the Node.js app would be "Service B". + +The diagram below shows sequence 1-7 again on a local machine showing the API call: + + + +1.
The Node.js app has a Dapr app ID of `nodeapp`. The python app invokes the Node.js app's `neworder` method by POSTing `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar. +2. Dapr discovers the Node.js app's location using the name resolution component (in this case mDNS when running self-hosted), which runs on your local machine. +3. Dapr forwards the request to the Node.js app's sidecar using the location it just received. +4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, logging the incoming message and then persisting the order ID into Redis (not shown in the diagram). +5. The Node.js app sends a response to the Python app through the Node.js sidecar. +6. Dapr forwards the response to the Python Dapr sidecar. +7. The Python app receives the response. ## Next steps -* Follow these guide on - * [How-to: Get started with HTTP service invocation](../../howto/invoke-and-discover-services) - * [How-to: Get started with Dapr and gRPC](../../howto/create-grpc-app) -* Try out the [hello world sample](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs](https://github.com/dapr/docs#further-documentation) -* Read the [service invocation API specification](../../reference/api/service_invocation_api.md) +* Follow these guides: + * [How-to: Get started with HTTP service invocation]({{< ref howto-invoke-discover-services.md >}}) + * [How-to: Get started with Dapr and gRPC]({{< ref grpc >}}) +* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs]({{< ref sdks >}}) +* Read the [service invocation API specification]({{< ref service_invocation_api.md >}}) diff --git
a/daprdocs/content/en/developing-applications/building-blocks/state-management/_index.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/_index.md new file mode 100644 index 000000000..fd6f95bdb --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "State management" +linkTitle: "State management" +weight: 20 +description: Create long-running stateful services +--- diff --git a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md new file mode 100644 index 000000000..c7dcb577b --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md @@ -0,0 +1,168 @@ +--- +type: docs +title: "How-To: Save and get state" +linkTitle: "How-To: Save & get state" +weight: 200 +description: "Use key value pairs to persist a state" +--- + +## Introduction + +State management is one of the most common needs of any application: new or legacy, monolith or microservice. +Dealing with different database libraries, testing them, handling retries and faults can be time-consuming and hard. + +Dapr provides state management capabilities that include consistency and concurrency options. +In this guide we'll start off with the basics: Using the key/value state API to allow an application to save, get and delete state. + +## Step 1: Setup a state store + +A state store component represents a resource that Dapr uses to communicate with a database. +For the purpose of this how-to we'll use a Redis state store, but any state store from the [supported list]({{< ref supported-state-stores >}}) will work.
+ +{{< tabs "Self-Hosted (CLI)" Kubernetes>}} + +{{% codetab %}} +When using `dapr init` in Standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML in a `components` directory, which for Linux/MacOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`. + +To change the state store being used, replace the YAML under `/components` with the file of your choice. +{{% /codetab %}} + +{{% codetab %}} +See the instructions [here]({{< ref "setup-state-store" >}}) on how to setup different state stores on Kubernetes. +{{% /codetab %}} + +{{< /tabs >}} + +## Step 2: Save state + +The following example shows how to save two key/value pairs in a single call using the state management API. + +{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}} + +{{% codetab %}} +Begin by ensuring a Dapr sidecar is running: +```bash +dapr run --app-id myapp --port 3500 +``` +{{% alert title="Note" color="info" %}} +It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it, one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state. + +{{% /alert %}} + +Then in a separate terminal run: +```bash +curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' http://localhost:3500/v1.0/state/statestore +``` +{{% /codetab %}} + +{{% codetab %}} +Begin by ensuring a Dapr sidecar is running: +```bash +dapr run --app-id myapp --port 3500 +``` + +{{% alert title="Note" color="info" %}} +It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it, one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
+ +{{% /alert %}} + +Then in a separate terminal run: +```powershell +Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key": "key1", "value": "value1"}, { "key": "key2", "value": "value2"}]' -Uri 'http://localhost:3500/v1.0/state/statestore' +``` +{{% /codetab %}} + +{{% codetab %}} +Make sure to install the Dapr Python SDK with `pip3 install dapr`. Then create a file named `state.py` with: +```python +from dapr.clients import DaprClient +from dapr.clients.grpc._state import StateItem + +with DaprClient() as d: + d.save_states(store_name="statestore", + states=[ + StateItem(key="key1", value="value1"), + StateItem(key="key2", value="value2") + ]) + +``` + +Run with `dapr run --app-id myapp python state.py` + +{{% alert title="Note" color="info" %}} +It is important to set an app-id, as the state keys are prefixed with this value. If you don't set it, one is generated for you at runtime, and the next time you run the command a new one will be generated and you will no longer be able to access previously saved state.
+ +{{% /alert %}} + +{{% /codetab %}} + +{{< /tabs >}} + +## Step 3: Get state + +The following example shows how to get an item by using a key with the state management API: + +{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}} + +{{% codetab %}} +With the same dapr instance running from above run: +```bash +curl http://localhost:3500/v1.0/state/statestore/key1 +``` +{{% /codetab %}} + +{{% codetab %}} +With the same dapr instance running from above run: +```powershell +Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/state/statestore/key1' +``` +{{% /codetab %}} + +{{% codetab %}} +Add the following code to `state.py` from above and run again: +```python + data = d.get_state(store_name="statestore", + key="key1", + state_metadata={"metakey": "metavalue"}).data + print(f"Got value: {data}") +``` +{{% /codetab %}} + +{{< /tabs >}} + +## Step 4: Delete state + +The following example shows how to delete an item by using a key with the state management API: + +{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK">}} + +{{% codetab %}} +With the same dapr instance running from above run: +```bash +curl -X DELETE 'http://localhost:3500/v1.0/state/statestore/key1' +``` +Try getting state again and note that no value is returned. +{{% /codetab %}} + +{{% codetab %}} +With the same dapr instance running from above run: +```powershell +Invoke-RestMethod -Method Delete -Uri 'http://localhost:3500/v1.0/state/statestore/key1' +``` +Try getting state again and note that no value is returned. 
+{{% /codetab %}} + +{{% codetab %}} +Add the following code to `state.py` from above and run again: +```python + d.delete_state(store_name="statestore", + key="key1", + state_metadata={"metakey": "metavalue"}) + data = d.get_state(store_name="statestore", + key="key1", + state_metadata={"metakey": "metavalue"}).data + print(f"Got value after delete: {data}") +``` +{{% /codetab %}} + +{{< /tabs >}} diff --git a/howto/stateful-replicated-service/README.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-stateful-service.md similarity index 92% rename from howto/stateful-replicated-service/README.md rename to daprdocs/content/en/developing-applications/building-blocks/state-management/howto-stateful-service.md index 4ad312f2c..d25b7fbe6 100644 --- a/howto/stateful-replicated-service/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-stateful-service.md @@ -1,15 +1,21 @@ -# Create a stateful replicated service +--- +type: docs +title: "How-To: Build a stateful service" +linkTitle: "How-To: Build a stateful service" +weight: 300 +description: "Use state management with a scaled, replicated service" +--- -In this HowTo we'll show you how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models. +In this article you'll learn how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models. This frees developers from difficult state coordination, conflict resolution and failure handling, and allows them instead to consume these capabilities as APIs from Dapr. -## 1. Setup a state store +## Setup a state store A state store component represents a resource that Dapr uses to communicate with a database. For the purpose of this guide, we'll use a Redis state store.
-See a list of supported state stores [here](../setup-state-store/supported-state-stores.md) +See a list of supported state stores [here]({{< ref supported-state-stores >}}) ### Using the Dapr CLI @@ -18,7 +24,7 @@ To change the state store being used, replace the YAML under `/components` with ### Kubernetes -See the instructions [here](../setup-state-store) on how to setup different state stores on Kubernetes. +See the instructions [here]({{< ref "setup-state-store" >}}) on how to setup different state stores on Kubernetes. ## Strong and Eventual consistency diff --git a/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/_index.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/_index.md new file mode 100644 index 000000000..542bd4b9c --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/_index.md @@ -0,0 +1,9 @@ +--- +type: docs +title: "Work with backend state stores" +linkTitle: "Backend stores" +weight: 400 +description: "Guides for working with specific backend state stores" +--- + +Explore the **Operations** section to see a list of [supported state stores]({{< ref supported-state-stores >}}) and how to setup [state store components]({{< ref setup-state-store >}}).
\ No newline at end of file diff --git a/howto/query-state-store/query-cosmosdb-store.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-cosmosdb-store.md similarity index 87% rename from howto/query-state-store/query-cosmosdb-store.md rename to daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-cosmosdb-store.md index ffce0f539..385968847 100644 --- a/howto/query-state-store/query-cosmosdb-store.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-cosmosdb-store.md @@ -1,53 +1,59 @@ -# Query Azure Cosmos DB state store - -Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state_api.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups. - -> **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). - -## 1. Connect to Azure Cosmos DB - -The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction). - -> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The follow samples assume you've already connected to the right database and a collection named "states". - -## 2. 
List keys by App ID - -To get all state keys associated with application "myapp", use the query: - -```sql -SELECT * FROM states WHERE CONTAINS(states.id, 'myapp||') -``` - -The above query returns all documents with id containing "myapp-", which is the prefix of the state keys. - -## 3. Get specific state data - -To get the state data by a key "balance" for the application "myapp", use the query: - -```sql -SELECT * FROM states WHERE states.id = 'myapp||balance' -``` - -Then, read the **value** field of the returned document. - -To get the state version/ETag, use the command: - -```sql -SELECT states._etag FROM states WHERE states.id = 'myapp||balance' -``` - -## 4. Read actor state - -To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command: - -```sql -SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||') -``` - -And to get a specific actor state such as "food", use the command: - -```sql -SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food' -``` - -> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. +--- +type: docs +title: "Azure Cosmos DB" +linkTitle: "Azure Cosmos DB" +weight: 1000 +description: "Use Azure Cosmos DB as a backend state store" +--- + +Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups. + +> **NOTE:** Azure Cosmos DB is a multi-model database that supports multiple APIs.
The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). + +## 1. Connect to Azure Cosmos DB + +The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction). + +> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states". + +## 2. List keys by App ID + +To get all state keys associated with application "myapp", use the query: + +```sql +SELECT * FROM states WHERE CONTAINS(states.id, 'myapp||') +``` + +The above query returns all documents with id containing "myapp||", which is the prefix of the state keys. + +## 3. Get specific state data + +To get the state data by a key "balance" for the application "myapp", use the query: + +```sql +SELECT * FROM states WHERE states.id = 'myapp||balance' +``` + +Then, read the **value** field of the returned document. + +To get the state version/ETag, use the command: + +```sql +SELECT states._etag FROM states WHERE states.id = 'myapp||balance' +``` + +## 4.
Read actor state + +To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command: + +```sql +SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||') +``` + +And to get a specific actor state such as "food", use the command: + +```sql +SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food' +``` + +> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. diff --git a/howto/query-state-store/query-redis-store.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-redis-store.md similarity index 86% rename from howto/query-state-store/query-redis-store.md rename to daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-redis-store.md index 86b8356af..ab7632e09 100644 --- a/howto/query-state-store/query-redis-store.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-redis-store.md @@ -1,60 +1,66 @@ -# Query Redis state store - -Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state_api.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups. - ->**NOTE:** The following examples uses Redis CLI against a Redis store using the default Dapr state store implementation. - -## 1. Connect to Redis - -You can use the official [redis-cli](https://redis.io/topics/rediscli) or any other Redis compatible tools to connect to the Redis state store to directly query Dapr states. 
If you are running Redis in a container, the easiest way to use redis-cli is to use a container: - -```bash -docker run --rm -it --link redis redis-cli -h -``` - -## 2. List keys by App ID - -To get all state keys associated with application "myapp", use the command: - -```bash -KEYS myapp* -``` - -The above command returns a list of existing keys, for example: - -```bash -1) "myapp||balance" -2) "myapp||amount" -``` - -## 3. Get specific state data - -Dapr saves state values as hash values. Each hash value contains a "data" field, which contains the state data and a "version" field, which contains an ever-incrementing version serving as the ETag. - -For example, to get the state data by a key "balance" for the application "myapp", use the command: - -```bash -HGET myapp||balance data -``` - -To get the state version/ETag, use the command: - -```bash -HGET myapp||balance version -``` - -## 4. Read actor state - -To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command: - -```bash -KEYS mypets||cat||leroy* -``` - -And to get a specific actor state such as "food", use the command: - -```bash -HGET mypets||cat||leroy||food value -``` - -> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. +--- +type: docs +title: "Redis" +linkTitle: "Redis" +weight: 2000 +description: "Use Redis as a backend state store" +--- + +Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups. 
+ +>**NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation. + +## 1. Connect to Redis + +You can use the official [redis-cli](https://redis.io/topics/rediscli) or any other Redis-compatible tool to connect to the Redis state store to directly query Dapr states. If you are running Redis in a container, the easiest way to use redis-cli is to use a container: + +```bash +docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container> +``` + +## 2. List keys by App ID + +To get all state keys associated with application "myapp", use the command: + +```bash +KEYS myapp* +``` + +The above command returns a list of existing keys, for example: + +```bash +1) "myapp||balance" +2) "myapp||amount" +``` + +## 3. Get specific state data + +Dapr saves state values as hash values. Each hash value contains a "data" field, which contains the state data and a "version" field, which contains an ever-incrementing version serving as the ETag. + +For example, to get the state data by a key "balance" for the application "myapp", use the command: + +```bash +HGET myapp||balance data +``` + +To get the state version/ETag, use the command: + +```bash +HGET myapp||balance version +``` + +## 4. Read actor state + +To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command: + +```bash +KEYS mypets||cat||leroy* +``` + +And to get a specific actor state such as "food", use the command: + +```bash +HGET mypets||cat||leroy||food value +``` + +> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. 
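
The hash layout described above is easy to model when reasoning about what the Dapr Redis state store writes. Here is a minimal, illustrative Python sketch (an in-memory dict stands in for Redis; the keys and values are hypothetical) of the `KEYS`/`HGET` interactions:

```python
import fnmatch

# In-memory stand-in for the hashes the Dapr Redis state store writes:
# each key maps to a hash with "data" (state) and "version" (ETag) fields.
store = {
    "myapp||balance": {"data": "42", "version": "1"},
    "myapp||amount": {"data": "7", "version": "3"},
    "mypets||cat||leroy||food": {"data": "tuna", "version": "2"},
}

def keys(pattern):
    """Emulates `KEYS <pattern>` (glob-style matching)."""
    return sorted(k for k in store if fnmatch.fnmatch(k, pattern))

def hget(key, field):
    """Emulates `HGET <key> <field>`; None when the key is missing."""
    return store.get(key, {}).get(field)

print(keys("myapp*"))                     # -> ['myapp||amount', 'myapp||balance']
print(hget("myapp||balance", "data"))     # state data -> '42'
print(hget("myapp||balance", "version"))  # ETag -> '1'
print(keys("mypets||cat||leroy*"))        # actor state keys for cat/leroy
```

The same key-prefix scheme (`app-id||key`, and `app-id||actor-type||actor-id||key` for actors) underlies all of the queries in this how-to.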
diff --git a/howto/query-state-store/query-sqlserver-store.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-sqlserver-store.md similarity index 87% rename from howto/query-state-store/query-sqlserver-store.md rename to daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-sqlserver-store.md index 35a28191d..981db0e45 100644 --- a/howto/query-state-store/query-sqlserver-store.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/query-state-store/query-sqlserver-store.md @@ -1,59 +1,65 @@ -# Query SQL Server state store - -Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state_api.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups. - -## 1. Connect to SQL Server - -The easiest way to connect to your SQL Server instance is to use the [Azure Data Studio](https://docs.microsoft.com/sql/azure-data-studio/download-azure-data-studio) (Windows, macOS, Linux) or [SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) (Windows). - -> **NOTE:** The following samples use Azure SQL. When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The follow samples assume you've already connected to the right database with a table named "states". - -## 2. List keys by App ID - -To get all state keys associated with application "myapp", use the query: - -```sql -SELECT * FROM states WHERE [Key] LIKE 'myapp-%' -``` - -The above query returns all rows with id containing "myapp-", which is the prefix of the state keys. - -## 3. 
Get specific state data - -To get the state data by a key "balance" for the application "myapp", use the query: - -```sql -SELECT * FROM states WHERE [Key] = 'myapp-balance' -``` - -Then, read the **Data** field of the returned row. - -To get the state version/ETag, use the command: - -```sql -SELECT [RowVersion] FROM states WHERE [Key] = 'myapp-balance' -``` - -## 4. Get filtered state data - -To get all state data where the value "color" in json data equals to "blue", use the query: - -```sql -SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue' -``` - -## 5. Read actor state - -To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command: - -```sql -SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%' -``` - -And to get a specific actor state such as "food", use the command: - -```sql -SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food' -``` - -> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. +--- +type: docs +title: "SQL Server" +linkTitle: "SQL Server" +weight: 3000 +description: "Use SQL Server as a backend state store" +--- + +Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec]({{< ref state_api.md >}})). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups. + +## 1. Connect to SQL Server + +The easiest way to connect to your SQL Server instance is to use [Azure Data Studio](https://docs.microsoft.com/sql/azure-data-studio/download-azure-data-studio) (Windows, macOS, Linux) or [SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) (Windows). 
+ +> **NOTE:** The following samples use Azure SQL. When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The following samples assume you've already connected to the right database with a table named "states". + +## 2. List keys by App ID + +To get all state keys associated with application "myapp", use the query: + +```sql +SELECT * FROM states WHERE [Key] LIKE 'myapp-%' +``` + +The above query returns all rows whose [Key] begins with "myapp-", which is the prefix of the state keys. + +## 3. Get specific state data + +To get the state data by a key "balance" for the application "myapp", use the query: + +```sql +SELECT * FROM states WHERE [Key] = 'myapp-balance' +``` + +Then, read the **Data** field of the returned row. + +To get the state version/ETag, use the command: + +```sql +SELECT [RowVersion] FROM states WHERE [Key] = 'myapp-balance' +``` + +## 4. Get filtered state data + +To get all state data where the value "color" in the JSON data equals "blue", use the query: + +```sql +SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue' +``` + +## 5. Read actor state + +To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command: + +```sql +SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%' +``` + +And to get a specific actor state such as "food", use the command: + +```sql +SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food' +``` + +> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime. 
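
The queries above can be tried out locally without a SQL Server instance. The following Python sketch uses the standard-library `sqlite3` as a stand-in (SQLite's `json_extract` plays the role of SQL Server's `JSON_VALUE`); the table contents are made up for illustration:

```python
import sqlite3

# In-memory SQLite database emulating the Dapr "states" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states ([Key] TEXT PRIMARY KEY, [Data] TEXT)")
conn.executemany(
    "INSERT INTO states VALUES (?, ?)",
    [
        ("myapp-balance", '{"color": "blue", "amount": 42}'),
        ("myapp-amount", '{"color": "red"}'),
        ("mypets-cat-leroy-food", '{"kind": "tuna"}'),
    ],
)

# Step 2: list keys by app ID (prefix match on the key scheme).
app_keys = [row[0] for row in conn.execute(
    "SELECT [Key] FROM states WHERE [Key] LIKE 'myapp-%' ORDER BY [Key]")]

# Step 4: filter on a JSON property inside the Data column.
blue_keys = [row[0] for row in conn.execute(
    "SELECT [Key] FROM states WHERE json_extract([Data], '$.color') = 'blue'")]

print(app_keys)   # keys belonging to "myapp"
print(blue_keys)  # keys whose data has color == "blue"
```

Note that SQLite accepts the bracketed `[Key]`/`[Data]` identifiers used in the SQL Server samples, so the queries carry over almost verbatim.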
diff --git a/concepts/state-management/README.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/state-management-overview.md similarity index 66% rename from concepts/state-management/README.md rename to daprdocs/content/en/developing-applications/building-blocks/state-management/state-management-overview.md index 24ed9f27e..e684823a3 100644 --- a/concepts/state-management/README.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/state-management-overview.md @@ -1,12 +1,15 @@ -# State management +--- +type: docs +title: "State management overview" +linkTitle: "Overview" +weight: 100 +description: "Overview of the state management building block" +--- -Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores](https://github.com/dapr/docs/blob/master/howto/setup-state-store/supported-state-stores.md), without adding or learning a third party SDK. +## Introduction -- [Overview](#overview) -- [Features](#features) -- [Next Steps](#next-steps) +Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores]({{< ref supported-state-stores.md >}}), without adding or learning a third party SDK. -## Overview When using state management your application can leverage several features that would otherwise be complicated and error-prone to build yourself such as: - Distributed concurrency and data consistency @@ -15,32 +18,29 @@ When using state management your application can leverage several features that See below for a diagram of state management's high level architecture. 
-![State management](../../images/state_management.png) + ## Features -- [State Management API](#state-management-api) -- [State Store Behaviors](#state-store-behaviors) +- [State management API](#state-management-api) +- [State store behaviors](#state-store-behaviors) - [Concurrency](#concurrency) - [Consistency](#consistency) -- [Retry Policies](#retry-policies) -- [Bulk Operations](#bulk-operations) -- [Querying State Store Directly](#querying-state-store-directly) +- [Retry policies](#retry-policies) +- [Bulk operations](#bulk-operations) +- [Querying state store directly](#querying-state-store-directly) -## State management API +### State management API Developers can use the state management API to retrieve, save and delete state values by providing keys. -Dapr data stores are components. Dapr ships with [Redis](https://redis.io -) out-of-box for local development in self hosted mode. Dapr allows you to plug in other data stores as components such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB -), [GCP Cloud Spanner](https://cloud.google.com/spanner -) and [Cassandra](http://cassandra.apache.org/). +Dapr data stores are components. Dapr ships with [Redis](https://redis.io) out-of-box for local development in self hosted mode. Dapr allows you to plug in other data stores as components such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/). -Visit [State API](../../reference/api/state_api.md) for more information. +Visit [State API]({{< ref state_api.md >}}) for more information. > **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance. 
This allows multiple Dapr instances to share the same state store. -## State store behaviors +### State store behaviors Dapr allows developers to attach to a state operation request additional metadata that describes how the request is expected to be handled. For example, you can attach concurrency requirements, consistency requirements, and retry policy to any state operation requests. @@ -50,50 +50,51 @@ Not all stores are created equal. To ensure portability of your application, you The following table summarizes the capabilities of existing data store implementations. -Store | Strong consistent write | Strong consistent read | ETag| -----|----|----|---- -Cosmos DB | Yes | Yes | Yes -PostgreSQL | Yes | Yes | Yes -Redis | Yes | Yes | Yes -Redis (clustered)| Yes | No | Yes -SQL Server | Yes | Yes | Yes +| Store | Strong consistent write | Strong consistent read | ETag | +|-------------------|-------------------------|------------------------|------| +| Cosmos DB | Yes | Yes | Yes | +| PostgreSQL | Yes | Yes | Yes | +| Redis | Yes | Yes | Yes | +| Redis (clustered) | Yes | No | Yes | +| SQL Server | Yes | Yes | Yes | -## Concurrency +### Concurrency Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. And when the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the state store. -Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [Retry Policy](#Retry-Policies) to compensate for such conflicts when using ETags. 
+Dapr chooses OCC because in many applications data update conflicts are rare, since clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#retry-policies) to compensate for such conflicts when using ETags. If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags. > **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store. -## Consistency +### Consistency Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior. When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica. -## Retry policies +### Retry policies Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt. 
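
The interplay between ETag checks and retry patterns described above can be sketched in a few lines of Python. This is an illustrative model of the behavior, not Dapr code; the store layout, ETag scheme, and policy values are all hypothetical:

```python
# Toy model of OCC (ETag) writes and retry-policy intervals.
store = {"balance": ("42", 1)}  # key -> (value, etag)

def save(key, value, etag=None):
    """Write succeeds only when the supplied ETag matches the stored one.
    Omitting the ETag skips the check entirely (last-write-wins)."""
    _, current_etag = store.get(key, (None, 0))
    if etag is not None and etag != current_etag:
        raise ValueError("ETag mismatch")
    store[key] = (value, current_etag + 1)  # ever-incrementing ETag

def retry_intervals(retry_interval, retry_pattern, retry_threshold):
    """Waits between attempts: a linear pattern repeats the interval,
    an exponential pattern doubles it after each attempt."""
    if retry_pattern == "exponential":
        return [retry_interval * 2 ** i for i in range(retry_threshold)]
    return [retry_interval] * retry_threshold

save("balance", "50", etag=1)      # ETag matches -> succeeds, ETag becomes 2
try:
    save("balance", "60", etag=1)  # stale ETag -> rejected (first-write-wins)
except ValueError:
    pass                           # a retry policy would re-read and retry here
save("balance", "60")              # no ETag -> last-write-wins
```

A stale-ETag rejection like the one above is exactly the case the retry policy is meant to absorb: re-read the state (and its fresh ETag), wait the next interval, and write again.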
-## Bulk operations +### Bulk operations Dapr supports two types of bulk operations: **bulk** and **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction. -## Querying state store directly +### Querying state store directly -Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the underlying state store. For example, to get all state keys associated with an application ID "myApp" in Redis, use: +Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the [underlying state store]({{< ref query-state-store >}}). + +For example, to get all state keys associated with an application ID "myApp" in Redis, use: ```bash KEYS "myApp*" ``` -> **NOTE:** See [How to query Redis store](../../howto/query-state-store/query-redis-store.md) for details on how to query a Redis store. -> +> **NOTE:** See [How to query Redis store]({{< ref query-redis-store.md >}}) for details on how to query a Redis store. -### Querying actor state +#### Querying actor state If the data store supports SQL queries, you can query an actor's state using SQL queries. 
For example use: @@ -111,13 +112,6 @@ SELECT AVG(value) FROM StateTable WHERE Id LIKE '||||*||tem ## Next steps -* Follow these guides on - * [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md) - * [How-to: query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md) - * [How-to: Set up PostgreSQL store](../../howto/setup-state-store/setup-postgresql.md) - * [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md) - * [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md) - * [How-to: Set up SQL Server store](../../howto/setup-state-store/setup-sqlserver.md) - * [How-to: Query SQL Server store](../../howto/query-state-store/query-sqlserver-store.md) -* Read the [state management API specification](../../reference/api/state_api.md) -* Read the [actors API specification](../../reference/api/actors_api.md) +* Follow the [state store setup guides]({{< ref setup-state-store >}}) +* Read the [state management API specification]({{< ref state_api.md >}}) +* Read the [actors API specification]({{< ref actors_api.md >}}) diff --git a/daprdocs/content/en/developing-applications/ides/_index.md b/daprdocs/content/en/developing-applications/ides/_index.md new file mode 100644 index 000000000..f74c56e54 --- /dev/null +++ b/daprdocs/content/en/developing-applications/ides/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "IDE support" +linkTitle: "IDE support" +weight: 30 +description: "Support for common Integrated Development Environments (IDEs)" +--- \ No newline at end of file diff --git a/howto/intellij-debugging-daprd/README.md b/daprdocs/content/en/developing-applications/ides/intellij.md similarity index 92% rename from howto/intellij-debugging-daprd/README.md rename to daprdocs/content/en/developing-applications/ides/intellij.md index 0b2bb7d0a..54149b7c3 100644 --- a/howto/intellij-debugging-daprd/README.md +++ 
b/daprdocs/content/en/developing-applications/ides/intellij.md @@ -1,4 +1,10 @@ -# Configuring IntelliJ Community Edition for debugging with Dapr +--- +type: docs +title: "IntelliJ" +linkTitle: "IntelliJ" +weight: 1000 +description: "Configuring IntelliJ community edition for debugging with Dapr" +--- When developing Dapr applications, you typically use the Dapr CLI to start your 'Daprized' service similar to this: @@ -74,14 +80,14 @@ Optionally, you may also create a new entry for a sidecar tool that can be reuse Now, create or edit the run configuration for the application to be debugged. It can be found in the menu next to the `main()` function. -![Edit run configuration menu](../../images/intellij_debug_menu.png) +![Edit run configuration menu](/images/intellij_debug_menu.png) Now, add the program arguments and environment variables. These need to match the ports defined in the entry in 'External Tool' above. * Command line arguments for this example: `-p 3000` * Environment variables for this example: `DAPR_HTTP_PORT=3005;DAPR_GRPC_PORT=52000` -![Edit run configuration](../../images/intellij_edit_run_configuration.png) +![Edit run configuration](/images/intellij_edit_run_configuration.png) ## Start debugging @@ -89,11 +95,11 @@ Once the one-time config above is done, there are two steps required to debug a 1. Start `dapr` via `Tools` -> `External Tool` in IntelliJ. -![Run dapr as 'External Tool'](../../images/intellij_start_dapr.png) +![Run dapr as 'External Tool'](/images/intellij_start_dapr.png) 2. Start your application in debug mode. 
-![Start application in debug mode](../../images/intellij_debug_app.png) +![Start application in debug mode](/images/intellij_debug_app.png) ## Wrapping up diff --git a/howto/vscode-debugging-daprd/README.md b/daprdocs/content/en/developing-applications/ides/vscode-debugging.md similarity index 98% rename from howto/vscode-debugging-daprd/README.md rename to daprdocs/content/en/developing-applications/ides/vscode-debugging.md index d69766df4..e7f70229a 100644 --- a/howto/vscode-debugging-daprd/README.md +++ b/daprdocs/content/en/developing-applications/ides/vscode-debugging.md @@ -1,4 +1,10 @@ -# Application development with Visual Studio Code +--- +type: docs +title: "VS Code" +linkTitle: "VS Code" +weight: 2000 +description: "Application development and debugging with Visual Studio Code" +--- ## Visual Studio Code Dapr extension It is recommended to use the *preview* of the [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) available in the Visual Studio marketplace for local development and debugging of your Dapr applications. 
diff --git a/howto/vscode-remote-containers/README.md b/daprdocs/content/en/developing-applications/ides/vscode-remotecontainers.md similarity index 85% rename from howto/vscode-remote-containers/README.md rename to daprdocs/content/en/developing-applications/ides/vscode-remotecontainers.md index 1f63b7149..00454d4a4 100644 --- a/howto/vscode-remote-containers/README.md +++ b/daprdocs/content/en/developing-applications/ides/vscode-remotecontainers.md @@ -1,4 +1,10 @@ -# Application development with Visual Studio Code +--- +type: docs +title: "VS Code remote containers" +linkTitle: "VS Code remote containers" +weight: 3000 +description: "Application development and debugging with Visual Studio Code remote containers" +--- ## Using remote containers for your application development diff --git a/daprdocs/content/en/developing-applications/integrations/_index.md b/daprdocs/content/en/developing-applications/integrations/_index.md new file mode 100644 index 000000000..430b4594d --- /dev/null +++ b/daprdocs/content/en/developing-applications/integrations/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Integrations" +linkTitle: "Integrations" +weight: 50 +description: "Dapr integrations with other technologies" +--- \ No newline at end of file diff --git a/howto/autoscale-with-keda/README.md b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md similarity index 93% rename from howto/autoscale-with-keda/README.md rename to daprdocs/content/en/developing-applications/integrations/autoscale-keda.md index 68c5132b0..abd206d94 100644 --- a/howto/autoscale-with-keda/README.md +++ b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md @@ -1,10 +1,15 @@ -# Autoscaling Dapr application on Kubernetes using KEDA +--- +type: docs +title: "Autoscaling a Dapr app with KEDA" +linkTitle: "Autoscale" +weight: 2000 +--- -Dapr, with its modular building-block approach, along with the 10+ different [pub/sub 
components](../../concepts/publish-subscribe-messaging), make it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge) the autoscaling of Dapr applications is managed by the hosting later. +Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge), the autoscaling of Dapr applications is managed by the hosting layer. For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda) so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on the back pressure using KEDA. -This how-to walks through the configuration of a scalable Dapr application along with the back pressure on Kafka topic, however you can apply this approach to [pub/sub components](../../concepts/publish-subscribe-messaging) offered by Dapr. +This how-to walks through the configuration of a scalable Dapr application handling back pressure on a Kafka topic; however, you can apply this approach to any of the [pub/sub components]({{< ref pubsub >}}) offered by Dapr. 
## Install KEDA diff --git a/howto/create-grpc-app/README.md b/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md similarity index 92% rename from howto/create-grpc-app/README.md rename to daprdocs/content/en/developing-applications/integrations/gRPC-integration.md index cc4ce836c..c5e32f8b4 100644 --- a/howto/create-grpc-app/README.md +++ b/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md @@ -1,6 +1,15 @@ -# Create a gRPC enabled app, and invoke Dapr over gRPC +--- +type: docs +title: "Dapr's gRPC Interface" +linkTitle: "gRPC" +weight: 1000 +description: "Use the Dapr gRPC API in your application" +--- -Dapr implements both an HTTP and a gRPC API for local calls.gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. +# Dapr and gRPC + +Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high-performance scenarios and has language integration using the proto clients. You can find a list of auto-generated clients [here](https://github.com/dapr/docs#sdks). @@ -222,5 +231,5 @@ You can use Dapr with any language supported by Protobuf, and not just with the Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool you can generate the Dapr clients for other languages like Ruby, C++, Rust and others. 
## Related Topics -* [Service invocation concepts](../../concepts/service-invocation/README.md) -* [Service invocation API specification](../../reference/api/service_invocation_api.md) +- [Service invocation building block]({{< ref service-invocation >}}) +- [Service invocation API specification]({{< ref service_invocation_api.md >}}) diff --git a/daprdocs/content/en/developing-applications/middleware/_index.md b/daprdocs/content/en/developing-applications/middleware/_index.md new file mode 100644 index 000000000..3ea4f84d5 --- /dev/null +++ b/daprdocs/content/en/developing-applications/middleware/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Middleware" +linkTitle: "Middleware" +weight: 50 +description: "Customize Dapr processing pipelines by adding middleware components" +--- diff --git a/howto/policies-with-opa/README.md b/daprdocs/content/en/developing-applications/middleware/middleware-opa-policies.md similarity index 95% rename from howto/policies-with-opa/README.md rename to daprdocs/content/en/developing-applications/middleware/middleware-opa-policies.md @@ -1,4 +1,11 @@ -# Apply Open Policy Agent Polices +--- +type: docs +title: "How-To: Apply OPA policies" +linkTitle: "How-To: Apply OPA policies" +weight: 1000 +description: "Use Dapr middleware to apply Open Policy Agent (OPA) policies on incoming requests" +--- The Dapr Open Policy Agent (OPA) [HTTP middleware](https://github.com/dapr/docs/blob/master/concepts/middleware/README.md) allows applying [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints. 
diff --git a/daprdocs/content/en/developing-applications/sdks/_index.md b/daprdocs/content/en/developing-applications/sdks/_index.md new file mode 100644 index 000000000..a905eaabf --- /dev/null +++ b/daprdocs/content/en/developing-applications/sdks/_index.md @@ -0,0 +1,22 @@ +--- +type: docs +title: "SDKs" +linkTitle: "SDKs" +weight: 20 +description: "Use your favorite languages with Dapr" +--- + +### .NET +See the [.NET SDK repository](https://github.com/dapr/dotnet-sdk) + +### Java +See the [Java SDK repository](https://github.com/dapr/java-sdk) + +### Go +See the [Go SDK repository](https://github.com/dapr/go-sdk) + +### Python +See the [Python SDK repository](https://github.com/dapr/python-sdk) + +### Javascript +See the [Javascript SDK repository](https://github.com/dapr/js-sdk) \ No newline at end of file diff --git a/howto/serialize/README.md b/daprdocs/content/en/developing-applications/sdks/serialization.md similarity index 96% rename from howto/serialize/README.md rename to daprdocs/content/en/developing-applications/sdks/serialization.md index 7fdc1f16d..ae7b30e60 100644 --- a/howto/serialize/README.md +++ b/daprdocs/content/en/developing-applications/sdks/serialization.md @@ -1,146 +1,151 @@ -# Serialization in Dapr's SDKs - -An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization. 
- -## Service invocation - -```java - DaprClient client = (new DaprClientBuilder()).build(); - client.invokeService(Verb.POST, "myappid", "saySomething", "My Message", null).block(); -``` - -In the example above, the app will receive a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON. - -```text -POST /saySomething HTTP/1.1 -Host: localhost -Content-Type: text/plain -Content-Length: 12 - -"My Message" -``` - -## State management - -```java - DaprClient client = (new DaprClientBuilder()).build(); - client.saveState("MyStateStore", "MyKey", "My Message").block(); -``` -In this example, `My Message` will be saved. It is not quoted because Dapr's API will internally parse the JSON request object before saving it. - -```JSON -[ - { - "key": "MyKey", - "value": "My Message" - } -] -``` - -## PubSub - -```java - DaprClient client = (new DaprClientBuilder()).build(); - client.publishEvent("TopicName", "My Message").block(); -``` - -The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber will receive it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. Dapr SDK also provides a built-in deserializer for `CloudEvent` object. - -```java - @PostMapping(path = "/TopicName") - public void handleMessage(@RequestBody(required = false) byte[] body) { - // Dapr's event is compliant to CloudEvent. - CloudEvent event = CloudEvent.deserialize(body); - } -``` - -## Bindings - -In this case, the object is serialized to `byte[]` as well and the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type. 
- -* Output binding: -```java - DaprClient client = (new DaprClientBuilder()).build(); - client.invokeBinding("sample", "My Message").block(); -``` - -* Input binding: -```java - @PostMapping(path = "/sample") - public void handleInputBinding(@RequestBody(required = false) byte[] body) { - String message = (new DefaultObjectSerializer()).deserialize(body, String.class); - System.out.println(message); - } -``` -It should print: -``` -My Message -``` - -## Actor Method invocation -Object serialization and deserialization for invocation of Actor's methods are same as for the service method invocation, the only difference is that the application does not need to deserialize the request or serialize the response since it is all done transparently by the SDK. - -For Actor's methods, the SDK only supports methods with zero or one parameter. - -* Invoking an Actor's method: -```java -public static void main() { - ActorProxyBuilder builder = new ActorProxyBuilder("DemoActor"); - String result = actor.invokeActorMethod("say", "My Message", String.class).block(); -} -``` - -* Implementing an Actor's method: -```java -public String say(String something) { - System.out.println(something); - return "OK"; -} -``` -It should print: -``` - My Message -``` - -## Actor's state management -Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state serializer and handle it transparently to the application. - -```java -public String actorMethod(String message) { - // Reads a state from key and deserializes it to String. - String previousMessage = super.getActorStateManager().get("lastmessage", String.class).block(); - - // Sets the new state for the key after serializing it. - super.getActorStateManager().set("lastmessage", message).block(); - return previousMessage; -} -``` - -## Default serializer - -The default serializer for Dapr is a JSON serializer with the following expectations: - -1. 
Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, boolean, null and another JSON object. Every complex property type in application's serializable objects (DateTime, for example), should be represented as one of the JSON's basic types. -2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The example below shows how a string and a JSON object would look like in a Redis store. -```bash -redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message -"This is a message to be saved and retrieved." -``` -```bash - redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata -{"value":"My data value."} -``` -3. Custom serializers must serialize object to `byte[]`. -4. Custom serializers must deserilize `byte[]` to object. -5. When user provides a custom serializer, it should be transferred or persisted as `byte[]`. When persisting, also encode as Base64 string. This is done natively by most JSON libraries. -```bash -redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message -"VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4=" -``` -```bash - redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata -"eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0=" -``` -6. When serializing a object that is a `byte[]`, the serializer should just pass it through since `byte[]` shoould be already handled internally in the SDK. The same happens when deserializing to `byte[]`. - +--- +type: docs +title: "Serialization in Dapr's SDKs" +linkTitle: "Serialization" +weight: 1000 +--- + +An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. 
For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization. + +## Service invocation + +```java + DaprClient client = (new DaprClientBuilder()).build(); + client.invokeService(Verb.POST, "myappid", "saySomething", "My Message", null).block(); +``` + +In the example above, the app will receive a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON. + +```text +POST /saySomething HTTP/1.1 +Host: localhost +Content-Type: text/plain +Content-Length: 12 + +"My Message" +``` + +## State management + +```java + DaprClient client = (new DaprClientBuilder()).build(); + client.saveState("MyStateStore", "MyKey", "My Message").block(); +``` +In this example, `My Message` will be saved. It is not quoted because Dapr's API will internally parse the JSON request object before saving it. + +```JSON +[ + { + "key": "MyKey", + "value": "My Message" + } +] +``` + +## PubSub + +```java + DaprClient client = (new DaprClientBuilder()).build(); + client.publishEvent("TopicName", "My Message").block(); +``` + +The event is published and the content is serialized to `byte[]` and sent to the Dapr sidecar. The subscriber will receive it as a [CloudEvent](https://github.com/cloudevents/spec). The CloudEvent spec defines `data` as a String. The Dapr SDK also provides a built-in deserializer for the `CloudEvent` object. + +```java + @PostMapping(path = "/TopicName") + public void handleMessage(@RequestBody(required = false) byte[] body) { + // Dapr's event is compliant with CloudEvent. + CloudEvent event = CloudEvent.deserialize(body); + } +``` + +## Bindings + +In this case, the object is serialized to `byte[]` as well and the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type.
+ +* Output binding: +```java + DaprClient client = (new DaprClientBuilder()).build(); + client.invokeBinding("sample", "My Message").block(); +``` + +* Input binding: +```java + @PostMapping(path = "/sample") + public void handleInputBinding(@RequestBody(required = false) byte[] body) { + String message = (new DefaultObjectSerializer()).deserialize(body, String.class); + System.out.println(message); + } +``` +It should print: +``` +My Message +``` + +## Actor Method invocation +Object serialization and deserialization for invocation of an Actor's methods are the same as for service method invocation; the only difference is that the application does not need to deserialize the request or serialize the response, since it is all done transparently by the SDK. + +For Actor's methods, the SDK only supports methods with zero or one parameter. + +* Invoking an Actor's method: +```java +public static void main(String[] args) { + ActorProxyBuilder builder = new ActorProxyBuilder("DemoActor"); + // Builds a proxy for an actor instance (id "100" is an example value). + ActorProxy actor = builder.build(new ActorId("100")); + String result = actor.invokeActorMethod("say", "My Message", String.class).block(); +} +``` + +* Implementing an Actor's method: +```java +public String say(String something) { + System.out.println(something); + return "OK"; +} +``` +It should print: +``` + My Message +``` + +## Actor's state management +Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state serializer and handle it transparently to the application. + +```java +public String actorMethod(String message) { + // Reads state from the key and deserializes it to String. + String previousMessage = super.getActorStateManager().get("lastmessage", String.class).block(); + + // Sets the new state for the key after serializing it. + super.getActorStateManager().set("lastmessage", message).block(); + return previousMessage; +} +``` + +## Default serializer + +The default serializer for Dapr is a JSON serializer with the following expectations: + +1.
Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, boolean, null and another JSON object. Every complex property type in an application's serializable objects (DateTime, for example) should be represented as one of JSON's basic types. +2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The example below shows how a string and a JSON object would look in a Redis store. +```bash +redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message +"This is a message to be saved and retrieved." +``` +```bash + redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata +{"value":"My data value."} +``` +3. Custom serializers must serialize objects to `byte[]`. +4. Custom serializers must deserialize `byte[]` to objects. +5. When a user provides a custom serializer, data should be transferred or persisted as `byte[]`. When persisting, also encode it as a Base64 string. This is done natively by most JSON libraries. +```bash +redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message +"VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4=" +``` +```bash + redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata +"eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0=" +``` +6. When serializing an object that is already a `byte[]`, the serializer should just pass it through, since `byte[]` should already be handled internally in the SDK. The same happens when deserializing to `byte[]`. + *As of now, the [Java SDK](https://github.com/dapr/java-sdk/) is the only Dapr SDK that implements this specification.
In the near future, other SDKs will also implement the same.* \ No newline at end of file diff --git a/daprdocs/content/en/getting-started/_index.md b/daprdocs/content/en/getting-started/_index.md new file mode 100644 index 000000000..51641361c --- /dev/null +++ b/daprdocs/content/en/getting-started/_index.md @@ -0,0 +1,8 @@ +--- +type: docs +title: "Getting started with Dapr" +linkTitle: "Getting started" +weight: 20 +description: "Get up and running with Dapr" +--- + diff --git a/daprdocs/content/en/getting-started/configure-redis.md b/daprdocs/content/en/getting-started/configure-redis.md new file mode 100644 index 000000000..6c1bd9d25 --- /dev/null +++ b/daprdocs/content/en/getting-started/configure-redis.md @@ -0,0 +1,224 @@ +--- +type: docs +title: "How-To: Setup Redis" +linkTitle: "How-To: Setup Redis" +weight: 30 +description: "Configure Redis for Dapr state management or Pub/Sub" +--- + +Dapr can use Redis in two ways: + +1. As a state store component (state.redis) for persistence and restoration +2. As a pub/sub component (pubsub.redis) for async-style message delivery + +## Create a Redis store + +Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [configuration](#configure-dapr-components) section. + +{{< tabs "Self-Hosted" "Kubernetes (Helm)" "Azure Redis Cache" "AWS Redis" "GCP Memorystore" >}} + +{{% codetab %}} +Redis is automatically installed in self-hosted environments by the Dapr CLI as part of the initialization process. +{{% /codetab %}} + +{{% codetab %}} +You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [Installing Helm v3](https://github.com/helm/helm#install). + +1.
Install Redis into your cluster: + + ```bash + helm repo add bitnami https://charts.bitnami.com/bitnami + helm repo update + helm install redis bitnami/redis + ``` + + Note that you will need a Redis version greater than 5, which is what Dapr's pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), a lower version can be used. + +2. Run `kubectl get pods` to see the Redis containers now running in your cluster: + + ```bash + $ kubectl get pods + NAME READY STATUS RESTARTS AGE + redis-master-0 1/1 Running 0 69s + redis-slave-0 1/1 Running 0 69s + redis-slave-1 1/1 Running 0 22s + ``` + +3. Add `redis-master.default.svc.cluster.local:6379` as the `redisHost` in your [redis.yaml](#configure-dapr-components) file. For example: + + ```yaml + metadata: + - name: redisHost + value: redis-master.default.svc.cluster.local:6379 + ``` + +4. Securely reference the Redis password in your [redis.yaml](#configure-dapr-components) file. For example: + + ```yaml + - name: redisPassword + secretKeyRef: + name: redis + key: redis-password + ``` + +5. (Alternative) It is **not recommended**, but you can hard-code a password instead of using a secretKeyRef. First you'll get the Redis password, which is slightly different depending on the OS you're using: + + - **Windows**: In PowerShell run: + ```powershell + PS C:\> $base64pwd=kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" + PS C:\> $redispassword=[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64pwd)) + PS C:\> $base64pwd="" + PS C:\> $redispassword + ``` + - **Linux/MacOS**: Run: + ```bash + kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode + ``` + + Add this password as the `redisPassword` value in your [redis.yaml](#configure-dapr-components) file.
For example: + + ```yaml + metadata: + - name: redisPassword + value: lhDOkwTlp0 + ``` +{{% /codetab %}} + +{{% codetab %}} +This method requires having an Azure Subscription. + +1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary. +1. Fill out the necessary information. +1. Click "Create" to kick off deployment of your Redis instance. +1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key. +1. You'll need the hostname of your Redis instance, which you can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`. +1. Finally, you'll need to add your key and your host to a `redis.yaml` file that Dapr can apply to your cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configure-dapr-components). + + As the connection to Azure is encrypted, make sure to add the following block to the `metadata` section of your `redis.yaml` file. + + ```yaml + metadata: + - name: enableTLS + value: "true" + ``` + +> **NOTE:** Dapr pub/sub uses [Redis streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0 and aren't currently available on Azure Cache for Redis. Consequently, you can use Azure Cache for Redis only for state persistence. +{{% /codetab %}} + +{{% codetab %}} +Visit [AWS Redis](https://aws.amazon.com/redis/). +{{% /codetab %}} + +{{% codetab %}} +Visit [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/). +{{% /codetab %}} + +{{< /tabs >}} + +## Configure Dapr components + +Dapr can use Redis as a [`statestore` component]({{< ref setup-state-store >}}) for state persistence (`state.redis`) or as a [`pubsub` component]({{< ref setup-pubsub >}}) (`pubsub.redis`).
The following YAML files demonstrate how to define each component using either a `secretKeyRef` reference (which is preferred) or a plain-text password. + +### Create component files + +#### State store component with secret reference + +Create a file called redis-state.yaml, and paste the following: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: statestore + namespace: default +spec: + type: state.redis + metadata: + - name: redisHost + value: + - name: redisPassword + secretKeyRef: + name: redis + key: redis-password +``` + +#### Pub/sub component with secret reference + +Create a file called redis-pubsub.yaml, and paste the following: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: + - name: redisPassword + secretKeyRef: + name: redis + key: redis-password +``` + +#### State store component with hard-coded password (not recommended) + +For development purposes only, create a file called redis-state.yaml, and paste the following: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: statestore + namespace: default +spec: + type: state.redis + metadata: + - name: redisHost + value: + - name: redisPassword + value: +``` + +#### Pub/Sub component with hard-coded password (not recommended) + +For development purposes only, create a file called redis-pubsub.yaml, and paste the following: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: redisHost + value: + - name: redisPassword + value: +``` + +### Apply the configuration + +{{< tabs "Self-Hosted" "Kubernetes">}} + +{{% codetab %}} +By default the Dapr CLI creates a local Redis instance when you run `dapr init`.
However, if you want to configure a different Redis instance, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. + +If you initialized Dapr using `dapr init --slim`, the Dapr CLI did not create a Redis instance or a default configuration file for it. Follow [the instructions above](#create-a-redis-store) to create a Redis store. Create the `redis.yaml` following the configuration [instructions](#configure-dapr-components) in a `components` dir and provide the path to the `dapr run` command with the flag `--components-path`. +{{% /codetab %}} + +{{% codetab %}} + +Run `kubectl apply -f <filename>` for both state and pubsub files: + +```bash +kubectl apply -f redis-state.yaml +kubectl apply -f redis-pubsub.yaml +``` +{{% /codetab %}} + +{{< /tabs >}} diff --git a/daprdocs/content/en/getting-started/install-dapr.md b/daprdocs/content/en/getting-started/install-dapr.md new file mode 100644 index 000000000..52ca696ce --- /dev/null +++ b/daprdocs/content/en/getting-started/install-dapr.md @@ -0,0 +1,261 @@ +--- +type: docs +title: "How-To: Install Dapr into your environment" +linkTitle: "How-To: Install Dapr" +weight: 20 +description: "Install Dapr in your preferred environment" +--- + +This guide will get you up and running to evaluate Dapr and develop applications. Visit [this page]({{< ref hosting >}}) for a full list of supported platforms with instructions and best practices on running in production. + +## Install the Dapr CLI + +Begin by downloading and installing the Dapr CLI. This will be used to initialize your environment on your desired platform.
+ +{{< tabs Linux Windows MacOS Binaries>}} + +{{% codetab %}} +This command will install the latest Linux Dapr CLI to `/usr/local/bin`: +```bash +wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash +``` +{{% /codetab %}} + +{{% codetab %}} +This command will install the latest Windows Dapr CLI to `%USERPROFILE%\.dapr\` and add this directory to the user PATH environment variable: +```powershell +powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex" +``` +Verify by opening Explorer and entering `%USERPROFILE%\.dapr\` into the address bar. You should see folders for bin, components and a config file. +{{% /codetab %}} + +{{% codetab %}} +This command will install the latest macOS (darwin) Dapr CLI to `/usr/local/bin`: +```bash +curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash +``` + +Or you can install via [Homebrew](https://brew.sh): +```bash +brew install dapr/tap/dapr-cli +``` +{{% /codetab %}} + +{{% codetab %}} +Each Dapr CLI release includes binaries for various OSes and architectures. These binary versions can be manually downloaded and installed. + +1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases) +2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip) +3. Move it to your desired location. + - For Linux/MacOS - `/usr/local/bin` + - For Windows, create a directory and add it to your System PATH. For example, create a directory called `c:\dapr` and add it to your PATH by editing your system environment variables. +{{% /codetab %}} +{{< /tabs >}} + +## Install Dapr in self-hosted mode + +Running the Dapr runtime in self-hosted mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other Dapr-supported environments.
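
Before continuing, you can confirm the CLI from the previous section landed on your `PATH`. This is not part of the official install scripts - just a minimal POSIX-shell sketch of the check:

```shell
# Check whether the dapr binary is reachable before continuing.
if command -v dapr >/dev/null 2>&1; then
  # If installed, report the CLI and runtime versions.
  MSG=$(dapr --version)
else
  MSG="dapr CLI not found on PATH"
fi
echo "$MSG"
```

If the binary is missing, re-run the install command for your platform above or check that the install directory is on your `PATH`.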
+ +### Prerequisites + +- Install [Docker Desktop](https://docs.docker.com/install/) + - Windows users: ensure that `Docker Desktop For Windows` uses Linux containers. + +By default Dapr will install with a developer environment using Docker containers to get you started easily. This getting started guide assumes Docker is installed to ensure the best experience. However, Dapr does not depend on Docker to run. Read [this page]({{< ref self-hosted-no-docker.md >}}) for instructions on installing Dapr locally without Docker using slim init. + +### Initialize Dapr using the CLI + +This step will install the latest Dapr Docker containers and set up a developer environment to help you get started easily with Dapr. + +1. Ensure you are in an elevated terminal: + - **Linux/MacOS:** if you run your Docker commands with sudo or the install path is `/usr/local/bin` (the default install path), you need to use `sudo` + - **Windows:** make sure that you run the command terminal in administrator mode + +2. Run `dapr init` + + ```bash + $ dapr init + ⌛ Making the jump to hyperspace... + Downloading binaries and setting up components + ✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started + ``` + +3. Verify installation + + From a command prompt run the `docker ps` command and check that the `daprio/dapr`, `openzipkin/zipkin`, and `redis` container images are running: + + ```bash + $ docker ps + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + 67bc611a118c daprio/dapr "./placement" About a minute ago Up About a minute 0.0.0.0:6050->50005/tcp dapr_placement + 855f87d10249 openzipkin/zipkin "/busybox/sh run.sh" About a minute ago Up About a minute 9410/tcp, 0.0.0.0:9411->9411/tcp dapr_zipkin + 71cccdce0e8f redis "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6379->6379/tcp dapr_redis + ``` + +4.
Visit our [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world) or dive into the [Dapr building blocks]({{< ref building-blocks >}}) + +### (optional) Install a specific runtime version + +You can install or upgrade to a specific version of the Dapr runtime using `dapr init --runtime-version`. You can find the list of versions in [Dapr Release](https://github.com/dapr/dapr/releases). + +```bash +# Install the v0.11.0 runtime +$ dapr init --runtime-version 0.11.0 + +# Check the versions of cli and runtime +$ dapr --version +cli version: v0.11.0 +runtime version: v0.11.2 +``` + +### Uninstall Dapr in self-hosted mode + +This command will remove the Dapr placement container: + +```bash +$ dapr uninstall +``` + +{{% alert title="Warning" color="warning" %}} +This command won't remove the Redis or Zipkin containers by default, just in case you were using them for other purposes. To remove the Redis, Zipkin, and actor placement containers, as well as the default Dapr directory located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\`, run: + +```bash +$ dapr uninstall --all +``` +{{% /alert %}} + +> For Linux/MacOS users, if you run your Docker commands with sudo or the install path is `/usr/local/bin` (the default install path), you need to use `sudo dapr uninstall` to remove dapr binaries and/or the containers. + +## Install Dapr on a Kubernetes cluster + +When setting up Kubernetes you can use either the Dapr CLI or Helm. + +The following pods will be installed: + +- dapr-operator: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) +- dapr-sidecar-injector: Injects Dapr into annotated deployment pods +- dapr-placement: Used for actors only. Creates mapping tables that map actor instances to pods +- dapr-sentry: Manages mTLS between services and acts as a certificate authority + +### Setup cluster + +You can install Dapr on any Kubernetes cluster.
Here are some helpful links: + +- [Setup Minikube Cluster]({{< ref setup-minikube.md >}}) +- [Setup Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}}) +- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart) +- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) + +{{% alert title="Note" color="primary" %}} +Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes, but most users should not need to. For more information see [Deploying to a hybrid Linux/Windows Kubernetes cluster]({{}}). +{{% /alert %}} + + +### Install with Dapr CLI + +You can install Dapr to a Kubernetes cluster using the Dapr CLI. + +#### Install Dapr + +The `-k` flag will initialize Dapr on the Kubernetes cluster in your current context. + +```bash +$ dapr init -k + +⌛ Making the jump to hyperspace... +ℹ️ Note: To install Dapr using Helm, see here: https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md#using-helm-advanced + +✅ Deploying the Dapr control plane to your cluster... +✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started +``` + +#### Install to a custom namespace: + +The default namespace when initializing Dapr is `dapr-system`. You can override this with the `-n` flag. + +``` +dapr init -k -n mynamespace +``` + +#### Install in highly available mode: + +You can run Dapr with 3 replicas of each control plane pod, with the exception of the Placement pod, in the dapr-system namespace for [production scenarios]({{< ref kubernetes-production.md >}}). + +``` +dapr init -k --enable-ha=true +``` + +#### Disable mTLS: + +Dapr is initialized by default with [mTLS]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}).
You can disable it with: + +``` +dapr init -k --enable-mtls=false +``` + +#### Uninstall Dapr on Kubernetes + +```bash +$ dapr uninstall --kubernetes +``` + +### Install with Helm (advanced) + +You can install Dapr to a Kubernetes cluster using a Helm 3 chart. + + +{{% alert title="Note" color="primary" %}} +The latest Dapr Helm chart no longer supports Helm v2. Please migrate from Helm v2 to Helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/). +{{% /alert %}} + +#### Install Dapr on Kubernetes + +1. Make sure Helm 3 is installed on your machine +2. Add the Helm repo + + ```bash + helm repo add dapr https://dapr.github.io/helm-charts/ + helm repo update + ``` + +3. Create the `dapr-system` namespace on your Kubernetes cluster + + ```bash + kubectl create namespace dapr-system + ``` + +4. Install the Dapr chart on your cluster in the `dapr-system` namespace. + + ```bash + helm install dapr dapr/dapr --namespace dapr-system + ``` + +#### Verify installation + +Once the chart installation is complete, verify the dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace: + +```bash +$ kubectl get pods -n dapr-system -w + +NAME READY STATUS RESTARTS AGE +dapr-operator-7bd6cbf5bf-xglsr 1/1 Running 0 40s +dapr-placement-7f8f76778f-6vhl2 1/1 Running 0 40s +dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s +dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s +``` + +#### Uninstall Dapr on Kubernetes + +```bash +helm uninstall dapr -n dapr-system +``` + +> **Note:** See [this page](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr Helm charts. + +### Sidecar annotations + +To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this]({{}}) how-to guide. + +### Configure Redis + +Unlike self-hosted Dapr, Redis is not pre-installed out of the box on Kubernetes.
To install Redis as a state store or as a pub/sub message bus in your Kubernetes cluster, see [How-To: Setup Redis]({{< ref configure-redis.md >}}). diff --git a/daprdocs/content/en/operations/_index.md b/daprdocs/content/en/operations/_index.md new file mode 100644 index 000000000..cb064ffc3 --- /dev/null +++ b/daprdocs/content/en/operations/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Deploying and configuring Dapr in your environment" +linkTitle: "Operations" +weight: 40 +description: "Hosting options, best practices, and other guides on running your application on Dapr" +--- diff --git a/daprdocs/content/en/operations/components/_index.md b/daprdocs/content/en/operations/components/_index.md new file mode 100644 index 000000000..d9df52530 --- /dev/null +++ b/daprdocs/content/en/operations/components/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Managing components in Dapr" +linkTitle: "Components" +weight: 300 +description: "How to manage your Dapr components in your application" +--- \ No newline at end of file diff --git a/howto/components-scopes/README.md b/daprdocs/content/en/operations/components/component-scopes.md similarity index 77% rename from howto/components-scopes/README.md rename to daprdocs/content/en/operations/components/component-scopes.md index 70d79d2fc..8a1b45ebd 100644 --- a/howto/components-scopes/README.md +++ b/daprdocs/content/en/operations/components/component-scopes.md @@ -1,9 +1,12 @@ -# Scope components to be used by one or more applications +--- +type: docs +title: "How-To: Scope components to one or more applications" +linkTitle: "How-To: Set component scopes" +weight: 100 +description: "Limit component access to particular Dapr instances" +--- -There are two things to know about Dapr components in terms of security and access. -First, Dapr components are namespaced. That means a Dapr runtime instance can only access components that have been deployed to the same namespace.
- -Although namespace sounds like a Kubernetes term, this is true for Dapr not only on Kubernetes. +Dapr components are namespaced (separate from the Kubernetes namespace concept), meaning a Dapr runtime instance can only access components that have been deployed to the same namespace. ## Namespaces Namespaces can be used to limit component access to particular Dapr instances. @@ -75,4 +78,7 @@ scopes: - app1 - app2 ``` -Watch this [video](https://www.youtube.com/watch?v=8W-iBDNvCUM&feature=youtu.be&t=1765) for an example on how to component scopes with secret components and the secrets API. \ No newline at end of file + +## Example + + \ No newline at end of file diff --git a/concepts/secrets/component-secrets.md b/daprdocs/content/en/operations/components/component-secrets.md similarity index 86% rename from concepts/secrets/component-secrets.md rename to daprdocs/content/en/operations/components/component-secrets.md index 35f613f4d..27fd42e99 100644 --- a/concepts/secrets/component-secrets.md +++ b/daprdocs/content/en/operations/components/component-secrets.md @@ -1,6 +1,14 @@ -# Referencing Secret Stores in Components +--- +type: docs +title: "How-To: Reference secret stores in components" +linkTitle: "How-To: Reference secrets" +weight: 200 +description: "How to securely reference secrets from a component definition" +--- -Components can reference secrets for the `spec.metadata` section. +## Overview + +Components can reference secrets for the `spec.metadata` section within the component definition. In order to reference a secret, you need to set the `auth.secretStore` field to specify the name of the secret store that holds the secrets. @@ -8,7 +16,7 @@ When running in Kubernetes, if the `auth.secretStore` is empty, the Kubernetes s ### Supported secret stores -Go to [this](../../howto/setup-secret-store/README.md) link to see all the secret stores supported by Dapr, along with information on how to configure and use them.
+Go to [this]({{< ref "howto-secrets.md" >}}) link to see all the secret stores supported by Dapr, along with information on how to configure and use them. ## Non default namespaces diff --git a/daprdocs/content/en/operations/components/setup-bindings/_index.md b/daprdocs/content/en/operations/components/setup-bindings/_index.md new file mode 100644 index 000000000..ad369a646 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Bindings components" +linkTitle: "Bindings" +description: "Guidance on setting up Dapr bindings components" +weight: 4000 +--- \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md new file mode 100644 index 000000000..159f82042 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md @@ -0,0 +1,58 @@ +--- +type: docs +title: "Supported external bindings" +linkTitle: "Supported bindings" +weight: 200 +description: List of all the supported external bindings that can interface with Dapr +--- + +Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding. + +### Generic + +| Name | Input
Binding | Output
Binding | Status | +|------|:----------------:|:-----------------:|--------| +| [APNs]({{< ref apns.md >}}) | | ✅ | Experimental | +| [Cron (Scheduler)]({{< ref cron.md >}}) | ✅ | ✅ | Experimental | +| [HTTP]({{< ref http.md >}}) | | ✅ | Experimental | +| [InfluxDB]({{< ref influxdb.md >}}) | | ✅ | Experimental | +| [Kafka]({{< ref kafka.md >}}) | ✅ | ✅ | Experimental | +| [Kubernetes Events]({{< ref "kubernetes-binding.md" >}}) | ✅ | | Experimental | +| [MQTT]({{< ref mqtt.md >}}) | ✅ | ✅ | Experimental | +| [PostgreSql]({{< ref postgres.md >}}) | | ✅ | Experimental | +| [RabbitMQ]({{< ref rabbitmq.md >}}) | ✅ | ✅ | Experimental | +| [Redis]({{< ref redis.md >}}) | | ✅ | Experimental | +| [Twilio]({{< ref twilio.md >}}) | | ✅ | Experimental | +| [Twitter]({{< ref twitter.md >}}) | ✅ | ✅ | Experimental | +| [SendGrid]({{< ref sendgrid.md >}}) | | ✅ | Experimental | + + +### Amazon Web Service (AWS) + +| Name | Input
Binding | Output
Binding | Status | +|------|:----------------:|:-----------------:|--------| +| [AWS DynamoDB]({{< ref dynamodb.md >}}) | | ✅ | Experimental | +| [AWS S3]({{< ref s3.md >}}) | | ✅ | Experimental | +| [AWS SNS]({{< ref sns.md >}}) | | ✅ | Experimental | +| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Experimental | +| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Experimental | + + +### Google Cloud Platform (GCP) + +| Name | Input
Binding | Output
Binding | Status | +|------|:----------------:|:-----------------:|--------| +| [GCP Cloud Pub/Sub]({{< ref gcppubsub.md >}}) | ✅ | ✅ | Experimental | +| [GCP Storage Bucket]({{< ref gcpbucket.md >}}) | | ✅ | Experimental | + +### Microsoft Azure + +| Name | Input
Binding | Output
Binding | Status | +|------|:----------------:|:-----------------:|--------| +| [Azure Blob Storage]({{< ref blobstorage.md >}}) | | ✅ | Experimental | +| [Azure EventHubs]({{< ref eventhubs.md >}}) | ✅ | ✅ | Experimental | +| [Azure CosmosDB]({{< ref cosmosdb.md >}}) | | ✅ | Experimental | +| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Experimental | +| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Experimental | +| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Experimental | +| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Experimental | \ No newline at end of file diff --git a/reference/specs/bindings/apns.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/apns.md similarity index 90% rename from reference/specs/bindings/apns.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/apns.md index ff413c8ab..b4ba62999 100644 --- a/reference/specs/bindings/apns.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/apns.md @@ -1,4 +1,10 @@ -# Apple Push Notification Service Binding Spec + +--- +type: docs +title: "Apple Push Notification Service binding spec" +linkTitle: "Apple Push Notification Service" +description: "Detailed documentation on the Apple Push Notification Service binding component" +--- ## Configuration diff --git a/reference/specs/bindings/blobstorage.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/blobstorage.md similarity index 76% rename from reference/specs/bindings/blobstorage.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/blobstorage.md index 94fe19dde..0e9a9327b 100644 --- a/reference/specs/bindings/blobstorage.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/blobstorage.md @@ -1,4 +1,11 @@ -# Azure Blob Storage Binding Spec +--- +type: docs +title: "Azure Blob Storage binding spec" 
+linkTitle: "Azure Blob Storage" +description: "Detailed documentation on the Azure Blob Storage binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -24,15 +31,13 @@ spec: - `container` is the name of the Blob Storage container to write to. - `decodeBase64` optional configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). "true" is the only allowed positive value. Other positive variations like "True" are not acceptable. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Output Binding Supported Operations -* [create](#create-blob) -* [get](#get-blob) - - -## Create Blob +### Create Blob To perform a get blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body: @@ -45,7 +50,7 @@ To perform a get blob operation, invoke the Azure Blob Storage binding with a `P } ``` -Example: +#### Example: ```bash @@ -53,7 +58,7 @@ curl -d '{ "operation": "create", "data": { "field1": "value1" }}' \ http://localhost:/v1.0/bindings/ ``` -### Response +#### Response The response body will contain the following JSON: @@ -64,7 +69,7 @@ The response body will contain the following JSON: ``` -## Get Blob +### Get Blob To perform a get blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body: @@ -77,14 +82,14 @@ To perform a get blob operation, invoke the Azure Blob Storage binding with a `P } ``` -Example: +#### Example: ```bash curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \ 
http://localhost:/v1.0/bindings/ ``` -### Response +#### Response The response body will contain the value stored in the blob object. @@ -108,4 +113,10 @@ Applications publishing to an Azure Blob Storage output binding should send a me }, "operation": "create" } -``` \ No newline at end of file +``` + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/cosmosdb.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/cosmosdb.md new file mode 100644 index 000000000..e11325edf --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/cosmosdb.md @@ -0,0 +1,49 @@ +--- +type: docs +title: "Azure CosmosDB binding spec" +linkTitle: "Azure CosmosDB" +description: "Detailed documentation on the Azure CosmosDB binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.azure.cosmosdb + metadata: + - name: url + value: https://******.documents.azure.com:443/ + - name: masterKey + value: ***** + - name: database + value: db + - name: collection + value: collection + - name: partitionKey + value: message +``` + +- `url` is the CosmosDB url. +- `masterKey` is the CosmosDB account master key. +- `database` is the name of the CosmosDB database. +- `collection` is the name of the collection inside the database. +- `partitionKey` is the name of the partitionKey to extract from the payload. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/cron.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/cron.md similarity index 76% rename from reference/specs/bindings/cron.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/cron.md index dfd7d17f9..2a6e9c6ed 100644 --- a/reference/specs/bindings/cron.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/cron.md @@ -1,4 +1,11 @@ -# Cron Binding Spec +--- +type: docs +title: "Cron binding spec" +linkTitle: "Cron" +description: "Detailed documentation on the cron binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -39,3 +46,9 @@ For ease of use, the Dapr cron binding also supports few shortcuts: * `@every 15s` where `s` is seconds, `m` minutes, and `h` hours * `@daily` or `@hourly` which runs at that period from the time the binding is initialized + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/dynamodb.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/dynamodb.md new file mode 100644 
index 000000000..ad1af6341 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/dynamodb.md @@ -0,0 +1,46 @@ +--- +type: docs +title: "AWS DynamoDB binding spec" +linkTitle: "AWS DynamoDB" +description: "Detailed documentation on the AWS DynamoDB binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.aws.dynamodb + metadata: + - name: region + value: us-west-2 + - name: accessKey + value: ***************** + - name: secretKey + value: ***************** + - name: table + value: items +``` + +- `region` is the AWS region. +- `accessKey` is the AWS access key. +- `secretKey` is the AWS secret key. +- `table` is the DynamoDB table name. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/eventgrid.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/eventgrid.md similarity index 91% rename from reference/specs/bindings/eventgrid.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/eventgrid.md index a0a7b159e..0f2169bf3 100644 --- a/reference/specs/bindings/eventgrid.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/eventgrid.md @@ -1,7 +1,14 @@ -# Azure Event Grid Binding Spec +--- +type: docs +title: "Azure Event Grid binding spec" +linkTitle: "Azure Event Grid" +description: "Detailed documentation on the Azure Event Grid binding component" +--- See [this](https://docs.microsoft.com/en-us/azure/event-grid/) for Azure Event Grid documentation. +## Setup Dapr component + ```yml apiVersion: dapr.io/v1alpha1 kind: Component @@ -25,11 +32,9 @@ spec: value: [HandshakePort] - name: scope value: "[Scope]" - # Optional Input Binding Metadata - name: eventSubscriptionName value: "[EventSubscriptionName]" - # Required Output Binding Metadata - name: accessKey value: "[AccessKey]" @@ -37,38 +42,31 @@ spec: value: "[TopicEndpoint] ``` +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + ## Input Binding Metadata - `tenantId` is the Azure tenant id in which this Event Grid Event Subscription should be created - - `subscriptionId` is the Azure subscription id in which this Event Grid Event Subscription should be created - - `clientId` is the client id that should be used by the binding to create or update the Event Grid Event Subscription - - `clientSecret` is the client secret that should be used by the binding to create or update the Event Grid Event Subscription - - `subscriberEndpoint` is the https (required) endpoint in which Event Grid will handshake and send Cloud Events. If you aren't re-writing URLs on ingress, it should be in the form of: `https://[YOUR HOSTNAME]/api/events` If testing on your local machine, you can use something like [ngrok](https://ngrok.com) to create a public endpoint. - - `handshakePort` is the container port that the input binding will listen on for handshakes and events - - `scope` is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, or a resource group, or a top level resource belonging to a resource provider namespace, or an Event Grid topic. For example: - '/subscriptions/{subscriptionId}/' for a subscription - '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' for a resource group - '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}' for a resource - '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}' for an Event Grid topic > Values in braces {} should be replaced with actual values. - - `eventSubscriptionName` (Optional) is the name of the event subscription. Event subscription names must be between 3 and 64 characters in length and should use alphanumeric letters only. 
-## Output Binding Supported Operations - -* create - ## Output Binding Metadata - `accessKey` is the Access Key to be used for publishing an Event Grid Event to a custom topic - - `topicEndpoint` is the topic endpoint in which this output binding should publish events -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +## Output Binding Supported Operations +- create ## Additional information @@ -84,7 +82,7 @@ az role assignment create --assignee --role "EventGrid EventSubscript _Make sure to also to add quotes around the `[HandshakePort]` in your Event Grid binding component because Kubernetes expects string values from the config._ -### Testing locally using ngrok and Dapr standalone mode +### Testing locally - Install [ngrok](https://ngrok.com/download) - Run locally using custom port `9000` for handshakes @@ -102,7 +100,7 @@ ngrok http -host-header=localhost 9000 dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run ``` -### Testing from Kubernetes cluster +### Testing on Kubernetes Azure Event Grid requires a valid HTTPS endpoint for custom webhooks. Self signed certificates won't do. In order to enable traffic from public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/). @@ -216,7 +214,7 @@ kubectl apply -f dotnetwebapi.yaml > **Note:** This manifest deploys everything to Kubernetes default namespace. -**Troubleshooting possible issues with Nginx controller** +#### Troubleshooting possible issues with Nginx controller After initial deployment the "Daprized" Nginx controller can malfunction. To check logs and fix issue (if it exists) follow these steps.
@@ -237,3 +235,9 @@ $ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg # Check the logs again - it should start returning 200 # .."OPTIONS /api/events HTTP/1.1" 200.. ``` + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/eventhubs.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/eventhubs.md similarity index 71% rename from reference/specs/bindings/eventhubs.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/eventhubs.md index 00155f604..9b7a13458 100644 --- a/reference/specs/bindings/eventhubs.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/eventhubs.md @@ -1,7 +1,14 @@ -# Azure Event Hubs Binding Spec +--- +type: docs +title: "Azure Event Hubs binding spec" +linkTitle: "Azure Event Hubs" +description: "Detailed documentation on the Azure Event Hubs binding component" +--- See [this](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub. +## Setup Dapr component + ```yaml apiVersion: dapr.io/v1alpha1 kind: Component @@ -32,8 +39,16 @@ spec: - `storageContainerName` Is the name of the container in the Azure Storage account to persist checkpoints data on. - `partitionID` (Optional) ID of the partition to send and receive events. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. 
It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Output Binding Supported Operations -* create \ No newline at end of file +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/gcpbucket.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/gcpbucket.md similarity index 65% rename from reference/specs/bindings/gcpbucket.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/gcpbucket.md index a8057305a..bde28c423 100644 --- a/reference/specs/bindings/gcpbucket.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/gcpbucket.md @@ -1,4 +1,11 @@ -# GCP Storage Bucket Spec +--- +type: docs +title: "GCP Storage Bucket binding spec" +linkTitle: "GCP Storage Bucket" +description: "Detailed documentation on the GCP Storage Bucket binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -45,8 +52,16 @@ spec: - `client_x509_cert_url` is the GCP credentials project x509 cert url. - `private_key` is the GCP credentials private key. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} ## Output Binding Supported Operations -* create \ No newline at end of file +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/gcppubsub.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/gcppubsub.md similarity index 67% rename from reference/specs/bindings/gcppubsub.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/gcppubsub.md index d547e7f71..f8fd48205 100644 --- a/reference/specs/bindings/gcppubsub.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/gcppubsub.md @@ -1,4 +1,11 @@ -# GCP Cloud Pub/Sub Binding Spec +--- +type: docs +title: "GCP Pub/Sub binding spec" +linkTitle: "GCP Pub/Sub" +description: "Detailed documentation on the GCP Pub/Sub binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -48,8 +55,16 @@ spec: - `client_x509_cert_url` is the GCP credentials project x509 cert url. - `private_key` is the GCP credentials private key. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} ## Output Binding Supported Operations -* create \ No newline at end of file +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/http.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/http.md new file mode 100644 index 000000000..afe532905 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/http.md @@ -0,0 +1,36 @@ +--- +type: docs +title: "HTTP binding spec" +linkTitle: "HTTP" +description: "Detailed documentation on the HTTP binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.http + metadata: + - name: url + value: http://something.com + - name: method + value: GET +``` + +- `url` is the HTTP url to invoke. +- `method` is the HTTP verb to use for the request. 
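Like the other output bindings in this section, the HTTP binding is invoked by POSTing an operation payload to the Dapr bindings endpoint. A minimal sketch of that payload (the component name `myhttp` and port `3500` in the note below are illustrative assumptions, not part of this spec):

```json
{
  "operation": "create",
  "data": { "field1": "value1" }
}
```

Such a payload could be sent with, for example, `curl -d '{ "operation": "create", "data": { "field1": "value1" } }' http://localhost:3500/v1.0/bindings/myhttp`; the binding then issues the configured `method` against the configured `url`.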
+ +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/influxdb.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/influxdb.md new file mode 100644 index 000000000..4f22627de --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/influxdb.md @@ -0,0 +1,46 @@ +--- +type: docs +title: "InfluxDB binding spec" +linkTitle: "InfluxDB" +description: "Detailed documentation on the InfluxDB binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.influx + metadata: + - name: url # Required
    value: + - name: token # Required
    value: + - name: org # Required
    value: + - name: bucket # Required
    value: +``` + +- `url` is the URL for the InfluxDB instance, e.g. http://localhost:8086 +- `token` is the authorization token for InfluxDB. +- `org` is the InfluxDB organization. +- `bucket` is the bucket name to write to. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/kafka.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kafka.md similarity index 71% rename from reference/specs/bindings/kafka.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kafka.md index 56d10aeaa..0933d43a0 100644 --- a/reference/specs/bindings/kafka.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kafka.md @@ -1,4 +1,11 @@ -# Kafka Binding Spec +--- +type: docs +title: "Kafka binding spec" +linkTitle: "Kafka" +description: "Detailed documentation on the kafka binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -33,7 +40,9 @@ spec: - `saslUsername` is the SASL username for authentication. Only used if `authRequired` is set to - `"true"`. - `saslPassword` is the SASL password for authentication. Only used if `authRequired` is set to - `"true"`. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} ## Specifying a partition key @@ -60,3 +69,9 @@ curl -X POST http://localhost:3500/v1.0/bindings/myKafka \ ## Output Binding Supported Operations * create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/kinesis.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kinesis.md similarity index 62% rename from reference/specs/bindings/kinesis.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kinesis.md index 65903ca8c..4c44a5676 100644 --- a/reference/specs/bindings/kinesis.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kinesis.md @@ -1,7 +1,14 @@ -# AWS Kinesis Binding Spec +--- +type: docs +title: "AWS Kinesis binding spec" +linkTitle: "AWS Kinesis" +description: "Detailed documentation on the AWS Kinesis binding component" +--- See [this](https://aws.amazon.com/kinesis/data-streams/getting-started/) for instructions on how to set up an AWS Kinesis data streams +## Setup Dapr component + ```yaml apiVersion: dapr.io/v1alpha1 kind: Component @@ -33,8 +40,16 @@ spec: - `consumerName` is the AWS Kinesis Consumer Name. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} ## Output Binding Supported Operations -* create \ No newline at end of file +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/kubernetes.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kubernetes-binding.md similarity index 83% rename from reference/specs/bindings/kubernetes.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kubernetes-binding.md index 2feaab856..54b0d708d 100644 --- a/reference/specs/bindings/kubernetes.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/kubernetes-binding.md @@ -1,4 +1,11 @@ -# Kubernetes Events Binding Spec +--- +type: docs +title: "Kubernetes Events binding spec" +linkTitle: "Kubernetes Events" +description: "Detailed documentation on the Kubernetes Events binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -92,3 +99,9 @@ roleRef: name: # same as the one above apiGroup: "" ``` + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/mqtt.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/mqtt.md new file mode 100644 index 000000000..585f44d9e --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/mqtt.md @@ -0,0 +1,40 
@@ +--- +type: docs +title: "MQTT binding spec" +linkTitle: "MQTT" +description: "Detailed documentation on the MQTT binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.mqtt + metadata: + - name: url + value: mqtt[s]://[username][:password]@host.domain[:port] + - name: topic + value: topic1 +``` + +- `url` is the MQTT broker url. +- `topic` is the topic to listen on or send events to. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/postgres.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/postgres.md similarity index 72% rename from reference/specs/bindings/postgres.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/postgres.md index 19151b335..a606a16ac 100644 --- a/reference/specs/bindings/postgres.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/postgres.md @@ -1,4 +1,11 @@ -# PostgrSQL Binding Spec +--- +type: docs +title: "PostgreSQL binding spec" +linkTitle: "PostgreSQL" +description: "Detailed documentation on the PostgreSQL binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -13,7 +20,9 @@ spec: value: ``` -> **Note:** In production never place passwords or secrets within Dapr components. 
For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} The PostgreSQL binding uses [pgx connection pool](https://github.com/jackc/pgx) internally so the `url` parameter can be any valid connection string, either in a `DSN` or `URL` format: @@ -31,17 +40,17 @@ postgres://dapr:secret@dapr.example.com:5432/dapr?sslmode=verify-ca Both methods also support connection pool configuration variables: -* `pool_min_conns`: integer 0 or greater -* `pool_max_conns`: integer greater than 0 -* `pool_max_conn_lifetime`: duration string -* `pool_max_conn_idle_time`: duration string -* `pool_health_check_period`: duration string +- `pool_min_conns`: integer 0 or greater +- `pool_max_conns`: integer greater than 0 +- `pool_max_conn_lifetime`: duration string +- `pool_max_conn_idle_time`: duration string +- `pool_health_check_period`: duration string ## Output Binding Supported Operations -* `exec` -* `query` -* `close` +- `exec` +- `query` +- `close` ### exec @@ -122,4 +131,8 @@ Finally, the `close` operation can be used to explicitly close the DB connection > Note: the PostgreSQL binding itself doesn't prevent SQL injection; as with any database application, validate the input before executing queries.
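As a sketch of how the `exec` operation can be invoked through the Dapr bindings HTTP API (the component name `my-postgres` and the SQL statement below are hypothetical placeholders, and the sidecar is assumed to listen on port 3500):

```shell
# Invoke the PostgreSQL output binding's exec operation via the Dapr sidecar.
# "my-postgres" is a placeholder for the component's metadata.name.
curl -X POST http://localhost:3500/v1.0/bindings/my-postgres \
  -H "Content-Type: application/json" \
  -d '{"operation": "exec", "metadata": {"sql": "INSERT INTO orders (orderid) VALUES (1)"}}'
```

The `query` and `close` operations are invoked the same way, changing only the `operation` field and metadata.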
- +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/rabbitmq.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/rabbitmq.md similarity index 72% rename from reference/specs/bindings/rabbitmq.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/rabbitmq.md index b9ce6dab3..a5e4011f1 100644 --- a/reference/specs/bindings/rabbitmq.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/rabbitmq.md @@ -1,4 +1,11 @@ -# RabbitMQ Binding Spec +--- +type: docs +title: "RabbitMQ binding spec" +linkTitle: "RabbitMQ" +description: "Detailed documentation on the RabbitMQ binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -30,7 +37,9 @@ spec: - `ttlInSeconds` is an optional parameter to set the [default message time to live at RabbitMQ queue level](https://www.rabbitmq.com/ttl.html). If this parameter is omitted, messages won't expire, continuing to exist on the queue until processed. - `prefetchCount` is an optional parameter to set the [Channel Prefetch Setting (QoS)](https://www.rabbitmq.com/confirms.html#channel-qos-prefetch). If this parameter is omitted, QoS is set to 0, meaning no limit. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}} ## Specifying a time to live on message level @@ -59,3 +68,9 @@ curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \ ## Output Binding Supported Operations * create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/redis.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/redis.md new file mode 100644 index 000000000..bc3ac417b --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/redis.md @@ -0,0 +1,43 @@ +--- +type: docs +title: "Redis binding spec" +linkTitle: "Redis" +description: "Detailed documentation on the Redis binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.redis + metadata: + - name: redisHost + value:
:6379 + - name: redisPassword + value: ************** + - name: enableTLS + value: +``` + +- `redisHost` is the Redis host address. +- `redisPassword` is the Redis password. +- `enableTLS` - If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/howto/track-rethinkdb-state-store-changes/README.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/rethinkdb.md similarity index 72% rename from howto/track-rethinkdb-state-store-changes/README.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/rethinkdb.md index 647343fa2..ced4b9f5b 100644 --- a/howto/track-rethinkdb-state-store-changes/README.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/rethinkdb.md @@ -1,10 +1,17 @@ -# How to track RethinkDB state store changes +--- +type: docs +title: "RethinkDB binding spec" +linkTitle: "RethinkDB" +description: "Detailed documentation on the RethinkDB binding component" +--- + +## Introduction The RethinkDB state store supports transactions which means it can be used to support Dapr actors. Dapr persists only the actor's current state which doesn't allow the users to track how actor's state may have changed over time. 
To enable users to track changes in actor state, this binding leverages RethinkDB's built-in capability to monitor a table and emit change events containing both the `old` and `new` state. This binding creates a subscription on the Dapr state table and streams these changes using the Dapr input binding interface. -## Create a binding +## Setup Dapr component Create the following YAML file and save this to the `components` directory in your application directory. (Use the `--components-path` flag with `dapr run` to point to your custom components dir) @@ -24,3 +31,9 @@ spec: ``` For an example of how to combine this binding with Dapr Pub/Sub to stream state changes to a topic, see [here](https://github.com/mchmarny/dapr-state-store-change-handler). + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/s3.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/s3.md new file mode 100644 index 000000000..39b1868ad --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/s3.md @@ -0,0 +1,46 @@ +--- +type: docs +title: "AWS S3 binding spec" +linkTitle: "AWS S3" +description: "Detailed documentation on the AWS S3 binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.aws.s3 + metadata: + - name: region + value: us-west-2 + - name: accessKey + value: ***************** + - name: secretKey + value: ***************** + - name: bucket + value: mybucket +``` + +- `region` is the AWS region. +- `accessKey` is the AWS access key.
+- `secretKey` is the AWS secret key. +- `bucket` is the name of the S3 bucket to write to. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/sendgrid.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sendgrid.md similarity index 66% rename from reference/specs/bindings/sendgrid.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sendgrid.md index 5252fa250..9962205f7 100644 --- a/reference/specs/bindings/sendgrid.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sendgrid.md @@ -1,4 +1,11 @@ -# SendGrid Binding Spec +--- +type: docs +title: "Twilio SendGrid binding spec" +linkTitle: "Twilio SendGrid" +description: "Detailed documentation on the Twilio SendGrid binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -39,8 +46,17 @@ spec: Example request payload } ``` -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}} + ## Output Binding Supported Operations * create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/servicebusqueues.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/servicebusqueues.md similarity index 64% rename from reference/specs/bindings/servicebusqueues.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/servicebusqueues.md index 443fdbacf..870d18a7c 100644 --- a/reference/specs/bindings/servicebusqueues.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/servicebusqueues.md @@ -1,4 +1,11 @@ -# Azure Service Bus Queues Binding Spec +--- +type: docs +title: "Azure Service Bus Queues binding spec" +linkTitle: "Azure Service Bus Queues" +description: "Detailed documentation on the Azure Service Bus Queues binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -21,7 +28,10 @@ spec: - `queueName` is the Service Bus queue name. - `ttlInSeconds` is an optional parameter to set the default message [time to live](https://docs.microsoft.com/azure/service-bus-messaging/message-expiration). If this parameter is omitted, messages will expire after 14 days. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + ## Specifying a time to live on message level @@ -50,3 +60,9 @@ curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \ ## Output Binding Supported Operations * create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/signalr.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/signalr.md similarity index 69% rename from reference/specs/bindings/signalr.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/signalr.md index 7ddce5f4d..a671d4383 100644 --- a/reference/specs/bindings/signalr.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/signalr.md @@ -1,4 +1,11 @@ -# Azure SignalR Binding Spec +--- +type: docs +title: "Azure SignalR binding spec" +linkTitle: "Azure SignalR" +description: "Detailed documentation on the Azure SignalR binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -18,7 +25,10 @@ spec: - The metadata `connectionString` contains the Azure SignalR connection string. - The optional `hub` metadata value defines the hub in which the message will be sent. The hub can be dynamically defined as a metadata value when publishing to an output binding (key is "hub"). -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}} + ## Additional information @@ -52,3 +62,9 @@ For more information on integration Azure SignalR into a solution check the [doc ## Output Binding Supported Operations * create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sns.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sns.md new file mode 100644 index 000000000..7e0915154 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sns.md @@ -0,0 +1,46 @@ +--- +type: docs +title: "AWS SNS binding spec" +linkTitle: "AWS SNS" +description: "Detailed documentation on the AWS SNS binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.aws.sns + metadata: + - name: region + value: us-west-2 + - name: accessKey + value: ***************** + - name: secretKey + value: ***************** + - name: topicArn + value: mytopic +``` + +- `region` is the AWS region. +- `accessKey` is the AWS access key. +- `secretKey` is the AWS secret key. +- `topicArn` is the SNS topic name. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sqs.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sqs.md new file mode 100644 index 000000000..3c378e458 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/sqs.md @@ -0,0 +1,47 @@ +--- +type: docs +title: "AWS SQS binding spec" +linkTitle: "AWS SQS" +description: "Detailed documentation on the AWS SQS binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.aws.sqs + metadata: + - name: region + value: us-west-2 + - name: accessKey + value: ***************** + - name: secretKey + value: ***************** + - name: queueName + value: items +``` + +- `region` is the AWS region. +- `accessKey` is the AWS access key. +- `secretKey` is the AWS secret key. +- `queueName` is the SQS queue name. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/storagequeues.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/storagequeues.md similarity index 65% rename from reference/specs/bindings/storagequeues.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/storagequeues.md index c94162eb3..be233416b 100644 --- a/reference/specs/bindings/storagequeues.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/storagequeues.md @@ -1,4 +1,11 @@ -# Azure Storage Queues Binding Spec +--- +type: docs +title: "Azure Storage Queues binding spec" +linkTitle: "Azure Storage Queues" +description: "Detailed documentation on the Azure Storage Queues binding component" +--- + +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -24,7 +31,10 @@ spec: - `queue` is the name of the Azure Storage queue. - `ttlInSeconds` is an optional parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + ## Specifying a time to live on message level @@ -53,3 +63,9 @@ curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \ ## Output Binding Supported Operations * create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/twilio.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/twilio.md new file mode 100644 index 000000000..521503ffa --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/twilio.md @@ -0,0 +1,47 @@ +--- +type: docs +title: "Twilio SMS binding spec" +linkTitle: "Twilio SMS" +description: "Detailed documentation on the Twilio SMS binding component" +--- + +## Setup Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: + namespace: +spec: + type: bindings.twilio.sms + metadata: + - name: toNumber # required. + value: 111-111-1111 + - name: fromNumber # required. + value: 222-222-2222 + - name: accountSid # required. + value: ***************** + - name: authToken # required. + value: ***************** +``` + +- `toNumber` is the target number to send the sms to. +- `fromNumber` is the sender phone number. +- `accountSid` is the Twilio account SID. +- `authToken` is the Twilio auth token. + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + + +## Output Binding Supported Operations + +* create + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/reference/specs/bindings/twitter.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/twitter.md similarity index 59% rename from reference/specs/bindings/twitter.md rename to daprdocs/content/en/operations/components/setup-bindings/supported-bindings/twitter.md index 3b7fb1ad5..affdac1a9 100644 --- a/reference/specs/bindings/twitter.md +++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/twitter.md @@ -1,4 +1,11 @@ -# Twitter Binding Spec +--- +type: docs +title: "Twitter binding spec" +linkTitle: "Twitter" +description: "Detailed documentation on the Twitter binding component" +--- + +## Setup Dapr component The Twitter binding supports both `input` and `output` binding configuration. First the common part: @@ -21,6 +28,12 @@ spec: value: "****" # twitter api access secret, required ``` +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} + +## Input bindings + For input bindings, where Tweets matching the query are streamed to the user service, the above component also has to include a query: ```yaml @@ -28,6 +41,8 @@ For input bindings, where the query matching Tweets are streamed to the user ser value: "dapr" # your search query, required ``` +## Output bindings + For output binding invocation, the user code has to invoke the binding: ```shell @@ -50,11 +65,17 @@ Where the payload is: The metadata parameters are: -* `query` - any valid Twitter query (e.g. `dapr` or `dapr AND serverless`). See [Twitter docs](https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/standard-operators) for more details on advanced query formats -* `lang` - (optional, default: `en`) restricts result tweets to the given language using [ISO 639-1 language code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) -* `result` - (optional, default: `recent`) specifies tweet query result type. Valid values include: - * `mixed` - both popular and real time results - * `recent` - most recent results - * `popular` - most popular results +- `query` - any valid Twitter query (e.g. `dapr` or `dapr AND serverless`). See [Twitter docs](https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/standard-operators) for more details on advanced query formats +- `lang` - (optional, default: `en`) restricts result tweets to the given language using [ISO 639-1 language code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) +- `result` - (optional, default: `recent`) specifies tweet query result type.
Valid values include: + - `mixed` - both popular and real time results + - `recent` - most recent results + - `popular` - most popular results -You can see the example of the JSON data that Twitter binding returns [here](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets) \ No newline at end of file +You can see an example of the JSON data that the Twitter binding returns [here](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets) + +## Related links +- [Bindings building block]({{< ref bindings >}}) +- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}}) +- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}}) +- [Bindings API reference]({{< ref bindings_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-pubsub/_index.md b/daprdocs/content/en/operations/components/setup-pubsub/_index.md new file mode 100644 index 000000000..502d39d8c --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-pubsub/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Pub/Sub brokers" +linkTitle: "Pub/sub brokers" +description: "Guidance on setting up different message brokers for Dapr Pub/Sub" +weight: 2000 +--- diff --git a/howto/pubsub-namespaces/README.md b/daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md similarity index 84% rename from howto/pubsub-namespaces/README.md rename to daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md index ec8dfaa48..f44a43927 100644 --- a/howto/pubsub-namespaces/README.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md @@ -1,106 +1,113 @@ -# Using PubSub across multiple namespaces - -In some scenarios, applications can be spread across namespaces and share a queue or topic via PubSub. In this case, the PubSub component must be provisioned on each namespace.
- -In this example, we will use the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). Redis installation and the subscribers will be in `namespace-a` while the publisher UI will be on `namespace-b`. This solution should also work if Redis was installed on another namespace or if we used a managed cloud service like Azure ServiceBus. - -The table below shows which resources are deployed to which namespaces: -| Resource | namespace-a | namespace-b | -|-|-|-| -| Redis master | X || -| Redis slave | X || -| Dapr's PubSub component | X | X | -| Node subscriber | X || -| Python subscriber | X || -| React UI publisher | | X| - -## Pre-requisites - -* [Dapr installed](https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md) on any namespace since Dapr works at the cluster level. -* Checkout and cd into directory for [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). - -## Setup `namespace-a` - -Create namespace and switch kubectl to use it. -``` -kubectl create namespace namespace-a -kubectl config set-context --current --namespace=namespace-a -``` - -Install Redis (master and slave) on `namespace-a`, following [these instructions](https://github.com/dapr/docs/blob/master/howto/setup-pub-sub-message-broker/setup-redis.md). - -Now, configure `deploy/redis.yaml`, paying attention to the hostname containing `namespace-a`. - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub.redis - metadata: - - name: "redisHost" - value: "redis-master.namespace-a.svc:6379" - - name: "redisPassword" - value: "YOUR_PASSWORD" -``` - -Deploy resources to `namespace-a`: -``` -kubectl apply -f deploy/redis.yaml -kubectl apply -f deploy/node-subscriber.yaml -kubectl apply -f deploy/python-subscriber.yaml -``` - -## Setup `namespace-b` - -Create namespace and switch kubectl to use it. 
-``` -kubectl create namespace namespace-b -kubectl config set-context --current --namespace=namespace-b -``` - -Deploy resources to `namespace-b`, including the Redis component: -``` -kubectl apply -f deploy/redis.yaml -kubectl apply -f deploy/react-form.yaml -``` - -Now, find the IP address for react-form, open it on your browser and publish messages to each topic (A, B and C). -``` -kubectl get service -A -``` - -## Confirm subscribers received the messages. - -Switch back to `namespace-a`: -``` -kubectl config set-context --current --namespace=namespace-a -``` - -Find the POD names: -``` -kubectl get pod # Copy POD names and use in the next commands. -``` - -Display logs: -``` -kubectl logs node-subscriber-XYZ node-subscriber -kubectl logs python-subscriber-XYZ python-subscriber -``` - -The messages published on the browser should show in the corresponding subscriber's logs. The Node.js subscriber receives messages of type "A" and "B", while the Python subscriber receives messages of type "A" and "C". - -## Clean up - -``` -kubectl delete -f deploy/redis.yaml --namespace namespace-a -kubectl delete -f deploy/node-subscriber.yaml --namespace namespace-a -kubectl delete -f deploy/python-subscriber.yaml --namespace namespace-a -kubectl delete -f deploy/react-form.yaml --namespace namespace-b -kubectl delete -f deploy/redis.yaml --namespace namespace-b -kubectl config set-context --current --namespace=default -kubectl delete namespace namespace-a -kubectl delete namespace namespace-b +--- +type: docs +title: "Pub/Sub and namespaces" +linkTitle: "Kubernetes namespaces" +weight: 20000 +description: "Use Dapr Pub/Sub with multiple namespaces" +--- + +In some scenarios, applications can be spread across namespaces and share a queue or topic via PubSub. In this case, the PubSub component must be provisioned on each namespace. + +In this example, we will use the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). 
Redis installation and the subscribers will be in `namespace-a` while the publisher UI will be on `namespace-b`. This solution should also work if Redis was installed on another namespace or if we used a managed cloud service like Azure ServiceBus. + +The table below shows which resources are deployed to which namespaces: + +| Resource | namespace-a | namespace-b | +|------------------------ |-------------|-------------| +| Redis master | X | | +| Redis slave | X | | +| Dapr's PubSub component | X | X | +| Node subscriber | X | | +| Python subscriber | X | | +| React UI publisher | | X | + +## Pre-requisites + +* [Dapr installed](https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md) on any namespace since Dapr works at the cluster level. +* Checkout and cd into directory for [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). + +## Setup `namespace-a` + +Create namespace and switch kubectl to use it. +``` +kubectl create namespace namespace-a +kubectl config set-context --current --namespace=namespace-a +``` + +Install Redis (master and slave) on `namespace-a`, following [these instructions](https://github.com/dapr/docs/blob/master/howto/setup-pub-sub-message-broker/setup-redis.md). + +Now, configure `deploy/redis.yaml`, paying attention to the hostname containing `namespace-a`. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: pubsub + namespace: default +spec: + type: pubsub.redis + metadata: + - name: "redisHost" + value: "redis-master.namespace-a.svc:6379" + - name: "redisPassword" + value: "YOUR_PASSWORD" +``` + +Deploy resources to `namespace-a`: +``` +kubectl apply -f deploy/redis.yaml +kubectl apply -f deploy/node-subscriber.yaml +kubectl apply -f deploy/python-subscriber.yaml +``` + +## Setup `namespace-b` + +Create namespace and switch kubectl to use it. 
+```
+kubectl create namespace namespace-b
+kubectl config set-context --current --namespace=namespace-b
+```
+
+Deploy resources to `namespace-b`, including the Redis component:
+```
+kubectl apply -f deploy/redis.yaml
+kubectl apply -f deploy/react-form.yaml
+```
+
+Now, find the IP address for react-form, open it in your browser and publish messages to each topic (A, B and C).
+```
+kubectl get service -A
+```
+
+## Confirm subscribers received the messages
+
+Switch back to `namespace-a`:
+```
+kubectl config set-context --current --namespace=namespace-a
+```
+
+Find the pod names:
+```
+kubectl get pod # Copy the pod names and use them in the next commands.
+```
+
+Display the logs:
+```
+kubectl logs node-subscriber-XYZ node-subscriber
+kubectl logs python-subscriber-XYZ python-subscriber
+```
+
+The messages published in the browser should show in the corresponding subscriber's logs. The Node.js subscriber receives messages of type "A" and "B", while the Python subscriber receives messages of type "A" and "C".
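Under the hood, the react-form app publishes through its Dapr sidecar's publish API (the component name followed by the topic). If you want to publish a test message without the browser UI, the sketch below builds the publish URL; the helper function and the curl payload are illustrative, not part of the sample, and assume the default Dapr HTTP port 3500.

```shell
# Hypothetical helper (not part of the sample): build the Dapr publish URL
# for a given pubsub component name and topic.
dapr_publish_url() {
  pubsub="$1"
  topic="$2"
  port="${DAPR_HTTP_PORT:-3500}"   # default Dapr HTTP port
  echo "http://localhost:${port}/v1.0/publish/${pubsub}/${topic}"
}

# Example: publish to topic A via a reachable sidecar (e.g. after `kubectl port-forward`):
# curl -X POST "$(dapr_publish_url pubsub A)" \
#      -H "Content-Type: application/json" \
#      -d '{"message": "hello"}'
```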
+
+## Clean up
+
+```
+kubectl delete -f deploy/redis.yaml --namespace namespace-a
+kubectl delete -f deploy/node-subscriber.yaml --namespace namespace-a
+kubectl delete -f deploy/python-subscriber.yaml --namespace namespace-a
+kubectl delete -f deploy/react-form.yaml --namespace namespace-b
+kubectl delete -f deploy/redis.yaml --namespace namespace-b
+kubectl config set-context --current --namespace=default
+kubectl delete namespace namespace-a
+kubectl delete namespace namespace-b
```
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/components/setup-pubsub/setup-pubsub-overview.md b/daprdocs/content/en/operations/components/setup-pubsub/setup-pubsub-overview.md
new file mode 100644
index 000000000..96343ae73
--- /dev/null
+++ b/daprdocs/content/en/operations/components/setup-pubsub/setup-pubsub-overview.md
@@ -0,0 +1,38 @@
+---
+type: docs
+title: "Overview"
+linkTitle: "Overview"
+description: "General overview on setting up message brokers for Dapr Pub/Sub"
+weight: 10000
+---
+
+Dapr integrates with pub/sub message buses to provide apps with the ability to create event-driven, loosely coupled architectures where producers send events to consumers via topics. It supports the configuration of multiple, named pub/sub components *per application*. Each pub/sub component has a name, and this name is used when publishing to a message topic.
+
+Pub/Sub message buses are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib).
+
+A pub/sub in Dapr is described using a `Component` file:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: pubsub
+  namespace: default
+spec:
+  type: pubsub.
+  metadata:
+  - name: 
+    value: 
+  - name: 
+    value: 
+...
+```
+
+The type of pub/sub is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section.
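For example, a filled-in Redis pub/sub component (the same shape used by the Redis Streams page in this section; the host and password values are placeholders) could look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```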
+Even though you can put plain text secrets in there, it is recommended you use a [secret store]({{< ref component-secrets.md >}}) with a `secretKeyRef`.
+
+Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components.
+
+## Related links
+
+- [Supported pub/sub components]({{< ref supported-pubsub >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/_index.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/_index.md
new file mode 100644
index 000000000..119e1b430
--- /dev/null
+++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/_index.md
@@ -0,0 +1,8 @@
+---
+type: docs
+title: "Supported pub/sub components"
+linkTitle: "Supported pub/sub"
+weight: 30000
+description: List of all the supported pub/sub components that can interface with Dapr
+simple_list: true
+---
\ No newline at end of file
diff --git a/howto/setup-pub-sub-message-broker/setup-kafka.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-apache-kafka.md
similarity index 65%
rename from howto/setup-pub-sub-message-broker/setup-kafka.md
rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-apache-kafka.md
index 89386130d..97ba6022e 100644
--- a/howto/setup-pub-sub-message-broker/setup-kafka.md
+++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-apache-kafka.md
@@ -1,13 +1,23 @@
-# Setup Kafka
+---
+type: docs
+title: "Apache Kafka"
+linkTitle: "Apache Kafka"
+description: "Detailed documentation on the Apache Kafka pubsub component"
+---

-## Locally
+## Setup Kafka

+{{< tabs "Self-Hosted" "Kubernetes">}}
+{{% codetab %}}
You can run Kafka locally using [this](https://github.com/wurstmeister/kafka-docker) Docker image.
To run without Docker, see the getting started guide [here](https://kafka.apache.org/quickstart).
+{{% /codetab %}} -## Kubernetes - +{{% codetab %}} To run Kafka on Kubernetes, you can use the [Helm Chart](https://github.com/helm/charts/tree/master/incubator/kafka#installing-the-chart). +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -38,19 +48,13 @@ spec: - name: saslPassword value: ``` - -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. -To apply the Kafka component to Kubernetes, use the `kubectl`: - -``` -kubectl apply -f kafka.yaml -``` - -### Running locally - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. 
+## Related links
+- [Pub/Sub building block]({{< ref pubsub >}})
\ No newline at end of file
diff --git a/howto/setup-pub-sub-message-broker/setup-snssqs.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-aws-snssqs.md
similarity index 75%
rename from howto/setup-pub-sub-message-broker/setup-snssqs.md
rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-aws-snssqs.md
index a8ee4a12e..e2bb8a5bb 100644
--- a/howto/setup-pub-sub-message-broker/setup-snssqs.md
+++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-aws-snssqs.md
@@ -1,9 +1,18 @@
-# Setup AWS SNS/SQS for pub/sub
+---
+type: docs
+title: "AWS SNS/SQS"
+linkTitle: "AWS SNS/SQS"
+description: "Detailed documentation on the AWS SNS/SQS pubsub component"
+---

-This article describes configuring Dapr to use AWS SNS/SQS for pub/sub on local and Kubernetes environments. For local development, the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS SNS/SQS.
-Follow the instructions [here](https://github.com/localstack/localstack#installing) to install the localstack CLI.
+This article describes configuring Dapr to use AWS SNS/SQS for pub/sub on local and Kubernetes environments.

-## Locally
+## Setup SNS/SQS
+
+{{< tabs "Self-Hosted" "Kubernetes" "AWS" >}}
+
+{{% codetab %}}
+For local development, the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS SNS/SQS. Follow the instructions [here](https://github.com/localstack/localstack#installing) to install the localstack CLI.

In order to use localstack with your pubsub binding, you need to provide the `awsEndpoint` configuration in the component metadata. The `awsEndpoint` is unnecessary when running against production AWS.
@@ -22,9 +31,9 @@ spec: - name: awsRegion value: us-east-1 ``` +{{% /codetab %}} -## Kubernetes - +{{% codetab %}} To run localstack on Kubernetes, you can apply the configuration below. Localstack is then reachable at the DNS name `http://localstack.default.svc.cluster.local:4566` (assuming this was applied to the default namespace) and this should be used as the `awsEndpoint` @@ -68,11 +77,15 @@ spec: type: LoadBalancer ``` +{{% /codetab %}} -## Run in AWS +{{% codetab %}} In order to run in AWS, you should create an IAM user with permissions to the SNS and SQS services. Use the account ID and account secret and plug them into the `awsAccountID` and `awsAccountSecret` in the component metadata using kubernetes secrets. +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -102,24 +115,16 @@ spec: value: us-east-1 ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. 
-To apply the SNS/SQS component to Kubernetes, use the `kubectl` command:
-
-```
-kubectl apply -f snssqs.yaml
-```
-
-### Running locally
-
-Place the above components file `snssqs.yaml` in the local components directory (either the default directory or in a path you define when running the CLI command `dapr run`)
-
-
-## Related Links
+## Related links
+- [Pub/Sub building block]({{< ref pubsub >}})
- [AWS SQS as subscriber to SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html)
- [AWS SNS API reference](https://docs.aws.amazon.com/sns/latest/api/Welcome.html)
- [AWS SQS API reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Welcome.html)
diff --git a/howto/setup-pub-sub-message-broker/setup-azure-eventhubs.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-azure-eventhubs.md
similarity index 74%
rename from howto/setup-pub-sub-message-broker/setup-azure-eventhubs.md
rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-azure-eventhubs.md
index 579b606e6..021836301 100644
--- a/howto/setup-pub-sub-message-broker/setup-azure-eventhubs.md
+++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-azure-eventhubs.md
@@ -1,4 +1,11 @@
-# Setup Azure Event Hubs
+---
+type: docs
+title: "Azure Event Hubs"
+linkTitle: "Azure Event Hubs"
+description: "Detailed documentation on the Azure Event Hubs pubsub component"
+---
+
+## Setup Azure Event Hubs

Follow the instructions [here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.
Since this implementation uses the Event Processor Host, you will also need an [Azure Storage Account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal).
@@ -30,7 +37,9 @@ spec: See [here](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature) on how to get the Event Hubs connection string. Note this is not the Event Hubs namespace. -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Create consumer groups for each subscriber @@ -39,14 +48,7 @@ For example, a Dapr app running on Kubernetes with `dapr.io/app-id: "myapp"` wil ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. -To apply the Azure Event Hubs pub/sub to Kubernetes, use the `kubectl` CLI: - -```bash -kubectl apply -f eventhubs.yaml -``` - -### Running locally - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. 
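Creating the consumer group ahead of time can be scripted with the Azure CLI. The sketch below only composes and prints the command; the resource names are placeholders, and the consumer group name must match the Dapr app id.

```shell
# Sketch: compose the Azure CLI command that creates a consumer group whose
# name matches the Dapr app id. Resource names below are placeholders.
APP_ID="myapp"   # must equal the dapr.io/app-id annotation
AZ_CMD="az eventhubs eventhub consumer-group create --resource-group my-resource-group --namespace-name my-eventhubs-namespace --eventhub-name my-eventhub --name ${APP_ID}"
echo "${AZ_CMD}"
# After `az login`, run the printed command to create the consumer group.
```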
+## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus.md similarity index 83% rename from howto/setup-pub-sub-message-broker/setup-azure-servicebus.md rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus.md index 74b707b5b..e0b72b41d 100644 --- a/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus.md @@ -1,4 +1,11 @@ -# Setup Azure Service Bus +--- +type: docs +title: "Azure Service Bus" +linkTitle: "Azure Service Bus" +description: "Detailed documentation on the Azure Service Bus pubsub component" +--- + +## Setup Azure Service Bus Follow the instructions [here](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics. @@ -47,18 +54,13 @@ spec: > __NOTE:__ The above settings are shared across all topics that use this component. -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. 
-To apply the Azure Service Bus pub/sub to Kubernetes, use the `kubectl` CLI: - -```bash -kubectl apply -f azuresb.yaml -``` - -### Running locally - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/howto/setup-pub-sub-message-broker/setup-gcp.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-gcp.md similarity index 80% rename from howto/setup-pub-sub-message-broker/setup-gcp.md rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-gcp.md index 63d71f3f0..ebdffd990 100644 --- a/howto/setup-pub-sub-message-broker/setup-gcp.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-gcp.md @@ -1,4 +1,11 @@ -# Setup GCP Pubsub +--- +type: docs +title: "GCP Pub/Sub" +linkTitle: "GCP Pub/Sub" +description: "Detailed documentation on the GCP Pub/Sub component" +--- + +## Setup GCP Pub/Sub Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up Google Cloud Pub/Sub system. @@ -56,18 +63,13 @@ spec: - `private_key` is the GCP credentials private key. - `disableEntityManagement` Optional. Default: false. When set to true, topics and subscriptions do not get created automatically. -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}} ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. -To apply the Google Cloud pub/sub to Kubernetes, use the `kubectl` CLI: - -```bash -kubectl apply -f messagebus.yaml -``` - -### Running locally - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/howto/setup-pub-sub-message-broker/setup-hazelcast.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-hazelcast.md similarity index 58% rename from howto/setup-pub-sub-message-broker/setup-hazelcast.md rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-hazelcast.md index caffe918a..ce907461d 100644 --- a/howto/setup-pub-sub-message-broker/setup-hazelcast.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-hazelcast.md @@ -1,7 +1,15 @@ -# Setup Hazelcast +--- +type: docs +title: "Hazelcast" +linkTitle: "Hazelcast" +description: "Detailed documentation on the Hazelcast pubsub component" +--- -## Locally +## Setup Hazelcast +{{< tabs "Self-Hosted" "Kubernetes">}} + +{{% codetab %}} You can run Hazelcast locally using Docker: ``` @@ -9,10 +17,13 @@ docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" -p 5701 ``` You can then interact with the server using the `127.0.0.1:5701`. +{{% /codetab %}} -## Kubernetes +{{% codetab %}} +The easiest way to install Hazelcast on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/hazelcast). 
+{{% /codetab %}} -The easiest way to install Hazelcast on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/hazelcast): +{{< /tabs >}} ## Create a Dapr component @@ -33,19 +44,13 @@ spec: value: # Required. A comma delimited string of servers. Example: "hazelcast:3000,hazelcast2:3000" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. -To apply the Hazelcast state store to Kubernetes, use the `kubectl` CLI: - -``` -kubectl apply -f hazelcast.yaml -``` - -### Running locally - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. 
+## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/howto/setup-pub-sub-message-broker/setup-mqtt.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-mqtt.md similarity index 69% rename from howto/setup-pub-sub-message-broker/setup-mqtt.md rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-mqtt.md index 104758ef4..02a050263 100644 --- a/howto/setup-pub-sub-message-broker/setup-mqtt.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-mqtt.md @@ -1,16 +1,24 @@ -# Setup MQTT +--- +type: docs +title: "MQTT" +linkTitle: "MQTT" +description: "Detailed documentation on the MQTT pubsub component" +--- -## Locally +## Setup MQTT +{{< tabs "Self-Hosted" "Kubernetes">}} + +{{% codetab %}} You can run a MQTT broker [locally using Docker](https://hub.docker.com/_/eclipse-mosquitto): ```bash docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6.9 ``` You can then interact with the server using the client port: `mqtt://localhost:1883` +{{% /codetab %}} -## Kubernetes - +{{% codetab %}} You can run a MQTT broker in kubernetes using following yaml: ```yaml @@ -63,6 +71,9 @@ spec: protocol: TCP ``` You can then interact with the server using the client port: `tcp://mqtt-broker.default.svc.cluster.local:1883` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -108,29 +119,23 @@ spec: ``` Where: -* **url** (required) is the address of the MQTT broker. - - use **tcp://** scheme for non-TLS communication. - - use **tcps://** scheme for TLS communication. -* **qos** (optional) indicates the Quality of Service Level (QoS) of the message. (Default 0) -* **retain** (optional) defines whether the message is saved by the broker as the last known good value for a specified topic. 
(Default false)
-* **cleanSession** (optional) will set the "clean session" in the connect message when client connects to an MQTT broker . (Default true)
-* **caCert** (required for using TLS) is the certificate authority certificate.
-* **clientCert** (required for using TLS) is the client certificate.
-* **clientKey** (required for using TLS) is the client key.
+- **url** (required) is the address of the MQTT broker.
+  - use **tcp://** scheme for non-TLS communication.
+  - use **tcps://** scheme for TLS communication.
+- **qos** (optional) indicates the Quality of Service Level (QoS) of the message. (Default 0)
+- **retain** (optional) defines whether the message is saved by the broker as the last known good value for a specified topic. (Default false)
+- **cleanSession** (optional) will set the "clean session" flag in the connect message when the client connects to an MQTT broker. (Default true)
+- **caCert** (required for using TLS) is the certificate authority certificate.
+- **clientCert** (required for using TLS) is the client certificate.
+- **clientKey** (required for using TLS) is the client key.

-The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}

## Apply the configuration

-### In Kubernetes
-
-To apply the MQTT pubsub to Kubernetes, use the `kubectl` CLI:
-
-```bash
-kubectl apply -f mqtt.yaml
-```
-
-### Running locally
-
-To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
+Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components.
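The tcp:// vs. tcps:// scheme rule above can be expressed as a small check; the helper below is illustrative only and not part of the component.

```shell
# Hypothetical helper: decide whether an MQTT url implies TLS, mirroring the
# scheme rule described above (tcps:// = TLS, tcp:// = non-TLS).
mqtt_uses_tls() {
  case "$1" in
    tcps://*) return 0 ;;                          # TLS communication
    tcp://*)  return 1 ;;                          # non-TLS communication
    *) echo "unknown MQTT scheme: $1" >&2; return 2 ;;
  esac
}
```

For example, `mqtt_uses_tls "tcps://mymqttbroker:8883"` succeeds (TLS), while a `tcp://` url does not.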
+## Related links
+- [Pub/Sub building block]({{< ref pubsub >}})
\ No newline at end of file
diff --git a/howto/setup-pub-sub-message-broker/setup-nats-streaming.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-nats-streaming.md
similarity index 77%
rename from howto/setup-pub-sub-message-broker/setup-nats-streaming.md
rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-nats-streaming.md
index 88632351a..86cf07a61 100644
--- a/howto/setup-pub-sub-message-broker/setup-nats-streaming.md
+++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-nats-streaming.md
@@ -1,7 +1,15 @@
-# Setup NATS
+---
+type: docs
+title: "NATS Streaming"
+linkTitle: "NATS Streaming"
+description: "Detailed documentation on the NATS Streaming pubsub component"
+---

-## Locally
+## Setup NATS

+{{< tabs "Self-Hosted" "Kubernetes">}}
+
+{{% codetab %}}
You can run a NATS server locally using Docker:

```bash
@@ -9,9 +17,9 @@
docker run -d --name nats-streaming -p 4222:4222 -p 8222:8222 nats-streaming
```

You can then interact with the server using the client port: `localhost:4222`.
+{{% /codetab %}}

-## Kubernetes
-
+{{% codetab %}}
Install NATS on Kubernetes by using [kubectl](https://docs.nats.io/nats-on-kubernetes/minimal-setup):

```bash
@@ -28,6 +36,9 @@
To interact with NATS, find the service with: `kubectl get svc stan`.

For example, if installing using the example above, the NATS Streaming address would be:

`:4222`

+{{% /codetab %}}
+
+{{< /tabs >}}

## Create a Dapr component

@@ -72,19 +83,13 @@ spec:
 #  value: ""
 ```

-
-The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}} ## Apply the configuration -### In Kubernetes +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. -To apply the NATS pub/sub to Kubernetes, use the `kubectl` CLI: - -```bash -kubectl apply -f nats-stan.yaml -``` - -### Running locally - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/howto/setup-pub-sub-message-broker/setup-pulsar.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-pulsar.md similarity index 62% rename from howto/setup-pub-sub-message-broker/setup-pulsar.md rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-pulsar.md index 9076819db..4fd055a09 100644 --- a/howto/setup-pub-sub-message-broker/setup-pulsar.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-pulsar.md @@ -1,6 +1,15 @@ -# Setup-Pulsar +--- +type: docs +title: "Pulsar" +linkTitle: "Pulsar" +description: "Detailed documentation on the Pulsar pubsub component" +--- -## Locally +## Setup Pulsar + +{{< tabs "Self-Hosted" "Kubernetes">}} + +{{% codetab %}} ``` docker run -it \ -p 6650:6650 \ @@ -11,10 +20,13 @@ docker run -it \ bin/pulsar standalone ``` +{{% /codetab %}} -## Kubernetes - +{{% codetab %}} Please refer to the following [Helm chart](https://pulsar.apache.org/docs/en/kubernetes-helm/) Documentation. +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -40,10 +52,7 @@ spec: ## Apply the configuration -To apply the Pulsar pub/sub to Kubernetes, use the kubectl CLI: +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. 
-`` kubectl apply -f pulsar.yaml `` - -### Running locally ### - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/howto/setup-pub-sub-message-broker/setup-rabbitmq.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-rabbitmq.md similarity index 74% rename from howto/setup-pub-sub-message-broker/setup-rabbitmq.md rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-rabbitmq.md index 2aabec825..2be8211aa 100644 --- a/howto/setup-pub-sub-message-broker/setup-rabbitmq.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-rabbitmq.md @@ -1,7 +1,15 @@ -# Setup RabbitMQ +--- +type: docs +title: "RabbitMQ" +linkTitle: "RabbitMQ" +description: "Detailed documentation on the RabbitMQ pubsub component" +--- -## Locally +## Setup RabbitMQ +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run a RabbitMQ server locally using Docker: ```bash @@ -9,9 +17,9 @@ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3 ``` You can then interact with the server using the client port: `localhost:5672`. +{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install RabbitMQ on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/rabbitmq): ```bash @@ -26,6 +34,9 @@ To interact with RabbitMQ, find the service with: `kubectl get svc rabbitmq`. For example, if installing using the example above, the RabbitMQ server client address would be: `rabbitmq.default.svc.cluster.local:5672` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -58,18 +69,13 @@ spec: value: # Optional. Default: "false". ``` -The above example uses secrets as plain strings. 
It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}

## Apply the configuration

-### In Kubernetes
+Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components.

-To apply the RabbitMQ pub/sub to Kubernetes, use the `kubectl` CLI:
-
-```bash
-kubectl apply -f rabbitmq.yaml
-```
-
-### Running locally
-
-To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
+## Related links
+- [Pub/Sub building block]({{< ref pubsub >}})
\ No newline at end of file
diff --git a/howto/setup-pub-sub-message-broker/setup-redis.md b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-redis-pubsub.md
similarity index 61%
rename from howto/setup-pub-sub-message-broker/setup-redis.md
rename to daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-redis-pubsub.md
index 07ddbaa4c..3476d3858 100644
--- a/howto/setup-pub-sub-message-broker/setup-redis.md
+++ b/daprdocs/content/en/operations/components/setup-pubsub/supported-pubsub/setup-redis-pubsub.md
@@ -1,17 +1,24 @@
-# Setup Redis Streams
+---
+type: docs
+title: "Redis Streams"
+linkTitle: "Redis Streams"
+description: "Detailed documentation on the Redis Streams pubsub component"
+weight: 100
+---

-## Creating a Redis instance
+## Setup a Redis instance

Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later. If you already have a Redis instance > 5.0.0 installed, move on to the [Create a Dapr component](#create-a-dapr-component) section.
-### Running locally
+{{< tabs "Self-Hosted" "Kubernetes" "AWS" "GCP" "Azure">}}

+{{% codetab %}}
The Dapr CLI will automatically create and set up a Redis Streams instance for you.
The Redis instance will be installed via Docker when you run `dapr init`, and the component file will be created in the default directory (`$HOME/.dapr/components` on Mac/Linux, or `%USERPROFILE%\.dapr\components` on Windows).
+{{% /codetab %}}

-### Creating a Redis instance in your Kubernetes Cluster using Helm
-
-We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install).
+{{% codetab %}}
+You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [installing Helm](https://github.com/helm/helm#install).

1. Install Redis into your cluster.
   ```bash
@@ -39,19 +46,27 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku
    - name: redisPassword
      value: "lhDOkwTlp0"
   ```
+{{% /codetab %}}

-### Other ways to create a Redis Database
+{{% codetab %}}
+[AWS Redis](https://aws.amazon.com/redis/)
+{{% /codetab %}}

-- [AWS Redis](https://aws.amazon.com/redis/)
-- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
+{{% codetab %}}
+[GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
+{{% /codetab %}}

-## Configuration
+{{% codetab %}}
+[Azure Redis](https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/quickstart-create-redis)
+{{% /codetab %}}
+
+{{< /tabs >}}
+
+## Create a Dapr component

To setup Redis, you need to create a component for `pubsub.redis`.
-The following yaml files demonstrates how to define each. If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS in the yaml. **Note:** yaml files below illustrate secret management in plain text.
In a production-grade application, follow [secret management](../../concepts/secrets/README.md) instructions to securely manage your secrets. - -### Configuring Redis Streams for Pub/Sub +The following yaml file demonstrates how to define each. If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS in the yaml. Create a file called pubsub.yaml, and paste the following: @@ -72,14 +87,17 @@ spec: value: ``` +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + ## Apply the configuration -### Kubernetes +{{% alert title="Note" color="primary" %}} +The Dapr CLI automatically deploys a Redis instance and creates Dapr components as part of the `dapr init` command. +{{% /alert %}} -```bash -kubectl apply -f pubsub.yaml -``` +Visit [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. -### Standalone - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
+## Related links +- [Pub/Sub building block]({{< ref pubsub >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/_index.md b/daprdocs/content/en/operations/components/setup-secret-store/_index.md new file mode 100644 index 000000000..e603b0126 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/_index.md @@ -0,0 +1,8 @@ +--- +type: docs +title: "Secret store components" +linkTitle: "Secret stores" +description: "Guidance on setting up different secret store components" +weight: 3000 +type: docs +--- diff --git a/daprdocs/content/en/operations/components/setup-secret-store/secret-stores-overview.md b/daprdocs/content/en/operations/components/setup-secret-store/secret-stores-overview.md new file mode 100644 index 000000000..728f1c136 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/secret-stores-overview.md @@ -0,0 +1,39 @@ +--- +type: docs +title: "Overview" +linkTitle: "Overview" +description: "General overview on setting up secret stores for Dapr" +weight: 10000 +type: docs +--- + +Dapr integrates with secret stores to provide apps and other components with secure storage of and access to secrets such as access keys and passwords. Each secret store component has a name, and this name is used when accessing a secret. + +Secret stores are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib). + +A secret store in Dapr is described using a `Component` file: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: secretstore + namespace: default +spec: + type: secretstores. + metadata: + - name: + value: + - name: + value: +... +``` + +The type of secret store is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section.
+ +Visit [this guide]({{< ref "howto-secrets.md#setting-up-a-secret-store-component" >}}) for instructions on configuring a secret store component. + +## Related links + +- [Supported secret store components]({{< ref supported-secret-stores >}}) +- [Secrets building block]({{< ref secrets >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/_index.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/_index.md new file mode 100644 index 000000000..cdea8cc66 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/_index.md @@ -0,0 +1,8 @@ +--- +type: docs +title: "Supported secret stores" +linkTitle: "Supported secret stores" +weight: 30000 +description: List of all the supported secret stores that can interface with Dapr +simple_list: true +--- \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/aws-secret-manager.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/aws-secret-manager.md new file mode 100644 index 000000000..9eebb122b --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/aws-secret-manager.md @@ -0,0 +1,69 @@ +--- +type: docs +title: "AWS Secrets Manager" +linkTitle: "AWS Secrets Manager" +description: Detailed information on the AWS Secrets Manager secret store component +--- + +## Create an AWS Secrets Manager instance + +Setup AWS Secrets Manager using the AWS documentation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html. + +## Create the Dapr component + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: awssecretmanager + namespace: default +spec: + type: secretstores.aws.secretmanager + metadata: + - name: region + value: [aws_region] # Required. + - name: accessKey # Required.
+ value: "[aws_access_key]" + - name: secretKey # Required. + value: "[aws_secret_key]" + - name: sessionToken # Required. + value: "[aws_session_token]" +``` + +## Apply the configuration + +Read [this guide]({{< ref howto-secrets.md >}}) to learn how to apply a Dapr component. + +## Example + +This example shows you how to set the Redis password from the AWS Secrets Manager secret store. +Here, you created a secret named `redisPassword` in AWS Secrets Manager. Note it's important to set it both as the `name` and `key` properties. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: statestore + namespace: default +spec: + type: state.redis + metadata: + - name: redisHost + value: "[redis]:6379" + - name: redisPassword + secretKeyRef: + name: redisPassword + key: redisPassword +auth: + secretStore: awssecretmanager +``` + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage.
+{{% /alert %}} + +## Related links +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/howto/setup-secret-store/azure-keyvault-managed-identity.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity.md similarity index 53% rename from howto/setup-secret-store/azure-keyvault-managed-identity.md rename to daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity.md index e694de05a..f2eb167a8 100644 --- a/howto/setup-secret-store/azure-keyvault-managed-identity.md +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity.md @@ -1,22 +1,16 @@ -# Use Azure Key Vault secret store in Kubernetes mode using Managed Identities - -This document shows how to enable Azure Key Vault secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Kubernetes mode using Managed Identities to authenticate to a Key Vault.
- -## Contents - -- [Use Azure Key Vault secret store in Kubernetes mode using Managed Identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-using-managed-identities) - - [Contents](#contents) - - [Prerequisites](#prerequisites) - - [Setup Kubernetes to use Managed identities and Azure Key Vault](#setup-kubernetes-to-use-managed-identities-and-azure-key-vault) - - [Use Azure Key Vault secret store in Kubernetes mode with managed identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-with-managed-identities) - - [References](#references) +--- +type: docs +title: "Azure Key Vault with Managed Identities on Kubernetes" +linkTitle: "Azure Key Vault w/ Managed Identity" +description: How to configure Azure Key Vault and Kubernetes to use Azure Managed Identities to access secrets +--- ## Prerequisites -* [Azure Subscription](https://azure.microsoft.com/en-us/free/) -* [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) +- [Azure Subscription](https://azure.microsoft.com/en-us/free/) +- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) -## Setup Kubernetes to use Managed identities and Azure Key Vault +## Setup Managed Identity and Azure Key Vault 1. Login to Azure and set the default subscription @@ -124,7 +118,7 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre kubectl apply -f azure-identity-config.yaml ``` -## Use Azure Key Vault secret store in Kubernetes mode with managed identities +## Configure Dapr component In Kubernetes mode, you store the certificate for the service principal into the Kubernetes Secret Store and then enable Azure Key Vault secret store with this certificate in Kubernetes secretstore. @@ -153,124 +147,11 @@ In Kubernetes mode, you store the certificate for the service principal into the kubectl apply -f azurekeyvault.yaml ``` -3. 
Store the redisPassword as a secret into your keyvault - - Now store the redisPassword as a secret into your keyvault - - ```bash - az keyvault secret set --name redisPassword --vault-name [your_keyvault_name] --value "your redis passphrase" - ``` - -4. Create redis.yaml state store component - - This redis state store component refers to `azurekeyvault` component as a secretstore and uses the secret for `redisPassword` stored in Azure Key Vault. - - ```yaml - apiVersion: dapr.io/v1alpha1 - kind: Component - metadata: - name: statestore - namespace: default - spec: - type: state.redis - metadata: - - name: redisHost - value: "[redis_url]:6379" - - name: redisPassword - secretKeyRef: - name: redisPassword - auth: - secretStore: azurekeyvault - ``` - -5. Apply redis statestore component - - ```bash - kubectl apply -f redis.yaml - ``` - -6. Create node.yaml deployment - - ```yaml - kind: Service - apiVersion: v1 - metadata: - name: nodeapp - namespace: default - labels: - app: node - spec: - selector: - app: node - ports: - - protocol: TCP - port: 80 - targetPort: 3000 - type: LoadBalancer - - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: nodeapp - namespace: default - labels: - app: node - spec: - replicas: 1 - selector: - matchLabels: - app: node - template: - metadata: - labels: - app: node - aadpodidbinding: [you managed identity selector] - annotations: - dapr.io/enabled: "true" - dapr.io/app-id: "nodeapp" - dapr.io/app-port: "3000" - spec: - containers: - - name: node - image: dapriosamples/hello-k8s-node - ports: - - containerPort: 3000 - imagePullPolicy: Always - - ``` - -7. Apply the node app deployment - - ```bash - kubectl apply -f redis.yaml - ``` - - Make sure that `secretstores.azure.keyvault` is loaded successfully in `daprd` sidecar log - - Here is the nodeapp log of the sidecar. Note: use the nodeapp name for your deployed container instance. 
- - ```bash - $ kubectl logs $(kubectl get po --selector=app=node -o jsonpath='{.items[*].metadata.name}') daprd - - time="2020-02-05T09:15:03Z" level=info msg="starting Dapr Runtime -- version edge -- commit v0.3.0-rc.0-58-ge540a71-dirty" - time="2020-02-05T09:15:03Z" level=info msg="log level set to: info" - time="2020-02-05T09:15:03Z" level=info msg="kubernetes mode configured" - time="2020-02-05T09:15:03Z" level=info msg="app id: nodeapp" - time="2020-02-05T09:15:03Z" level=info msg="mTLS enabled. creating sidecar authenticator" - time="2020-02-05T09:15:03Z" level=info msg="trust anchors extracted successfully" - time="2020-02-05T09:15:03Z" level=info msg="authenticator created" - time="2020-02-05T09:15:03Z" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)" - time="2020-02-05T09:15:04Z" level=info msg="loaded component statestore (state.redis)" - ... - 2020-02-05 09:15:04.636348 I | redis: connecting to redis-master:6379 - 2020-02-05 09:15:04.639435 I | redis: connected to redis-master:6379 (localAddr: 10.244.0.11:38294, remAddr: 10.0.74.145:6379) - ... 
- ``` - ## References - - [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create) - [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest) - [AAD Pod Identity](https://github.com/Azure/aad-pod-identity) -- [Secrets Component](../../concepts/secrets/README.md) +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault.md new file mode 100644 index 000000000..38cae15f3 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault.md @@ -0,0 +1,174 @@ +--- +type: docs +title: "Azure Key Vault secret store" +linkTitle: "Azure Key Vault" +description: Detailed information on the Azure Key Vault secret store component +--- + +{{% alert title="Note" color="primary" %}} +Azure Managed Identity can be used for Azure Key Vault access on Kubernetes. Instructions [here]({{< ref azure-keyvault-managed-identity.md >}}). +{{% /alert %}} + +## Prerequisites + +- [Azure Subscription](https://azure.microsoft.com/en-us/free/) +- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) + +## Setup Key Vault and service principal + +1. Login to Azure and set the default subscription + + ```bash + # Log in to Azure + az login + + # Set your subscription to the default subscription + az account set -s [your subscription id] + ``` + +2.
Create an Azure Key Vault in a region + + ```bash + az keyvault create --location [region] --name [your_keyvault] --resource-group [your resource group] + ``` + +3. Create a service principal + + Create a service principal with a new certificate and store the 1-year certificate inside your keyvault's certificate vault. You can skip this step if you want to use an existing service principal for keyvault instead of creating a new one. + + ```bash + az ad sp create-for-rbac --name [your_service_principal_name] --create-cert --cert [certificate_name] --keyvault [your_keyvault] --skip-assignment --years 1 + + { + "appId": "a4f90000-0000-0000-0000-00000011d000", + "displayName": "[your_service_principal_name]", + "name": "http://[your_service_principal_name]", + "password": null, + "tenant": "34f90000-0000-0000-0000-00000011d000" + } + ``` + + **Save both the appId and tenant from the output, as they will be used in the next steps** + +4. Get the Object Id for [your_service_principal_name] + + ```bash + az ad sp show --id [service_principal_app_id] + + { + ... + "objectId": "[your_service_principal_object_id]", + "objectType": "ServicePrincipal", + ... + } + ``` + +5. Grant the service principal the GET permission to your Azure Key Vault + + ```bash + az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get + ``` + + Now that your service principal has access to your keyvault you are ready to configure the secret store component to use secrets stored in your keyvault to access other components securely. + +6. Download the certificate in PFX format from your Azure Key Vault either using the Azure portal or the Azure CLI: + +- **Using the Azure portal:** + + Go to your key vault on the Azure portal and navigate to the *Certificates* tab under *Settings*. Find the certificate that was created during the service principal creation, named [certificate_name], and click on it.
+ + Click *Download in PFX/PEM format* to download the certificate. + +- **Using the Azure CLI:** + + ```bash + az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --encoding base64 --file [certificate_name].pfx + ``` + +## Configure Dapr component + +{{< tabs "Self-Hosted" "Kubernetes">}} + +{{% codetab %}} +1. Copy the downloaded PFX cert from your Azure Keyvault into your components directory or a secure location on your local disk + +2. Create a file called `azurekeyvault.yaml` in the components directory + + ```yaml + apiVersion: dapr.io/v1alpha1 + kind: Component + metadata: + name: azurekeyvault + namespace: default + spec: + type: secretstores.azure.keyvault + metadata: + - name: vaultName + value: [your_keyvault_name] + - name: spnTenantId + value: "[your_service_principal_tenant_id]" + - name: spnClientId + value: "[your_service_principal_app_id]" + - name: spnCertificateFile + value: "[pfx_certificate_file_local_path]" + ``` + +Fill in the metadata fields with your Key Vault details from the above setup process. +{{% /codetab %}} + +{{% codetab %}} +In Kubernetes mode, you store the certificate for the service principal into the Kubernetes Secret Store and then enable Azure Key Vault secret store with this certificate in Kubernetes secretstore. + +1. Create a Kubernetes secret using the following command: + + ```bash + kubectl create secret generic [your_k8s_spn_secret_name] --from-file=[pfx_certificate_file_local_path] + ``` + +- `[pfx_certificate_file_local_path]` is the path of the PFX cert file you downloaded above +- `[your_k8s_spn_secret_name]` is the secret name in the Kubernetes secret store + +2. Create an `azurekeyvault.yaml` component file + +The component yaml refers to the Kubernetes secretstore using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
+```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: azurekeyvault + namespace: default +spec: + type: secretstores.azure.keyvault + metadata: + - name: vaultName + value: [your_keyvault_name] + - name: spnTenantId + value: "[your_service_principal_tenant_id]" + - name: spnClientId + value: "[your_service_principal_app_id]" + - name: spnCertificate + secretKeyRef: + name: [your_k8s_spn_secret_name] + key: [pfx_certificate_file_local_name] +auth: + secretStore: kubernetes +``` + +3. Apply the `azurekeyvault.yaml` component + +```bash +kubectl apply -f azurekeyvault.yaml +``` +{{% /codetab %}} + +{{< /tabs >}} + +## References + +- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create) +- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest) +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/envvar-secret-store.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/envvar-secret-store.md new file mode 100644 index 000000000..59a327076 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/envvar-secret-store.md @@ -0,0 +1,34 @@ +--- +type: docs +title: "Local environment variables (for Development)" +linkTitle: "Local environment variables" +weight: 10 +description: Detailed information on the local environment secret store component +--- + +This Dapr secret store component uses locally defined environment variables and does not use authentication.
+ +{{% alert title="Warning" color="warning" %}} +This approach to secret management is not recommended for production environments. +{{% /alert %}} + +## Setup environment variable secret store + +To enable the environment variable secret store, create a file with the following content in your `./components` directory: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: envvar-secret-store + namespace: default +spec: + type: secretstores.local.env + metadata: +``` + +## Related Links +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/file-secret-store.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/file-secret-store.md new file mode 100644 index 000000000..43fc8f00c --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/file-secret-store.md @@ -0,0 +1,72 @@ +--- +type: docs +title: "Local file (for Development)" +linkTitle: "Local file" +weight: 20 +description: Detailed information on the local file secret store component +--- + +This Dapr secret store component reads plain text JSON from a given file and does not use authentication. + +## Setup JSON file to hold the secrets + +1. Create a JSON file (e.g. `secrets.json`) with the following contents: + + ```json + { + "redisPassword": "your redis passphrase" + } + ``` + +2. Save this file to your `./components` directory or a secure location in your filesystem + +## Configure Dapr component + +Create a Dapr component file (ex.
`localSecretStore.yaml`) with the following content: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: local-secret-store + namespace: default +spec: + type: secretstores.local.file + metadata: + - name: secretsFile + value: [path to the JSON file] + - name: nestedSeparator + value: ":" +``` + +The `nestedSeparator` parameter is optional (default value is ':'). It is used by the store when flattening the json hierarchy to a map. + +## Example + +Given the following json: + +```json +{ + "redisPassword": "your redis password", + "connectionStrings": { + "sql": "your sql connection string", + "mysql": "your mysql connection string" + } +} +``` + +The store will load the file and create a map with the following key value pairs: + +| flattened key | value | +| --- | --- | +|"redisPassword" | "your redis password" | +|"connectionStrings:sql" | "your sql connection string" | +|"connectionStrings:mysql"| "your mysql connection string" | + +Use the flattened key (`connectionStrings:sql`) to access the secret.
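The flattening rule above can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration only (the `flatten` helper is made up, not the store's actual code), assuming the default `:` separator:

```python
def flatten(obj, sep=":", prefix=""):
    # Recursively flatten nested JSON into a single-level map,
    # joining key segments with the configured separator.
    flat = {}
    for key, value in obj.items():
        full_key = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, sep, full_key))
        else:
            flat[full_key] = value
    return flat

secrets = {
    "redisPassword": "your redis password",
    "connectionStrings": {
        "sql": "your sql connection string",
        "mysql": "your mysql connection string",
    },
}

print(flatten(secrets)["connectionStrings:sql"])  # → your sql connection string
```

Top-level keys keep their name unchanged; only nested keys pick up the separator-joined prefix.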
+ +## Related links +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/howto/setup-secret-store/gcp-secret-manager.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/gcp-secret-manager.md similarity index 63% rename from howto/setup-secret-store/gcp-secret-manager.md rename to daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/gcp-secret-manager.md index 6ca95bfe9..c3ff38543 100644 --- a/howto/setup-secret-store/gcp-secret-manager.md +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/gcp-secret-manager.md @@ -1,12 +1,17 @@ -# Secret Store for GCP Secret Manager +--- +type: docs +title: "GCP Secret Manager" +linkTitle: "GCP Secret Manager" +description: Detailed information on the GCP Secret Manager secret store component +--- -This document shows how to enable GCP Secret Manager secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for self hosted and Kubernetes mode. +This document shows how to enable the GCP Secret Manager secret store using the [Dapr secrets building block]({{< ref secrets >}}) for self-hosted and Kubernetes mode. -## Create an GCP Secret Manager instance +## Setup GCP Secret Manager instance Setup GCP Secret Manager using the GCP documentation: https://cloud.google.com/secret-manager/docs/quickstart. -## Create the component +## Setup Dapr component ```yaml apiVersion: dapr.io/v1alpha1 @@ -39,15 +44,30 @@ spec: value: PRIVATE KEY ``` +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings.
It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage. +{{% /alert %}} + +## Apply the component + +{{< tabs "Self-Hosted" "Kubernetes">}} + +{{% codetab %}} +To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. + +{{% /codetab %}} + +{{% codetab %}} To deploy in Kubernetes, save the file above to `gcp_secret_manager.yaml` and then run: ```bash kubectl apply -f gcp_secret_manager.yaml ``` +{{% /codetab %}} -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +{{< /tabs >}} -## GCP Secret Manager reference example +## Example This example shows you how to take the Redis password from the GCP Secret Manager secret store. Here, you created a secret named `redisPassword` in GCP Secret Manager. Note it's important to set it both as the `name` and `key` properties.
@@ -70,3 +90,9 @@ spec: auth: secretStore: gcpsecretmanager ``` + +## Related links +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/howto/setup-secret-store/hashicorp-vault.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/hashicorp-vault.md similarity index 71% rename from howto/setup-secret-store/hashicorp-vault.md rename to daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/hashicorp-vault.md index 15f782862..91fb25b70 100644 --- a/howto/setup-secret-store/hashicorp-vault.md +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/hashicorp-vault.md @@ -1,12 +1,24 @@ -# Secret Store for Hashicorp Vault +--- +type: docs +title: "HashiCorp Vault" +linkTitle: "HashiCorp Vault" +description: Detailed information on the HashiCorp Vault secret store component +--- -This document shows how to enable Hashicorp Vault secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Standalone and Kubernetes mode. +## Setup HashiCorp Vault instance -## Create a Hashicorp Vault instance +{{< tabs "Self-Hosted" "Kubernetes" >}} +{{% codetab %}} Setup Hashicorp Vault using the Vault documentation: https://www.vaultproject.io/docs/install/index.html. +{{% /codetab %}} +{{% codetab %}} For Kubernetes, you can use the Helm Chart: . +{{% /codetab %}} + +{{< /tabs >}} + ## Create the Vault component @@ -37,15 +49,24 @@ spec: value : "[vault_prefix]" ``` +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} +To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
+{{% /codetab %}} + +{{% codetab %}} To deploy in Kubernetes, save the file above to `vault.yaml` and then run: ```bash kubectl apply -f vault.yaml ``` +{{% /codetab %}} -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +{{< /tabs >}} -## Vault reference example + + +## Example This example shows you how to take the Redis password from the Vault secret store. @@ -67,3 +88,9 @@ spec: auth: secretStore: vault ``` + +## Related links +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/kubernetes-secret-store.md b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/kubernetes-secret-store.md new file mode 100644 index 000000000..2c08987f5 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-secret-store/supported-secret-stores/kubernetes-secret-store.md @@ -0,0 +1,17 @@ +--- +type: docs +title: "Kubernetes Secrets" +linkTitle: "Kubernetes Secrets" +weight: 30 +description: Detailed information on the Kubernetes secret store component +--- + +## Summary + +Kubernetes has a built-in secret store which Dapr components can use to fetch secrets from. No special configuration is needed to set up the Kubernetes secret store, and you are able to retrieve secrets from the `http://localhost:3500/v1.0/secrets/kubernetes/[my-secret]` URL.
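As a sketch of what calling that endpoint looks like, the helper below builds the sidecar URL from its parts. The function name and the `DAPR_PORT` constant are made up for illustration; only the URL pattern comes from the text above, and an actual request requires a running Dapr sidecar:

```python
from urllib.parse import quote

DAPR_PORT = 3500  # default Dapr HTTP port, assumed here

def secret_url(store: str, key: str, port: int = DAPR_PORT) -> str:
    # Build the sidecar secrets API URL: /v1.0/secrets/<store>/<key>
    return f"http://localhost:{port}/v1.0/secrets/{quote(store)}/{quote(key)}"

# For the built-in Kubernetes store:
print(secret_url("kubernetes", "my-secret"))
# → http://localhost:3500/v1.0/secrets/kubernetes/my-secret

# A real call (only works next to a Dapr sidecar), e.g.:
# import json, urllib.request
# secret = json.load(urllib.request.urlopen(secret_url("kubernetes", "my-secret")))
```

Quoting the store and key keeps the URL valid if a secret name contains characters that need percent-encoding.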
+ +## Related links +- [Secrets building block]({{< ref secrets >}}) +- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}}) +- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}}) +- [Secrets API reference]({{< ref secrets_api.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-state-store/_index.md b/daprdocs/content/en/operations/components/setup-state-store/_index.md new file mode 100644 index 000000000..d86cfa777 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-state-store/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "State store components" +linkTitle: "State stores" +description: "Guidance on setting up different state stores for Dapr state management" +weight: 1000 +--- \ No newline at end of file diff --git a/daprdocs/content/en/operations/components/setup-state-store/setup-state-store-overview.md b/daprdocs/content/en/operations/components/setup-state-store/setup-state-store-overview.md new file mode 100644 index 000000000..07268d4ff --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-state-store/setup-state-store-overview.md @@ -0,0 +1,39 @@ +--- +type: docs +title: "Overview" +linkTitle: "Overview" +description: "Guidance on setting up state management components" +weight: 10000 +--- + +Dapr integrates with existing databases to provide apps with state management capabilities for CRUD operations, transactions and more. It also supports the configuration of multiple, named, state store components *per application*. + +State stores are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib). + +A state store in Dapr is described using a `Component` file: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: statestore + namespace: default +spec: + type: state. + metadata: + - name: + value: + - name: + value: +...
+``` + +The type of database is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section. +Even though you can put plain text secrets in there, it is recommended you use a [secret store]({{< ref component-secrets.md >}}). + +Visit [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to configure a state store component. + +## Related topics +- [Component concept]({{< ref components-concept.md >}}) +- [State management overview]({{< ref state-management >}}) +- [State management API specification]({{< ref state_api.md >}}) diff --git a/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/_index.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/_index.md new file mode 100644 index 000000000..ed8e14775 --- /dev/null +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/_index.md @@ -0,0 +1,31 @@ +--- +type: docs +title: "Supported stores" +linkTitle: "Supported stores" +description: "CRUD and/or transactional support for supported stores" +weight: 20000 +no_list: true +--- + +The following stores are supported, at various levels, by the Dapr state management building block: + +| Name | CRUD | Transactional | +|----------------------------------------------------------------|------|---------------| +| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | +| [Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | +| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | +| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | +| [etcd]({{< ref setup-etcd.md >}}) | ✅ | ❌ | +| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | +| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | +| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | +| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | +| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | +| [Redis]({{< ref 
setup-redis.md >}}) | ✅ | ✅ | +| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | +| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | +| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | +| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | +| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | +| [Google Cloud Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | + diff --git a/howto/setup-state-store/setup-aerospike.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-aerospike.md similarity index 80% rename from howto/setup-state-store/setup-aerospike.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-aerospike.md index 224e6c273..43802d243 100644 --- a/howto/setup-state-store/setup-aerospike.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-aerospike.md @@ -1,7 +1,15 @@ -# Setup Aerospike +--- +type: docs +title: "Aerospike" +linkTitle: "Aerospike" +description: Detailed information on the Aerospike state store component +--- -## Locally +## Setup Aerospike +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Aerospike locally using Docker: ``` @@ -9,9 +17,9 @@ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:30 ``` You can then interact with the server using `localhost:3000`. 
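However the store is hosted, applications read and write state through the Dapr state HTTP API rather than talking to Aerospike directly. A rough sketch of the request shapes (assuming a sidecar on the default port 3500 and a component named `statestore`; the helper names are illustrative, not part of Dapr):

```python
import json

DAPR_PORT = 3500           # default Dapr HTTP port (assumption)
STORE_NAME = "statestore"  # must match metadata.name in the Component file

def state_url(store, key=""):
    """Build a Dapr state API URL for a store, optionally for a single key."""
    base = f"http://localhost:{DAPR_PORT}/v1.0/state/{store}"
    return f"{base}/{key}" if key else base

def save_body(key, value):
    """The save endpoint expects a JSON array of key/value entries."""
    return json.dumps([{"key": key, "value": value}])

# With a sidecar running, you could POST save_body(...) to state_url(STORE_NAME)
# and GET state_url(STORE_NAME, "mykey") to read the value back.
print(state_url(STORE_NAME, "mykey"))
print(save_body("mykey", {"color": "red"}))
```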
+{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install Aerospike on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/aerospike): ``` @@ -25,6 +33,9 @@ To interact with Aerospike, find the service with: `kubectl get svc aerospike -n For example, if installing using the example above, the Aerospike host address would be: `aerospike-my-aerospike.aerospike.svc.cluster.local:3000` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -49,8 +60,9 @@ spec: value: # Optional. ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/howto/setup-state-store/setup-azure-blobstorage.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-blobstorage.md similarity index 83% rename from howto/setup-state-store/setup-azure-blobstorage.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-blobstorage.md index 2873b0c3c..ea5e4b063 100644 --- a/howto/setup-state-store/setup-azure-blobstorage.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-blobstorage.md @@ -1,4 +1,9 @@ -# Setup Azure Blob Storage +--- +type: docs +title: "Azure Blob Storage" +linkTitle: "Azure Blob Storage" +description: Detailed information on the Azure Blob Store state store component +--- ## Creating Azure Storage account @@ -7,10 +12,9 @@ If you wish to create a container for Dapr to use, you can do so beforehand. However, Blob Storage state provider will create one for you automatically if it doesn't exist. 
In order to setup Azure Blob Storage as a state store, you will need the following properties: - -* **AccountName**: The storage account name. For example: **mystorageaccount**. -* **AccountKey**: Primary or secondary storage key. -* **ContainerName**: The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist. +- **AccountName**: The storage account name. For example: **mystorageaccount**. +- **AccountKey**: Primary or secondary storage key. +- **ContainerName**: The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist. ## Create a Dapr component @@ -35,7 +39,11 @@ spec: value: ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
+{{% /alert %}}
+
+### Example

The following example uses the Kubernetes secret store to retrieve the secrets:

diff --git a/howto/setup-state-store/setup-azure-cosmosdb.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-cosmosdb.md
similarity index 79%
rename from howto/setup-state-store/setup-azure-cosmosdb.md
rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-cosmosdb.md
index e5ffa126f..0f310fc45 100644
--- a/howto/setup-state-store/setup-azure-cosmosdb.md
+++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-cosmosdb.md
@@ -1,17 +1,21 @@
-# Setup Azure CosmosDB
+---
+type: docs
+title: "Azure Cosmos DB"
+linkTitle: "Azure Cosmos DB"
+description: Detailed information on the Azure Cosmos DB state store component
+---

-## Creating an Azure CosmosDB account
+## Create an Azure Cosmos DB account

[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr can use it.

**Note : The partition key for the collection must be named "/partitionKey". Note: this is case-sensitive.**

In order to setup CosmosDB as a state store, you need the following properties:
-
-* **URL**: the CosmosDB url. for example: https://******.documents.azure.com:443/
-* **Master Key**: The key to authenticate to the CosmosDB account
-* **Database**: The name of the database
-* **Collection**: The name of the collection
+- **URL**: the Cosmos DB URL. For example: https://******.documents.azure.com:443/
+- **Master Key**: The key to authenticate to the Cosmos DB account
+- **Database**: The name of the database
+- **Collection**: The name of the collection

## Create a Dapr component

@@ -38,7 +42,11 @@ spec:
    value:
```

-The above example uses secrets as plain strings. 
It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +### Example Here is an example of what the values could look like: @@ -114,7 +122,7 @@ For examples see the curl operations in the [Partition keys](#partition-keys) se ## Partition keys -For **non-actor state** operations, the Azure CosmosDB state store will use the `key` property provided in the requests to the Dapr API to determine the CosmosDB partition key. This can be overridden by specifying a metadata field in the request with a key of `partitionKey` and a value of the desired partition. +For **non-actor state** operations, the Azure Cosmos DB state store will use the `key` property provided in the requests to the Dapr API to determine the Cosmos DB partition key. This can be overridden by specifying a metadata field in the request with a key of `partitionKey` and a value of the desired partition. The following operation will use `nihilus` as the partition key value sent to CosmosDB: @@ -146,4 +154,4 @@ curl -X POST http://localhost:3500/v1.0/state/ \ ``` -For **actor** state operations, the partition key will be generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor will always end up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in CosmosDB the items in a transaction must be on the same partition. +For **actor** state operations, the partition key will be generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor will always end up under the same partition (you do not need to specify it). 
This is because actor state operations must use transactions, and in Cosmos DB the items in a transaction must be on the same partition.
diff --git a/howto/setup-state-store/setup-azure-tablestorage.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-tablestorage.md
similarity index 83%
rename from howto/setup-state-store/setup-azure-tablestorage.md
rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-tablestorage.md
index 983040e4d..906ac8f9a 100644
--- a/howto/setup-state-store/setup-azure-tablestorage.md
+++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-azure-tablestorage.md
@@ -1,16 +1,20 @@
-# Setup Azure Table Storage
+---
+type: docs
+title: "Azure Table Storage"
+linkTitle: "Azure Table Storage"
+description: Detailed information on the Azure Table Storage state store component
+---

-## Creating Azure Storage account
+## Create an Azure Storage account

[Follow the instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.

If you wish to create a table for Dapr to use, you can do so beforehand. However, Table Storage state provider will create one for you automatically if it doesn't exist.

In order to setup Azure Table Storage as a state store, you will need the following properties:
-
-* **AccountName**: The storage account name. For example: **mystorageaccount**.
-* **AccountKey**: Primary or secondary storage key.
-* **TableName**: The name of the table to be used for Dapr state. The table will be created for you if it doesn't exist.
+- **AccountName**: The storage account name. For example: **mystorageaccount**.
+- **AccountKey**: Primary or secondary storage key.
+- **TableName**: The name of the table to be used for Dapr state. The table will be created for you if it doesn't exist.
## Create a Dapr component @@ -35,7 +39,11 @@ spec: value: ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +### Example The following example uses the Kubernetes secret store to retrieve the secrets: diff --git a/howto/setup-state-store/setup-cassandra.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-cassandra.md similarity index 85% rename from howto/setup-state-store/setup-cassandra.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-cassandra.md index 3c84d04e1..6d4c09530 100644 --- a/howto/setup-state-store/setup-cassandra.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-cassandra.md @@ -1,7 +1,15 @@ -# Setup Cassandra +--- +type: docs +title: "Cassandra" +linkTitle: "Cassandra" +description: Detailed information on the Cassandra state store component +--- -## Locally +## Create a Cassandra state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Cassandra locally with the Datastax Docker image: ``` @@ -9,9 +17,9 @@ docker run -e DS_LICENSE=accept --memory 4g --name my-dse -d datastax/dse-server ``` You can then interact with the server using `localhost:9042`. 
+{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install Cassandra on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/cassandra): ``` @@ -25,6 +33,9 @@ To interact with Cassandra, find the service with: `kubectl get svc -n cassandra For example, if installing using the example above, the Cassandra DNS would be: `cassandra.cassandra.svc.cluster.local` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -59,7 +70,11 @@ spec: value: # Optional. default: "1" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +### Example The following example uses the Kubernetes secret store to retrieve the username and password: diff --git a/howto/setup-state-store/setup-cloudstate.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-cloudstate.md similarity index 95% rename from howto/setup-state-store/setup-cloudstate.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-cloudstate.md index 628516150..560d45971 100644 --- a/howto/setup-state-store/setup-cloudstate.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-cloudstate.md @@ -1,10 +1,17 @@ -# Setup Cloudstate +--- +type: docs +title: "Cloudstate" +linkTitle: "Cloudstate" +description: Detailed information on the Cloudstate state store component +--- + +## Introduction The Cloudstate-Dapr integration is unique in the sense that it enables developers to achieve high-throughput, low latency scenarios by leveraging Cloudstate running as a sidecar *next* to Dapr, keeping the state near 
the compute unit for optimal performance while providing replication between multiple instances that can be safely scaled up and down. This is due to Cloudstate forming an Akka cluster between its sidecars with replicated in-memory entities. Dapr leverages Cloudstate's CRDT capabilities with last-write-wins semantics. -## Kubernetes +## Setup a Cloudstate state store To install Cloudstate on your Kubernetes cluster, run the following commands: diff --git a/howto/setup-state-store/setup-consul.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-consul.md similarity index 81% rename from howto/setup-state-store/setup-consul.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-consul.md index 4f8089fe6..b8f2f816c 100644 --- a/howto/setup-state-store/setup-consul.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-consul.md @@ -1,7 +1,15 @@ -# Setup Consul +--- +type: docs +title: "HashiCorp Consul" +linkTitle: "HashiCorp Consul" +description: Detailed information on the HashiCorp Consul state store component +--- -## Locally +## Setup a HashiCorp Consul state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Consul locally using Docker: ``` @@ -9,9 +17,9 @@ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul ``` You can then interact with the server using `localhost:8500`. +{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install Consul on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/consul): ``` @@ -24,6 +32,9 @@ To interact with Consul, find the service with: `kubectl get svc consul`. For example, if installing using the example above, the Consul host address would be: `consul.default.svc.cluster.local:8500` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -52,7 +63,11 @@ spec: value: # Optional. 
default: "" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +### Example The following example uses the Kubernetes secret store to retrieve the acl token: diff --git a/howto/setup-state-store/setup-couchbase.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-couchbase.md similarity index 77% rename from howto/setup-state-store/setup-couchbase.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-couchbase.md index 91108cc2f..905df25d9 100644 --- a/howto/setup-state-store/setup-couchbase.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-couchbase.md @@ -1,7 +1,15 @@ -# Setup Couchbase +--- +type: docs +title: "Couchbase" +linkTitle: "Couchbase" +description: Detailed information on the Couchbase state store component +--- -## Locally +## Create a Couchbase state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Couchbase locally using Docker: ``` @@ -9,9 +17,9 @@ docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase ``` You can then interact with the server using `localhost:8091` and start the server setup. 
+{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install Couchbase on Kubernetes is by using the [Helm chart](https://github.com/couchbase-partners/helm-charts#deploying-for-development-quick-start): ``` @@ -19,6 +27,9 @@ helm repo add couchbase https://couchbase-partners.github.io/helm-charts/ helm install couchbase/couchbase-operator helm install couchbase/couchbase-cluster ``` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -45,8 +56,9 @@ spec: value: # Required. ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/howto/setup-state-store/setup-etcd.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-etcd.md similarity index 79% rename from howto/setup-state-store/setup-etcd.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-etcd.md index 29e81f3ab..8c37bce31 100644 --- a/howto/setup-state-store/setup-etcd.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-etcd.md @@ -1,7 +1,15 @@ -# Setup etcd +--- +type: docs +title: "etcd" +linkTitle: "etcd" +description: Detailed information on the etcd state store component +--- -## Locally +## Setup an etcd state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run etcd locally using Docker: ``` @@ -9,9 +17,9 @@ docker run -d --name etcd bitnami/etcd ``` You can then interact with the server using `localhost:2379`. 
+{{% /codetab %}}

-## Kubernetes
-
+{{% codetab %}}
The easiest way to install etcd on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/etcd):

```
@@ -25,6 +33,9 @@ To interact with etcd, find the service with: `kubectl get svc etcd-etcd`.

For example, if installing using the example above, the etcd host address would be: `etcd-etcd.default.svc.cluster.local:2379`

+{{% /codetab %}}
+
+{{< /tabs >}}

## Create a Dapr component

@@ -49,8 +60,9 @@ spec:
    value: # Optional. default: "10S"
```

-The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
-
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}

## Apply the configuration

diff --git a/howto/setup-state-store/setup-firestore.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-firestore.md
similarity index 80%
rename from howto/setup-state-store/setup-firestore.md
rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-firestore.md
index 7334f15d8..e4e1f7d2e 100644
--- a/howto/setup-state-store/setup-firestore.md
+++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-firestore.md
@@ -1,14 +1,25 @@
-# Setup Google Cloud Firestore (Datastore mode)
+---
+type: docs
+title: "GCP Firestore (Datastore mode)"
+linkTitle: "GCP Firestore"
+description: Detailed information on the GCP Firestore state store component
+---

-## Locally
+## Setup a GCP Firestore state store
+{{< tabs "Self-Hosted" "Google Cloud" >}}
+
+{{% codetab %}}
You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator).
You can then interact with the server using `localhost:8081`. +{{% /codetab %}} -## Google Cloud - +{{% codetab %}} Follow the instructions [here](https://cloud.google.com/datastore/docs/quickstart) to get started with setting up Firestore in Google Cloud. +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -49,8 +60,9 @@ spec: value: # Optional. default: "DaprState" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/howto/setup-state-store/setup-hazelcast.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-hazelcast.md similarity index 73% rename from howto/setup-state-store/setup-hazelcast.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-hazelcast.md index 390e840b5..a8fbb7df9 100644 --- a/howto/setup-state-store/setup-hazelcast.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-hazelcast.md @@ -1,7 +1,15 @@ -# Setup Hazelcast +--- +type: docs +title: "Hazelcast" +linkTitle: "Hazelcast" +description: Detailed information on the Hazelcast state store component +--- -## Locally +## Setup a Hazelcast state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Hazelcast locally using Docker: ``` @@ -9,10 +17,13 @@ docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" -p 5701 ``` You can then interact with the server using the `127.0.0.1:5701`. 
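Addresses like `127.0.0.1:5701` or `hazelcast.default.svc.cluster.local:5701` end up in the component's metadata, so it can save debugging time to confirm the value parses and the port is reachable before applying the configuration. A small illustrative helper (not part of Dapr; the default port is an assumption):

```python
import socket

def parse_host_port(address, default_port=5701):
    """Split a 'host:port' metadata value into its parts."""
    host, _, port = address.partition(":")
    return host, int(port) if port else default_port

def is_reachable(address, timeout=1.0):
    """Attempt a TCP connection to the store; False means it is not up yet."""
    host, port = parse_host_port(address)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(parse_host_port("hazelcast.default.svc.cluster.local:5701"))
```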
+{{% /codetab %}} -## Kubernetes +{{% codetab %}} +The easiest way to install Hazelcast on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/hazelcast). +{{% /codetab %}} -The easiest way to install Hazelcast on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/hazelcast): +{{< /tabs >}} ## Create a Dapr component @@ -35,8 +46,9 @@ spec: value: # Required. Hazelcast map configuration. ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/howto/setup-state-store/setup-memcached.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-memcached.md similarity index 78% rename from howto/setup-state-store/setup-memcached.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-memcached.md index 4ace58ede..26779bc57 100644 --- a/howto/setup-state-store/setup-memcached.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-memcached.md @@ -1,7 +1,15 @@ -# Setup Memcached +--- +type: docs +title: "Memcached" +linkTitle: "Memcached" +description: Detailed information on the Memcached state store component +--- -## Locally +## Setup a Memcached state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Memcached locally using Docker: ``` @@ -9,9 +17,9 @@ docker run --name my-memcache -d memcached ``` You can then interact with the server using `localhost:11211`. 
+{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install Memcached on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/memcached): ``` @@ -24,6 +32,9 @@ To interact with Memcached, find the service with: `kubectl get svc memcached`. For example, if installing using the example above, the Memcached host address would be: `memcached.default.svc.cluster.local:11211` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -48,8 +59,9 @@ spec: value: # Optional. default: "1000ms" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/howto/setup-state-store/setup-mongodb.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-mongodb.md similarity index 85% rename from howto/setup-state-store/setup-mongodb.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-mongodb.md index 05fde1c91..04dbefc15 100644 --- a/howto/setup-state-store/setup-mongodb.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-mongodb.md @@ -1,7 +1,15 @@ -# Setup MongoDB +--- +type: docs +title: "MongoDB" +linkTitle: "MongoDB" +description: Detailed information on the MongoDB state store component +--- -## Locally +## Setup a MongoDB state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run MongoDB locally using Docker: ``` @@ -9,9 +17,9 @@ docker run --name some-mongo -d mongo ``` You can then interact with the server using `localhost:27017`. 
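MongoDB is one of the stores with transactional support, which actor state requires. Once the component is configured, a multi-operation transaction goes through the Dapr API; a sketch of the request body for `POST /v1.0/state/<store>/transaction` (treat the field names as an assumption to verify against the state API reference):

```python
import json

def transaction_body(operations):
    """Build a Dapr state transaction body from (operation, key, value) tuples."""
    ops = []
    for op, key, value in operations:
        request = {"key": key}
        if value is not None:      # deletes carry no value
            request["value"] = value
        ops.append({"operation": op, "request": request})
    return json.dumps({"operations": ops})

body = transaction_body([
    ("upsert", "key1", "myData"),  # both operations commit or neither does
    ("delete", "key2", None),
])
print(body)
```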
+{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install MongoDB on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/mongodb): ``` @@ -28,6 +36,9 @@ For example, if installing using the example above, the MongoDB host address wou Follow the on-screen instructions to get the root password for MongoDB. The username will be `admin` by default. +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -62,7 +73,11 @@ spec: value: # Optional. default: "5s" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +### Example The following example uses the Kubernetes secret store to retrieve the username and password: diff --git a/howto/setup-state-store/setup-postgresql.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-postgresql.md similarity index 88% rename from howto/setup-state-store/setup-postgresql.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-postgresql.md index 0753f0944..c40ce3352 100644 --- a/howto/setup-state-store/setup-postgresql.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-postgresql.md @@ -1,8 +1,12 @@ -# Setup PostgreSQL - -This article provides guidance on configuring a PostgreSQL state store. +--- +type: docs +title: "PostgreSQL" +linkTitle: "PostgreSQL" +description: Detailed information on the PostgreSQL state store component +--- ## Create a PostgreSQL Store + Dapr can use any PostgreSQL instance. 
If you already have a running instance of PostgreSQL, move on to the [Create a Dapr component](#create-a-dapr-component) section. 1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command: @@ -41,7 +45,9 @@ spec: - name: actorStateStore value: "true" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/howto/setup-state-store/setup-redis.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-redis.md similarity index 84% rename from howto/setup-state-store/setup-redis.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-redis.md index c6edbc796..211d662f6 100644 --- a/howto/setup-state-store/setup-redis.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-redis.md @@ -1,11 +1,22 @@ -# Setup Redis +--- +type: docs +title: "Redis" +linkTitle: "Redis" +description: Detailed information on the Redis state store component +weight: 10 +--- -## Creating a Redis Store +## Create a Redis Store Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [Configuration](#configuration) section. 
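Whichever hosting option you pick below, the resulting component follows the same envelope as every other state store on this page; only `spec.type` and the metadata entries differ. A quick self-check sketch (the `redisHost`/`redisPassword` names come from the Redis component examples; the validation rules are illustrative, not an official schema):

```python
def component_problems(component):
    """Return a list of problems with a Dapr state store Component (sketch)."""
    problems = []
    if component.get("apiVersion") != "dapr.io/v1alpha1":
        problems.append("apiVersion should be dapr.io/v1alpha1")
    if component.get("kind") != "Component":
        problems.append("kind should be Component")
    if not component.get("metadata", {}).get("name"):
        problems.append("metadata.name is required")
    spec = component.get("spec", {})
    if not str(spec.get("type", "")).startswith("state."):
        problems.append("spec.type should start with 'state.'")
    return problems

redis_component = {
    "apiVersion": "dapr.io/v1alpha1",
    "kind": "Component",
    "metadata": {"name": "statestore"},
    "spec": {
        "type": "state.redis",
        "metadata": [
            {"name": "redisHost", "value": "localhost:6379"},
            {"name": "redisPassword", "value": ""},  # prefer a secret store in production
        ],
    },
}
print(component_problems(redis_component))  # → []
```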
-### Creating a Redis Cache in your Kubernetes Cluster using Helm +{{< tabs "Self-Hosted" "Kubernetes" "Azure" "AWS" "GCP" >}} +{{% codetab %}} +A Redis instance is automatically created as a Docker container when you run `dapr init` +{{% /codetab %}} + +{{% codetab %}} We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install). 1. Install Redis into your cluster. Note that we're explicitly setting an image tag to get a version greater than 5, which is what Dapr' pub/sub functionality requires. If you're intending on using Redis as just a state store (and not for pub/sub), you do not have to set the image version. @@ -32,9 +43,9 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku - name: redisPassword value: lhDOkwTlp0 ``` - -### Creating an Azure Managed Redis Cache +{{% /codetab %}} +{{% codetab %}} **Note**: this approach requires having an Azure Subscription. 1. Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Cache for Redis creation flow. Log in if necessary. @@ -46,27 +57,24 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku 5. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). Set the `redisHost` key to `[HOST NAME FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets. 
> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro) that was introduced by Redis 5.0, which isn't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence. +{{% /codetab %}} +{{% codetab %}} +[AWS Redis](https://aws.amazon.com/redis/) +{{% /codetab %}} -### Other ways to create a Redis Database +{{% codetab %}} +[GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) +{{% /codetab %}} -- [AWS Redis](https://aws.amazon.com/redis/) -- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) +{{< /tabs >}} -## Configuration +## Create a Dapr component -To setup Redis, you need to create a component for `state.redis`. -
- -The following yaml file demonstrates how to define each. - -### Configuring Redis for State Persistence and Retrieval **TLS:** If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS `true` or `false`. **Failover:** When set to `true` enables the failover feature. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) -**Note:** yaml files below illustrate secret management in plain text. In a production-grade application, follow [secret management](../../concepts/secrets/README.md) instructions to securely manage your secrets. - Create a file called redis.yaml, and paste the following: ```yaml @@ -88,6 +96,10 @@ spec: value: # Optional. Allowed: true, false. ``` +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + ## Apply the configuration ### Kubernetes diff --git a/howto/setup-state-store/setup-rethinkdb.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-rethinkdb.md similarity index 79% rename from howto/setup-state-store/setup-rethinkdb.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-rethinkdb.md index 1d9f4c2ad..0959476a1 100644 --- a/howto/setup-state-store/setup-rethinkdb.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-rethinkdb.md @@ -1,6 +1,11 @@ -# Setup RethinkDB +--- +type: docs +title: "RethinkDB" +linkTitle: "RethinkDB" +description: Detailed information on the RethinkDB state store component +--- -## Locally +## Setup RethinkDB state store You can run [RethinkDB](https://rethinkdb.com/) locally using Docker: @@ -14,7 +19,6 @@ To connect to the admin UI: open "http://$(docker inspect --format '{{ 
.NetworkSettings.IPAddress }}' rethinkdb):8080" ``` - ## Create a Dapr component The next step is to create a Dapr component for RethinkDB. @@ -44,6 +48,10 @@ spec: value: # Optional (whether or not store should keep archive table of all the state changes) ``` +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + RethinkDB state store supports transactions so it can be used to persist Dapr Actor state. By default, the state will be stored in table name `daprstate` in the specified database. Additionally, if the optional `archive` metadata is set to `true`, on each state change, the RethinkDB state store will also log state changes with timestamp in the `daprstate_archive` table. This allows for time series analyses of the state managed by Dapr. diff --git a/howto/setup-state-store/setup-sqlserver.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-sqlserver.md similarity index 75% rename from howto/setup-state-store/setup-sqlserver.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-sqlserver.md index f71aae7c7..7808fbf5c 100644 --- a/howto/setup-state-store/setup-sqlserver.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-sqlserver.md @@ -1,6 +1,11 @@ -# Setup SQL Server +--- +type: docs +title: "SQL Server" +linkTitle: "SQL Server" +description: Detailed information on the SQL Server state store component +--- -## Creating an Azure SQL instance +## Create an Azure SQL instance [Follow the instructions](https://docs.microsoft.com/azure/sql-database/sql-database-single-database-get-started?tabs=azure-portal) from the Azure documentation on how to create a SQL database. The database must be created before Dapr consumes it. 
@@ -8,10 +13,10 @@ In order to setup SQL Server as a state store, you will need the following properties: -* **Connection String**: the SQL Server connection string. For example: server=localhost;user id=sa;password=your-password;port=1433;database=mydatabase; -* **Schema**: The database schema do use (default=dbo). Will be created if not exists -* **Table Name**: The database table name. Will be created if not exists -* **Indexed Properties**: Optional properties from json data which will be indexed and persisted as individual column +- **Connection String**: the SQL Server connection string. For example: server=localhost;user id=sa;password=your-password;port=1433;database=mydatabase; +- **Schema**: The database schema to use (default=dbo). Will be created if it does not exist +- **Table Name**: The database table name. Will be created if it does not exist +- **Indexed Properties**: Optional properties from JSON data which will be indexed and persisted as individual columns ### Create a dedicated user @@ -22,7 +27,7 @@ When connecting with a dedicated user (not `sa`), these authorizations are requi ## Create a Dapr component -> currently this component does not support state management for actors +> Currently this component does not support state management for actors The next step is to create a Dapr component for SQL Server. @@ -43,7 +48,11 @@ spec: value: ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}} + +### Example The following example uses the Kubernetes secret store to retrieve the secrets: diff --git a/howto/setup-state-store/setup-zookeeper.md b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-zookeeper.md similarity index 81% rename from howto/setup-state-store/setup-zookeeper.md rename to daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-zookeeper.md index 4e6ca70b8..d3b2ac5b2 100644 --- a/howto/setup-state-store/setup-zookeeper.md +++ b/daprdocs/content/en/operations/components/setup-state-store/supported-state-stores/setup-zookeeper.md @@ -1,7 +1,15 @@ -# Setup Zookeeper +--- +type: docs +title: "Zookeeper" +linkTitle: "Zookeeper" +description: Detailed information on the Zookeeper state store component +--- -## Locally +## Setup a Zookeeper state store +{{< tabs "Self-Hosted" "Kubernetes" >}} + +{{% codetab %}} You can run Zookeeper locally using Docker: ``` @@ -9,9 +17,9 @@ docker run --name some-zookeeper --restart always -d zookeeper ``` You can then interact with the server using `localhost:2181`. +{{% /codetab %}} -## Kubernetes - +{{% codetab %}} The easiest way to install Zookeeper on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/zookeeper): ``` @@ -25,6 +33,9 @@ To interact with Zookeeper, find the service with: `kubectl get svc zookeeper`. For example, if installing using the example above, the Zookeeper host address would be: `zookeeper.default.svc.cluster.local:2181` +{{% /codetab %}} + +{{< /tabs >}} ## Create a Dapr component @@ -53,8 +64,9 @@ spec: value: # Optional. ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) - +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. 
It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} ## Apply the configuration diff --git a/daprdocs/content/en/operations/configuration/_index.md b/daprdocs/content/en/operations/configuration/_index.md new file mode 100644 index 000000000..978858a53 --- /dev/null +++ b/daprdocs/content/en/operations/configuration/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Manage Dapr configuration" +linkTitle: "Configuration" +weight: 200 +description: "How to set your Dapr configuration and manage your deployment" +--- \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md new file mode 100644 index 000000000..2f9f4e619 --- /dev/null +++ b/daprdocs/content/en/operations/configuration/configuration-overview.md @@ -0,0 +1,170 @@ +--- +type: docs +title: "Overview of Dapr configuration options" +linkTitle: "Overview" +weight: 100 +description: "Information on Dapr configuration and how to set options for your application" +--- + +## Sidecar configuration + +### Setup sidecar configuration + +#### Self-hosted sidecar +In self-hosted mode the Dapr configuration is a configuration file, for example `config.yaml`. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration, e.g. `$HOME/.dapr/config.yaml` on Linux/macOS and `%USERPROFILE%\.dapr\config.yaml` on Windows. + +A Dapr sidecar can also apply a specific configuration file by passing the ```--config``` flag with the file path to the ```dapr run``` CLI command. + +#### Kubernetes sidecar +In Kubernetes mode the Dapr configuration is a Configuration CRD that is applied to the cluster.
For example: + +```bash +kubectl apply -f myappconfig.yaml +``` + +You can use the Dapr CLI to list the Configuration CRDs: + +```bash +dapr configurations -k +``` + +A Dapr sidecar can apply a specific configuration by using a ```dapr.io/config``` annotation. For example: + +```yml + annotations: + dapr.io/enabled: "true" + dapr.io/app-id: "nodeapp" + dapr.io/app-port: "3000" + dapr.io/config: "myappconfig" +``` +Note: There are more [Kubernetes annotations]({{< ref "kubernetes-annotations.md" >}}) available to configure the Dapr sidecar on activation by the sidecar injector system service. + +### Sidecar configuration settings + +The following configuration settings can be applied to Dapr application sidecars: +- [Tracing](#tracing) +- [Middleware](#middleware) +- [Scoping secrets for secret stores](#scoping-secrets-for-secret-stores) +- [Access control allow lists for service invocation](#access-control-allow-lists-for-service-invocation) +- [Example application sidecar configuration](#example-application-sidecar-configuration) + +#### Tracing + +Tracing configuration turns on tracing for an application. + +The `tracing` section under the `Configuration` spec contains the following properties: + +```yml +tracing: + samplingRate: "1" +``` + +The following table lists the properties for tracing: + +| Property | Type | Description | +|--------------|--------|-------------| +| samplingRate | string | Set sampling rate for tracing to be enabled or disabled. + + +`samplingRate` is used to enable or disable tracing. To disable sampling, set `samplingRate: "0"` in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span should be sampled or not based on this value; `samplingRate: "1"` samples all traces. By default, the sampling rate is 0.0001, or 1 in 10,000 traces.
+ +See [Observability distributed tracing]({{< ref "tracing.md" >}}) for more information. + +#### Middleware + +Middleware configuration sets named HTTP pipeline middleware handlers. The `httpPipeline` section under the `Configuration` spec contains the following properties: + +```yml +httpPipeline: + handlers: + - name: oauth2 + type: middleware.http.oauth2 + - name: uppercase + type: middleware.http.uppercase +``` + +The following table lists the properties for HTTP handlers: + +| Property | Type | Description | +|----------|--------|-------------| +| name | string | Name of the middleware component +| type | string | Type of middleware component + +See [Middleware pipelines]({{< ref "middleware-concept.md" >}}) for more information. + +#### Scope secret store access + +See the [Scoping secrets]({{< ref "secret-scope.md" >}}) guide for information and examples on how to scope secrets to an application. + +#### Access Control allow lists for service invocation + +See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists. + +### Example sidecar configuration +The following yaml shows an example configuration file that can be applied to an application's Dapr sidecar.
+ +```yml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: myappconfig + namespace: default +spec: + tracing: + samplingRate: "1" + httpPipeline: + handlers: + - name: oauth2 + type: middleware.http.oauth2 + secrets: + scopes: + - storeName: localstore + defaultAccess: allow + deniedSecrets: ["redis-password"] + accessControl: + defaultAction: deny + trustDomain: "public" + policies: + - appId: app1 + defaultAction: deny + trustDomain: 'public' + namespace: "default" + operations: + - name: /op1 + httpVerb: ['POST', 'GET'] + action: deny + - name: /op2/* + httpVerb: ["*"] + action: allow +``` + +## Control-plane configuration +There is a single configuration file called `default` installed with the Dapr control plane system services that applies global settings. This is only set up when Dapr is deployed to Kubernetes. + +### Control-plane configuration settings +A Dapr control plane configuration can configure the following settings: + +| Property | Type | Description | +|------------------|--------|-------------| +| enabled | bool | Set mtls to be enabled or disabled +| allowedClockSkew | string | The extra time to give for certificate expiry based on possible clock skew on a machine. Default is 15 minutes. +| workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours + +See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< ref "security-concept.md" >}}) for more information. 
+ +### Example control plane configuration + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: default + namespace: default +spec: + mtls: + enabled: true + allowedClockSkew: 15m + workloadCertTTL: 24h +``` diff --git a/howto/control-concurrency/README.md b/daprdocs/content/en/operations/configuration/control-concurrency.md similarity index 87% rename from howto/control-concurrency/README.md rename to daprdocs/content/en/operations/configuration/control-concurrency.md index 501f01212..6203bac3b 100644 --- a/howto/control-concurrency/README.md +++ b/daprdocs/content/en/operations/configuration/control-concurrency.md @@ -1,4 +1,10 @@ -# Rate limiting an application +--- +type: docs +title: "How-To: Control concurrency and rate limit applications" +linkTitle: "Concurrency & rate limits" +weight: 2000 +description: "Control how many requests and events will invoke your application simultaneously" +--- A common scenario in distributed computing is to only allow for a given number of requests to execute concurrently. Using Dapr, you can control how many requests and events will invoke your application simultaneously. diff --git a/daprdocs/content/en/operations/configuration/grpc.md b/daprdocs/content/en/operations/configuration/grpc.md new file mode 100644 index 000000000..59c51a4ce --- /dev/null +++ b/daprdocs/content/en/operations/configuration/grpc.md @@ -0,0 +1,56 @@ +--- +type: docs +title: "How-To: Configure Dapr to use gRPC" +linkTitle: "Use gRPC interface" +weight: 5000 +description: "How to configure Dapr to use gRPC for low-latency, high performance scenarios" +--- + +Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. + +You can find a list of auto-generated clients [here]({{< ref sdks >}}). 
+ +The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC. + +In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implement the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto). + +## Configuring Dapr to communicate with an app via gRPC + +### Self hosted + +When running in self-hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app: + +```bash +dapr run --app-protocol grpc --app-port 5005 node app.js +``` +This tells Dapr to communicate with your app via gRPC over port `5005`. + + +### Kubernetes + +On Kubernetes, set the following annotations in your deployment YAML: + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: myapp + namespace: default + labels: + app: myapp +spec: + replicas: 1 + selector: + matchLabels: + app: myapp + template: + metadata: + labels: + app: myapp + annotations: + dapr.io/enabled: "true" + dapr.io/app-id: "myapp" + dapr.io/app-protocol: "grpc" + dapr.io/app-port: "5005" +...
+``` \ No newline at end of file diff --git a/howto/allowlists-serviceinvocation/README.md b/daprdocs/content/en/operations/configuration/invoke-allowlist.md similarity index 65% rename from howto/allowlists-serviceinvocation/README.md rename to daprdocs/content/en/operations/configuration/invoke-allowlist.md index e8d5c0df9..773d6229f 100644 --- a/howto/allowlists-serviceinvocation/README.md +++ b/daprdocs/content/en/operations/configuration/invoke-allowlist.md @@ -1,18 +1,50 @@ -# Apply access control list for service invocation +--- +type: docs +title: "How-To: Apply access control list configuration for service invocation" +linkTitle: "Service Invocation access control" +weight: 4000 +description: "Restrict what operations *calling* applications can perform, via service invocation, on the *called* application" +--- Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application. To limit access to a called applications from specific operations and HTTP verbs from the calling applications, you can define an access control policy specification in configuration. -- [Concepts](#concepts) -- [Policy rules](#policy-rules) -- [Policy priority](#policy-priority) -- [Example scenarios](#example-scenarios) -- [Hello world example](#hello-world-example) - +An access control policy is specified in configuration and applied to the Dapr sidecar of the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications, and if no access control policy is specified, the default behavior is to allow all calling applications to access the called app. ## Concepts + **TrustDomain** - A "trust domain" is a logical group to manage trust relationships.
Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert. -**App Identity** - Dapr generates a [SPIFFE](https://spiffe.io/) id for all applications which is attached in the TLS cert. The SPIFFE id is of the format: **spiffe://\/ns/\/\**. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched. +**App Identity** - Dapr generates a [SPIFFE](https://spiffe.io/) id for all applications which is attached in the TLS cert. The SPIFFE id is of the format: `**spiffe://\/ns/\/\**`. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched. + +## Configuration properties + +The following tables lists the different properties for access control, policies and operations: + +### Access Control + +| Property | Type | Description | +|---------------|--------|-------------| +| defaultAction | string | Global default action when no other policy is matched +| trustDomain | string | Trust domain assigned to the application. Default is "public". 
| policies | string | Policies to determine what operations the calling app can do on the called app + +### Policies + +| Property | Type | Description | +|---------------|--------|-------------| +| appId | string | AppId of the calling app to allow/deny service invocation from +| namespace | string | Namespace value that needs to be matched with the namespace of the calling app +| trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public" +| defaultAction | string | App-level default action in case the app is found but no specific operation is matched +| operations | list | Operations that are allowed from the calling app + +### Operations + +| Property | Type | Description | +|----------|--------|-------------| +| name | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used under a path to match +| httpVerb | list | List of specific HTTP verbs that can be used by the calling app. Wildcard "\*" can be used to match any HTTP verb. Unused for gRPC invocation +| action | string | Access modifier. Accepted values "allow" (default) or "deny" ## Policy rules @@ -31,9 +63,10 @@ The action corresponding to the most specific policy matched takes effect as ord ## Example scenarios -Below are some example scenarios for using access control list for service invocation. See [configuration guidance](../../concepts/configuration/README.md) to understand the available configuration settings for an application sidecar. +Below are some example scenarios for using an access control list for service invocation. See [configuration guidance]({{< ref "configuration-concept.md" >}}) to understand the available configuration settings for an application sidecar.
+ +Scenario 1: Deny access to all apps except where trustDomain = public, namespace = default, appId = app1 -### Scenario 1 : Deny access to all apps except where trustDomain = public, namespace = default, appId = app1 With this configuration, all calling methods with appId = app1 are allowed and all other invocation requests from other applications are denied ```yaml @@ -52,7 +85,8 @@ spec: namespace: "default" ``` -### Scenario 2 : Deny access to all apps except trustDomain = public, namespace = default, appId = app1, operation = op1 +Scenario 2: Deny access to all apps except trustDomain = public, namespace = default, appId = app1, operation = op1 + With this configuration, only method op1 from appId = app1 is allowed and all other method requests from all other apps, including other methods on app1, are denied ```yaml @@ -75,7 +109,7 @@ spec: action: allow ``` -### Scenario 3 : Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched +Scenario 3: Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched With this configuration, the only scenarios below are allowed access and and all other method requests from all other apps, including other methods on app1 or app2, are denied * trustDomain = public, namespace = default, appID = app1, operation = op1, http verb = POST/PUT @@ -109,7 +143,7 @@ spec: action: allow ``` -### Scenario 4 : Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/*, all http verbs +Scenario 4: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/*, all http verbs ```yaml apiVersion: dapr.io/v1alpha1 @@ -131,7 +165,7 @@ spec: action: deny ``` -### Scenario 5 : Allow access to all methods for trustDomain = public, namespace = ns1, appId = app1 and deny access to all methods for trustDomain = public, namespace = ns2, appId = app1 +Scenario 5: Allow access to 
all methods for trustDomain = public, namespace = ns1, appId = app1 and deny access to all methods for trustDomain = public, namespace = ns2, appId = app1 This scenario shows how applications with the same app ID but belonging to different namespaces can be specified @@ -156,7 +190,7 @@ spec: ``` ## Hello world example -This scenario shows how to apply access control to the [hello world](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) or [hello kubernetes](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) samples where a python app invokes a node.js app. You can create and apply these configuration files `nodeappconfig.yaml` and `pythonappconfig.yaml` as described in the [configuration](../../concepts/configuration/README.md) article. +This scenario shows how to apply access control to the [hello world](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) or [hello kubernetes](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) samples where a python app invokes a node.js app. You can create and apply these configuration files `nodeappconfig.yaml` and `pythonappconfig.yaml` as described in the [configuration]({{< ref "configuration-concept.md" >}}) article. The nodeappconfig example below shows how to deny access to the `neworder` method from the `pythonapp`, where the python app is in the `myDomain` trust domain and `default` namespace. The nodeapp is in the `public` trust domain. 
@@ -227,7 +261,3 @@ spec: - name: python image: dapriosamples/hello-k8s-python:edge ``` - -## Related Links - -* [Configuration concepts](../../concepts/configuration/README.md) diff --git a/daprdocs/content/en/operations/configuration/secret-scope.md b/daprdocs/content/en/operations/configuration/secret-scope.md new file mode 100644 index 000000000..ffb095250 --- /dev/null +++ b/daprdocs/content/en/operations/configuration/secret-scope.md @@ -0,0 +1,116 @@ +--- +type: docs +title: "How-To: Limit the secrets that can be read from secret stores" +linkTitle: "Limit secret store access" +weight: 3000 +description: "To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration CRD with restrictive permissions." +--- + +In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components]({{< ref "component-scopes.md">}})), a named secret store component itself can be scoped to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to access only specific secrets. + +Follow [these instructions]({{< ref "configuration-overview.md" >}}) to define a configuration CRD. + +## Configure secrets access + +The `secrets` section under the `Configuration` spec contains the following properties: + +```yml +secrets: + scopes: + - storeName: kubernetes + defaultAccess: allow + allowedSecrets: ["redis-password"] + - storeName: localstore + defaultAccess: allow + deniedSecrets: ["redis-password"] +``` + +The following table lists the properties for secret scopes: + +| Property | Type | Description | +|----------------|--------|-------------| +| storeName | string | Name of the secret store component. storeName must be unique within the list +| defaultAccess | string | Access modifier. 
Accepted values "allow" (default) or "deny" +| allowedSecrets | list | List of secret keys that can be accessed +| deniedSecrets | list | List of secret keys that cannot be accessed + +When an `allowedSecrets` list is present with at least one element, only those secrets defined in the list can be accessed by the application. + +## Permission priority + +The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess`. + +| Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission +|----- | ------- | -----------| ----------| ------------ +| 1 - Only default access | deny/allow | empty | empty | deny/allow +| 2 - Default deny with allowed list | deny | ["s1"] | empty | only "s1" can be accessed +| 3 - Default allow with denied list | allow | empty | ["s1"] | only "s1" cannot be accessed +| 4 - Default allow with allowed list | allow | ["s1"] | empty | only "s1" can be accessed +| 5 - Default deny with denied list | deny | empty | ["s1"] | deny +| 6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed + +## Examples + +### Scenario 1: Deny access to all secrets for a secret store + +In a Kubernetes cluster, the native Kubernetes secret store is added to the Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below: + +Define the following `appconfig.yaml` and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: appconfig +spec: + secrets: + scopes: + - storeName: kubernetes + defaultAccess: deny +``` + +For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview >}}), and add the following annotation to the application pod.
+ +```yaml +dapr.io/config: appconfig +``` + +With this defined, the application no longer has access to the Kubernetes secret store. + +### Scenario 2: Allow access to only certain secrets in a secret store + +To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: appconfig +spec: + secrets: + scopes: + - storeName: vault + defaultAccess: deny + allowedSecrets: ["secret1", "secret2"] +``` + +This example defines configuration for the secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar. + +### Scenario 3: Deny access to certain sensitive secrets in a secret store + +Define the following `config.yaml`: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: appconfig +spec: + secrets: + scopes: + - storeName: vault + defaultAccess: allow # this is the default value, line can be omitted + deniedSecrets: ["secret1", "secret2"] +``` + +The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar. diff --git a/daprdocs/content/en/operations/hosting/_index.md b/daprdocs/content/en/operations/hosting/_index.md new file mode 100644 index 000000000..2dd2b7d81 --- /dev/null +++ b/daprdocs/content/en/operations/hosting/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Hosting options for Dapr" +linkTitle: "Hosting options" +weight: 100 +description: "How to deploy Dapr into your environment."
+---
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/_index.md b/daprdocs/content/en/operations/hosting/kubernetes/_index.md
new file mode 100644
index 000000000..124319d83
--- /dev/null
+++ b/daprdocs/content/en/operations/hosting/kubernetes/_index.md
@@ -0,0 +1,7 @@
+---
+type: docs
+title: "Deploy and run Dapr in Kubernetes mode"
+linkTitle: "Kubernetes"
+weight: 2000
+description: "How to get Dapr up and running on your Kubernetes cluster"
+---
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/cluster/_index.md b/daprdocs/content/en/operations/hosting/kubernetes/cluster/_index.md
new file mode 100644
index 000000000..fbc733d04
--- /dev/null
+++ b/daprdocs/content/en/operations/hosting/kubernetes/cluster/_index.md
@@ -0,0 +1,8 @@
+---
+type: docs
+title: "Kubernetes cluster setup"
+linkTitle: "How-to: Set up clusters"
+weight: 50000
+description: >
+  How to set up Dapr on a Kubernetes cluster.
+---
\ No newline at end of file
diff --git a/getting-started/cluster/setup-aks.md b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md
similarity index 91%
rename from getting-started/cluster/setup-aks.md
rename to daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md
index 00f004340..905ef73b7 100644
--- a/getting-started/cluster/setup-aks.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-aks.md
@@ -1,3 +1,11 @@
+---
+type: docs
+title: "Set up an Azure Kubernetes Service cluster"
+linkTitle: "Azure Kubernetes Service"
+weight: 2000
+description: >
+  How to set up Dapr on an Azure Kubernetes Service cluster.
+---

# Set up an Azure Kubernetes Service cluster
diff --git a/getting-started/cluster/setup-minikube.md b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-minikube.md
similarity index 94%
rename from getting-started/cluster/setup-minikube.md
rename to daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-minikube.md
index ce0fc26d0..4c9f5deff 100644
--- a/getting-started/cluster/setup-minikube.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-minikube.md
@@ -1,3 +1,11 @@
+---
+type: docs
+title: "Set up a Minikube cluster"
+linkTitle: "Minikube"
+weight: 2000
+description: >
+  How to set up Dapr on a Minikube cluster.
+---

# Set up a Minikube cluster
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-annotations.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-annotations.md
new file mode 100644
index 000000000..39dcef64f
--- /dev/null
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-annotations.md
@@ -0,0 +1,35 @@
+---
+type: docs
+title: "Dapr Kubernetes pod annotations spec"
+linkTitle: "Kubernetes annotations"
+weight: 40000
+description: "The annotations available when configuring Dapr in your Kubernetes environment"
+---
+
+The following table shows all the pod spec annotations supported by Dapr.
+
+| Annotation | Description |
+|---------------------------------------------------|-------------|
+| `dapr.io/enabled` | Setting this parameter to `true` injects the Dapr sidecar into the pod
+| `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on
+| `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID
+| `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`.
Default is `info`
+| `dapr.io/config` | Tells Dapr which Configuration CRD to use
+| `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false`
+| `dapr.io/enable-profiling` | Setting this parameter to `true` starts the Dapr profiling server on port `7777`. Default is `false`
+| `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http`
+| `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`
+| `dapr.io/app-ssl` | Tells Dapr to invoke the app over an insecure SSL connection. Applies to both HTTP and gRPC. Default is `false`.
+| `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090`
+| `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
+| `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
+| `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
+| `dapr.io/sidecar-memory-request` | Amount of Memory that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
+| `dapr.io/sidecar-liveness-probe-delay-seconds` | Number of seconds after the sidecar container has started before liveness probe is initiated.
Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` +| `dapr.io/sidecar-liveness-probe-timeout-seconds` | Number of seconds after which the sidecar liveness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` +| `dapr.io/sidecar-liveness-probe-period-seconds` | How often (in seconds) to perform the sidecar liveness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6` +| `dapr.io/sidecar-liveness-probe-threshold` | When the sidecar liveness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unhealthy. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` +| `dapr.io/sidecar-readiness-probe-delay-seconds` | Number of seconds after the sidecar container has started before readiness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` +| `dapr.io/sidecar-readiness-probe-timeout-seconds` | Number of seconds after which the sidecar readiness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` +| `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). 
Default is `6` +| `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` diff --git a/howto/windows-k8s/README.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-hybrid-clusters.md similarity index 91% rename from howto/windows-k8s/README.md rename to daprdocs/content/en/operations/hosting/kubernetes/kubernetes-hybrid-clusters.md index 92eb989fe..c3a90212a 100644 --- a/howto/windows-k8s/README.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-hybrid-clusters.md @@ -1,4 +1,10 @@ -# Deploy to hybrid Linux/Windows Kubernetes clusters +--- +type: docs +title: "How-To: Deploy to hybrid Linux/Windows Kubernetes clusters" +linkTitle: "Hybrid clusters" +weight: 20000 +description: "How to run Dapr apps on Kubernetes clusters with windows nodes" +--- Dapr supports running on kubernetes clusters with windows nodes. You can run your Dapr microservices exclusively on Windows, exclusively on Linux, or a combination of both. This is helpful to users who may be doing a piecemeal migration of a legacy application into a Dapr Kubernetes cluster. @@ -28,7 +34,7 @@ akswin000001 Ready agent 6d v1.17.9 10.240.0. ## Installing the Dapr Control Plane If you are installing using the Dapr CLI or via a helm chart, simply follow the normal deployment procedures: -[Installing Dapr on a Kubernetes cluster](../../getting-started/environment-setup.md#installing-Dapr-on-a-kubernetes-cluster) +[Installing Dapr on a Kubernetes cluster]({{< ref "install-dapr.md#installing-Dapr-on-a-kubernetes-cluster" >}}) Affinity will be automatically set for kubernetes.io/os=linux. This will be sufficient for most users, as Kubernetes requires at least one Linux node pool. 
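The node targeting described above can be made explicit in an application's deployment spec. The following fragment is a sketch only — the name and labels are illustrative, not taken from this repo — showing how a `nodeSelector` pins an app's pods (and therefore their Dapr sidecars) to Windows nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                 # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule only onto Windows nodes
```

Linux-targeted apps in the same cluster would use `kubernetes.io/os: linux` instead.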
@@ -152,8 +158,8 @@ helm uninstall dapr ``` ## Related links - - See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for examples of more advanced configuration via node affinity - - [Get started: Prep Windows for containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment) +- See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for examples of more advanced configuration via node affinity +- [Get started: Prep Windows for containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment) - [Setting up a Windows enabled Kubernetes cluster on Azure AKS](https://docs.microsoft.com/en-us/azure/aks/windows-container-cli) - [Setting up a Windows enabled Kubernetes cluster on AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html) - [Setting up Windows on Google Cloud GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows) diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md new file mode 100644 index 000000000..3af957488 --- /dev/null +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md @@ -0,0 +1,26 @@ +--- +type: docs +title: "Overview of Dapr on Kubernetes" +linkTitle: "Overview" +weight: 10000 +description: "Overview of how to get Dapr running on your Kubernetes cluster" +--- + +Dapr can be configured to run on any [Kubernetes cluster](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes). In Kubernetes the `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster. 
Additionally, the `dapr-sidecar-injector` also injects the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` into **all** the containers in the pod to enable user defined applications to easily communicate with Dapr without hardcoding Dapr port values.
+
+The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview]({{< ref "security-concept.md" >}}).
+
+
+
+Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. To give your service an `id` and `port` known to Dapr, turn on tracing through configuration, and launch the Dapr sidecar container, annotate your Kubernetes deployment like this:
+
+```yml
+  annotations:
+    dapr.io/enabled: "true"
+    dapr.io/app-id: "nodeapp"
+    dapr.io/app-port: "3000"
+    dapr.io/config: "tracing"
+```
+You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/deploy) in the Kubernetes getting started sample.
+
+Read [Kubernetes how to topics](https://github.com/dapr/docs/tree/master/howto#kubernetes-configuration) for more information about setting up Kubernetes and Dapr.
\ No newline at end of file
diff --git a/howto/deploy-k8s-prod/README.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
similarity index 83%
rename from howto/deploy-k8s-prod/README.md
rename to daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
index 2e72be733..7cfde34c6 100644
--- a/howto/deploy-k8s-prod/README.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
@@ -1,15 +1,10 @@
-# Guidelines for production ready deployments on Kubernetes
-
-This section outlines recommendations and practices for deploying Dapr to a Kubernetes cluster in a production ready configuration.
- -## Contents - -- [Cluster capacity requirements](#cluster-capacity-requirements) -- [Sidecar resource requirements](#sidecar-resource-requirements) -- [Deploying Dapr with Helm](#deploying-dapr-with-helm) -- [Upgrading Dapr with Helm](#upgrading-dapr-with-helm) -- [Recommended security configuration](#recommended-security-configuration) -- [Tracing and metrics configuration](#tracing-and-metrics-configuration) +--- +type: docs +title: "Guidelines for production ready deployments on Kubernetes" +linkTitle: "Production guidelines" +weight: 10000 +description: "Recommendations and practices for deploying Dapr to a Kubernetes cluster in a production ready configuration" +--- ## Cluster capacity requirements @@ -19,14 +14,14 @@ The Dapr control plane pods are designed to be lightweight and require the follo *Note: For more info on CPU and Memory resource units and their meaning, see [this](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes) link* | Deployment | CPU | Memory -| ------------| ---- | ------ -| Operator | Limit: 1, Request: 100m | Limit: 200Mi, Request: 20Mi +|-------------|-----|------- +| Operator | Limit: 1, Request: 100m | Limit: 200Mi, Request: 20Mi | Sidecar Injector | Limit: 1, Request: 100m | Limit: 200Mi, Request: 20Mi -| Sentry | Limit: 1, Request: 100m | Limit: 200Mi, Request: 20Mi -| Placement | Limit: 1, Request: 250m | Limit: 500Mi, Request: 100Mi -| Dashboard | Limit: 200m, Request: 50m | Limit: 200Mi, Request: 20Mi +| Sentry | Limit: 1, Request: 100m | Limit: 200Mi, Request: 20Mi +| Placement | Limit: 1, Request: 250m | Limit: 500Mi, Request: 100Mi +| Dashboard | Limit: 200m, Request: 50m | Limit: 200Mi, Request: 20Mi -To change the resource assignments for the Dapr sidecar, see the annotations [here](../configure-k8s/). +To change the resource assignments for the Dapr sidecar, see the annotations [here]({{< ref "kubernetes-annotations.md" >}}). 
The specific annotations related to resource constraints are: * `dapr.io/sidecar-cpu-limit` @@ -48,8 +43,8 @@ The following Dapr control plane deployments are optional: The Dapr sidecar requires the following resources in a production-ready setup: -| CPU | Memory -| --------- | --------- | +| CPU | Memory | +|-----|--------| | Limit: 4, Request: 100m | Limit: 4000Mi, Request: 250Mi *Note: Since Dapr is intended to do much of the I/O heavy lifting for your app, it's expected that the resources given to Dapr enable you to drastically reduce the resource allocations for the application* @@ -59,7 +54,7 @@ The CPU and memory limits above account for the fact that Dapr is intended to do ## Deploying Dapr with Helm When deploying to a production cluster, it's recommended to use Helm. The Dapr CLI installation into a Kubernetes cluster is for a development and test only setup. -You can find information [here](../../getting-started/environment-setup.md#using-helm-(advanced)) on how to deploy Dapr using Helm. +You can find information [here]({{< ref "install-dapr.md#using-helm-advanced" >}}) on how to deploy Dapr using Helm. When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available configuration of the control plane: @@ -69,7 +64,7 @@ helm install dapr dapr/dapr --namespace dapr-system --set global.ha.enabled=true This command will run 3 replicas of each control plane pod with the exception of the Placement pod in the dapr-system namespace. -*Note: The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](../windows-k8s/)* +*Note: The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. 
You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref "kubernetes-hybrid-clusters.md" >}})*

## Upgrading Dapr with Helm

@@ -79,7 +74,7 @@ Dapr supports zero downtime upgrades. The upgrade path includes the following st
2. Updating the Dapr control plane
3. Updating the data plane (Dapr sidecars)

-### 1. Upgrading the CLI
+### Upgrading the CLI

To upgrade the Dapr CLI, [download a release version](https://github.com/dapr/cli/releases) of the CLI that matches the Dapr runtime version. For example, if upgrading to Dapr 0.9.0, download a CLI version of 0.9.x.

@@ -204,24 +199,24 @@ Properly configured, Dapr not only be secured with regards to it's control plane

It is recommended that a production-ready deployment includes the following settings:

-1. Mutual Authentication (mTLS) should be enabled. Note that Dapr has mTLS on by default. For details on how to bring your own certificates, see [here](../configure-mtls/README.md#bringing-your-own-certificates)
+1. Mutual Authentication (mTLS) should be enabled. Note that Dapr has mTLS on by default. For details on how to bring your own certificates, see [here]({{< ref "mtls.md#bringing-your-own-certificates" >}})

-2. Dapr API authentication is enabled (this is the between your application and the Dapr sidecar). To secure the Dapr API from unauthorized access, it is recommended to enable Dapr's token based auth. See [here](../enable-dapr-api-token-based-authentication/README.md) for details
+2. Dapr API authentication is enabled (this is between your application and the Dapr sidecar). To secure the Dapr API from unauthorized access, it is recommended to enable Dapr's token based auth. See [here]({{< ref "api-token.md" >}}) for details

-3. All component YAMLs should have secret data configured in a secret store and not hard-coded in the YAML file.
See [here](../../concepts/secrets/component-secrets.md) on how to use secrets with Dapr components
+3. All component YAMLs should have secret data configured in a secret store and not hard-coded in the YAML file. See [here]({{< ref "component-secrets.md" >}}) on how to use secrets with Dapr components

4. The Dapr control plane is installed on a separate namespace such as `dapr-system`, and never into the `default` namespace

-Dapr also supports scoping components for certain applications. This is not a required practice, and can be enabled according to your Sec-Ops needs. See [here](../components-scopes/README.md) for more info.
+Dapr also supports scoping components for certain applications. This is not a required practice, and can be enabled according to your Sec-Ops needs. See [here]({{< ref "component-scopes.md" >}}) for more info.

## Tracing and metrics configuration

Dapr has tracing and metrics enabled by default.

-To configure a tracing backend for Dapr visit [this](../diagnose-with-tracing) link.
+To configure a tracing backend for Dapr, visit [this]({{< ref "setup-tracing.md" >}}) link.

For metrics, Dapr exposes a Prometheus endpoint listening on port 9090 which can be scraped by Prometheus.

It is *recommended* that you set up distributed tracing and metrics for your applications and the Dapr control plane in production.

If you already have your own observability set-up, you can disable tracing and metrics for Dapr.

-To setup Prometheus, Grafana and other monitoring tools with Dapr, visit [this](../setup-monitoring-tools) link.
+To set up Prometheus, Grafana, and other monitoring tools with Dapr, visit [this]({{< ref "monitoring" >}}) link.
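As an illustration, the sidecar resource annotations referenced in the production guidance above might be combined in a deployment template like this. The app id is a placeholder, and the values simply mirror the production-ready table earlier in this page — they are a sketch, not a recommendation for every workload:

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "myapp"                  # placeholder app id
  dapr.io/sidecar-cpu-limit: "4"
  dapr.io/sidecar-memory-limit: "4000Mi"
  dapr.io/sidecar-cpu-request: "100m"
  dapr.io/sidecar-memory-request: "250Mi"
```

Tune these values to your own workload's measured usage rather than copying them directly.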
diff --git a/daprdocs/content/en/operations/hosting/self-hosted/_index.md b/daprdocs/content/en/operations/hosting/self-hosted/_index.md
new file mode 100644
index 000000000..9c70df3ad
--- /dev/null
+++ b/daprdocs/content/en/operations/hosting/self-hosted/_index.md
@@ -0,0 +1,7 @@
+---
+type: docs
+title: "Run Dapr in self-hosted mode"
+linkTitle: "Self-Hosted"
+weight: 1000
+description: "How to get Dapr up and running in your local environment"
+---
\ No newline at end of file
diff --git a/howto/self-hosted-no-docker/README.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-no-docker.md
similarity index 74%
rename from howto/self-hosted-no-docker/README.md
rename to daprdocs/content/en/operations/hosting/self-hosted/self-hosted-no-docker.md
index e429625f4..291178972 100644
--- a/howto/self-hosted-no-docker/README.md
+++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-no-docker.md
@@ -1,10 +1,16 @@
-# Self hosted mode without containers
+---
+type: docs
+title: "How-To: Run Dapr in self-hosted mode without Docker"
+linkTitle: "Run without Docker"
+weight: 30000
+description: "How to deploy and run Dapr in self-hosted mode without Docker installed on the local machine"
+---

This article provides guidance on running Dapr in self-hosted mode without Docker.

## Prerequisites

-- [Dapr CLI](../../getting-started/environment-setup.md#installing-dapr-cli)
+- [Dapr CLI]({{< ref "install-dapr.md#installing-dapr-cli" >}})

## Initialize Dapr without containers

@@ -14,16 +20,16 @@ The Dapr CLI provides an option to initialize Dapr using slim init, without the
dapr init --slim
```

-In this mode two different binaries are installed `daprd` and `placement`. The `placement` binary is needed to enable [actors](../../concepts/actors/README.md) in a Dapr self-hosted installation.
+In this mode two different binaries are installed: `daprd` and `placement`.
The `placement` binary is needed to enable [actors]({{< ref "actors-overview.md" >}}) in a Dapr self-hosted installation.

-In this mode no default components such as Redis are installed for state management or pub/sub. This means, that aside from [Service Invocation](../../concepts/service-invocation/README.md), no other building block functionality is available on install out of the box. Users are free to setup their own environment and custom components. Furthermore, actor based service invocation is possible if a state store is configured as explained in the following sections.
+In this mode no default components such as Redis are installed for state management or pub/sub. This means that, aside from [Service Invocation]({{< ref "service-invocation-overview.md" >}}), no other building block functionality is available on install out of the box. Users are free to set up their own environment and custom components. Furthermore, actor based service invocation is possible if a state store is configured as explained in the following sections.

## Service invocation

See [this sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) for an example on how to perform service invocation in this mode.

## Enabling state management or pub/sub

-See configuring Redis in self hosted mode [without docker](../../howto/configure-redis/README.md#Self-Hosted-Mode-without-Containers) to enable a local state store or pub/sub broker for messaging.
+See configuring Redis in self-hosted mode [without Docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.

## Enabling actors

@@ -60,4 +66,4 @@ INFO[0450] host removed: 192.168.1.6 instance=host.localhost

## Cleanup

-Follow the uninstall [instructions](../../getting-started/environment-setup.md#Uninstall-Dapr-in-self-hosted-mode-(without-docker)) to remove the binaries.
+Follow the uninstall [instructions]({{< ref "install-dapr.md#uninstall-dapr-in-a-self-hosted-mode" >}}) to remove the binaries.
diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-overview.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-overview.md
new file mode 100644
index 000000000..d149f91ea
--- /dev/null
+++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-overview.md
@@ -0,0 +1,17 @@
+---
+type: docs
+title: "Overview of Dapr in self-hosted mode"
+linkTitle: "Overview"
+weight: 10000
+description: "Overview of how to get Dapr running on your local machine"
+---
+
+Dapr can be configured to run on your local developer machine in self-hosted mode. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
+
+In self-hosted mode, Redis is running locally in a container and is configured to serve as both the default component for state store and for pub/sub. A Zipkin container is also configured for diagnostics and tracing. After running `dapr init`, see the `$HOME/.dapr/components` directory (Mac/Linux) or `%USERPROFILE%\.dapr\components` on Windows.
+
+The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview]({{< ref "actors-overview.md" >}}).
+
+
+
+You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine.
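As a sketch of that workflow, launching an app together with its sidecar looks like the following; the app id, port, and application command here are placeholders, not values from this repo:

```bash
# Launch an app with a Dapr sidecar, using the default components in ~/.dapr/components
dapr run --app-id myapp --app-port 3000 node app.js
```

The sidecar picks up the locally initialized Redis and Zipkin components automatically.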
\ No newline at end of file diff --git a/howto/run-with-docker/README.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md similarity index 86% rename from howto/run-with-docker/README.md rename to daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md index 6aae859f7..28afed63b 100644 --- a/howto/run-with-docker/README.md +++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md @@ -1,4 +1,11 @@ -# Run with Docker +--- +type: docs +title: "How-To: Run Dapr in self-hosted mode with Docker" +linkTitle: "Run with Docker" +weight: 20000 +description: "How to deploy and run Dapr in self-hosted mode using Docker" +--- + This article provides guidance on running Dapr with Docker outside of Kubernetes. There are a number of different configurations in which you may wish to run Dapr with Docker that are documented below. ## Prerequisites @@ -28,7 +35,7 @@ There are published Docker images for each of the Dapr components available on [ - `major.minor.patch-arm`: A release version for ARM. - `major.minor.patch-rc.iteration-arm`: A release candidate for ARM. -## Run Dapr in a Docker container with an app as a process +## Run app as a process > For development purposes ONLY If you are running Dapr in a Docker container and your app as a process on the host machine, then you need to configure @@ -42,7 +49,7 @@ Then you can run your app on the host and they should connect over the localhost However, if you are not running your Docker daemon on a Linux host, it is recommended you follow the steps below to run both your app and the [Dapr runtime in Docker containers using Docker Compose](#run-dapr-in-a-docker-container-using-docker-compose). -## Run Dapr and an app in a single Docker container +## Run app and Dapr in a single Docker container > For development purposes ONLY It is not recommended to run both the Dapr runtime and an application inside the same container. 
However, it is possible to do so for local development scenarios.

@@ -71,7 +78,7 @@ CMD ["run", "--app-id", "nodeapp", "--app-port", "3000", "node", "app.js"]

Remember that if Dapr needs to communicate with other components i.e. Redis, these also need to be made accessible to it.

-## Run Dapr in a Docker container on a Docker network
+## Run on a Docker network

If you have multiple instances of Dapr running in Docker containers and want them to be able to communicate with each other i.e. for service invocation, then you'll need to create a shared Docker network and make sure those Dapr containers are attached to it.

@@ -86,7 +93,7 @@ docker run --net=my-dapr-network ...
```
Each container will receive a unique IP on that network and be able to communicate with other containers on that network.

-## Run Dapr in a Docker container using Docker-Compose
+## Run using Docker-Compose

[Docker Compose](https://docs.docker.com/compose/) can be used to define multi-container application configurations. If you wish to run multiple apps with Dapr sidecars locally without Kubernetes then it is recommended to use a Docker Compose definition (`docker-compose.yml`).

@@ -135,14 +142,8 @@ services:

To further learn how to run Dapr with Docker Compose, see the [Docker-Compose Sample](https://github.com/dapr/samples/tree/master/hello-docker-compose).

-## Run Dapr in a Docker container on Kubernetes
+## Run on Kubernetes

If your deployment target is Kubernetes then you're probably better off running your application and Dapr sidecars directly on a Kubernetes platform. Running Dapr on Kubernetes is a first class experience and is documented separately.
Please refer to the -following references: -- [Setup Dapr on a Kubernetes cluster](https://github.com/dapr/docs/blob/ea5b1918778a47555dbdccff0ed6c5b987ed10cf/getting-started/environment-setup.md#installing-dapr-on-a-kubernetes-cluster) -- [Hello Kubernetes Sample](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) -- [Configuring the Dapr sidecar on Kubernetes](https://github.com/dapr/docs/blob/c88d247a2611d6824d41bb5b6adfeb38152dbbc6/howto/configure-k8s/README.md) -- [Running Dapr in Kubernetes mode](https://github.com/dapr/docs/blob/a7668cab5e16d12f364a42d2fe7d75933c6398e9/overview/README.md#running-dapr-in-kubernetes-mode) +[Dapr on Kubernetes docs]({{< ref "kubernetes-overview.md" >}}) -## Related links -- [Docker-Compose Sample](https://github.com/dapr/samples/hello-docker-compose) diff --git a/daprdocs/content/en/operations/monitoring/_index.md b/daprdocs/content/en/operations/monitoring/_index.md new file mode 100644 index 000000000..9f5489d92 --- /dev/null +++ b/daprdocs/content/en/operations/monitoring/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Monitor your application with Dapr" +linkTitle: "Monitoring" +weight: 400 +description: "How to monitor your application using Dapr integrations" +--- \ No newline at end of file diff --git a/howto/setup-monitoring-tools/setup-azure-monitor.md b/daprdocs/content/en/operations/monitoring/azure-monitor.md similarity index 86% rename from howto/setup-monitoring-tools/setup-azure-monitor.md rename to daprdocs/content/en/operations/monitoring/azure-monitor.md index 9bb9194e5..a15ce60ad 100644 --- a/howto/setup-monitoring-tools/setup-azure-monitor.md +++ b/daprdocs/content/en/operations/monitoring/azure-monitor.md @@ -1,6 +1,10 @@ -# Set up Azure Monitor to search logs and collect metrics - -This document describes how to enable Dapr metrics and logs with Azure Monitor for Azure Kubernetes Service (AKS). 
+---
+type: docs
+title: "How-To: Set up Azure Monitor to search logs and collect metrics"
+linkTitle: "Azure Monitor"
+weight: 2000
+description: "Enable Dapr metrics and logs with Azure Monitor for Azure Kubernetes Service (AKS)"
+---

## Prerequisites

@@ -9,12 +13,6 @@ This document describes how to enable Dapr metrics and logs with Azure Monitor f
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [Helm 3](https://helm.sh/)

-## Contents
-
- - [Enable Prometheus metric scrape using config map](#enable-prometheus-metric-scrape-using-config-map)
- - [Install Dapr with JSON formatted logs](#install-dapr-with-json-formatted-logs)
- - [Search metrics and logs with Azure Monitor](#Search-metrics-and-logs-with-azure-monitor)
-
## Enable Prometheus metric scrape using config map

1. Make sure that omsagents are running

@@ -32,9 +30,9 @@ omsagent-smtk7 1/1 Runnin

2. Apply config map to enable Prometheus metrics endpoint scrape.

-You can use [azm-config-map.yaml](./azm-config-map.yaml) to enable prometheus metrics endpoint scrape.
+You can use [azm-config-map.yaml](/docs/azm-config-map.yaml) to enable prometheus metrics endpoint scrape.

-If you installed Dapr to the different namespace, you need to change the `monitor_kubernetes_pod_namespaces` array values. For example;
+If you installed Dapr to a different namespace, you need to change the `monitor_kubernetes_pod_namespaces` array values. For example:

```yaml
...
@@ -50,7 +48,7 @@ If you installed Dapr to the different namespace, you need to change the `monito Apply config map: -``` +```bash kubectl apply -f ./azm-config.map.yaml ``` diff --git a/howto/setup-monitoring-tools/setup-fluentd-es-kibana.md b/daprdocs/content/en/operations/monitoring/fluentd.md similarity index 71% rename from howto/setup-monitoring-tools/setup-fluentd-es-kibana.md rename to daprdocs/content/en/operations/monitoring/fluentd.md index 041d7cf0e..134ec35a4 100644 --- a/howto/setup-monitoring-tools/setup-fluentd-es-kibana.md +++ b/daprdocs/content/en/operations/monitoring/fluentd.md @@ -1,6 +1,10 @@ -# Set up Fluentd, Elastic search and Kibana in Kubernetes - -This document descriebs how to install Fluentd, Elastic Search, and Kibana to search logs in Kubernetes +--- +type: docs +title: "How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes" +linkTitle: "FluentD" +weight: 1000 +description: "How to install Fluentd, Elastic Search, and Kibana to search logs in Kubernetes" +--- ## Prerequisites @@ -8,33 +12,27 @@ This document descriebs how to install Fluentd, Elastic Search, and Kibana to se - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - [Helm 3](https://helm.sh/) -## Contents - - - [Install Fluentd, Elastic Search, and Kibana](#install-fluentd-elastic-search-and-kibana) - - [Install Fluentd](#install-fluentd) - - [Install Dapr with JSON formatted logs](#install-dapr-with-json-formatted-logs) - - [Search logs](#search-logs) ## Install Elastic search and Kibana 1. Create namespace for monitoring tool and add Helm repo for Elastic Search -```bash -kubectl create namespace dapr-monitoring -``` + ```bash + kubectl create namespace dapr-monitoring + ``` 2. Add Elastic helm repo -```bash -helm repo add elastic https://helm.elastic.co -helm repo update -``` + ```bash + helm repo add elastic https://helm.elastic.co + helm repo update + ``` 3. 
Install Elastic Search using Helm By default the chart creates 3 replicas which must be on different nodes. If your cluster has less than 3 nodes, specify a lower number of replicas. For example, this sets it to 1: -``` +```bash helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1 ``` @@ -44,40 +42,41 @@ Otherwise: helm install elasticsearch elastic/elasticsearch -n dapr-monitoring ``` -If you are using minikube or want to disable persistent volumes for development purposes, you can disable it by using the following command. +If you are using minikube or want to disable persistent volumes for development purposes, you can disable it by using the following command: + ```bash helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false --replicas=1 ``` 4. Install Kibana -```bash -helm install kibana elastic/kibana -n dapr-monitoring -``` + ```bash + helm install kibana elastic/kibana -n dapr-monitoring + ``` 5. Validation -Ensure Elastic Search and Kibana are running in your Kubernetes cluster. - -```bash -kubectl get pods -n dapr-monitoring -NAME READY STATUS RESTARTS AGE -elasticsearch-master-0 1/1 Running 0 6m58s -kibana-kibana-95bc54b89-zqdrk 1/1 Running 0 4m21s -``` + Ensure Elastic Search and Kibana are running in your Kubernetes cluster. + + ```bash + kubectl get pods -n dapr-monitoring + NAME READY STATUS RESTARTS AGE + elasticsearch-master-0 1/1 Running 0 6m58s + kibana-kibana-95bc54b89-zqdrk 1/1 Running 0 4m21s + ``` ## Install Fluentd 1. 
Install config map and Fluentd as a daemonset -Navigate to the following path if you're not already there (the one this document is in): - -``` -docs/howto/setup-monitoring-tools -``` +Download these config files: +- [fluentd-config-map.yaml](/docs/fluentd-config-map.yaml) +- [fluentd-dapr-with-rbac.yaml](/docs/fluentd-dapr-with-rbac.yaml) > Note: If you already have Fluentd running in your cluster, please enable the nested json parser to parse JSON formatted log from Dapr. +Apply the configurations to your cluster: + ```bash kubectl apply -f ./fluentd-config-map.yaml kubectl apply -f ./fluentd-dapr-with-rbac.yaml @@ -99,11 +98,11 @@ fluentd-sdrld 1/1 Running 0 14s 1. Install Dapr with enabling JSON-formatted logs -```bash -helm repo add dapr https://dapr.github.io/helm-charts/ -helm repo update -helm install dapr dapr/dapr --namespace dapr-system --set global.logAsJson=true -``` + ```bash + helm repo add dapr https://dapr.github.io/helm-charts/ + helm repo update + helm install dapr dapr/dapr --namespace dapr-system --set global.logAsJson=true + ``` 2. Enable JSON formatted log in Dapr sidecar @@ -152,37 +151,37 @@ Handling connection for 5601 3. Click Management -> Index Management -![kibana management](./img/kibana-1.png) +![kibana management](/images/kibana-1.png) 4. Wait until dapr-* is indexed. -![index log](./img/kibana-2.png) +![index log](/images/kibana-2.png) 5. Once dapr-* indexed, click Kibana->Index Patterns and Create Index Pattern -![create index pattern](./img/kibana-3.png) +![create index pattern](/images/kibana-3.png) 6. Define index pattern - type `dapr*` in index pattern -![define index pattern](./img/kibana-4.png) +![define index pattern](/images/kibana-4.png) 7. Select time stamp filed: `@timestamp` -![timestamp](./img/kibana-5.png) +![timestamp](/images/kibana-5.png) 8. Confirm that `scope`, `type`, `app_id`, `level`, etc are being indexed. > Note: if you cannot find the indexed field, please wait. 
It depends on the volume of data and the resource size where Elasticsearch is running.

-![indexing](./img/kibana-6.png)
+![indexing](/images/kibana-6.png)

 9. Click `discover` icon and search `scope:*`

 > Note: it would take some time to make log searchable based on the data volume and resource.

-![discover](./img/kibana-7.png)
+![discover](/images/kibana-7.png)

-# References
+## References

 * [Fluentd for Kubernetes](https://docs.fluentd.org/v/0.12/articles/kubernetes-fluentd)
 * [Elastic search helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch)
diff --git a/daprdocs/content/en/operations/monitoring/grafana.md b/daprdocs/content/en/operations/monitoring/grafana.md
new file mode 100644
index 000000000..d1c21917f
--- /dev/null
+++ b/daprdocs/content/en/operations/monitoring/grafana.md
@@ -0,0 +1,140 @@
+---
+type: docs
+title: "How-To: Observe metrics with Grafana"
+linkTitle: "Grafana"
+weight: 5000
+description: "How to view Dapr metrics in a Grafana dashboard."
+---
+
+## Pre-requisites
+
+- [Setup Prometheus]({{< ref prometheus.md >}})
+
+## Setup on Kubernetes
+
+### Install Grafana
+
+1. Install Grafana
+
+   ```bash
+   helm install grafana stable/grafana -n dapr-monitoring
+   ```
+
+   If you are a Minikube user or want to disable persistent volumes for development purposes, use the following command instead:
+
+   ```bash
+   helm install grafana stable/grafana -n dapr-monitoring --set persistence.enabled=false
+   ```
+
+2. Retrieve the admin password for Grafana login
+
+   ```bash
+   kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
+   cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%
+   ```
+
+   {{% alert title="Note" color="info" %}}
+   Remove the `%` character from the password that this command returns. For example, the admin password is `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1`.
+   {{% /alert %}}
+
+3.
Validation

+   Ensure Grafana is running in your cluster (see the last line below)
+
+   ```bash
+   kubectl get pods -n dapr-monitoring
+
+   NAME                                                READY   STATUS    RESTARTS   AGE
+   dapr-prom-kube-state-metrics-9849d6cc6-t94p8        1/1     Running   0          4m58s
+   dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8   2/2     Running   0          4m58s
+   dapr-prom-prometheus-node-exporter-5jh8p            1/1     Running   0          4m58s
+   dapr-prom-prometheus-node-exporter-88gbg            1/1     Running   0          4m58s
+   dapr-prom-prometheus-node-exporter-bjp9f            1/1     Running   0          4m58s
+   dapr-prom-prometheus-pushgateway-688665d597-h4xx2   1/1     Running   0          4m58s
+   dapr-prom-prometheus-server-694fd8d7c-q5d59         2/2     Running   0          4m58s
+   grafana-c49889cff-x56vj                             1/1     Running   0          5m10s
+   ```
+
+### Configure Prometheus as data source
+First, connect Prometheus to Grafana as a data source.
+
+1. Port-forward to svc/grafana
+
+   ```bash
+   $ kubectl port-forward svc/grafana 8080:80 -n dapr-monitoring
+   Forwarding from 127.0.0.1:8080 -> 3000
+   Forwarding from [::1]:8080 -> 3000
+   Handling connection for 8080
+   Handling connection for 8080
+   ```
+
+2. Browse `http://localhost:8080`
+
+3. Log in with the username `admin` and the password retrieved above
+
+4. Click Configuration Settings -> Data Sources
+
+   ![data source](/images/grafana-datasources.png)
+
+5. Add Prometheus as a data source.
+
+   ![add data source](/images/grafana-datasources.png)
+
+6. Enter the Prometheus server address in your cluster.
+
+   You can get the Prometheus server address by running the following command.
+
+   ```bash
+   kubectl get svc -n dapr-monitoring
+
+   NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
+   dapr-prom-kube-state-metrics         ClusterIP   10.0.174.177   <none>        8080/TCP            7d9h
+   dapr-prom-prometheus-alertmanager    ClusterIP   10.0.255.199   <none>        80/TCP              7d9h
+   dapr-prom-prometheus-node-exporter   ClusterIP   None           <none>        9100/TCP            7d9h
+   dapr-prom-prometheus-pushgateway     ClusterIP   10.0.190.59    <none>        9091/TCP            7d9h
+   dapr-prom-prometheus-server          ClusterIP   10.0.172.191   <none>        80/TCP              7d9h
+   elasticsearch-master                 ClusterIP   10.0.36.146    <none>        9200/TCP,9300/TCP   7d10h
+   elasticsearch-master-headless        ClusterIP   None           <none>        9200/TCP,9300/TCP   7d10h
+   grafana                              ClusterIP   10.0.15.229    <none>        80/TCP              5d5h
+   kibana-kibana                        ClusterIP   10.0.188.224   <none>        5601/TCP            7d10h
+
+   ```
+
+   In this how-to, the server is `dapr-prom-prometheus-server`.
+
+   You now need to set up the Prometheus data source with the following settings:
+
+   - Name: `Dapr`
+   - HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring`
+   - Default: On
+
+   ![prometheus server](/images/grafana-prometheus-dapr-server-url.png)
+
+7. Click the `Save & Test` button to verify that the connection succeeded.
+
+## Import dashboards in Grafana
+Next, import the Dapr dashboards into Grafana.
+
+In the upper left, click the "+" then "Import".
+
+You can now import the built-in [Grafana dashboard templates](https://github.com/dapr/dapr/tree/master/grafana).
+
+The Grafana dashboards are published as [release assets](https://github.com/dapr/dapr/releases) for each Dapr release. You can find `grafana-actor-dashboard.json`, `grafana-sidecar-dashboard.json` and `grafana-system-services-dashboard.json` among the release assets.
+
+![upload json](/images/grafana-uploadjson.png)
+
+Find the dashboard that you imported and enjoy!
+
+![upload json](/images/system-service-dashboard.png)
+
+## References
+
+* [Set up Prometheus and Grafana]({{< ref grafana.md >}})
+* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
+* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus)
+* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator)
+* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
+
+## Example
+ \ No newline at end of file
diff --git a/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md b/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md
new file mode 100644
index 000000000..9645bd474
--- /dev/null
+++ b/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md
@@ -0,0 +1,91 @@
+---
+type: docs
+title: "Using OpenTelemetry Collector to collect traces"
+linkTitle: "OpenTelemetry"
+weight: 1000
+description: "How to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector."
+---
+
+Dapr can integrate with the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) using the OpenCensus API. This guide walks through an example that uses Dapr to push trace events to Azure Application Insights through the OpenTelemetry Collector.
+
+## Requirements
+
+An installation of Dapr on Kubernetes.
+
+## How to configure distributed tracing with Application Insights
+
+### Setup Application Insights
+
+1. First, you'll need an Azure account. See instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
+2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
+3. Get the Application Insights Instrumentation key from your Application Insights page.
+
+### Run OpenTelemetry Collector to push to your Application Insights instance
+
+First, save your Application Insights Instrumentation Key in an environment variable:
+```bash
+export APP_INSIGHTS_KEY=
+```
+
+Next, install the OpenTelemetry Collector to your Kubernetes cluster to push events to your Application Insights instance:
+
+1. Check out the file [open-telemetry-collector.yaml](/docs/open-telemetry-collector/open-telemetry-collector.yaml) and replace the instrumentation key placeholder in it with your `APP_INSIGHTS_KEY`.
+
+2. Apply the configuration with `kubectl apply -f open-telemetry-collector.yaml`.
+
+Next, set up both a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.
+
+1. Create a collector-component.yaml file with this [content](/docs/open-telemetry-collector/collector-component.yaml)
+
+2. Apply the configuration with `kubectl apply -f collector-component.yaml`.
+
+### Deploy your app with tracing
+
+When running in Kubernetes mode, apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  ...
+spec:
+  ...
+  template:
+    metadata:
+      ...
+      annotations:
+        dapr.io/enabled: "true"
+        dapr.io/app-id: "MyApp"
+        dapr.io/app-port: "8080"
+        dapr.io/config: "appconfig"
+```
+
+Some of the quickstarts, such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/distributed-calculator), already configure these settings, so if you are using one of those no additional settings are needed.
+
+That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
+
+> **NOTE**: You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.
+
+Deploy and run some applications.
After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
+
+![Application map](/images/open-telemetry-app-insights.png)
+
+> **NOTE**: Only operations going through the Dapr API exposed by the Dapr sidecar (e.g. service invocation or event publishing) are displayed in the Application Map topology.
+
+## Tracing configuration
+
+The `tracing` section under the `Configuration` spec contains the following properties:
+
+```yml
+tracing:
+  samplingRate: "1"
+```
+
+The following table lists the different properties.
+
+| Property     | Type   | Description |
+|--------------|--------|-------------|
+| samplingRate | string | Set sampling rate for tracing to be enabled or disabled. |
+
+`samplingRate` is used to enable or disable tracing. To disable tracing, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive; the value determines the probability that a given trace span is sampled. `samplingRate: "1"` always samples the traces. By default, the sampling rate is 1 in 10,000 (0.0001).
diff --git a/daprdocs/content/en/operations/monitoring/prometheus.md b/daprdocs/content/en/operations/monitoring/prometheus.md
new file mode 100644
index 000000000..3e60d02a1
--- /dev/null
+++ b/daprdocs/content/en/operations/monitoring/prometheus.md
@@ -0,0 +1,120 @@
+---
+type: docs
+title: "How-To: Observe metrics with Prometheus"
+linkTitle: "Prometheus"
+weight: 4000
+description: "Use Prometheus to collect time-series data relating to the execution of the Dapr runtime itself"
+---
+
+## Setup Prometheus Locally
+To run Prometheus on your local machine, you can either [install and run it as a process](#install) or run it as a [Docker container](#run-as-container).
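Either way, the scrape target Prometheus needs is the Dapr sidecar's metrics endpoint. A minimal sketch of deriving that target, assuming the default metrics port of 9090 (adjust if you started Dapr with a different `--metrics-port`):

```shell
# Sketch: build the scrape target for the Prometheus configuration below.
# 9090 is assumed to be the Dapr sidecar's metrics port; override
# DAPR_METRICS_PORT if your sidecar uses a different one.
DAPR_METRICS_PORT="${DAPR_METRICS_PORT:-9090}"
TARGET="localhost:${DAPR_METRICS_PORT}"
echo "scrape target: ${TARGET}"

# With a sidecar running you could verify the endpoint responds:
# curl "http://${TARGET}/metrics"
```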
+
+### Install
+{{% alert title="Note" color="warning" %}}
+You don't need to install Prometheus if you plan to run it as a Docker container. Please refer to the [Container](#run-as-container) instructions.
+{{% /alert %}}
+
+To install Prometheus, follow the steps outlined [here](https://prometheus.io/docs/prometheus/latest/getting_started/) for your OS.
+
+### Configure
+Now that you've installed Prometheus, you need to create a configuration.
+
+Below is an example Prometheus configuration; save it to a file, e.g. `/tmp/prometheus.yml` or `C:\Temp\prometheus.yml`:
+```yaml
+global:
+  scrape_interval: 15s # By default, scrape targets every 15 seconds.
+
+# A scrape configuration containing exactly one endpoint to scrape:
+# Here it's the Dapr sidecar's metrics endpoint.
+scrape_configs:
+  - job_name: 'dapr'
+
+    # Override the global default and scrape targets from this job every 5 seconds.
+    scrape_interval: 5s
+
+    static_configs:
+      - targets: ['localhost:9090'] # Replace with Dapr metrics port if not default
+```
+
+### Run as Process
+Run Prometheus with your configuration to start it collecting metrics from the specified targets.
+```bash
+./prometheus --config.file=/tmp/prometheus.yml --web.listen-address=:8080
+```
+> We change the port so it doesn't conflict with Dapr's own metrics endpoint.
+
+If you are not currently running a Dapr application, the target will show as offline. In order to start
+collecting metrics you must start Dapr with the metrics port matching the one provided as the target in the configuration.
+
+Once Prometheus is running, you'll be able to visit its dashboard by visiting `http://localhost:8080`.
+
+### Run as Container
+To run Prometheus as a Docker container on your local machine, first ensure you have [Docker](https://docs.docker.com/install/) installed and running.
+
+Then you can run Prometheus as a Docker container using:
+```bash
+docker run \
+    --net=host \
+    -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \
+    prom/prometheus --config.file=/etc/prometheus/prometheus.yml --web.listen-address=:8080
+```
+`--net=host` ensures that the Prometheus instance will be able to connect to any Dapr instances running on the host machine. If you plan to run your Dapr apps in containers as well, you'll need to run them on a shared Docker network and update the configuration with the correct target address.
+
+Once Prometheus is running, you'll be able to visit its dashboard by visiting `http://localhost:8080`.
+
+## Setup Prometheus on Kubernetes
+
+### Prerequisites
+
+- Kubernetes (> 1.14)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+- [Helm 3](https://helm.sh/)
+
+### Install Prometheus
+
+1. First, create a namespace that can be used to deploy the Grafana and Prometheus monitoring tools:
+
+```bash
+kubectl create namespace dapr-monitoring
+```
+
+2. Install Prometheus
+
+```bash
+helm repo add stable https://kubernetes-charts.storage.googleapis.com
+helm repo update
+helm install dapr-prom stable/prometheus -n dapr-monitoring
+```
+
+If you are a Minikube user or want to disable persistent volumes for development purposes, use the following command instead:
+
+```bash
+helm install dapr-prom stable/prometheus -n dapr-monitoring --set alertmanager.persistentVolume.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
+```
+
+3. Validation
+
+Ensure Prometheus is running in your cluster.
+ +```bash +kubectl get pods -n dapr-monitoring + +NAME READY STATUS RESTARTS AGE +dapr-prom-kube-state-metrics-9849d6cc6-t94p8 1/1 Running 0 4m58s +dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8 2/2 Running 0 4m58s +dapr-prom-prometheus-node-exporter-5jh8p 1/1 Running 0 4m58s +dapr-prom-prometheus-node-exporter-88gbg 1/1 Running 0 4m58s +dapr-prom-prometheus-node-exporter-bjp9f 1/1 Running 0 4m58s +dapr-prom-prometheus-pushgateway-688665d597-h4xx2 1/1 Running 0 4m58s +dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 4m58s +``` + +## Example + + +## References + +* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator) +* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus) +* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator) +* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/howto/diagnose-with-tracing/zipkin.md b/daprdocs/content/en/operations/monitoring/zipkin.md similarity index 87% rename from howto/diagnose-with-tracing/zipkin.md rename to daprdocs/content/en/operations/monitoring/zipkin.md index 78366321e..d571b9dd6 100644 --- a/howto/diagnose-with-tracing/zipkin.md +++ b/daprdocs/content/en/operations/monitoring/zipkin.md @@ -1,9 +1,11 @@ -# Set up Zipkin for distributed tracing - -- [Configure self hosted mode](#Configure-self-hosted-mode) -- [Configure Kubernetes](#Configure-Kubernetes) -- [Tracing configuration](#Tracing-Configuration) - +--- +type: docs +title: "How-To: Set up Zipkin for distributed tracing" +linkTitle: "Zipkin" +weight: 3000 +description: "Set up Zipkin for distributed tracing" +type: docs +--- ## Configure self hosted mode @@ -132,10 +134,9 @@ To view traces, connect to the Zipkin Service and open the UI: kubectl port-forward svc/zipkin 9411:9411 ``` -In your browser, go to ```http://localhost:9411``` and you will see the Zipkin UI. 
+In your browser, go to `http://localhost:9411` and you will see the Zipkin UI.

-![zipkin](../../images/zipkin_ui.png)
+![zipkin](/images/zipkin_ui.png)

 ## References
-
-* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md)
+- [W3C distributed tracing]({{< ref w3c-tracing >}})
diff --git a/daprdocs/content/en/operations/security/_index.md b/daprdocs/content/en/operations/security/_index.md
new file mode 100644
index 000000000..08801c900
--- /dev/null
+++ b/daprdocs/content/en/operations/security/_index.md
@@ -0,0 +1,7 @@
+---
+type: docs
+title: "Securing Dapr deployments"
+linkTitle: "Security"
+weight: 500
+description: "Best practices and instructions on how to secure your Dapr applications"
+--- \ No newline at end of file
diff --git a/howto/enable-dapr-api-token-based-authentication/README.md b/daprdocs/content/en/operations/security/api-token.md
similarity index 93%
rename from howto/enable-dapr-api-token-based-authentication/README.md
rename to daprdocs/content/en/operations/security/api-token.md
index ddc7d08ea..e6d03c2d7 100644
--- a/howto/enable-dapr-api-token-based-authentication/README.md
+++ b/daprdocs/content/en/operations/security/api-token.md
@@ -1,4 +1,10 @@
-# Enable Dapr APIs token-based authentication
+---
+type: docs
+title: "Enable API token based authentication"
+linkTitle: "API token auth"
+weight: 3000
+description: "Require every incoming API request to include an authentication token before allowing that request to pass through"
+---

 By default, Dapr relies on the network boundary to limit access to its public API. If you plan on exposing the Dapr API outside of that boundary, or if your deployment demands an additional level of security, consider enabling token authentication for the Dapr APIs. This will cause Dapr to require every incoming gRPC and HTTP request to its APIs to include an authentication token before allowing that request to pass through.
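As a sketch of what an authenticated call looks like once token auth is enabled: Dapr compares each request's token header against the token it was configured with. The port and app-id below are placeholder assumptions, not values from this document.

```shell
# Sketch: invoking the Dapr HTTP API with token authentication enabled.
# Assumptions: the sidecar listens on localhost:3500, and the same token
# was supplied to the sidecar when token auth was configured.
DAPR_API_TOKEN="example-token"   # placeholder value

# Dapr expects the token in the dapr-api-token request header:
AUTH_HEADER="dapr-api-token: ${DAPR_API_TOKEN}"
echo "${AUTH_HEADER}"

# A real call would look like this (requires a running, token-protected sidecar):
# curl -H "${AUTH_HEADER}" http://localhost:3500/v1.0/invoke/myapp/method/status
```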
diff --git a/howto/configure-mtls/README.md b/daprdocs/content/en/operations/security/mtls.md similarity index 97% rename from howto/configure-mtls/README.md rename to daprdocs/content/en/operations/security/mtls.md index 9209ea0a6..4b14e56fb 100644 --- a/howto/configure-mtls/README.md +++ b/daprdocs/content/en/operations/security/mtls.md @@ -1,10 +1,16 @@ -# Setup and configure mutual TLS +--- +type: docs +title: "Setup & configure mutual TLS" +linkTitle: "mTLS" +weight: 1000 +description: "Encrypt communication between Dapr instances" +--- Dapr supports in-transit encryption of communication between Dapr instances using Sentry, a central Certificate Authority. Dapr allows operators and developers to bring in their own certificates, or let Dapr automatically create and persist self signed root and issuer certificates. -For detailed information on mTLS, go to the concepts section [here](../../concepts/security/README.md). +For detailed information on mTLS, go to the concepts section [here]({{< ref "security-concept.md" >}}). If custom certificates have not been provided, Dapr will automatically create and persist self signed certs valid for one year. In Kubernetes, the certs are persisted to a secret that resides in the namespace of the Dapr system pods, accessible only to them. diff --git a/howto/authorization-with-oauth/README.md b/daprdocs/content/en/operations/security/oauth.md similarity index 78% rename from howto/authorization-with-oauth/README.md rename to daprdocs/content/en/operations/security/oauth.md index 0a006e120..8b0410e8e 100644 --- a/howto/authorization-with-oauth/README.md +++ b/daprdocs/content/en/operations/security/oauth.md @@ -1,187 +1,186 @@ -# Configure API authorization with OAuth - -Dapr OAuth 2.0 [middleware](../../concepts/middleware/README.md) allows you to enable [OAuth](https://oauth.net/2/) -authorization on Dapr endpoints for your web APIs, -using the [Authorization Code Grant flow](https://tools.ietf.org/html/rfc6749#section-4.1). 
-As well as injecting authorization tokens into your APIs which can be used for authorization towards external APIs -called by your APIs, -using the [Client Credentials Grant flow](https://tools.ietf.org/html/rfc6749#section-4.4). -When the middleware is enabled, -any method invocation through Dapr needs to be authorized before getting passed to the user code. - -The main difference between the two flows is that the -`Authorization Code Grant flow` needs user interaction and authorizes a user, -the `Client Credentials Grant flow` doesn't need a user interaction and authorizes a service/application. - -## Register your application with a authorization server - -Different authorization servers provide different application registration experiences. Here are some samples: - -* [Azure AAD](https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-oauth-code) -* [Facebook](https://developers.facebook.com/apps) -* [Fitbit](https://dev.fitbit.com/build/reference/web-api/oauth2/) -* [GitHub](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/) -* [Google APIs](https://console.developers.google.com/apis/credentials/consen) -* [Slack](https://api.slack.com/docs/oauth) -* [Twitter](http://apps.twitter.com/) - -To figure the Dapr OAuth middleware, you'll need to collect the following information: - -* Client ID (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/)) -* Client secret (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/)) -* Scopes (see [here](https://oauth.net/2/scope/)) -* Authorization URL -* Token URL - -Authorization/Token URLs of some of the popular authorization servers: - -|Server|Authorization URL|Token URL| -|--------|--------|--------| -|Azure AAD||| -|GitHub||| -|Google|| | -|Twitter||| - -## Define the middleware component definition - -### Define an Authorization Code Grant component - -An OAuth middleware (Authorization Code) is defined 
by a component: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: oauth2 - namespace: default -spec: - type: middleware.http.oauth2 - metadata: - - name: clientId - value: "" - - name: clientSecret - value: "" - - name: scopes - value: "" - - name: authURL - value: "" - - name: tokenURL - value: "" - - name: redirectURL - value: "" - - name: authHeaderName - value: "
" -``` - -### Define a custom pipeline for an Authorization Code Grant - -To use the OAuth middleware (Authorization Code), you should create a [custom pipeline](../../concepts/middleware/README.md) -using [Dapr configuration](../../concepts/configuration/README.md), as shown in the following sample: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: pipeline - namespace: default -spec: - httpPipeline: - handlers: - - name: oauth2 - type: middleware.http.oauth2 -``` - -### Define a Client Credentials Grant component - -An OAuth (Client Credentials) middleware is defined by a component: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: myComponent -spec: - type: middleware.http.oauth2clientcredentials - metadata: - - name: clientId - value: "" - - name: clientSecret - value: "" - - name: scopes - value: "" - - name: tokenURL - value: "" - - name: headerName - value: "
" - - name: endpointParamsQuery - value: "" - # authStyle: - # "0" means to auto-detect which authentication - # style the provider wants by trying both ways and caching - # the successful way for the future. - - # "1" sends the "client_id" and "client_secret" - # in the POST body as application/x-www-form-urlencoded parameters. - - # "2" sends the client_id and client_password - # using HTTP Basic Authorization. This is an optional style - # described in the OAuth2 RFC 6749 section 2.3.1. - - name: authStyle - value: "" -``` - -### Define a custom pipeline for a Client Credentials Grant - -To use the OAuth middleware (Client Credentials), you should create a [custom pipeline](../../concepts/middleware/README.md) -using [Dapr configuration](../../concepts/configuration/README.md), as shown in the following sample: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: pipeline - namespace: default -spec: - httpPipeline: - handlers: - - name: myComponent - type: middleware.http.oauth2clientcredentials -``` - -## Apply the configuration - -To apply the above configuration (regardless of grant type) -to your Dapr sidecar, add a ```dapr.io/config``` annotation to your pod spec: - -```yaml -apiVersion: apps/v1 -kind: Deployment -... -spec: - ... - template: - metadata: - ... - annotations: - dapr.io/enabled: "true" - ... - dapr.io/config: "pipeline" -... -``` - -## Accessing the access token - -### Authorization Code Grant - -Once everything is in place, whenever a client tries to invoke an API method through Dapr sidecar -(such as calling the *v1.0/invoke/* endpoint), -it will be redirected to the authorization's consent page if an access token is not found. -Otherwise, the access token is written to the **authHeaderName** header and made available to the app code. 
-
-### Client Credentials Grant
-
-Once everything is in place, whenever a client tries to invoke an API method through Dapr sidecar
-(such as calling the *v1.0/invoke/* endpoint),
-it will retrieve a new access token if an existing valid one is not found.
-The access token is written to the **headerName** header and made available to the app code.
-In that way the app can forward the token in the authorization header in calls towards the external API requesting that token.
+---
+type: docs
+title: "Configure API authentication with OAuth"
+linkTitle: "OAuth"
+weight: 2000
+description: "Enable OAuth authorization on Dapr endpoints for your web APIs"
+---
+
+Dapr OAuth 2.0 [middleware]({{< ref "middleware-concept.md" >}}) allows you to enable [OAuth](https://oauth.net/2/) authorization on Dapr endpoints for your web APIs using the [Authorization Code Grant flow](https://tools.ietf.org/html/rfc6749#section-4.1).
+You can also inject authorization tokens into your APIs which can be used for authorization towards external APIs called by your APIs using the [Client Credentials Grant flow](https://tools.ietf.org/html/rfc6749#section-4.4).
+When the middleware is enabled, any method invocation through Dapr needs to be authorized before getting passed to the user code.
+
+The main difference between the two flows is that the `Authorization Code Grant flow` needs user interaction and authorizes a user, while the `Client Credentials Grant flow` doesn't need user interaction and authorizes a service/application.
+
+## Register your application with an authorization server
+
+Different authorization servers provide different application registration experiences.
Here are some samples:
+
+* [Azure AAD](https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-oauth-code)
+* [Facebook](https://developers.facebook.com/apps)
+* [Fitbit](https://dev.fitbit.com/build/reference/web-api/oauth2/)
+* [GitHub](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/)
+* [Google APIs](https://console.developers.google.com/apis/credentials/consen)
+* [Slack](https://api.slack.com/docs/oauth)
+* [Twitter](http://apps.twitter.com/)
+
+To configure the Dapr OAuth middleware, you'll need to collect the following information:
+
+* Client ID (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
+* Client secret (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
+* Scopes (see [here](https://oauth.net/2/scope/))
+* Authorization URL
+* Token URL
+
+Authorization/Token URLs of some of the popular authorization servers:
+
+| Server | Authorization URL | Token URL |
+|---------|-------------------|-----------|
+|Azure AAD|||
+|GitHub|||
+|Google|| |
+|Twitter|||
+
+## Define the middleware component definition
+
+### Define an Authorization Code Grant component
+
+An OAuth middleware (Authorization Code) is defined by a component:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: oauth2
+  namespace: default
+spec:
+  type: middleware.http.oauth2
+  metadata:
+  - name: clientId
+    value: ""
+  - name: clientSecret
+    value: ""
+  - name: scopes
+    value: ""
+  - name: authURL
+    value: ""
+  - name: tokenURL
+    value: ""
+  - name: redirectURL
+    value: ""
+  - name: authHeaderName
+    value: "
" +``` + +### Define a custom pipeline for an Authorization Code Grant + +To use the OAuth middleware (Authorization Code), you should create a [custom pipeline]({{< ref "middleware-concept.md" >}}) +using [Dapr configuration]({{< ref "configuration-overview" >}}), as shown in the following sample: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: pipeline + namespace: default +spec: + httpPipeline: + handlers: + - name: oauth2 + type: middleware.http.oauth2 +``` + +### Define a Client Credentials Grant component + +An OAuth (Client Credentials) middleware is defined by a component: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: myComponent +spec: + type: middleware.http.oauth2clientcredentials + metadata: + - name: clientId + value: "" + - name: clientSecret + value: "" + - name: scopes + value: "" + - name: tokenURL + value: "" + - name: headerName + value: "
" + - name: endpointParamsQuery + value: "" + # authStyle: + # "0" means to auto-detect which authentication + # style the provider wants by trying both ways and caching + # the successful way for the future. + + # "1" sends the "client_id" and "client_secret" + # in the POST body as application/x-www-form-urlencoded parameters. + + # "2" sends the client_id and client_password + # using HTTP Basic Authorization. This is an optional style + # described in the OAuth2 RFC 6749 section 2.3.1. + - name: authStyle + value: "" +``` + +### Define a custom pipeline for a Client Credentials Grant + +To use the OAuth middleware (Client Credentials), you should create a [custom pipeline]({{< ref "middleware-concept.md" >}}) +using [Dapr configuration]({{< ref "configuration-overview.md" >}}), as shown in the following sample: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: pipeline + namespace: default +spec: + httpPipeline: + handlers: + - name: myComponent + type: middleware.http.oauth2clientcredentials +``` + +## Apply the configuration + +To apply the above configuration (regardless of grant type) +to your Dapr sidecar, add a ```dapr.io/config``` annotation to your pod spec: + +```yaml +apiVersion: apps/v1 +kind: Deployment +... +spec: + ... + template: + metadata: + ... + annotations: + dapr.io/enabled: "true" + ... + dapr.io/config: "pipeline" +... +``` + +## Accessing the access token + +### Authorization Code Grant + +Once everything is in place, whenever a client tries to invoke an API method through Dapr sidecar +(such as calling the *v1.0/invoke/* endpoint), +it will be redirected to the authorization's consent page if an access token is not found. +Otherwise, the access token is written to the **authHeaderName** header and made available to the app code. 
+
+### Client Credentials Grant
+
+Once everything is in place, whenever a client tries to invoke an API method through the Dapr sidecar
+(such as calling the *v1.0/invoke/* endpoint),
+it will retrieve a new access token if an existing valid one is not found.
+The access token is written to the **headerName** header and made available to the app code.
+This way, the app can forward the token in the authorization header when calling the external API that requires it.
diff --git a/daprdocs/content/en/operations/troubleshooting/_index.md b/daprdocs/content/en/operations/troubleshooting/_index.md
new file mode 100644
index 000000000..218e9cd0e
--- /dev/null
+++ b/daprdocs/content/en/operations/troubleshooting/_index.md
@@ -0,0 +1,7 @@
+---
+type: docs
+title: "Debugging and Troubleshooting"
+linkTitle: "Troubleshooting"
+weight: 700
+description: "Tools, techniques and common problems to help users debug and diagnose issues with Dapr"
+---
diff --git a/best-practices/troubleshooting/common_issues.md b/daprdocs/content/en/operations/troubleshooting/common_issues.md
similarity index 80%
rename from best-practices/troubleshooting/common_issues.md
rename to daprdocs/content/en/operations/troubleshooting/common_issues.md
index 54d3f2230..b9e19f66d 100644
--- a/best-practices/troubleshooting/common_issues.md
+++ b/daprdocs/content/en/operations/troubleshooting/common_issues.md
@@ -1,8 +1,12 @@
-# Common Issues
+---
+type: docs
+title: "Common issues when running Dapr"
+linkTitle: "Common Issues"
+weight: 1000
+description: "Common issues and problems faced when running Dapr applications"
+---
-This section will walk you through some common issues and problems.
-
-### I don't see the Dapr sidecar injected to my pod
+## I don't see the Dapr sidecar injected to my pod
There could be several reasons why a sidecar will not be injected into a pod.
First, check your Deployment or Pod YAML file, and check that you have the following annotations in the right place:
@@ -51,7 +55,7 @@ In order to further diagnose any issue, check the logs of the Dapr sidecar injec
*Note: If you installed Dapr to a different namespace, replace dapr-system above with the desired namespace*
-### My pod is in CrashLoopBackoff or another failed state due to the daprd sidecar
+## My pod is in CrashLoopBackoff or another failed state due to the daprd sidecar
If the Dapr sidecar (`daprd`) is taking too long to initialize, this might be surfaced as a failing health check by Kubernetes.
@@ -78,13 +82,13 @@ The most common cause of this failure is that a component (such as a state store
To diagnose the root cause:
-- Significantly increase the liveness probe delay - [link](../../howto/configure-k8s/README.md)
-- Set the log level of the sidecar to debug - [link](./logs.md#setting-the-sidecar-log-level)
-- Watch the logs for meaningful information - [link](./logs.md#viewing-logs-on-kubernetes)
+- Significantly increase the liveness probe delay - [link]({{< ref "kubernetes-overview.md" >}})
+- Set the log level of the sidecar to debug - [link]({{< ref "logs-troubleshooting.md#setting-the-sidecar-log-level" >}})
+- Watch the logs for meaningful information - [link]({{< ref "logs-troubleshooting.md#viewing-logs-on-kubernetes" >}})
-> :bulb: Remember to configure the liveness check delay and log level back to your desired values after solving the problem.
+> Remember to configure the liveness check delay and log level back to your desired values after solving the problem.
-### I am unable to save state or get state
+## I am unable to save state or get state
Have you installed a Dapr state store in your cluster?
@@ -95,7 +99,7 @@ kubectl get components
```
If there isn't a state store component, it means you need to set one up.
-Visit [here](../../howto/setup-state-store/setup-redis.md) for more details.
+Visit [here]({{< ref "state-management" >}}) for more details.
If everything's set up correctly, make sure you got the credentials right.
Search the Dapr runtime logs and look for any state store errors:
@@ -104,7 +108,7 @@ Search the Dapr runtime logs and look for any state store errors:
kubectl logs daprd
```
-### I am unable to publish and receive events
+## I am unable to publish and receive events
Have you installed a Dapr message bus in your cluster?
@@ -115,7 +119,7 @@ kubectl get components
```
If there isn't a pub/sub component, it means you need to set one up.
-Visit [here](../../howto/setup-pub-sub-message-broker/README.md) for more details.
+Visit [here]({{< ref "pubsub" >}}) for more details.
If everything is set up correctly, make sure you got the credentials right.
Search the Dapr runtime logs and look for any pub/sub errors:
@@ -124,7 +128,7 @@ Search the Dapr runtime logs and look for any pub/sub errors:
kubectl logs daprd
```
-### The Dapr Operator pod keeps crashing
+## The Dapr Operator pod keeps crashing
Check that there's only one installation of the Dapr Operator in your cluster. Find out by running
@@ -135,7 +139,7 @@ kubectl get pods -l app=dapr-operator --all-namespaces
If two pods appear, delete the redundant Dapr installation.
-### I'm getting 500 Error responses when calling Dapr
+## I'm getting 500 Error responses when calling Dapr
This means there is an internal issue inside the Dapr runtime.
To diagnose, view the logs of the sidecar:
@@ -144,29 +148,29 @@ To diagnose, view the logs of the sidecar:
kubectl logs daprd
```
-### I'm getting 404 Not Found responses when calling Dapr
+## I'm getting 404 Not Found responses when calling Dapr
This means you're trying to call a Dapr API endpoint that either doesn't exist or the URL is malformed.
-Look at the Dapr API reference [here](../../reference/api/README.md) and make sure you're calling the right endpoint.
+Look at the Dapr API reference [here]({{< ref "api" >}}) and make sure you're calling the right endpoint. -### I don't see any incoming events or calls from other services +## I don't see any incoming events or calls from other services Have you specified the port your app is listening on? In Kubernetes, make sure the `dapr.io/app-port` annotation is specified: -
+```yaml
 annotations:
     dapr.io/enabled: "true"
     dapr.io/app-id: "nodeapp"
-    dapr.io/app-port: "3000"
-
+    dapr.io/app-port: "3000"
+```
If using Dapr Standalone and the Dapr CLI, make sure you pass the `--app-port` flag to the `dapr run` command.
-### My Dapr-enabled app isn't behaving correctly
+## My Dapr-enabled app isn't behaving correctly
The first thing to do is inspect the HTTP error code returned from the Dapr API, if any.
-If you still can't find the issue, try enabling `debug` log levels for the Dapr runtime. See [here](logs.md) how to do so.
+If you still can't find the issue, try enabling `debug` log levels for the Dapr runtime. See [here]({{< ref "logs-troubleshooting.md" >}}) for how to do so.
You might also want to look at error logs from your own process. If running on Kubernetes, find the pod containing your app, and execute the following:
@@ -176,7 +180,7 @@ kubectl logs
If running in Standalone mode, you should see the stderr and stdout outputs from your app displayed in the main console session.
-### I'm getting timeout/connection errors when running Actors locally
+## I'm getting timeout/connection errors when running Actors locally
Each Dapr instance reports its host address to the placement service. The placement service then distributes a table of nodes and their addresses to all Dapr instances. If that host address is unreachable, you are likely to encounter socket timeout errors or other variants of failing request errors.
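For the 404 case above, one low-tech safeguard is to build invocation URLs programmatically instead of typing them by hand. A hedged sketch (the helper name and validation are mine; the `v1.0/invoke` route is the service invocation endpoint referred to on this page):

```python
def invoke_url(dapr_port, app_id, method):
    """Build the service invocation URL for a local Dapr sidecar:
    http://localhost:<daprPort>/v1.0/invoke/<appId>/method/<method>
    """
    if not 0 < int(dapr_port) < 65536:
        # Catch a common source of connection errors early.
        raise ValueError("dapr_port must be a valid TCP port")
    return f"http://localhost:{dapr_port}/v1.0/invoke/{app_id}/method/{method}"
```

For example, `invoke_url(3500, "nodeapp", "neworder")` yields the URL to POST to when invoking `neworder` on `nodeapp` through a sidecar listening on port 3500.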
diff --git a/best-practices/troubleshooting/logs.md b/daprdocs/content/en/operations/troubleshooting/logs-troubleshooting.md similarity index 89% rename from best-practices/troubleshooting/logs.md rename to daprdocs/content/en/operations/troubleshooting/logs-troubleshooting.md index 6ca69fcc2..441beebe8 100644 --- a/best-practices/troubleshooting/logs.md +++ b/daprdocs/content/en/operations/troubleshooting/logs-troubleshooting.md @@ -1,4 +1,10 @@ -# Logs +--- +type: docs +title: "Configure and view Dapr Logs" +linkTitle: "Logs" +weight: 2000 +description: "Understand how logging works in Dapr and how to configure and view logs" +--- This section will assist you in understanding how logging works in Dapr, configuring and viewing logs. @@ -23,13 +29,7 @@ To set the output level, you can use the `--log-level` command-line option. For This will start the Dapr runtime binary with a log level of `error` and the Dapr Actor Placement Service with a log level of `debug`. -## Configuring logs on Standalone Mode - -As outlined above, every Dapr binary takes a `--log-level` argument. For example, to launch the placement service with a log level of warning: - -```bash -./placement --log-level warning -``` +## Logs in stand-alone mode To set the log level when running your app with the Dapr CLI, pass the `log-level` param: @@ -37,7 +37,13 @@ To set the log level when running your app with the Dapr CLI, pass the `log-leve dapr run --log-level warning node myapp.js ``` -## Viewing Logs on Standalone Mode +As outlined above, every Dapr binary takes a `--log-level` argument. For example, to launch the placement service with a log level of warning: + +```bash +./placement --log-level warning +``` + +### Viewing Logs on Standalone Mode When running Dapr with the Dapr CLI, both your app's log output and the runtime's output will be redirected to the same session, for easy debugging. 
For example, this is the output when running Dapr: @@ -45,7 +51,7 @@ For example, this is the output when running Dapr: ```bash dapr run node myapp.js ℹ️ Starting Dapr with id Trackgreat-Lancer on port 56730 -✅ You're up and running! Both Dapr and your app logs will appear here. +✅ You are up and running! Both Dapr and your app logs will appear here. == APP == App listening on port 3000! == DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="starting Dapr Runtime -- version 0.3.0-alpha -- commit b6f2810-dirty" @@ -65,11 +71,7 @@ dapr run node myapp.js == DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="actors: established connection to placement service at localhost:50005" ``` -## Configuring Logs on Kubernetes - -This section shows you how to configure the log levels for Dapr system pods and the Dapr sidecar running on Kubernetes. - -### Setting the sidecar log level +## Logs in Kubernetes mode You can set the log level individually for every sidecar by providing the following annotation in your pod spec template: @@ -80,32 +82,29 @@ annotations: ### Setting system pods log level -When deploying Dapr to your cluster using Helm 3.x, you can individually set the log level for every Dapr system component. 
+When deploying Dapr to your cluster using Helm 3.x, you can individually set the log level for every Dapr system component:
-#### Setting the Operator log level
+```bash
+helm install dapr dapr/dapr --namespace dapr-system --set <component>.logLevel=<log level>
+```
+
+Components:
+- dapr_operator
+- dapr_placement
+- dapr_sidecar_injector
+
+Example:
```bash
helm install dapr dapr/dapr --namespace dapr-system --set dapr_operator.logLevel=error
```
-#### Setting the Placement Service log level
-
-```bash
-helm install dapr dapr/dapr --namespace dapr-system --set dapr_placement.logLevel=error
-```
-
-#### Setting the Sidecar Injector log level
-
-```bash
-helm install dapr dapr/dapr --namespace dapr-system --set dapr_sidecar_injector.logLevel=error
-```
-
-## Viewing Logs on Kubernetes
+### Viewing Logs on Kubernetes
Dapr logs are written to stdout and stderr.
This section will guide you on how to view logs for Dapr system components as well as the Dapr sidecar.
-### Sidecar Logs
+#### Sidecar Logs
When deployed in Kubernetes, the Dapr sidecar injector will inject a Dapr container named `daprd` into your annotated pod.
In order to view logs for the sidecar, simply find the pod in question by running `kubectl get pods`:
@@ -134,7 +133,7 @@ time="2019-09-04T02:52:27Z" level=info msg="dapr initialized. Status: Running.
I time="2019-09-04T02:52:27Z" level=info msg="actors: established connection to placement service at dapr-placement.dapr-system.svc.cluster.local:80" ``` -### System Logs +#### System Logs Dapr runs the following system pods: @@ -142,7 +141,7 @@ Dapr runs the following system pods: * Dapr sidecar injector * Dapr placement service -#### Viewing Operator Logs +#### Operator Logs ```Bash kubectl logs -l app=dapr-operator -n dapr-system @@ -153,7 +152,7 @@ time="2019-09-05T19:03:43Z" level=info msg="Dapr Operator is started" *Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace to the desired one in the command above* -#### Viewing Sidecar Injector Logs +#### Sidecar Injector Logs ```Bash kubectl logs -l app=dapr-sidecar-injector -n dapr-system diff --git a/best-practices/troubleshooting/profiling_debugging.md b/daprdocs/content/en/operations/troubleshooting/profiling-debugging.md similarity index 78% rename from best-practices/troubleshooting/profiling_debugging.md rename to daprdocs/content/en/operations/troubleshooting/profiling-debugging.md index 173f4f646..a803bd9fa 100644 --- a/best-practices/troubleshooting/profiling_debugging.md +++ b/daprdocs/content/en/operations/troubleshooting/profiling-debugging.md @@ -1,4 +1,10 @@ -# Profiling and Debugging +--- +type: docs +title: "Profiling & Debugging" +linkTitle: "Debugging" +weight: 4000 +description: "Discover problems and issues such as concurrency, performance, cpu and memory usage through a profiling session" +--- In any real world scenario, an app might start exhibiting undesirable behavior in terms of resource spikes. CPU/Memory spikes are not uncommon in most cases. @@ -7,32 +13,47 @@ Dapr allows users to start an on-demand profiling session using `pprof` through ## Enable profiling -Dapr allows you to enable profiling in both Kubernetes and Standalone modes. +Dapr allows you to enable profiling in both Kubernetes and stand-alone modes. 
-### Kubernetes +### Stand-alone -To enable profiling in Kubernetes, simply add the following annotation to your Dapr annotated pod: - -
-annotations:
-    dapr.io/enabled: "true"
-    dapr.io/app-id: "rust-app"
-    dapr.io/enable-profiling: "true"
-
-
-### Standalone
-
-To enable profiling in Standalone mode, pass the `enable-profiling` and the `profile-port` flags to the Dapr CLI:
-Note that `profile-port` is not required, and Dapr will pick an available port.
+To enable profiling in Standalone mode, pass the `--enable-profiling` and the `--profile-port` flags to the Dapr CLI:
+Note that `profile-port` is not required, and if not provided Dapr will pick an available port.
```bash
dapr run --enable-profiling true --profile-port 7777 python myapp.py
```
+### Kubernetes
+
+To enable profiling in Kubernetes, simply add the `dapr.io/enable-profiling` annotation to your Dapr annotated pod:
+
+```yml
+  annotations:
+    dapr.io/enabled: "true"
+    dapr.io/app-id: "rust-app"
+    dapr.io/enable-profiling: "true"
+```
+
## Debug a profiling session
After profiling is enabled, we can start a profiling session to investigate what's going on with the Dapr runtime.
+### Stand-alone
+
+For Standalone mode, locate the Dapr instance that you want to profile:
+
+```bash
+dapr list
+APP ID           DAPR PORT  APP PORT  COMMAND      AGE  CREATED              PID
+node-subscriber  3500       3000      node app.js  12s  2019-09-09 15:11.24  896
+```
+
+Grab the DAPR PORT, and if profiling has been enabled as described above, you can now start using `pprof` to profile Dapr.
+Look at the Kubernetes examples below for some useful commands to profile Dapr.
+
+More info on pprof can be found [here](https://github.com/google/pprof).
+
### Kubernetes
First, find the pod containing the Dapr runtime.
If you don't already know the pod name, type `kubectl get pods`:
@@ -82,7 +103,7 @@ For memory related issues, you can profile the heap:
go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/heap > heap.pdf
```
-![heap](../../images/heap.png)
+![heap](/images/heap.png)
Profiling allocated objects:
@@ -99,19 +120,4 @@ To analyze, grab the file path above (it's a dynamic file path, so pay attention
go tool pprof -alloc_objects --pdf /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz > alloc-objects.pdf
```
-![alloc](../../images/alloc.png)
-
-### Standalone
-
-For Standalone mode, locate the Dapr instance that you want to profile:
-
-```bash
-dapr list
-APP ID           DAPR PORT  APP PORT  COMMAND      AGE  CREATED              PID
-node-subscriber  3500       3000      node app.js  12s  2019-09-09 15:11.24  896
-```
-
-Grab the DAPR PORT, and if profiling has been enabled as described above, you can now start using `pprof` to profile Dapr.
-Look at the Kubernetes examples above for some useful commands to profile Dapr.
-
-More info on pprof can be found [here](https://github.com/google/pprof).
+![alloc](/images/alloc.png)
diff --git a/best-practices/troubleshooting/tracing.md b/daprdocs/content/en/operations/troubleshooting/setup-tracing.md
similarity index 65%
rename from best-practices/troubleshooting/tracing.md
rename to daprdocs/content/en/operations/troubleshooting/setup-tracing.md
index a58193839..afdbc8eb7 100644
--- a/best-practices/troubleshooting/tracing.md
+++ b/daprdocs/content/en/operations/troubleshooting/setup-tracing.md
@@ -1,11 +1,77 @@
-# Tracing
+---
+type: docs
+title: "Tracing"
+linkTitle: "Tracing"
+weight: 3000
+description: "Configure Dapr to send distributed tracing data"
+---
Dapr integrates with Open Census for telemetry and tracing.
It is recommended to run Dapr with tracing enabled for any production scenario.
Since Dapr uses Open Census, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises. -## Distributed Tracing with Zipkin on Kubernetes +## Tracing configuration + +The `tracing` section under the `Configuration` spec contains the following properties: + +```yml +tracing: + enabled: true + exporterType: zipkin + exporterAddress: "" + expandParams: true + includeBody: true +``` + +The following table lists the different properties. + +| Property | Type | Description | +|----------|------|-------------| +| enabled | bool | Set tracing to be enabled or disabled +| exporterType | string | Name of the Open Census exporter to use. For example: Zipkin, Azure Monitor, etc +| exporterAddress | string | URL of the exporter +| expandParams | bool | When true, expands parameters passed to HTTP endpoints +| includeBody | bool | When true, includes the request body in the tracing event + + +## Zipkin in stand-alone mode + +The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container on your local machine and view them. + +For Standalone mode, create a Dapr configuration file locally and reference it with the Dapr CLI. + +1. Create the following YAML file: + + ```yaml + apiVersion: dapr.io/v1alpha1 + kind: Configuration + metadata: + name: zipkin + namespace: default + spec: + tracing: + enabled: true + exporterType: zipkin + exporterAddress: "http://localhost:9411/api/v2/spans" + expandParams: true + includeBody: true + ``` + +2. Launch Zipkin using Docker: + + ```bash + docker run -d -p 9411:9411 openzipkin/zipkin + ``` + +3. 
Launch Dapr with the `--config` param: + + ```bash + dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js + ``` + + +## Zipkin in Kubernetes mode The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view them. @@ -65,62 +131,5 @@ kubectl port-forward svc/zipkin 9411:9411 On your browser, go to ```http://localhost:9411``` and you should see the Zipkin UI. -![zipkin](../../images/zipkin_ui.png) +![zipkin](/images/zipkin_ui.png) -## Distributed Tracing with Zipkin - Standalone Mode - -The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container on your local machine and view them. - -For Standalone mode, create a Dapr configuration file locally and reference it with the Dapr CLI. - -1. Create the following YAML file: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: zipkin - namespace: default -spec: - tracing: - enabled: true - exporterType: zipkin - exporterAddress: "http://localhost:9411/api/v2/spans" - expandParams: true - includeBody: true -``` - -2. Launch Zipkin using Docker: - -```bash -docker run -d -p 9411:9411 openzipkin/zipkin -``` - -3. Launch Dapr with the `--config` param: - -```bash -dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js -``` - -## Tracing Configuration - -The `tracing` section under the `Configuration` spec contains the following properties: - -```yml -tracing: - enabled: true - exporterType: zipkin - exporterAddress: "" - expandParams: true - includeBody: true -``` - -The following table lists the different properties. - -Property | Type | Description ----- | ------- | ----------- -enabled | bool | Set tracing to be enabled or disabled -exporterType | string | Name of the Open Census exporter to use. 
For example: Zipkin, Azure Monitor, etc -exporterAddress | string | URL of the exporter -expandParams | bool | When true, expands parameters passed to HTTP endpoints -includeBody | bool | When true, includes the request body in the tracing event diff --git a/daprdocs/content/en/reference/_index.md b/daprdocs/content/en/reference/_index.md new file mode 100644 index 000000000..7908b27c5 --- /dev/null +++ b/daprdocs/content/en/reference/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Dapr Reference Docs" +linkTitle: "Reference" +weight: 60 +description: "Detailed documentation on the Dapr API, CLI, bindings and more" +--- diff --git a/daprdocs/content/en/reference/api/_index.md b/daprdocs/content/en/reference/api/_index.md new file mode 100644 index 000000000..03c8a9be6 --- /dev/null +++ b/daprdocs/content/en/reference/api/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: Dapr API reference +linkTitle: "Dapr API" +weight: 100 +description: "Information on each api, the associated endpoints, and what capabilities are available" +--- diff --git a/reference/api/actors_api.md b/daprdocs/content/en/reference/api/actors_api.md similarity index 91% rename from reference/api/actors_api.md rename to daprdocs/content/en/reference/api/actors_api.md index 7f4da8f76..78c30d1e2 100644 --- a/reference/api/actors_api.md +++ b/daprdocs/content/en/reference/api/actors_api.md @@ -1,28 +1,14 @@ -# Dapr actors API reference +--- +type: docs +title: "Actors API reference" +linkTitle: "Actors API" +description: "Detailed documentation on the actors API" +weight: 500 +--- Dapr provides native, cross-platform and cross-language virtual actor capabilities. Besides the language specific Dapr SDKs, a developer can invoke an actor using the API endpoints below. 
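Since the actor endpoint patterns below all share the same shape, a small helper makes them harder to mistype. A hedged sketch (the helper name is mine; the URL pattern is the actor method invocation route, `http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/method/<method>`):

```python
def actor_method_url(dapr_port, actor_type, actor_id, method):
    """Build the URL for invoking an actor method through the local
    Dapr sidecar:
    http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/method/<method>
    """
    return (f"http://localhost:{dapr_port}"
            f"/v1.0/actors/{actor_type}/{actor_id}/method/{method}")
```

For example, `actor_method_url(3500, "stormtrooper", "50", "shoot")` builds the URL for invoking the `shoot` method on actor `50` of type `stormtrooper`.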
-## Endpoints - -- [Service Code Calling to Dapr](#specifications-for-user-service-code-calling-to-dapr) - - [Invoke Actor Method](#invoke-actor-method) - - [Actor State Transactions](#actor-state-transactions) - - [Get Actor State](#get-actor-state) - - [Create Actor Reminder](#create-actor-reminder) - - [Get Actor Reminder](#get-actor-reminder) - - [Delete Actor Reminder](#delete-actor-reminder) - - [Create Actor Timer](#create-actor-timer) - - [Delete Actor Timer](#delete-actor-timer) -- [Dapr Calling to Service Code](#specifications-for-dapr-calling-to-user-service-code) - - [Get Registered Actors](#get-registered-actors) - - [Deactivate Actor](#deactivate-actor) - - [Invoke Actor Method](#invoke-actor-method-1) - - [Invoke Reminder](#invoke-reminder) - - [Invoke Timer](#invoke-timer) - - [Health Checks](#health-check) -- [Querying Actor State Externally](#querying-actor-state-externally) - ## User service code calling dapr ### Invoke actor method @@ -31,7 +17,7 @@ Invoke an actor method through Dapr. #### HTTP Request -```http +``` POST/GET/PUT/DELETE http://localhost:/v1.0/actors///method/ ``` @@ -90,7 +76,7 @@ Persists the changed to the state for an actor as a multi-item transaction. #### HTTP Request -```http +``` POST/PUT http://localhost:/v1.0/actors///state ``` @@ -142,7 +128,7 @@ Gets the state for an actor using a specified key. #### HTTP Request -```http +``` GET http://localhost:/v1.0/actors///state/ ``` @@ -188,8 +174,8 @@ Creates a persistent reminder for an actor. #### HTTP Request -```http -POST,PUT http://localhost:/v1.0/actors///reminders/ +``` +POST/PUT http://localhost:/v1.0/actors///reminders/ ``` Body: @@ -255,7 +241,7 @@ Gets a reminder for an actor. #### HTTP Request -```http +``` GET http://localhost:/v1.0/actors///reminders/ ``` @@ -301,7 +287,7 @@ Deletes a reminder for an actor. #### HTTP Request -```http +``` DELETE http://localhost:/v1.0/actors///reminders/ ``` @@ -337,8 +323,8 @@ Creates a timer for an actor. 
#### HTTP Request -```http -POST,PUT http://localhost:/v1.0/actors///timers/ +``` +POST/PUT http://localhost:/v1.0/actors///timers/ ``` Body: @@ -433,7 +419,7 @@ Gets the registered actors types for this app and the Dapr actor configuration s #### HTTP Request -```http +``` GET http://localhost:/dapr/config ``` @@ -486,7 +472,7 @@ Deactivates an actor by persisting the instance of the actor to the state store #### HTTP Request -```http +``` DELETE http://localhost:/actors// ``` @@ -523,7 +509,7 @@ Invokes a method for an actor with the specified methodName where parameters to #### HTTP Request -```http +``` PUT http://localhost:/actors///method/ ``` @@ -561,7 +547,7 @@ Invokes a reminder for an actor with the specified reminderName. If the actor i #### HTTP Request -```http +``` PUT http://localhost:/actors///method/remind/ ``` @@ -599,7 +585,7 @@ Invokes a timer for an actor rwith the specified timerName. If the actor is not #### HTTP Request -```http +``` PUT http://localhost:/actors///method/timer/ ``` @@ -640,7 +626,7 @@ A response body is not required. #### HTTP Request -```http +``` GET http://localhost:/healthz ``` @@ -673,11 +659,10 @@ Conceptually, activating an actor means creating the actor's object and adding In order to enable visibility into the state of an actor and allow for complex scenarios such as state aggregation, Dapr saves actor state in external state stores such as databases. As such, it is possible to query for an actor state externally by composing the correct key or query. The state namespace created by Dapr for actors is composed of the following items: - -* App ID - Represents the unique ID given to the Dapr application. -* Actor Type - Represents the type of the actor. -* Actor ID - Represents the unique ID of the actor instance for an actor type. -* Key - A key for the specific state value. An actor ID can hold multiple state keys. +- App ID - Represents the unique ID given to the Dapr application. 
+- Actor Type - Represents the type of the actor. +- Actor ID - Represents the unique ID of the actor instance for an actor type. +- Key - A key for the specific state value. An actor ID can hold multiple state keys. The following example shows how to construct a key for the state of an actor instance under the `myapp` App ID namespace: `myapp-cat-hobbit-food` diff --git a/reference/api/bindings_api.md b/daprdocs/content/en/reference/api/bindings_api.md similarity index 76% rename from reference/api/bindings_api.md rename to daprdocs/content/en/reference/api/bindings_api.md index b4857ae49..22239c9c0 100644 --- a/reference/api/bindings_api.md +++ b/daprdocs/content/en/reference/api/bindings_api.md @@ -1,15 +1,15 @@ -# Bindings +--- +type: docs +title: "Bindings API reference" +linkTitle: "Bindings API" +description: "Detailed documentation on the bindings API" +weight: 400 +--- Dapr provides bi-directional binding capabilities for applications and a consistent approach to interacting with different cloud/on-premise services or systems. Developers can invoke output bindings using the Dapr API, and have the Dapr runtime trigger an application with input bindings. -Examples for bindings include ```Kafka```, ```Rabbit MQ```, ```Azure Event Hubs```, ```AWS SQS```, ```GCP Storage``` to name a few. - -## Contents - -- [Bindings Structure](#bindings-structure) -- [Invoking Service Code Through Input Bindings](#invoking-service-code-through-input-bindings) -- [Sending Messages to Output Bindings](#invoking-output-bindings) +Examples for bindings include `Kafka`, `Rabbit MQ`, `Azure Event Hubs`, `AWS SQS`, `GCP Storage` to name a few. ## Bindings Structure @@ -28,25 +28,25 @@ spec: value: ``` -The ```metadata.name``` is the name of the binding. +The `metadata.name` is the name of the binding. If running self hosted locally, place this file in your `components` folder next to your state store and message queue yml configurations. 
If running on kubernetes apply the component to your cluster.

-> **Note:** In production never place passwords or secrets within Dapr component files. For information on securely storing and retrieving secrets using secret stores refer to [Setup Secret Store](../../howto/setup-secret-store)
+> **Note:** In production never place passwords or secrets within Dapr component files. For information on securely storing and retrieving secrets using secret stores refer to [Setup Secret Store]({{< ref setup-secret-store >}})

## Invoking Service Code Through Input Bindings

-A developer who wants to trigger their app using an input binding can listen on a ```POST``` http endpoint with the route name being the same as ```metadata.name```.
+A developer who wants to trigger their app using an input binding can listen on a `POST` http endpoint with the route name being the same as `metadata.name`.

-On startup Dapr sends a ```OPTIONS``` request to the ```metadata.name``` endpoint and expects a different status code as ```NOT FOUND (404)``` if this application wants to subscribe to the binding.
+On startup Dapr sends an `OPTIONS` request to the `metadata.name` endpoint and expects a status code other than `NOT FOUND (404)` if this application wants to subscribe to the binding.

-The ```metadata``` section is an open key/value metadata pair that allows a binding to define connection properties, as well as custom properties unique to the component implementation.
+The `metadata` section is an open key/value metadata pair that allows a binding to define connection properties, as well as custom properties unique to the component implementation.

### Examples

-For example, here's how a Python application subscribes for events from ```Kafka``` using a Dapr API compliant platform. Note how the metadata.name value `kafkaevent` in the components matches the POST route name in the Python code.
+For example, here's how a Python application subscribes for events from `Kafka` using a Dapr API compliant platform. Note how the metadata.name value `kafkaevent` in the components matches the POST route name in the Python code.

#### Kafka Component

@@ -88,7 +88,7 @@ Bindings are discovered from component yaml files. Dapr calls this endpoint on s

#### HTTP Request

-```http
+```
OPTIONS http://localhost:/
```

@@ -114,7 +114,7 @@ In order to deliver binding inputs, a POST call is made to user code with the na

#### HTTP Request

-```http
+```
POST http://localhost:/
```

@@ -138,9 +138,9 @@ name | the name of the binding
Optionally, a response body can be used to directly bind input bindings with state stores or output bindings.

**Example:**
-Dapr stores ```stateDataToStore``` into a state store named "stateStore".
-Dapr sends ```jsonObject``` to the output bindings named "storage" and "queue" in parallel.
-If ```concurrency``` is not set, it is sent out sequential (the example below shows these operations are done in parallel)
+Dapr stores `stateDataToStore` into a state store named "stateStore".
+Dapr sends `jsonObject` to the output bindings named "storage" and "queue" in parallel.
+If `concurrency` is not set, it is sent out sequentially (the example below shows these operations are done in parallel)

```json
{
@@ -158,11 +158,11 @@ If ```concurrency``` is not set, it is sent out sequential (the example below sh
This endpoint lets you invoke a Dapr output binding. Dapr bindings support various operations, such as `create`.

-See the [different specs](../specs/bindings) on each binding to see the list of supported operations.
+See the [different specs]({{< ref supported-bindings >}}) on each binding to see the list of supported operations.
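The output binding invocation described above takes a JSON body carrying the payload under `data`, optional per-component `metadata`, and the `operation` to perform (such as `create`). A minimal Python sketch of composing that body — the binding name `myqueue` and sidecar port `3500` in the comment are illustrative assumptions, not taken from this page:

```python
import json

def build_binding_request(operation, data, metadata=None):
    """Build the JSON body Dapr expects when invoking an output binding:
    the payload under "data", optional "metadata", and the "operation"."""
    body = {"data": data, "operation": operation}
    if metadata:
        body["metadata"] = metadata
    return json.dumps(body)

# Hypothetical request body for: POST http://localhost:3500/v1.0/bindings/myqueue
payload = build_binding_request("create", {"orderId": 42})
```

The helper only composes the body; sending it is an ordinary HTTP `POST`/`PUT` to the bindings endpoint shown in the request spec.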
### HTTP Request

-```http
+```
POST/PUT http://localhost:/v1.0/bindings/
```

diff --git a/daprdocs/content/en/reference/api/error_codes.md b/daprdocs/content/en/reference/api/error_codes.md
new file mode 100644
index 000000000..24baa24f2
--- /dev/null
+++ b/daprdocs/content/en/reference/api/error_codes.md
@@ -0,0 +1,45 @@
+---
+type: docs
+title: "Error codes returned by APIs"
+linkTitle: "Error codes"
+description: "Detailed reference of the Dapr API error codes"
+weight: 1000
+---
+
+For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, e.g.
+```
+{
+    "errorCode": "ERR_STATE_GET",
+    "message": "Requested state key does not exist in state store."
+}
+```
+
+The following table lists the error codes returned by the Dapr runtime:
+
+| Error Code | Description |
+|----------------------------------|-------------|
+| ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means that the actor is now hosted in some other service replica.
+| ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance.
+| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor.
+| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor.
+| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor.
+| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor.
+| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor.
+| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor.
+| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor.
+| ERR_ACTOR_STATE_GET | Error getting the state for an actor.
+| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally.
+| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime.
+| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message.
+| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope.
+| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. +| ERR_STATE_GET | Error getting a state for state store. +| ERR_STATE_DELETE | Error deleting a state from state store. +| ERR_STATE_SAVE | Error saving a state in state store. +| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. +| ERR_MALFORMED_REQUEST | Error with a malformed request. +| ERR_DIRECT_INVOKE | Error in direct invocation. +| ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body. +| ERR_SECRET_STORE_NOT_CONFIGURED | Error that no secret store is configured. +| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. +| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. \ No newline at end of file diff --git a/reference/api/health_api.md b/daprdocs/content/en/reference/api/health_api.md similarity index 62% rename from reference/api/health_api.md rename to daprdocs/content/en/reference/api/health_api.md index 5113017d2..5f5ecc2ef 100644 --- a/reference/api/health_api.md +++ b/daprdocs/content/en/reference/api/health_api.md @@ -1,31 +1,37 @@ -# Dapr health API reference +--- +type: docs +title: "Health API reference" +linkTitle: "Health API" +description: "Detailed documentation on the health API" +weight: 900 +--- Dapr provides health checking probes that can be used as readiness or liveness of Dapr. -### Get Dapr health state +## Get Dapr health state Gets the health state for Dapr. -#### HTTP Request +### HTTP Request ```http GET http://localhost:/v1.0/healthz ``` -#### HTTP Response Codes +### HTTP Response Codes Code | Description ---- | ----------- 200 | dapr is healthy 500 | dapr is not healthy -#### URL Parameters +### URL Parameters Parameter | Description --------- | ----------- daprPort | The Dapr port. 
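Since `/v1.0/healthz` signals health purely through its status code (200 healthy, 500 not healthy), a readiness wait is just a polling loop. The sketch below is illustrative and not part of any Dapr SDK — the `probe` callable stands in for an HTTP `GET` against the sidecar's health endpoint:

```python
import time

def wait_for_dapr(probe, timeout=5.0, interval=0.1):
    """Poll a health probe until it reports 200 (healthy) or the timeout
    elapses. `probe` is any callable returning an HTTP status code, e.g.
    one that GETs http://localhost:<daprPort>/v1.0/healthz."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe() == 200:
            return True
        time.sleep(interval)
    return False

# Simulated probe: not ready twice (500), then healthy (200).
codes = iter([500, 500, 200])
ready = wait_for_dapr(lambda: next(codes))
```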
-#### Examples +### Examples ```shell curl http://localhost:3500/v1.0/healthz diff --git a/reference/api/pubsub_api.md b/daprdocs/content/en/reference/api/pubsub_api.md similarity index 82% rename from reference/api/pubsub_api.md rename to daprdocs/content/en/reference/api/pubsub_api.md index 5e63df580..e3b7a88d1 100644 --- a/reference/api/pubsub_api.md +++ b/daprdocs/content/en/reference/api/pubsub_api.md @@ -1,13 +1,19 @@ -# Pub Sub +--- +type: docs +title: "Pub/sub API reference" +linkTitle: "Pub/Sub API" +description: "Detailed documentation on the pub/sub API" +weight: 300 +--- ## Publish a message to a given topic -This endpoint lets you publish data to multiple consumers who are listening on a ```topic```. +This endpoint lets you publish data to multiple consumers who are listening on a `topic`. Dapr guarantees at least once semantics for this endpoint. ### HTTP Request -```http +``` POST http://localhost:/v1.0/publish// ``` @@ -36,25 +42,25 @@ curl -X POST http://localhost:3500/v1.0/publish/pubsubName/deathStarStatus \ }' ``` -# Optional Application (User Code) Routes +## Optional Application (User Code) Routes -## Provide a route for Dapr to discover topic subscriptions +### Provide a route for Dapr to discover topic subscriptions Dapr will invoke the following endpoint on user code to discover topic subscriptions: -### HTTP Request +#### HTTP Request -```http +``` GET http://localhost:/dapr/subscribe ``` -### URL Parameters +#### URL Parameters Parameter | Description --------- | ----------- appPort | the application port -### HTTP Response body +#### HTTP Response body A json encoded array of strings. @@ -72,28 +78,28 @@ Example: > Note, all subscription parameters are case-sensitive. -## Provide route(s) for Dapr to deliver topic events +### Provide route(s) for Dapr to deliver topic events In order to deliver topic events, a `POST` call will be made to user code with the route specified in the subscription response. 
The following example illustrates this point, considering a subscription for topic `newOrder` with route `orders` on port 3000: `POST http://localhost:3000/orders` -### HTTP Request +#### HTTP Request -```http +``` POST http://localhost:/ ``` > Note, all URL parameters are case-sensitive. -### URL Parameters +#### URL Parameters Parameter | Description --------- | ----------- appPort | the application port path | route path from the subscription configuration -### Expected HTTP Response +#### Expected HTTP Response An HTTP 2xx response denotes successful processing of message. For richer response handling, a JSON encoded payload body with the processing status can be sent: @@ -122,11 +128,11 @@ HTTP Status | Description other | warning is logged and message to be retried -## Message Envelope +## Message envelope Dapr Pub/Sub adheres to version 1.0 of Cloud Events. ## Related links -* [How to consume topics](https://github.com/dapr/docs/tree/master/howto/consume-topic) +* [How to publish to and consume topics]({{< ref howto-publish-subscribe.md >}}) * [Sample for pub/sub](https://github.com/dapr/quickstarts/tree/master/pub-sub) diff --git a/reference/api/secrets_api.md b/daprdocs/content/en/reference/api/secrets_api.md similarity index 93% rename from reference/api/secrets_api.md rename to daprdocs/content/en/reference/api/secrets_api.md index 4a0fe78f7..eb4f6d930 100644 --- a/reference/api/secrets_api.md +++ b/daprdocs/content/en/reference/api/secrets_api.md @@ -1,8 +1,10 @@ -# Secrets API Specification - -## Endpoints - -- [Get Secret](#get-secret) +--- +type: docs +title: "Secrets API reference" +linkTitle: "Secrets API" +description: "Detailed documentation on the secrets API" +weight: 700 +--- ## Get Secret @@ -10,7 +12,7 @@ This endpoint lets you get the value of a secret for a given secret store. 
### HTTP Request -```http +``` GET http://localhost:/v1.0/secrets// ``` @@ -28,7 +30,7 @@ name | the name of the secret to get Some secret stores have **optional** metadata properties. metadata is populated using query parameters: -```http +``` GET http://localhost:/v1.0/secrets//?metadata.version_id=15 ``` diff --git a/reference/api/service_invocation_api.md b/daprdocs/content/en/reference/api/service_invocation_api.md similarity index 92% rename from reference/api/service_invocation_api.md rename to daprdocs/content/en/reference/api/service_invocation_api.md index 130d8c709..63061ca6f 100644 --- a/reference/api/service_invocation_api.md +++ b/daprdocs/content/en/reference/api/service_invocation_api.md @@ -1,19 +1,21 @@ -# Service Invocation API Specification +--- +type: docs +title: "Service invocation API reference" +linkTitle: "Service invocation API" +description: "Detailed documentation on the service invocation API" +weight: 100 +--- Dapr provides users with the ability to call other applications that have unique ids. This functionality allows apps to interact with one another via named identifiers and puts the burden of service discovery on the Dapr runtime. -## Contents - -- [Invoke a Method on a Remote Dapr App](#invoke-a-method-on-a-remote-dapr-app) - ## Invoke a method on a remote dapr app This endpoint lets you invoke a method in another Dapr enabled app. ### HTTP Request -```http +``` POST/GET/PUT/DELETE http://localhost:/v1.0/invoke//method/ ``` @@ -75,7 +77,7 @@ myApp.production #### Namespace supported platforms -* Kubernetes +- Kubernetes ### Examples @@ -114,3 +116,6 @@ In case you are invoking `mathService` on a different namespace, you can use the `http://localhost:3500/v1.0/invoke/mathService.testing/method/api/v1/add` In this URL, `testing` is the namespace that `mathService` is running in. 
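The addressing scheme above (`<app-id>.<namespace>`) is straightforward to compose programmatically. A small sketch, reusing the port and names from the `mathService` example on this page:

```python
def invoke_url(dapr_port, app_id, method, namespace=None):
    """Compose a Dapr service invocation URL. When a namespace is given,
    the target app is addressed as <app-id>.<namespace>."""
    target = f"{app_id}.{namespace}" if namespace else app_id
    return f"http://localhost:{dapr_port}/v1.0/invoke/{target}/method/{method}"

url = invoke_url(3500, "mathService", "api/v1/add", namespace="testing")
# → http://localhost:3500/v1.0/invoke/mathService.testing/method/api/v1/add
```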
+ +## Next Steps +- [How-To: Invoke and discover services]({{< ref howto-invoke-discover-services.md >}}) \ No newline at end of file diff --git a/reference/api/state_api.md b/daprdocs/content/en/reference/api/state_api.md similarity index 96% rename from reference/api/state_api.md rename to daprdocs/content/en/reference/api/state_api.md index afe6c1f66..1e6555134 100644 --- a/reference/api/state_api.md +++ b/daprdocs/content/en/reference/api/state_api.md @@ -1,15 +1,10 @@ -# State Management API Specification - -## Endpoints -- [Component File](#component-file) -- [Key Scheme](#key-scheme) -- [Save State](#save-state) -- [Get State](#get-state) -- [Get Bulk State](#get-bulk-state) -- [Delete State](#delete-state) -- [State transactions](#state-transactions) -- [Configuring State Store for Actors](#configuring-state-store-for-actors) -- [Optional Behaviors](#optional-behaviors) +--- +type: docs +title: "State management API reference" +linkTitle: "State management API" +description: "Detailed documentation on the state management API" +weight: 200 +--- ## Component file @@ -58,7 +53,7 @@ This endpoint lets you save an array of state objects. ### HTTP Request -```http +``` POST http://localhost:/v1.0/state/ ``` @@ -124,7 +119,7 @@ This endpoint lets you get the state for a specific key. ### HTTP Request -```http +``` GET http://localhost:/v1.0/state// ``` @@ -177,7 +172,7 @@ curl http://localhost:3500/v1.0/state/starwars/planet \ To pass metadata as query parammeter: -```http +``` GET http://localhost:3500/v1.0/state/starwars/planet?metadata.partitionKey=mypartitionKey ``` @@ -187,7 +182,7 @@ This endpoint lets you get a list of values for a given list of keys. 
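A bulk-get round trip returns a JSON array of per-key items, each carrying at least a `key` and, when the key exists, a `data` field. A small helper to flatten such a response into a dictionary — the sample response is illustrative (real responses also carry fields such as `etag`):

```python
import json

def bulk_response_to_dict(response_json):
    """Flatten a bulk-get response (a JSON array of {"key", "data", ...}
    items) into a plain {key: data} mapping; keys missing from the store
    come back without "data" and map to None."""
    return {item["key"]: item.get("data") for item in json.loads(response_json)}

# Illustrative shape of a response from POST /v1.0/state/<store>/bulk
sample = '[{"key": "key1", "data": "value1"}, {"key": "key2"}]'
values = bulk_response_to_dict(sample)
```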
### HTTP Request -```http +``` POST http://localhost:/v1.0/state//bulk ``` @@ -243,7 +238,7 @@ curl http://localhost:3500/v1.0/state/myRedisStore/bulk \ ``` To pass metadata as query parammeter: -```http +``` POST http://localhost:3500/v1.0/state/myRedisStore/bulk?metadata.partitionKey=mypartitionKey ``` @@ -254,7 +249,7 @@ This endpoint lets you delete the state for a specific key. ### HTTP Request -```http +``` DELETE http://localhost:/v1.0/state// ``` @@ -311,7 +306,7 @@ List of state stores that support transactions: #### HTTP Request -```http +``` POST/PUT http://localhost:/v1.0/state//transaction ``` @@ -456,3 +451,7 @@ curl -X POST http://localhost:3500/v1.0/state/starwars \ } ]' ``` + +## Next Steps +- [State management overview]({{< ref state-management-overview.md >}}) +- [How-To: Save & get state]({{< ref howto-get-save-state.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/reference/cli/_index.md b/daprdocs/content/en/reference/cli/_index.md new file mode 100644 index 000000000..8064dd8a0 --- /dev/null +++ b/daprdocs/content/en/reference/cli/_index.md @@ -0,0 +1,6 @@ +--- +type: docs +title: "Dapr CLI reference" +linkTitle: "Dapr CLI" +description: "Detailed information on the Dapr CLI commands" +--- \ No newline at end of file diff --git a/daprdocs/content/en/reference/cli/cli-overview.md b/daprdocs/content/en/reference/cli/cli-overview.md new file mode 100644 index 000000000..3b4d1b3a8 --- /dev/null +++ b/daprdocs/content/en/reference/cli/cli-overview.md @@ -0,0 +1,71 @@ +--- +type: docs +title: "Dapr command line (CLI) reference" +linkTitle: "Overview" +description: "Detailed information on the dapr CLI" +weight: 10 +--- + +The Dapr CLI allows you to setup Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, and launches and manages Dapr instances. 
+ +```bash + __ + ____/ /___ _____ _____ + / __ / __ '/ __ \/ ___/ + / /_/ / /_/ / /_/ / / + \__,_/\__,_/ .___/_/ + /_/ + +====================================================== +A serverless runtime for hyperscale, distributed systems + +Usage: + dapr [command] + +Available Commands: + components List all Dapr components + configurations List all Dapr configurations + help Help about any command + init Setup dapr in Kubernetes or Standalone modes + invoke Invokes a Dapr app with an optional payload (deprecated, use invokePost) + invokeGet Issue HTTP GET to Dapr app + invokePost Issue HTTP POST to Dapr app with an optional payload + list List all Dapr instances + logs Gets Dapr sidecar logs for an app in Kubernetes + mtls Check if mTLS is enabled in a Kubernetes cluster + publish Publish an event to multiple consumers + run Launches Dapr and (optionally) your app side by side + status Shows the Dapr system services (control plane) health status. + stop Stops multiple running Dapr instances and their associated apps + uninstall Removes a Dapr installation + +Flags: + -h, --help help for Dapr + --version version for Dapr + +Use "dapr [command] --help" for more information about a command. +``` + +## Command Reference + +You can learn more about each Dapr command from the links below. 
+ + - [`dapr components`]({{< ref dapr-components.md >}}) + - [`dapr configurations`]({{< ref dapr-configurations.md >}}) + - [`dapr help`]({{< ref dapr-help.md >}}) + - [`dapr init`]({{< ref dapr-init.md >}}) + - [`dapr invoke`]({{< ref dapr-invoke.md >}}) + - [`dapr invokeGet`]({{< ref dapr-invokeGet.md >}}) + - [`dapr invokePost`]({{< ref dapr-invokePost.md >}}) + - [`dapr list`]({{< ref dapr-list.md >}}) + - [`dapr logs`]({{< ref dapr-logs.md >}}) + - [`dapr mtls`]({{< ref dapr-mtls.md >}}) + - [`dapr publish`]({{< ref dapr-publish.md >}}) + - [`dapr run`]({{< ref dapr-run.md >}}) + - [`dapr status`]({{< ref dapr-status.md >}}) + - [`dapr stop`]({{< ref dapr-stop.md >}}) + - [`dapr uninstall`]({{< ref dapr-uninstall.md >}}) + +## Environment Variables + +Some Dapr flags can be set via environment variables (e.g. `DAPR_NETWORK` for the `--network` flag of the `dapr init` command). Note that specifying the flag on the command line overrides any set environment variable. \ No newline at end of file diff --git a/daprdocs/content/en/reference/cli/dapr-components.md b/daprdocs/content/en/reference/cli/dapr-components.md new file mode 100644 index 000000000..90f8358d5 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-components.md @@ -0,0 +1,23 @@ +--- +type: docs +title: "components CLI command reference" +linkTitle: "components" +description: "Detailed information on the components CLI command" +--- + +## Description + +List all Dapr components + +## Usage + +```bash +dapr components [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for components | +| `--kubernetes`, `-k` | | `false` | List all Dapr components in a k8s cluster | diff --git a/daprdocs/content/en/reference/cli/dapr-configurations.md b/daprdocs/content/en/reference/cli/dapr-configurations.md new file mode 100644 index 000000000..6c44cbf14 --- /dev/null +++ 
b/daprdocs/content/en/reference/cli/dapr-configurations.md @@ -0,0 +1,23 @@ +--- +type: docs +title: "configurations CLI command reference" +linkTitle: "configurations" +description: "Detailed information on the configurations CLI command" +--- + +## Description + +List all Dapr configurations + +## Usage + +```bash +dapr configurations [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for configurations | +| `--kubernetes`, `-k` | | `false` | List all Dapr configurations in a k8s cluster | diff --git a/daprdocs/content/en/reference/cli/dapr-dashboard.md b/daprdocs/content/en/reference/cli/dapr-dashboard.md new file mode 100644 index 000000000..7a5446e94 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-dashboard.md @@ -0,0 +1,26 @@ +--- +type: docs +title: "dashboard CLI command reference" +linkTitle: "dashboard" +description: "Detailed information on the dashboard CLI command" +--- + +## Description + +Start Dapr dashboard. + +## Usage + +```bash +dapr dashboard [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for dashboard | +| `--kubernetes`, `-k` | | `false` | Start Dapr dashboard in local browser | +| `--version`, `-v` | | `false` | Check Dapr dashboard version | +| `--port`, `-p` | | `8080` | The local port on which to serve dashboard | +| `--namespace`, `-n` | | `dapr-system` | The namespace where Dapr dashboard is running | diff --git a/daprdocs/content/en/reference/cli/dapr-help.md b/daprdocs/content/en/reference/cli/dapr-help.md new file mode 100644 index 000000000..91b1be50f --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-help.md @@ -0,0 +1,22 @@ +--- +type: docs +title: "help CLI command reference" +linkTitle: "help" +description: "Detailed information on the help CLI command" +--- + +## Description + +Help provides help for any command in the application. 
+ +## Usage + +```bash +dapr help [command] [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for help | diff --git a/daprdocs/content/en/reference/cli/dapr-init.md b/daprdocs/content/en/reference/cli/dapr-init.md new file mode 100644 index 000000000..7b0cb7ec7 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-init.md @@ -0,0 +1,27 @@ +--- +type: docs +title: "init CLI command reference" +linkTitle: "init" +description: "Detailed information on the init CLI command" +--- + +## Description + +Setup Dapr in Kubernetes or Standalone modes + +## Usage + +```bash +dapr init [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for init | +| `--kubernetes`, `-k` | | `false` | Deploy Dapr to a Kubernetes cluster | +| `--network` | `DAPR_NETWORK` | | The Docker network on which to deploy the Dapr runtime | +| `--runtime-version` | | `latest` | The version of the Dapr runtime to install, for example: `v0.1.0-alpha` | +| `--redis-host` | `DAPR_REDIS_HOST` | `localhost` | The host on which the Redis service resides | +| `--slim`, `-s` | | `false` | Initialize dapr in self-hosted mode without placement, redis and zipkin containers.| diff --git a/daprdocs/content/en/reference/cli/dapr-invoke.md b/daprdocs/content/en/reference/cli/dapr-invoke.md new file mode 100644 index 000000000..050e9b84c --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-invoke.md @@ -0,0 +1,25 @@ +--- +type: docs +title: "invoke CLI command reference" +linkTitle: "invoke" +description: "Detailed information on the invoke CLI command" +--- + +## Description + +Invokes a Dapr app with an optional payload (deprecated, use invokePost) + +## Usage + +```bash +dapr invoke [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--app-id`, `-a` | | | The app ID to invoke | 
+| `--help`, `-h` | | | Help for invoke | +| `--method`, `-m` | | | The method to invoke | +| `--payload`, `-p` | | | (optional) a json payload | diff --git a/daprdocs/content/en/reference/cli/dapr-invokeGet.md b/daprdocs/content/en/reference/cli/dapr-invokeGet.md new file mode 100644 index 000000000..0cc75ea63 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-invokeGet.md @@ -0,0 +1,24 @@ +--- +type: docs +title: "invokeGet CLI command reference" +linkTitle: "invokeGet" +description: "Detailed information on the invokeGet CLI command" +--- + +## Description + +Issue HTTP GET to Dapr app + +## Usage + +```bash +dapr invokeGet [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--app-id`, `-a` | | | The app ID to invoke | +| `--help`, `-h` | | | Help for invokeGet | +| `--method`, `-m` | | | The method to invoke | diff --git a/daprdocs/content/en/reference/cli/dapr-invokePost.md b/daprdocs/content/en/reference/cli/dapr-invokePost.md new file mode 100644 index 000000000..b9a4866e4 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-invokePost.md @@ -0,0 +1,25 @@ +--- +type: docs +title: "invokePost CLI command reference" +linkTitle: "invokePost" +description: "Detailed information on the invokePost CLI command" +--- + +## Description + +Issue HTTP POST to Dapr app with an optional payload + +## Usage + +```bash +dapr invokePost [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--app-id`, `-a` | | | The app ID to invoke | +| `--help`, `-h` | | | Help for invokePost | +| `--method`, `-m` | | | The method to invoke | +| `--payload`, `-p` | | | (optional) a json payload | diff --git a/daprdocs/content/en/reference/cli/dapr-list.md b/daprdocs/content/en/reference/cli/dapr-list.md new file mode 100644 index 000000000..af60092fd --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-list.md @@ -0,0 +1,23 @@ +--- +type: docs +title: 
"list CLI command reference" +linkTitle: "list" +description: "Detailed information on the list CLI command" +--- + +## Description + +List all Dapr instances + +## Usage + +```bash +dapr list [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for list | +| `--kubernetes`, `-k` | | `false` | List all Dapr pods in a k8s cluster | diff --git a/daprdocs/content/en/reference/cli/dapr-logs.md b/daprdocs/content/en/reference/cli/dapr-logs.md new file mode 100644 index 000000000..9a34a95eb --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-logs.md @@ -0,0 +1,26 @@ +--- +type: docs +title: "logs CLI command reference" +linkTitle: "logs" +description: "Detailed information on the logs CLI command" +--- + +## Description + +Gets Dapr sidecar logs for an app in Kubernetes + +## Usage + +```bash +dapr logs [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--app-id`, `-a` | | | The app id for which logs are needed | +| `--help`, `-h` | | | Help for logs | +| `--kubernetes`, `-k` | | `true` | only works with a Kubernetes cluster (default true) | +| `--namespace`, `-n` | | `default` | (optional) Kubernetes namespace in which your application is deployed. default value is 'default' | +| `--pod-name`, `-p` | | | (optional) Name of the Pod. 
Use this in case you have multiple app instances (Pods) | diff --git a/daprdocs/content/en/reference/cli/dapr-mtls.md b/daprdocs/content/en/reference/cli/dapr-mtls.md new file mode 100644 index 000000000..a8cf4c207 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-mtls.md @@ -0,0 +1,23 @@ +--- +type: docs +title: "mtls CLI command reference" +linkTitle: "mtls" +description: "Detailed information on the mtls CLI command" +--- + +## Description + +Check if mTLS is enabled in a Kubernetes cluster + +## Usage + +```bash +dapr mtls [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--help`, `-h` | | | Help for mtls | +| `--kubernetes`, `-k` | | `false` | Check if mTLS is enabled in a Kubernetes cluster | diff --git a/daprdocs/content/en/reference/cli/dapr-publish.md b/daprdocs/content/en/reference/cli/dapr-publish.md new file mode 100644 index 000000000..1fd73bff9 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-publish.md @@ -0,0 +1,25 @@ +--- +type: docs +title: "publish CLI command reference" +linkTitle: "publish" +description: "Detailed information on the publish CLI command" +--- + +## Description + +Publish an event to multiple consumers + +## Usage + +```bash +dapr publish [flags] +``` + +## Flags + +| Name | Environment Variable | Default | Description +| --- | --- | --- | --- | +| `--pubsub` | | | Name of the pub/sub component +| `--data`, `-d` | | | (optional) a json serialized string | +| `--help`, `-h` | | | Help for publish | +| `--topic`, `-t` | | | The topic the app is listening on | \ No newline at end of file diff --git a/daprdocs/content/en/reference/cli/dapr-run.md b/daprdocs/content/en/reference/cli/dapr-run.md new file mode 100644 index 000000000..efb74b108 --- /dev/null +++ b/daprdocs/content/en/reference/cli/dapr-run.md @@ -0,0 +1,35 @@ +--- +type: docs +title: "run CLI command reference" +linkTitle: "run" +description: "Detailed information on the run CLI command" 
+---
+
+## Description
+
+Launches Dapr and (optionally) your app side by side
+
+## Usage
+
+```bash
+dapr run [flags] [command]
+```
+
+## Flags
+
+| Name | Environment Variable | Default | Description
+| --- | --- | --- | --- |
+| `--app-id` | | | An ID for your application, used for service discovery |
+| `--app-port` | | `-1` | The port your application is listening on |
+| `--run-path` | | `Linux & Mac: $HOME/.dapr/run`, `Windows: %USERPROFILE%\.dapr\run` | Path for run directory |
+| `--config` | | `Linux & Mac: $HOME/.dapr/config.yaml`, `Windows: %USERPROFILE%\.dapr\config.yaml` | Dapr configuration file |
+| `--enable-profiling` | | | Enable `pprof` profiling via an HTTP endpoint |
+| `--dapr-grpc-port` | | `-1` | The gRPC port for Dapr to listen on |
+| `--help`, `-h` | | | Help for run |
+| `--image` | | | The image to build the code in. Input is: `repository/image` |
+| `--log-level` | | `info` | Sets the log verbosity. Valid values are: `debug`, `info`, `warning`, `error`, `fatal`, or `panic` |
+| `--max-concurrency` | | `-1` | Controls the concurrency level of the app |
+| `--placement-host-address` | `DAPR_PLACEMENT_HOST` | `localhost` | The host on which the placement service resides |
+| `--port`, `-p` | | `-1` | The HTTP port for Dapr to listen on |
+| `--profile-port` | | `-1` | The port for the profile server to listen on |
+| `--protocol` | | `http` | Tells Dapr to use HTTP or gRPC to talk to the app. Valid values are: `http` or `grpc` |
diff --git a/daprdocs/content/en/reference/cli/dapr-status.md b/daprdocs/content/en/reference/cli/dapr-status.md
new file mode 100644
index 000000000..d6fec1ecd
--- /dev/null
+++ b/daprdocs/content/en/reference/cli/dapr-status.md
@@ -0,0 +1,23 @@
+---
+type: docs
+title: "status CLI command reference"
+linkTitle: "status"
+description: "Detailed information on the status CLI command"
+---
+
+## Description
+
+Shows the Dapr system services (control plane) health status.
+
+## Usage
+
+```bash
+dapr status [flags]
+```
+
+## Flags
+
+| Name | Environment Variable | Default | Description
+| --- | --- | --- | --- |
+| `--help`, `-h` | | | Help for status |
+| `--kubernetes`, `-k` | | `true` | only works with a Kubernetes cluster (default true) |
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/cli/dapr-stop.md b/daprdocs/content/en/reference/cli/dapr-stop.md
new file mode 100644
index 000000000..87cecf435
--- /dev/null
+++ b/daprdocs/content/en/reference/cli/dapr-stop.md
@@ -0,0 +1,23 @@
+---
+type: docs
+title: "stop CLI command reference"
+linkTitle: "stop"
+description: "Detailed information on the stop CLI command"
+---
+
+## Description
+
+Stops multiple running Dapr instances and their associated apps
+
+## Usage
+
+```bash
+dapr stop [flags]
+```
+
+## Flags
+
+| Name | Environment Variable | Default | Description
+| --- | --- | --- | --- |
+| `--app-id` | | | The app ID to stop (standalone mode) |
+| `--help`, `-h` | | | Help for stop |
diff --git a/daprdocs/content/en/reference/cli/dapr-uninstall.md b/daprdocs/content/en/reference/cli/dapr-uninstall.md
new file mode 100644
index 000000000..08ee72842
--- /dev/null
+++ b/daprdocs/content/en/reference/cli/dapr-uninstall.md
@@ -0,0 +1,26 @@
+---
+type: docs
+title: "uninstall CLI command reference"
+linkTitle: "uninstall"
+description: "Detailed information on the uninstall CLI command"
+---
+
+## Description
+
+Removes a Dapr installation
+
+## Usage
+
+```bash
+dapr uninstall [flags]
+```
+
+## Flags
+
+| Name | Environment Variable | Default | Description
+| --- | --- | --- | --- |
+| `--all` | | `false` | Remove Redis, Zipkin containers in addition to actor placement container. Remove default dapr dir located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\`.
| +| `--help`, `-h` | | | Help for uninstall | +| `--kubernetes`, `-k` | | `false` | Uninstall Dapr from a Kubernetes cluster | +| `--network` | `DAPR_NETWORK` | | The Docker network from which to remove the Dapr runtime | +| `--runtime-version` | | `latest` | The version of the Dapr runtime to uninstall, for example: `v0.1.0-alpha` (Kubernetes mode only) | diff --git a/daprdocs/content/en/roadmap/_index.md b/daprdocs/content/en/roadmap/_index.md new file mode 100644 index 000000000..e69de29bb diff --git a/daprdocs/layouts/docs/list.html b/daprdocs/layouts/docs/list.html new file mode 100644 index 000000000..8886ba5b6 --- /dev/null +++ b/daprdocs/layouts/docs/list.html @@ -0,0 +1,20 @@ +{{ define "main" }} +
+

{{ .Title }}

+ {{ with .Params.description }}
{{ . | markdownify }}
{{ end }} + {{ if (and (not .Params.hide_readingtime) (.Site.Params.ui.readingtime.enable)) }} + {{ partial "reading-time.html" . }} + {{ end }} + {{ .Content }} + {{ partial "section-index.html" . }} + {{ if (and (not .Params.hide_feedback) (.Site.Params.ui.feedback.enable) (.Site.GoogleAnalytics)) }} + {{ partial "feedback.html" .Site.Params.ui.feedback }} +
+ {{ end }} + {{ if (.Site.DisqusShortname) }} +
+ {{ partial "disqus-comment.html" . }} + {{ end }} +
{{ partial "page-meta-lastmod.html" . }}
+
+{{ end }} diff --git a/daprdocs/layouts/partials/hooks/body-end.html b/daprdocs/layouts/partials/hooks/body-end.html new file mode 100644 index 000000000..206a5e2f4 --- /dev/null +++ b/daprdocs/layouts/partials/hooks/body-end.html @@ -0,0 +1,17 @@ +{{ with .Site.Params.algolia_docsearch }} + + +{{ end }} \ No newline at end of file diff --git a/daprdocs/layouts/partials/hooks/head-end.html b/daprdocs/layouts/partials/hooks/head-end.html new file mode 100644 index 000000000..804fe38e9 --- /dev/null +++ b/daprdocs/layouts/partials/hooks/head-end.html @@ -0,0 +1,3 @@ +{{ with .Site.Params.algolia_docsearch }} + +{{ end }} \ No newline at end of file diff --git a/daprdocs/layouts/partials/navbar.html b/daprdocs/layouts/partials/navbar.html new file mode 100644 index 000000000..e1dc682d4 --- /dev/null +++ b/daprdocs/layouts/partials/navbar.html @@ -0,0 +1,33 @@ +{{ $cover := .HasShortcode "blocks/cover" }} + diff --git a/daprdocs/layouts/partials/section-index.html b/daprdocs/layouts/partials/section-index.html new file mode 100644 index 000000000..a6cd834bc --- /dev/null +++ b/daprdocs/layouts/partials/section-index.html @@ -0,0 +1,33 @@ +
+ {{ $pages := (where .Site.Pages "Section" .Section).ByWeight }} + {{ if .IsHome }} + {{ $pages = .Site.Pages.ByWeight }} + {{ end }} + {{ $parent := .Page }} + {{ if $parent.Params.no_list }} + {{/* If no_list is true we don't show a list of subpages */}} + {{ else if $parent.Params.simple_list }} + {{/* If simple_list is true we show a bulleted list of subpages */}} +
+
    + {{ range $pages }} + {{ if eq .Parent $parent }} +
  • {{- .Title -}}
  • + {{ end }} + {{ end }} +
+ {{ else }} + {{/* Otherwise we show a nice formatted list of subpages with page descriptions */}} +
+ {{ range $pages }} + {{ if eq .Parent $parent }} +
+
+ {{- .Title -}} +
+

{{ .Description | markdownify }}

+
+ {{ end }} + {{ end }} + {{ end }} +
\ No newline at end of file diff --git a/daprdocs/layouts/partials/sidebar-tree.html b/daprdocs/layouts/partials/sidebar-tree.html new file mode 100644 index 000000000..6f155591c --- /dev/null +++ b/daprdocs/layouts/partials/sidebar-tree.html @@ -0,0 +1,47 @@ +{{/* We cache this partial for bigger sites and set the active class client side. */}} +{{ $shouldDelayActive := ge (len .Site.Pages) 2000 }} +
+ {{ if not .Site.Params.ui.sidebar_search_disable }} + + {{ end }} + +
+{{ define "section-tree-nav-section" }} +{{ $s := .section }} +{{ $p := .page }} +{{ $shouldDelayActive := .delayActive }} +{{ $active := eq $p.CurrentSection $s }} +{{ $show := or (not $p.Site.Params.ui.sidebar_menu_compact) ($p.IsDescendant $s) }} +{{ $sid := $s.RelPermalink | anchorize }} +
    +
  • + {{ $s.LinkTitle }} +
  • +
      +
    • + {{ $pages := where (union $s.Pages $s.Sections).ByWeight ".Params.toc_hide" "!=" true }} + {{ $pages := $pages | first 50 }} + {{ range $pages }} + {{ if and (.IsPage) (not .Params.nomenu) }} + {{ $mid := printf "m-%s" (.RelPermalink | anchorize) }} + {{ $active := eq . $p }} + {{ .LinkTitle }} + {{ else if not .Params.toc_hide }} + {{ template "section-tree-nav-section" (dict "page" $p "section" .) }} + {{ end }} + {{ end }} +
    • +
    +
+{{ end }} \ No newline at end of file diff --git a/daprdocs/layouts/partials/toc.html b/daprdocs/layouts/partials/toc.html new file mode 100644 index 000000000..f155f8b13 --- /dev/null +++ b/daprdocs/layouts/partials/toc.html @@ -0,0 +1,8 @@ +{{ partial "page-meta-links.html" . }} +{{ if not .Params.notoc }} +{{ with .TableOfContents }} +{{ if ge (len .) 200 }} +{{ . }} +{{ end }} +{{ end }} +{{ end }} \ No newline at end of file diff --git a/daprdocs/layouts/shortcodes/codetab.html b/daprdocs/layouts/shortcodes/codetab.html new file mode 100644 index 000000000..8b72c96d4 --- /dev/null +++ b/daprdocs/layouts/shortcodes/codetab.html @@ -0,0 +1,21 @@ +{{- $index := .Ordinal -}} + +{{- if ne .Parent.Name "tabs" -}} +{{- errorf "codetab must be used within a tabs block" -}} +{{- end -}} + + +{{- $guid := printf "tabs-%d" .Parent.Ordinal -}} + + +{{- $entry := .Parent.Get $index -}} +{{- $entry := lower $entry -}} + +{{- $tabid := printf "%s-%s-tab" $guid $entry | anchorize -}} +{{- $entryid := printf "%s-%s" $guid $entry | anchorize -}} + +
+
+ {{- .Inner -}} +
\ No newline at end of file diff --git a/daprdocs/layouts/shortcodes/tabs.html b/daprdocs/layouts/shortcodes/tabs.html new file mode 100644 index 000000000..dbf753387 --- /dev/null +++ b/daprdocs/layouts/shortcodes/tabs.html @@ -0,0 +1,27 @@ + +{{- $guid := printf "tabs-%d" .Ordinal -}} + +{{- .Scratch.Set "first" true -}} + + + + +
+{{ .Inner }} +
\ No newline at end of file diff --git a/daprdocs/package-lock.json b/daprdocs/package-lock.json new file mode 100644 index 000000000..f31aedfcc --- /dev/null +++ b/daprdocs/package-lock.json @@ -0,0 +1,956 @@ +{ + "name": "dapr-docs-hugo", + "version": "0.0.1", + "lockfileVersion": 1, + "requires": true, + "dependencies": { + "@nodelib/fs.scandir": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.3.tgz", + "integrity": "sha512-eGmwYQn3gxo4r7jdQnkrrN6bY478C3P+a/y72IJukF8LjB6ZHeB3c+Ehacj3sYeSmUXGlnA67/PmbM9CVwL7Dw==", + "dev": true, + "requires": { + "@nodelib/fs.stat": "2.0.3", + "run-parallel": "^1.1.9" + } + }, + "@nodelib/fs.stat": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.3.tgz", + "integrity": "sha512-bQBFruR2TAwoevBEd/NWMoAAtNGzTRgdrqnYCc7dhzfoNvqPzLyqlEQnzZ3kVnNrSp25iyxE00/3h2fqGAGArA==", + "dev": true + }, + "@nodelib/fs.walk": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.4.tgz", + "integrity": "sha512-1V9XOY4rDW0rehzbrcqAmHnz8e7SKvX27gh8Gt2WgB0+pdzdiLV83p72kZPU+jvMbS1qU5mauP2iOvO8rhmurQ==", + "dev": true, + "requires": { + "@nodelib/fs.scandir": "2.1.3", + "fastq": "^1.6.0" + } + }, + "@types/color-name": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@types/color-name/-/color-name-1.1.1.tgz", + "integrity": "sha512-rr+OQyAjxze7GgWrSaJwydHStIhHq2lvY3BOC2Mj7KnzI7XK0Uw1TOOdI9lDoajEbSWLiYgoo4f1R51erQfhPQ==", + "dev": true + }, + "ansi-regex": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz", + "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg==", + "dev": true + }, + "ansi-styles": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": 
"sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dev": true, + "requires": { + "color-convert": "^1.9.0" + } + }, + "anymatch": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.1.tgz", + "integrity": "sha512-mM8522psRCqzV+6LhomX5wgp25YVibjh8Wj23I5RPkPppSVSjyKD2A2mBJmWGa+KN7f2D6LNh9jkBCeyLktzjg==", + "dev": true, + "requires": { + "normalize-path": "^3.0.0", + "picomatch": "^2.0.4" + } + }, + "argparse": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", + "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", + "dev": true, + "requires": { + "sprintf-js": "~1.0.2" + } + }, + "array-union": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz", + "integrity": "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw==", + "dev": true + }, + "at-least-node": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz", + "integrity": "sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg==", + "dev": true + }, + "autoprefixer": { + "version": "9.8.6", + "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-9.8.6.tgz", + "integrity": "sha512-XrvP4VVHdRBCdX1S3WXVD8+RyG9qeb1D5Sn1DeLiG2xfSpzellk5k54xbUERJ3M5DggQxes39UGOTP8CFrEGbg==", + "dev": true, + "requires": { + "browserslist": "^4.12.0", + "caniuse-lite": "^1.0.30001109", + "colorette": "^1.2.1", + "normalize-range": "^0.1.2", + "num2fraction": "^1.2.2", + "postcss": "^7.0.32", + "postcss-value-parser": "^4.1.0" + } + }, + "binary-extensions": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.1.0.tgz", + "integrity": 
"sha512-1Yj8h9Q+QDF5FzhMs/c9+6UntbD5MkRfRwac8DoEm9ZfUBZ7tZ55YcGVAzEe4bXsdQHEk+s9S5wsOKVdZrw0tQ==", + "dev": true + }, + "braces": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", + "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", + "dev": true, + "requires": { + "fill-range": "^7.0.1" + } + }, + "browserslist": { + "version": "4.14.2", + "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.14.2.tgz", + "integrity": "sha512-HI4lPveGKUR0x2StIz+2FXfDk9SfVMrxn6PLh1JeGUwcuoDkdKZebWiyLRJ68iIPDpMI4JLVDf7S7XzslgWOhw==", + "dev": true, + "requires": { + "caniuse-lite": "^1.0.30001125", + "electron-to-chromium": "^1.3.564", + "escalade": "^3.0.2", + "node-releases": "^1.1.61" + } + }, + "caller-callsite": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/caller-callsite/-/caller-callsite-2.0.0.tgz", + "integrity": "sha1-hH4PzgoiN1CpoCfFSzNzGtMVQTQ=", + "dev": true, + "requires": { + "callsites": "^2.0.0" + } + }, + "caller-path": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/caller-path/-/caller-path-2.0.0.tgz", + "integrity": "sha1-Ro+DBE42mrIBD6xfBs7uFbsssfQ=", + "dev": true, + "requires": { + "caller-callsite": "^2.0.0" + } + }, + "callsites": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/callsites/-/callsites-2.0.0.tgz", + "integrity": "sha1-BuuE8A7qQT2oav/vrL/7Ngk7PFA=", + "dev": true + }, + "camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "dev": true + }, + "caniuse-lite": { + "version": "1.0.30001131", + "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001131.tgz", + "integrity": "sha512-4QYi6Mal4MMfQMSqGIRPGbKIbZygeN83QsWq1ixpUwvtfgAZot5BrCKzGygvZaV+CnELdTwD0S4cqUNozq7/Cw==", + "dev": 
true + }, + "chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "dependencies": { + "supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + } + } + }, + "chokidar": { + "version": "3.4.2", + "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.4.2.tgz", + "integrity": "sha512-IZHaDeBeI+sZJRX7lGcXsdzgvZqKv6sECqsbErJA4mHWfpRrD8B97kSFN4cQz6nGBGiuFia1MKR4d6c1o8Cv7A==", + "dev": true, + "requires": { + "anymatch": "~3.1.1", + "braces": "~3.0.2", + "fsevents": "~2.1.2", + "glob-parent": "~5.1.0", + "is-binary-path": "~2.1.0", + "is-glob": "~4.0.1", + "normalize-path": "~3.0.0", + "readdirp": "~3.4.0" + } + }, + "cliui": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-6.0.0.tgz", + "integrity": "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ==", + "dev": true, + "requires": { + "string-width": "^4.2.0", + "strip-ansi": "^6.0.0", + "wrap-ansi": "^6.2.0" + } + }, + "color-convert": { + "version": "1.9.3", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dev": true, + "requires": { + "color-name": "1.1.3" + } + }, + "color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha1-p9BVi9icQveV3UIyj3QIMcpTvCU=", + "dev": true + }, + 
"colorette": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/colorette/-/colorette-1.2.1.tgz", + "integrity": "sha512-puCDz0CzydiSYOrnXpz/PKd69zRrribezjtE9yd4zvytoRc8+RY/KJPvtPFKZS3E3wP6neGyMe0vOTlHO5L3Pw==", + "dev": true + }, + "cosmiconfig": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-5.2.1.tgz", + "integrity": "sha512-H65gsXo1SKjf8zmrJ67eJk8aIRKV5ff2D4uKZIBZShbhGSpEmsQOPW/SKMKYhSTrqR7ufy6RP69rPogdaPh/kA==", + "dev": true, + "requires": { + "import-fresh": "^2.0.0", + "is-directory": "^0.3.1", + "js-yaml": "^3.13.1", + "parse-json": "^4.0.0" + } + }, + "decamelize": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz", + "integrity": "sha1-9lNNFRSCabIDUue+4m9QH5oZEpA=", + "dev": true + }, + "dependency-graph": { + "version": "0.9.0", + "resolved": "https://registry.npmjs.org/dependency-graph/-/dependency-graph-0.9.0.tgz", + "integrity": "sha512-9YLIBURXj4DJMFALxXw9K3Y3rwb5Fk0X5/8ipCzaN84+gKxoHK43tVKRNakCQbiEx07E8Uwhuq21BpUagFhZ8w==", + "dev": true + }, + "dir-glob": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz", + "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==", + "dev": true, + "requires": { + "path-type": "^4.0.0" + } + }, + "electron-to-chromium": { + "version": "1.3.570", + "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.570.tgz", + "integrity": "sha512-Y6OCoVQgFQBP5py6A/06+yWxUZHDlNr/gNDGatjH8AZqXl8X0tE4LfjLJsXGz/JmWJz8a6K7bR1k+QzZ+k//fg==", + "dev": true + }, + "emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true + }, + "error-ex": { + "version": "1.3.2", + "resolved": 
"https://registry.npmjs.org/error-ex/-/error-ex-1.3.2.tgz", + "integrity": "sha512-7dFHNmqeFSEt2ZBsCriorKnn3Z2pj+fd9kmI6QoWw4//DL+icEBfc0U7qJCisqrTsKTjw4fNFy2pW9OqStD84g==", + "dev": true, + "requires": { + "is-arrayish": "^0.2.1" + } + }, + "escalade": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.0.tgz", + "integrity": "sha512-mAk+hPSO8fLDkhV7V0dXazH5pDc6MrjBTPyD3VeKzxnVFjH1MIxbCdqGZB9O8+EwWakZs3ZCbDS4IpRt79V1ig==", + "dev": true + }, + "escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha1-G2HAViGQqN/2rjuyzwIAyhMLhtQ=", + "dev": true + }, + "esprima": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/esprima/-/esprima-4.0.1.tgz", + "integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", + "dev": true + }, + "fast-glob": { + "version": "3.2.4", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.4.tgz", + "integrity": "sha512-kr/Oo6PX51265qeuCYsyGypiO5uJFgBS0jksyG7FUeCyQzNwYnzrNIMR1NXfkZXsMYXYLRAHgISHBz8gQcxKHQ==", + "dev": true, + "requires": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.0", + "merge2": "^1.3.0", + "micromatch": "^4.0.2", + "picomatch": "^2.2.1" + } + }, + "fastq": { + "version": "1.8.0", + "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.8.0.tgz", + "integrity": "sha512-SMIZoZdLh/fgofivvIkmknUXyPnvxRE3DhtZ5Me3Mrsk5gyPL42F0xr51TdRXskBxHfMp+07bcYzfsYEsSQA9Q==", + "dev": true, + "requires": { + "reusify": "^1.0.4" + } + }, + "fill-range": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", + "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", + "dev": true, + "requires": { + "to-regex-range": "^5.0.1" + } + }, + "find-up": { + "version": "4.1.0", + 
"resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz", + "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==", + "dev": true, + "requires": { + "locate-path": "^5.0.0", + "path-exists": "^4.0.0" + } + }, + "fs-extra": { + "version": "9.0.1", + "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.0.1.tgz", + "integrity": "sha512-h2iAoN838FqAFJY2/qVpzFXy+EBxfVE220PalAqQLDVsFOHLJrZvut5puAbCdNv6WJk+B8ihI+k0c7JK5erwqQ==", + "dev": true, + "requires": { + "at-least-node": "^1.0.0", + "graceful-fs": "^4.2.0", + "jsonfile": "^6.0.1", + "universalify": "^1.0.0" + } + }, + "fsevents": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.1.3.tgz", + "integrity": "sha512-Auw9a4AxqWpa9GUfj370BMPzzyncfBABW8Mab7BGWBYDj4Isgq+cDKtx0i6u9jcX9pQDnswsaaOTgTmA5pEjuQ==", + "dev": true, + "optional": true + }, + "get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "dev": true + }, + "get-stdin": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-8.0.0.tgz", + "integrity": "sha512-sY22aA6xchAzprjyqmSEQv4UbAAzRN0L2dQB0NlN5acTTK9Don6nhoc3eAbUnpZiCANAMfd/+40kVdKfFygohg==", + "dev": true + }, + "glob-parent": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz", + "integrity": "sha512-FnI+VGOpnlGHWZxthPGR+QhR78fuiK0sNLkHQv+bL9fQi57lNNdquIbna/WrfROrolq8GK5Ek6BiMwqL/voRYQ==", + "dev": true, + "requires": { + "is-glob": "^4.0.1" + } + }, + "globby": { + "version": "11.0.1", + "resolved": "https://registry.npmjs.org/globby/-/globby-11.0.1.tgz", + "integrity": "sha512-iH9RmgwCmUJHi2z5o2l3eTtGBtXek1OYlHrbcxOYugyHLmAsZrPj43OtHThd62Buh/Vv6VyCBD2bdyWcGNQqoQ==", + "dev": true, + "requires": { + "array-union": 
"^2.1.0", + "dir-glob": "^3.0.1", + "fast-glob": "^3.1.1", + "ignore": "^5.1.4", + "merge2": "^1.3.0", + "slash": "^3.0.0" + } + }, + "graceful-fs": { + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz", + "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw==", + "dev": true + }, + "has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha1-tdRU3CGZriJWmfNGfloH87lVuv0=", + "dev": true + }, + "ignore": { + "version": "5.1.8", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.1.8.tgz", + "integrity": "sha512-BMpfD7PpiETpBl/A6S498BaIJ6Y/ABT93ETbby2fP00v4EbvPBXWEoaR1UBPKs3iR53pJY7EtZk5KACI57i1Uw==", + "dev": true + }, + "import-cwd": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/import-cwd/-/import-cwd-2.1.0.tgz", + "integrity": "sha1-qmzzbnInYShcs3HsZRn1PiQ1sKk=", + "dev": true, + "requires": { + "import-from": "^2.1.0" + } + }, + "import-fresh": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-2.0.0.tgz", + "integrity": "sha1-2BNVwVYS04bGH53dOSLUMEgipUY=", + "dev": true, + "requires": { + "caller-path": "^2.0.0", + "resolve-from": "^3.0.0" + } + }, + "import-from": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/import-from/-/import-from-2.1.0.tgz", + "integrity": "sha1-M1238qev/VOqpHHUuAId7ja387E=", + "dev": true, + "requires": { + "resolve-from": "^3.0.0" + } + }, + "is-arrayish": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz", + "integrity": "sha1-d8mYQFJ6qOyxqLppe4BkWnqSap0=", + "dev": true + }, + "is-binary-path": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz", + "integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==", + "dev": 
true, + "requires": { + "binary-extensions": "^2.0.0" + } + }, + "is-directory": { + "version": "0.3.1", + "resolved": "https://registry.npmjs.org/is-directory/-/is-directory-0.3.1.tgz", + "integrity": "sha1-YTObbyR1/Hcv2cnYP1yFddwVSuE=", + "dev": true + }, + "is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha1-qIwCU1eR8C7TfHahueqXc8gz+MI=", + "dev": true + }, + "is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true + }, + "is-glob": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.1.tgz", + "integrity": "sha512-5G0tKtBTFImOqDnLB2hG6Bp2qcKEFduo4tZu9MT/H6NQv/ghhy30o55ufafxJ/LdH79LLs2Kfrn85TLKyA7BUg==", + "dev": true, + "requires": { + "is-extglob": "^2.1.1" + } + }, + "is-number": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "dev": true + }, + "js-yaml": { + "version": "3.14.0", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.0.tgz", + "integrity": "sha512-/4IbIeHcD9VMHFqDR/gQ7EdZdLimOvW2DdcxFjdyyZ9NsbS+ccrXqVWDtab/lRl5AlUqmpBx8EhPaWR+OtY17A==", + "dev": true, + "requires": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + } + }, + "json-parse-better-errors": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz", + "integrity": "sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw==", + "dev": true + }, + "jsonfile": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.0.1.tgz", + 
"integrity": "sha512-jR2b5v7d2vIOust+w3wtFKZIfpC2pnRmFAhAC/BuweZFQR8qZzxH1OyrQ10HmdVYiXWkYUqPVsz91cG7EL2FBg==", + "dev": true, + "requires": { + "graceful-fs": "^4.1.6", + "universalify": "^1.0.0" + } + }, + "locate-path": { + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", + "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", + "dev": true, + "requires": { + "p-locate": "^4.1.0" + } + }, + "lodash": { + "version": "4.17.20", + "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz", + "integrity": "sha512-PlhdFcillOINfeV7Ni6oF1TAEayyZBoZ8bcshTHqOYJYlrqzRK5hagpagky5o4HfCzzd1TRkXPMFq6cKk9rGmA==", + "dev": true + }, + "log-symbols": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/log-symbols/-/log-symbols-2.2.0.tgz", + "integrity": "sha512-VeIAFslyIerEJLXHziedo2basKbMKtTw3vfn5IzG0XTjhAVEJyNHnL2p7vc+wBDSdQuUpNw3M2u6xb9QsAY5Eg==", + "dev": true, + "requires": { + "chalk": "^2.0.1" + } + }, + "merge2": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", + "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", + "dev": true + }, + "micromatch": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.2.tgz", + "integrity": "sha512-y7FpHSbMUMoyPbYUSzO6PaZ6FyRnQOpHuKwbo1G+Knck95XVU4QAiKdGEnj5wwoS7PlOgthX/09u5iFJ+aYf5Q==", + "dev": true, + "requires": { + "braces": "^3.0.1", + "picomatch": "^2.0.5" + } + }, + "node-releases": { + "version": "1.1.61", + "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.61.tgz", + "integrity": "sha512-DD5vebQLg8jLCOzwupn954fbIiZht05DAZs0k2u8NStSe6h9XdsuIQL8hSRKYiU8WUQRznmSDrKGbv3ObOmC7g==", + "dev": true + }, + "normalize-path": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz", + 
"integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==", + "dev": true + }, + "normalize-range": { + "version": "0.1.2", + "resolved": "https://registry.npmjs.org/normalize-range/-/normalize-range-0.1.2.tgz", + "integrity": "sha1-LRDAa9/TEuqXd2laTShDlFa3WUI=", + "dev": true + }, + "num2fraction": { + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/num2fraction/-/num2fraction-1.2.2.tgz", + "integrity": "sha1-b2gragJ6Tp3fpFZM0lidHU5mnt4=", + "dev": true + }, + "p-limit": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", + "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", + "dev": true, + "requires": { + "p-try": "^2.0.0" + } + }, + "p-locate": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", + "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", + "dev": true, + "requires": { + "p-limit": "^2.2.0" + } + }, + "p-try": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz", + "integrity": "sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ==", + "dev": true + }, + "parse-json": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-4.0.0.tgz", + "integrity": "sha1-vjX1Qlvh9/bHRxhPmKeIy5lHfuA=", + "dev": true, + "requires": { + "error-ex": "^1.3.1", + "json-parse-better-errors": "^1.0.1" + } + }, + "path-exists": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", + "dev": true + }, + "path-type": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz", + "integrity": 
"sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==", + "dev": true + }, + "picomatch": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.2.2.tgz", + "integrity": "sha512-q0M/9eZHzmr0AulXyPwNfZjtwZ/RBZlbN3K3CErVrk50T2ASYI7Bye0EvekFY3IP1Nt2DHu0re+V2ZHIpMkuWg==", + "dev": true + }, + "pify": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/pify/-/pify-2.3.0.tgz", + "integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw=", + "dev": true + }, + "postcss": { + "version": "7.0.32", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz", + "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==", + "dev": true, + "requires": { + "chalk": "^2.4.2", + "source-map": "^0.6.1", + "supports-color": "^6.1.0" + } + }, + "postcss-cli": { + "version": "7.1.2", + "resolved": "https://registry.npmjs.org/postcss-cli/-/postcss-cli-7.1.2.tgz", + "integrity": "sha512-3mlEmN1v2NVuosMWZM2tP8bgZn7rO5PYxRRrXtdSyL5KipcgBDjJ9ct8/LKxImMCJJi3x5nYhCGFJOkGyEqXBQ==", + "dev": true, + "requires": { + "chalk": "^4.0.0", + "chokidar": "^3.3.0", + "dependency-graph": "^0.9.0", + "fs-extra": "^9.0.0", + "get-stdin": "^8.0.0", + "globby": "^11.0.0", + "postcss": "^7.0.0", + "postcss-load-config": "^2.0.0", + "postcss-reporter": "^6.0.0", + "pretty-hrtime": "^1.0.3", + "read-cache": "^1.0.0", + "yargs": "^15.0.2" + }, + "dependencies": { + "ansi-styles": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz", + "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==", + "dev": true, + "requires": { + "@types/color-name": "^1.1.1", + "color-convert": "^2.0.1" + } + }, + "chalk": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.0.tgz", + "integrity": 
"sha512-qwx12AxXe2Q5xQ43Ac//I6v5aXTipYrSESdOgzrN+9XjgEpyjpKuvSGaN4qE93f7TQTlerQQ8S+EQ0EyDoVL1A==", + "dev": true, + "requires": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + }, + "has-flag": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true + }, + "supports-color": { + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "requires": { + "has-flag": "^4.0.0" + } + } + } + }, + "postcss-load-config": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/postcss-load-config/-/postcss-load-config-2.1.0.tgz", + "integrity": "sha512-4pV3JJVPLd5+RueiVVB+gFOAa7GWc25XQcMp86Zexzke69mKf6Nx9LRcQywdz7yZI9n1udOxmLuAwTBypypF8Q==", + "dev": true, + "requires": { + "cosmiconfig": "^5.0.0", + "import-cwd": "^2.0.0" + } + }, + "postcss-reporter": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/postcss-reporter/-/postcss-reporter-6.0.1.tgz", + "integrity": "sha512-LpmQjfRWyabc+fRygxZjpRxfhRf9u/fdlKf4VHG4TSPbV2XNsuISzYW1KL+1aQzx53CAppa1bKG4APIB/DOXXw==", + "dev": true, + "requires": { + "chalk": "^2.4.1", + "lodash": "^4.17.11", + 
"log-symbols": "^2.2.0", + "postcss": "^7.0.7" + } + }, + "postcss-value-parser": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.1.0.tgz", + "integrity": "sha512-97DXOFbQJhk71ne5/Mt6cOu6yxsSfM0QGQyl0L25Gca4yGWEGJaig7l7gbCX623VqTBNGLRLaVUCnNkcedlRSQ==", + "dev": true + }, + "pretty-hrtime": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/pretty-hrtime/-/pretty-hrtime-1.0.3.tgz", + "integrity": "sha1-t+PqQkNaTJsnWdmeDyAesZWALuE=", + "dev": true + }, + "read-cache": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/read-cache/-/read-cache-1.0.0.tgz", + "integrity": "sha1-5mTvMRYRZsl1HNvo28+GtftY93Q=", + "dev": true, + "requires": { + "pify": "^2.3.0" + } + }, + "readdirp": { + "version": "3.4.0", + "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.4.0.tgz", + "integrity": "sha512-0xe001vZBnJEK+uKcj8qOhyAKPzIT+gStxWr3LCB0DwcXR5NZJ3IaC+yGnHCYzB/S7ov3m3EEbZI2zeNvX+hGQ==", + "dev": true, + "requires": { + "picomatch": "^2.2.1" + } + }, + "require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha1-jGStX9MNqxyXbiNE/+f3kqam30I=", + "dev": true + }, + "require-main-filename": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz", + "integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg==", + "dev": true + }, + "resolve-from": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-3.0.0.tgz", + "integrity": "sha1-six699nWiBvItuZTM17rywoYh0g=", + "dev": true + }, + "reusify": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", + "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", + "dev": true + }, + 
"run-parallel": { + "version": "1.1.9", + "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.1.9.tgz", + "integrity": "sha512-DEqnSRTDw/Tc3FXf49zedI638Z9onwUotBMiUFKmrO2sdFKIbXamXGQ3Axd4qgphxKB4kw/qP1w5kTxnfU1B9Q==", + "dev": true + }, + "set-blocking": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/set-blocking/-/set-blocking-2.0.0.tgz", + "integrity": "sha1-BF+XgtARrppoA93TgrJDkrPYkPc=", + "dev": true + }, + "slash": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/slash/-/slash-3.0.0.tgz", + "integrity": "sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==", + "dev": true + }, + "source-map": { + "version": "0.6.1", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", + "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", + "dev": true + }, + "sprintf-js": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", + "integrity": "sha1-BOaSb2YolTVPPdAVIDYzuFcpfiw=", + "dev": true + }, + "string-width": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz", + "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==", + "dev": true, + "requires": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.0" + } + }, + "strip-ansi": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz", + "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==", + "dev": true, + "requires": { + "ansi-regex": "^5.0.0" + } + }, + "supports-color": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz", + "integrity": 
"sha512-qe1jfm1Mg7Nq/NSh6XE24gPXROEVsWHxC1LIx//XNlD9iw7YZQGjZNjYN7xGaEG6iKdA8EtNFW6R0gjnVXp+wQ==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + }, + "to-regex-range": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "dev": true, + "requires": { + "is-number": "^7.0.0" + } + }, + "universalify": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/universalify/-/universalify-1.0.0.tgz", + "integrity": "sha512-rb6X1W158d7pRQBg5gkR8uPaSfiids68LTJQYOtEUhoJUWBdaQHsuT/EUduxXYxcrt4r5PJ4fuHW1MHT6p0qug==", + "dev": true + }, + "which-module": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/which-module/-/which-module-2.0.0.tgz", + "integrity": "sha1-2e8H3Od7mQK4o6j6SzHD4/fm6Ho=", + "dev": true + }, + "wrap-ansi": { + "version": "6.2.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz", + "integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==", + "dev": true, + "requires": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "dependencies": { + "ansi-styles": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz", + "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==", + "dev": true, + "requires": { + "@types/color-name": "^1.1.1", + "color-convert": "^2.0.1" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": 
"https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + } + } + }, + "y18n": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz", + "integrity": "sha512-r9S/ZyXu/Xu9q1tYlpsLIsa3EeLXXk0VwlxqTcFRfg9EhMW+17kbt9G0NrgCmhGb5vT2hyhJZLfDGx+7+5Uj/w==", + "dev": true + }, + "yargs": { + "version": "15.4.1", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-15.4.1.tgz", + "integrity": "sha512-aePbxDmcYW++PaqBsJ+HYUFwCdv4LVvdnhBy78E57PIor8/OVvhMrADFFEDh8DHDFRv/O9i3lPhsENjO7QX0+A==", + "dev": true, + "requires": { + "cliui": "^6.0.0", + "decamelize": "^1.2.0", + "find-up": "^4.1.0", + "get-caller-file": "^2.0.1", + "require-directory": "^2.1.1", + "require-main-filename": "^2.0.0", + "set-blocking": "^2.0.0", + "string-width": "^4.2.0", + "which-module": "^2.0.0", + "y18n": "^4.0.0", + "yargs-parser": "^18.1.2" + } + }, + "yargs-parser": { + "version": "18.1.3", + "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-18.1.3.tgz", + "integrity": "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==", + "dev": true, + "requires": { + "camelcase": "^5.0.0", + "decamelize": "^1.2.0" + } + } + } +} diff --git a/daprdocs/package.json b/daprdocs/package.json new file mode 100644 index 000000000..e5b399925 --- /dev/null +++ b/daprdocs/package.json @@ -0,0 +1,20 @@ +{ + "name": "dapr-docs-hugo", + "version": "0.0.1", + "description": "Docs site for the Dapr project", + "main": "", + "repository": { + "type": "git", + "url": "git+https://github.com/dapr/docs.git" + }, + "author": "", + "bugs": { + "url": "https://github.com/dapr/docs/issues" + }, + "homepage": "https://dapr.io", + "dependencies": {}, + "devDependencies": { + "autoprefixer": "^9.8.6", + "postcss-cli": "^7.1.2" + } +} diff --git a/concepts/security/audits/DAP-01-report.pdf 
b/daprdocs/static/docs/Dapr-july-2020-security-audit-report.pdf similarity index 100% rename from concepts/security/audits/DAP-01-report.pdf rename to daprdocs/static/docs/Dapr-july-2020-security-audit-report.pdf diff --git a/reference/dashboard/grafana/actor-dashboard.json b/daprdocs/static/docs/actor-dashboard.json similarity index 100% rename from reference/dashboard/grafana/actor-dashboard.json rename to daprdocs/static/docs/actor-dashboard.json diff --git a/howto/setup-monitoring-tools/azm-config-map.yaml b/daprdocs/static/docs/azm-config-map.yaml similarity index 100% rename from howto/setup-monitoring-tools/azm-config-map.yaml rename to daprdocs/static/docs/azm-config-map.yaml diff --git a/howto/setup-monitoring-tools/fluentd-config-map.yaml b/daprdocs/static/docs/fluentd-config-map.yaml similarity index 100% rename from howto/setup-monitoring-tools/fluentd-config-map.yaml rename to daprdocs/static/docs/fluentd-config-map.yaml diff --git a/howto/setup-monitoring-tools/fluentd-dapr-with-rbac.yaml b/daprdocs/static/docs/fluentd-dapr-with-rbac.yaml similarity index 100% rename from howto/setup-monitoring-tools/fluentd-dapr-with-rbac.yaml rename to daprdocs/static/docs/fluentd-dapr-with-rbac.yaml diff --git a/howto/diagnose-with-tracing/open-telemetry-collector/collector-component.yaml b/daprdocs/static/docs/open-telemetry-collector/collector-component.yaml similarity index 100% rename from howto/diagnose-with-tracing/open-telemetry-collector/collector-component.yaml rename to daprdocs/static/docs/open-telemetry-collector/collector-component.yaml diff --git a/howto/diagnose-with-tracing/open-telemetry-collector/open-telemetry-collector.yaml b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector.yaml similarity index 100% rename from howto/diagnose-with-tracing/open-telemetry-collector/open-telemetry-collector.yaml rename to daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector.yaml diff --git 
a/reference/dashboard/grafana/sidecar-dashboard.json b/daprdocs/static/docs/sidecar-dashboard.json similarity index 100% rename from reference/dashboard/grafana/sidecar-dashboard.json rename to daprdocs/static/docs/sidecar-dashboard.json diff --git a/reference/dashboard/grafana/system-services-dashboard.json b/daprdocs/static/docs/system-services-dashboard.json similarity index 95% rename from reference/dashboard/grafana/system-services-dashboard.json rename to daprdocs/static/docs/system-services-dashboard.json index 41525084f..e28c863aa 100644 --- a/reference/dashboard/grafana/system-services-dashboard.json +++ b/daprdocs/static/docs/system-services-dashboard.json @@ -1,1538 +1,1538 @@ -{ - "annotations": { - "list": [ - { - "builtIn": 1, - "datasource": "-- Grafana --", - "enable": true, - "hide": true, - "iconColor": "rgba(0, 211, 255, 1)", - "name": "Annotations & Alerts", - "type": "dashboard" - } - ] - }, - "editable": true, - "gnetId": null, - "graphTooltip": 0, - "id": 1, - "links": [], - "panels": [ - { - "collapsed": false, - "datasource": null, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 0 - }, - "id": 18, - "panels": [], - "title": "Health & Resource", - "type": "row" - }, - { - "cacheTimeout": null, - "datasource": "Dapr", - "gridPos": { - "h": 8, - "w": 4, - "x": 0, - "y": 1 - }, - "id": 20, - "links": [], - "options": { - "colorMode": "value", - "fieldOptions": { - "calcs": [ - "last" - ], - "defaults": { - "decimals": 1, - "mappings": [ - { - "id": 0, - "op": "=", - "text": "N/A", - "type": 1, - "value": "null" - } - ], - "nullValueMode": "connected", - "thresholds": { - "mode": "absolute", - "steps": [ - { - "color": "red", - "value": null - }, - { - "color": "green", - "value": 600 - } - ] - }, - "unit": "s" - }, - "overrides": [], - "values": false - }, - "graphMode": "area", - "justifyMode": "auto", - "orientation": "horizontal" - }, - "pluginVersion": "6.6.2", - "targets": [ - { - "expr": "time() - 
max(process_start_time_seconds{app=~\"dapr-sentry|dapr-placement|dapr-sidecar-injector|dapr-operator\"}) by (app)", - "intervalFactor": 2, - "legendFormat": "{{app}}", - "refId": "A" - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Uptime", - "type": "stat" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "Dapr", - "description": "This shows total amount of kernel and user CPU usage time.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 7, - "x": 4, - "y": 1 - }, - "hiddenSeries": false, - "id": 22, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(rate(container_cpu_usage_seconds_total{pod=~\"(dapr-sentry|dapr-sidecar-injector|dapr-placement|dapr-operator).*\"}[5m])) by (pod)", - "legendFormat": "{{pod}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Total CPU usage (kernel and user)", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "s", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "Dapr", - "description": "The amount of memory that belongs 
specifically to that process in bytes. This excludes swapped out memory pages.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 7, - "x": 11, - "y": 1 - }, - "hiddenSeries": false, - "id": 24, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "max(process_resident_memory_bytes{app=~\"(dapr-sentry|dapr-sidecar-injector|dapr-placement|dapr-operator)\"}) by (app)", - "legendFormat": "{{app}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Memory usage in bytes", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "decbytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "Dapr", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 6, - "x": 18, - "y": 1 - }, - "hiddenSeries": false, - "id": 26, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": 
"flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "max(go_goroutines{app=~\"(dapr-sentry|dapr-sidecar-injector|dapr-placement|dapr-operator)\"}) by (app)", - "legendFormat": "{{app}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Number of GO routines", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": false, - "datasource": null, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 9 - }, - "id": 14, - "panels": [], - "title": "Operator", - "type": "row" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "The total number of services created.", - "fill": 10, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 10 - }, - "hiddenSeries": false, - "id": 6, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": true, - "steppedLine": true, - "targets": [ - { - "expr": "count(dapr_operator_service_created_total) by (app_id)", - "legendFormat": "{{app_id}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": 
null, - "title": "# Services Created", - "tooltip": { - "shared": true, - "sort": 1, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "Dapr", - "description": "The total number of services deleted.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 10 - }, - "hiddenSeries": false, - "id": 4, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "dapr_operator_service_deleted_total", - "legendFormat": "{{app_id}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "# Services Deleted", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": 
false, - "datasource": null, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 18 - }, - "id": 12, - "panels": [], - "title": "Sidecar Injector", - "type": "row" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "The total number of sidecar injection requests.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 9, - "w": 12, - "x": 0, - "y": 19 - }, - "hiddenSeries": false, - "id": 8, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "dapr_injector_sidecar_injection_requests_total", - "legendFormat": "sidecars requests", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "# sidecar injection requests", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": "Dapr", - "description": "The total number of successful sidecar injection requests.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 9, - "w": 12, - "x": 12, - "y": 19 - }, - "hiddenSeries": false, - "id": 10, - "legend": { - "avg": false, - 
"current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pluginVersion": "6.6.2", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "dapr_injector_sidecar_injection_succeeded_total", - "legendFormat": "{{app_id}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "# successful sidecar injected", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": false, - "datasource": null, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 28 - }, - "id": 42, - "panels": [], - "title": "CA Sentry", - "type": "row" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "#F2495C", - "#FADE2A", - "#73BF69" - ], - "datasource": null, - "decimals": null, - "description": "", - "format": "s", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 7, - "w": 3, - "x": 0, - "y": 29 - }, - "id": 44, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - 
"nullText": null, - "options": {}, - "pluginVersion": "6.6.2", - "postfix": " left", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true, - "ymax": null, - "ymin": null - }, - "tableColumn": "", - "targets": [ - { - "expr": "min(dapr_sentry_issuercert_expiry_timestamp) - time()", - "refId": "A" - } - ], - "thresholds": "2628000, 5256000", - "timeFrom": "1m", - "timeShift": null, - "title": "Root/Issuer cert expiry", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "Certificate Signing Request ( CSR ) from Dapr runtime", - "fill": 0, - "fillGradient": 0, - "gridPos": { - "h": 7, - "w": 9, - "x": 3, - "y": 29 - }, - "hiddenSeries": false, - "id": 34, - "legend": { - "alignAsTable": true, - "avg": false, - "current": false, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 2, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [ - { - "alias": "CSR Requests", - "color": "rgb(60, 33, 166)", - "dashes": true - }, - { - "alias": "CSR Success", - "color": "#73BF69" - }, - { - "alias": "CSR Failure", - "color": "#F2495C" - } - ], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(dapr_sentry_cert_sign_request_received_total{app=\"dapr-sentry\"})", - "legendFormat": "CSR Requests", - "refId": "A" - }, - { - "expr": 
"sum(dapr_sentry_cert_sign_success_total{app=\"dapr-sentry\"})", - "instant": false, - "legendFormat": "CSR Success", - "refId": "B" - }, - { - "expr": "sum(dapr_sentry_cert_sign_failure_total{app=\"dapr-sentry\"})", - "instant": false, - "legendFormat": "CSR Failure", - "refId": "C" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Certificate Signing Requests (CSR) from Daprd", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "This chart shows the failure reason of Certificate Sign Request.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 7, - "w": 12, - "x": 12, - "y": 29 - }, - "hiddenSeries": false, - "id": 38, - "legend": { - "alignAsTable": false, - "avg": false, - "current": false, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(dapr_sentry_cert_sign_failure_total{app=\"dapr-sentry\"}) by (reason)", - "legendFormat": "{{reason}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "CSR Failures", - "tooltip": { - 
"shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "cacheTimeout": null, - "dashLength": 10, - "dashes": false, - "datasource": "Dapr", - "description": "This will be counted when issuer cert and key are changed.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 7, - "w": 12, - "x": 0, - "y": 36 - }, - "hiddenSeries": false, - "id": 36, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": false, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pluginVersion": "6.6.2", - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(dapr_sentry_issuercert_changed_total{app=\"dapr-sentry\"})", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Issuer cert and key changed total", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "none", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - 
"alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "This chart shows the reason of gRPC server TLS certificate issuance failures.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 7, - "w": 12, - "x": 12, - "y": 36 - }, - "hiddenSeries": false, - "id": 40, - "legend": { - "alignAsTable": false, - "avg": false, - "current": false, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(dapr_sentry_servercert_issue_failed_total{app=\"dapr-sentry\"}) by (reason)", - "legendFormat": "{{reason}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "Server TLS certificate issuance failures", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "collapsed": false, - "datasource": null, - "gridPos": { - "h": 1, - "w": 24, - "x": 0, - "y": 43 - }, - "id": 16, - "panels": [], - "title": "Placement", - "type": "row" - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "The total number of replicas connected to placement service.", - "fill": 1, 
- "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 44 - }, - "hiddenSeries": false, - "id": 28, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "dapr_placement_hosts_total", - "legendFormat": "hosts", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "# total replicas", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "The total number of replicas which are not hosting actors.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 44 - }, - "hiddenSeries": false, - "id": 30, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - 
"expr": "dapr_placement_nonactorhosts_total", - "legendFormat": "{{kubernetes_pod_name}}", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "# replicas not hosting actors", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "description": "The total number of actor types registered with Dapr runtime.", - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 52 - }, - "hiddenSeries": false, - "id": 32, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "dapr_placement_actortypes_total", - "legendFormat": "actor types", - "refId": "A" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "# actor types", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - 
"show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "refresh": "15m", - "schemaVersion": 22, - "style": "dark", - "tags": [], - "templating": { - "list": [] - }, - "time": { - "from": "now-1h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ] - }, - "timezone": "", - "title": "Dapr System Services Dashboard", - "uid": "RHSwiHXWk", - "version": 2 +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": "-- Grafana --", + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "id": 1, + "links": [], + "panels": [ + { + "collapsed": false, + "datasource": null, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 0 + }, + "id": 18, + "panels": [], + "title": "Health & Resource", + "type": "row" + }, + { + "cacheTimeout": null, + "datasource": "Dapr", + "gridPos": { + "h": 8, + "w": 4, + "x": 0, + "y": 1 + }, + "id": 20, + "links": [], + "options": { + "colorMode": "value", + "fieldOptions": { + "calcs": [ + "last" + ], + "defaults": { + "decimals": 1, + "mappings": [ + { + "id": 0, + "op": "=", + "text": "N/A", + "type": 1, + "value": "null" + } + ], + "nullValueMode": "connected", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "red", + "value": null + }, + { + "color": "green", + "value": 600 + } + ] + }, + "unit": "s" + }, + "overrides": [], + "values": false + }, + "graphMode": "area", + "justifyMode": "auto", + "orientation": "horizontal" + }, + "pluginVersion": "6.6.2", + "targets": [ + { + "expr": "time() - max(process_start_time_seconds{app=~\"dapr-sentry|dapr-placement|dapr-sidecar-injector|dapr-operator\"}) by (app)", + "intervalFactor": 2, + 
"legendFormat": "{{app}}", + "refId": "A" + } + ], + "timeFrom": null, + "timeShift": null, + "title": "Uptime", + "type": "stat" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "Dapr", + "description": "This shows total amount of kernel and user CPU usage time.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 7, + "x": 4, + "y": 1 + }, + "hiddenSeries": false, + "id": 22, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(rate(container_cpu_usage_seconds_total{pod=~\"(dapr-sentry|dapr-sidecar-injector|dapr-placement|dapr-operator).*\"}[5m])) by (pod)", + "legendFormat": "{{pod}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "Total CPU usage (kernel and user)", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "s", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "Dapr", + "description": "The amount of memory that belongs specifically to that process in bytes. 
This excludes swapped out memory pages.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 7, + "x": 11, + "y": 1 + }, + "hiddenSeries": false, + "id": 24, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "max(process_resident_memory_bytes{app=~\"(dapr-sentry|dapr-sidecar-injector|dapr-placement|dapr-operator)\"}) by (app)", + "legendFormat": "{{app}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "Memory usage in bytes", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "decbytes", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "Dapr", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 6, + "x": 18, + "y": 1 + }, + "hiddenSeries": false, + "id": 26, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + 
"spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "max(go_goroutines{app=~\"(dapr-sentry|dapr-sidecar-injector|dapr-placement|dapr-operator)\"}) by (app)", + "legendFormat": "{{app}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "Number of GO routines", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "collapsed": false, + "datasource": null, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 9 + }, + "id": 14, + "panels": [], + "title": "Operator", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "The total number of services created.", + "fill": 10, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 10 + }, + "hiddenSeries": false, + "id": 6, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": true, + "steppedLine": true, + "targets": [ + { + "expr": "count(dapr_operator_service_created_total) by (app_id)", + "legendFormat": "{{app_id}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# Services 
Created", + "tooltip": { + "shared": true, + "sort": 1, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "Dapr", + "description": "The total number of services deleted.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 10 + }, + "hiddenSeries": false, + "id": 4, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "dapr_operator_service_deleted_total", + "legendFormat": "{{app_id}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# Services Deleted", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "collapsed": false, + "datasource": null, + 
"gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 18 + }, + "id": 12, + "panels": [], + "title": "Sidecar Injector", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "The total number of sidecar injection requests.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 9, + "w": 12, + "x": 0, + "y": 19 + }, + "hiddenSeries": false, + "id": 8, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "dapr_injector_sidecar_injection_requests_total", + "legendFormat": "sidecars requests", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# sidecar injection requests", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": "Dapr", + "description": "The total number of successful sidecar injection requests.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 9, + "w": 12, + "x": 12, + "y": 19 + }, + "hiddenSeries": false, + "id": 10, + "legend": { + "avg": false, + "current": false, + "max": false, + 
"min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pluginVersion": "6.6.2", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "dapr_injector_sidecar_injection_succeeded_total", + "legendFormat": "{{app_id}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# successful sidecar injected", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "collapsed": false, + "datasource": null, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 28 + }, + "id": 42, + "panels": [], + "title": "CA Sentry", + "type": "row" + }, + { + "cacheTimeout": null, + "colorBackground": false, + "colorValue": true, + "colors": [ + "#F2495C", + "#FADE2A", + "#73BF69" + ], + "datasource": null, + "decimals": null, + "description": "", + "format": "s", + "gauge": { + "maxValue": 100, + "minValue": 0, + "show": false, + "thresholdLabels": false, + "thresholdMarkers": true + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 0, + "y": 29 + }, + "id": 44, + "interval": null, + "links": [], + "mappingType": 1, + "mappingTypes": [ + { + "name": "value to text", + "value": 1 + }, + { + "name": "range to text", + "value": 2 + } + ], + "maxDataPoints": 100, + "nullPointMode": "connected", + "nullText": null, + "options": {}, + 
"pluginVersion": "6.6.2", + "postfix": " left", + "postfixFontSize": "50%", + "prefix": "", + "prefixFontSize": "50%", + "rangeMaps": [ + { + "from": "null", + "text": "N/A", + "to": "null" + } + ], + "sparkline": { + "fillColor": "rgba(31, 118, 189, 0.18)", + "full": false, + "lineColor": "rgb(31, 120, 193)", + "show": true, + "ymax": null, + "ymin": null + }, + "tableColumn": "", + "targets": [ + { + "expr": "min(dapr_sentry_issuercert_expiry_timestamp) - time()", + "refId": "A" + } + ], + "thresholds": "2628000, 5256000", + "timeFrom": "1m", + "timeShift": null, + "title": "Root/Issuer cert expiry", + "type": "singlestat", + "valueFontSize": "80%", + "valueMaps": [ + { + "op": "=", + "text": "N/A", + "value": "null" + } + ], + "valueName": "current" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "Certificate Signing Request ( CSR ) from Dapr runtime", + "fill": 0, + "fillGradient": 0, + "gridPos": { + "h": 7, + "w": 9, + "x": 3, + "y": 29 + }, + "hiddenSeries": false, + "id": 34, + "legend": { + "alignAsTable": true, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 2, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [ + { + "alias": "CSR Requests", + "color": "rgb(60, 33, 166)", + "dashes": true + }, + { + "alias": "CSR Success", + "color": "#73BF69" + }, + { + "alias": "CSR Failure", + "color": "#F2495C" + } + ], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(dapr_sentry_cert_sign_request_received_total{app=\"dapr-sentry\"})", + "legendFormat": "CSR Requests", + "refId": "A" + }, + { + "expr": "sum(dapr_sentry_cert_sign_success_total{app=\"dapr-sentry\"})", + "instant": false, + 
"legendFormat": "CSR Success", + "refId": "B" + }, + { + "expr": "sum(dapr_sentry_cert_sign_failure_total{app=\"dapr-sentry\"})", + "instant": false, + "legendFormat": "CSR Failure", + "refId": "C" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "Certificate Signing Requests (CSR) from Daprd", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "This chart shows the failure reason of Certificate Sign Request.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 29 + }, + "hiddenSeries": false, + "id": 38, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(dapr_sentry_cert_sign_failure_total{app=\"dapr-sentry\"}) by (reason)", + "legendFormat": "{{reason}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "CSR Failures", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + 
"xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "cacheTimeout": null, + "dashLength": 10, + "dashes": false, + "datasource": "Dapr", + "description": "This will be counted when issuer cert and key are changed.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 7, + "w": 12, + "x": 0, + "y": 36 + }, + "hiddenSeries": false, + "id": 36, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": false, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "links": [], + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pluginVersion": "6.6.2", + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(dapr_sentry_issuercert_changed_total{app=\"dapr-sentry\"})", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "Issuer cert and key changed total", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "none", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + 
"dashes": false, + "datasource": null, + "description": "This chart shows the reason of gRPC server TLS certificate issuance failures.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 36 + }, + "hiddenSeries": false, + "id": 40, + "legend": { + "alignAsTable": false, + "avg": false, + "current": false, + "max": false, + "min": false, + "rightSide": true, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "sum(dapr_sentry_servercert_issue_failed_total{app=\"dapr-sentry\"}) by (reason)", + "legendFormat": "{{reason}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "Server TLS certificate issuance failures", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": false + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "collapsed": false, + "datasource": null, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 43 + }, + "id": 16, + "panels": [], + "title": "Placement", + "type": "row" + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "The total number of replicas connected to placement service.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 44 + }, + 
"hiddenSeries": false, + "id": 28, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "dapr_placement_hosts_total", + "legendFormat": "hosts", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# total replicas", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "The total number of replicas which are not hosting actors.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 44 + }, + "hiddenSeries": false, + "id": 30, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "dapr_placement_nonactorhosts_total", + "legendFormat": 
"{{kubernetes_pod_name}}", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# replicas not hosting actors", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + }, + { + "aliasColors": {}, + "bars": false, + "dashLength": 10, + "dashes": false, + "datasource": null, + "description": "The total number of actor types registered with Dapr runtime.", + "fill": 1, + "fillGradient": 0, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 52 + }, + "hiddenSeries": false, + "id": 32, + "legend": { + "avg": false, + "current": false, + "max": false, + "min": false, + "show": true, + "total": false, + "values": false + }, + "lines": true, + "linewidth": 1, + "nullPointMode": "null", + "options": { + "dataLinks": [] + }, + "percentage": false, + "pointradius": 2, + "points": false, + "renderer": "flot", + "seriesOverrides": [], + "spaceLength": 10, + "stack": false, + "steppedLine": false, + "targets": [ + { + "expr": "dapr_placement_actortypes_total", + "legendFormat": "actor types", + "refId": "A" + } + ], + "thresholds": [], + "timeFrom": null, + "timeRegions": [], + "timeShift": null, + "title": "# actor types", + "tooltip": { + "shared": true, + "sort": 0, + "value_type": "individual" + }, + "type": "graph", + "xaxis": { + "buckets": null, + "mode": "time", + "name": null, + "show": true, + "values": [] + }, + "yaxes": [ + { + "format": "short", + "label": null, + "logBase": 1, + "max": null, + "min": null, + "show": true + }, + { + "format": "short", + "label": null, + 
"logBase": 1, + "max": null, + "min": null, + "show": true + } + ], + "yaxis": { + "align": false, + "alignLevel": null + } + } + ], + "refresh": "15m", + "schemaVersion": 22, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": [ + "5s", + "10s", + "30s", + "1m", + "5m", + "15m", + "30m", + "1h", + "2h", + "1d" + ] + }, + "timezone": "", + "title": "Dapr System Services Dashboard", + "uid": "RHSwiHXWk", + "version": 2 } \ No newline at end of file diff --git a/daprdocs/static/favicons/android-144x144.png b/daprdocs/static/favicons/android-144x144.png new file mode 100644 index 000000000..f5b25f80d Binary files /dev/null and b/daprdocs/static/favicons/android-144x144.png differ diff --git a/daprdocs/static/favicons/android-192x192.png b/daprdocs/static/favicons/android-192x192.png new file mode 100644 index 000000000..38dc11710 Binary files /dev/null and b/daprdocs/static/favicons/android-192x192.png differ diff --git a/daprdocs/static/favicons/android-36x36.png b/daprdocs/static/favicons/android-36x36.png new file mode 100644 index 000000000..429719459 Binary files /dev/null and b/daprdocs/static/favicons/android-36x36.png differ diff --git a/daprdocs/static/favicons/android-48x48.png b/daprdocs/static/favicons/android-48x48.png new file mode 100644 index 000000000..190ae9e14 Binary files /dev/null and b/daprdocs/static/favicons/android-48x48.png differ diff --git a/daprdocs/static/favicons/android-72x72.png b/daprdocs/static/favicons/android-72x72.png new file mode 100644 index 000000000..f2ce5050a Binary files /dev/null and b/daprdocs/static/favicons/android-72x72.png differ diff --git a/daprdocs/static/favicons/android-96x196.png b/daprdocs/static/favicons/android-96x196.png new file mode 100644 index 000000000..5bb6187e5 Binary files /dev/null and b/daprdocs/static/favicons/android-96x196.png differ diff --git 
a/daprdocs/static/favicons/apple-touch-icon-180x180.png b/daprdocs/static/favicons/apple-touch-icon-180x180.png new file mode 100644 index 000000000..796a078fb Binary files /dev/null and b/daprdocs/static/favicons/apple-touch-icon-180x180.png differ diff --git a/daprdocs/static/favicons/favicon-16x16.png b/daprdocs/static/favicons/favicon-16x16.png new file mode 100644 index 000000000..cfa765169 Binary files /dev/null and b/daprdocs/static/favicons/favicon-16x16.png differ diff --git a/daprdocs/static/favicons/favicon-32x32.png b/daprdocs/static/favicons/favicon-32x32.png new file mode 100644 index 000000000..7b3bb04b5 Binary files /dev/null and b/daprdocs/static/favicons/favicon-32x32.png differ diff --git a/daprdocs/static/favicons/favicon.ico b/daprdocs/static/favicons/favicon.ico new file mode 100644 index 000000000..35c7356bf Binary files /dev/null and b/daprdocs/static/favicons/favicon.ico differ diff --git a/daprdocs/static/favicons/tile150x150.png b/daprdocs/static/favicons/tile150x150.png new file mode 100644 index 000000000..17941dae1 Binary files /dev/null and b/daprdocs/static/favicons/tile150x150.png differ diff --git a/daprdocs/static/favicons/tile310x150.png b/daprdocs/static/favicons/tile310x150.png new file mode 100644 index 000000000..74ee0c759 Binary files /dev/null and b/daprdocs/static/favicons/tile310x150.png differ diff --git a/daprdocs/static/favicons/tile310x310.png b/daprdocs/static/favicons/tile310x310.png new file mode 100644 index 000000000..8f18792ac Binary files /dev/null and b/daprdocs/static/favicons/tile310x310.png differ diff --git a/daprdocs/static/favicons/tile70x70.png b/daprdocs/static/favicons/tile70x70.png new file mode 100644 index 000000000..e90849675 Binary files /dev/null and b/daprdocs/static/favicons/tile70x70.png differ diff --git a/reference/dashboard/img/actor-dashboard.png b/daprdocs/static/images/actor-dashboard.png similarity index 100% rename from reference/dashboard/img/actor-dashboard.png rename to 
daprdocs/static/images/actor-dashboard.png diff --git a/images/actor_game_example.png b/daprdocs/static/images/actor_background_game_example.png similarity index 100% rename from images/actor_game_example.png rename to daprdocs/static/images/actor_background_game_example.png diff --git a/images/actor_pattern.png b/daprdocs/static/images/actor_pattern.png similarity index 100% rename from images/actor_pattern.png rename to daprdocs/static/images/actor_pattern.png diff --git a/images/actors_communication.png b/daprdocs/static/images/actors_background_communication.png similarity index 100% rename from images/actors_communication.png rename to daprdocs/static/images/actors_background_communication.png diff --git a/images/actors_concurrency.png b/daprdocs/static/images/actors_background_concurrency.png similarity index 100% rename from images/actors_concurrency.png rename to daprdocs/static/images/actors_background_concurrency.png diff --git a/images/actors_id_hashing_calling.png b/daprdocs/static/images/actors_background_id_hashing_calling.png similarity index 100% rename from images/actors_id_hashing_calling.png rename to daprdocs/static/images/actors_background_id_hashing_calling.png diff --git a/images/actors_placement_service_registration.png b/daprdocs/static/images/actors_background_placement_service_registration.png similarity index 100% rename from images/actors_placement_service_registration.png rename to daprdocs/static/images/actors_background_placement_service_registration.png diff --git a/images/actors_client.png b/daprdocs/static/images/actors_client.png similarity index 100% rename from images/actors_client.png rename to daprdocs/static/images/actors_client.png diff --git a/images/actors_server.png b/daprdocs/static/images/actors_server.png similarity index 100% rename from images/actors_server.png rename to daprdocs/static/images/actors_server.png diff --git a/images/alloc.png b/daprdocs/static/images/alloc.png similarity index 100% rename from 
images/alloc.png rename to daprdocs/static/images/alloc.png diff --git a/images/azure-monitor.png b/daprdocs/static/images/azure-monitor.png similarity index 100% rename from images/azure-monitor.png rename to daprdocs/static/images/azure-monitor.png diff --git a/images/building_blocks.png b/daprdocs/static/images/building_blocks.png similarity index 100% rename from images/building_blocks.png rename to daprdocs/static/images/building_blocks.png diff --git a/daprdocs/static/images/buildingblocks-overview.png b/daprdocs/static/images/buildingblocks-overview.png new file mode 100644 index 000000000..a2ae1d363 Binary files /dev/null and b/daprdocs/static/images/buildingblocks-overview.png differ diff --git a/images/concepts-building-blocks.png b/daprdocs/static/images/concepts-building-blocks.png similarity index 100% rename from images/concepts-building-blocks.png rename to daprdocs/static/images/concepts-building-blocks.png diff --git a/images/dynamic_range.png b/daprdocs/static/images/dynamic_range.png similarity index 100% rename from images/dynamic_range.png rename to daprdocs/static/images/dynamic_range.png diff --git a/howto/setup-monitoring-tools/img/grafana-add-datasources.png b/daprdocs/static/images/grafana-add-datasources.png similarity index 100% rename from howto/setup-monitoring-tools/img/grafana-add-datasources.png rename to daprdocs/static/images/grafana-add-datasources.png diff --git a/howto/setup-monitoring-tools/img/grafana-datasources.png b/daprdocs/static/images/grafana-datasources.png similarity index 100% rename from howto/setup-monitoring-tools/img/grafana-datasources.png rename to daprdocs/static/images/grafana-datasources.png diff --git a/howto/setup-monitoring-tools/img/grafana-prometheus-dapr-server-url.png b/daprdocs/static/images/grafana-prometheus-dapr-server-url.png similarity index 100% rename from howto/setup-monitoring-tools/img/grafana-prometheus-dapr-server-url.png rename to 
daprdocs/static/images/grafana-prometheus-dapr-server-url.png diff --git a/howto/setup-monitoring-tools/img/grafana-uploadjson.png b/daprdocs/static/images/grafana-uploadjson.png similarity index 100% rename from howto/setup-monitoring-tools/img/grafana-uploadjson.png rename to daprdocs/static/images/grafana-uploadjson.png diff --git a/images/heap.png b/daprdocs/static/images/heap.png similarity index 100% rename from images/heap.png rename to daprdocs/static/images/heap.png diff --git a/daprdocs/static/images/home-title.png b/daprdocs/static/images/home-title.png new file mode 100644 index 000000000..9ed4e70c7 Binary files /dev/null and b/daprdocs/static/images/home-title.png differ diff --git a/images/intellij_debug_app.png b/daprdocs/static/images/intellij_debug_app.png similarity index 100% rename from images/intellij_debug_app.png rename to daprdocs/static/images/intellij_debug_app.png diff --git a/images/intellij_debug_menu.png b/daprdocs/static/images/intellij_debug_menu.png similarity index 100% rename from images/intellij_debug_menu.png rename to daprdocs/static/images/intellij_debug_menu.png diff --git a/images/intellij_edit_run_configuration.png b/daprdocs/static/images/intellij_edit_run_configuration.png similarity index 100% rename from images/intellij_edit_run_configuration.png rename to daprdocs/static/images/intellij_edit_run_configuration.png diff --git a/images/intellij_start_dapr.png b/daprdocs/static/images/intellij_start_dapr.png similarity index 100% rename from images/intellij_start_dapr.png rename to daprdocs/static/images/intellij_start_dapr.png diff --git a/howto/setup-monitoring-tools/img/kibana-1.png b/daprdocs/static/images/kibana-1.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-1.png rename to daprdocs/static/images/kibana-1.png diff --git a/howto/setup-monitoring-tools/img/kibana-2.png b/daprdocs/static/images/kibana-2.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-2.png 
rename to daprdocs/static/images/kibana-2.png diff --git a/howto/setup-monitoring-tools/img/kibana-3.png b/daprdocs/static/images/kibana-3.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-3.png rename to daprdocs/static/images/kibana-3.png diff --git a/howto/setup-monitoring-tools/img/kibana-4.png b/daprdocs/static/images/kibana-4.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-4.png rename to daprdocs/static/images/kibana-4.png diff --git a/howto/setup-monitoring-tools/img/kibana-5.png b/daprdocs/static/images/kibana-5.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-5.png rename to daprdocs/static/images/kibana-5.png diff --git a/howto/setup-monitoring-tools/img/kibana-6.png b/daprdocs/static/images/kibana-6.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-6.png rename to daprdocs/static/images/kibana-6.png diff --git a/howto/setup-monitoring-tools/img/kibana-7.png b/daprdocs/static/images/kibana-7.png similarity index 100% rename from howto/setup-monitoring-tools/img/kibana-7.png rename to daprdocs/static/images/kibana-7.png diff --git a/images/middleware.png b/daprdocs/static/images/middleware.png similarity index 100% rename from images/middleware.png rename to daprdocs/static/images/middleware.png diff --git a/images/open-telemetry-app-insights.png b/daprdocs/static/images/open-telemetry-app-insights.png similarity index 100% rename from images/open-telemetry-app-insights.png rename to daprdocs/static/images/open-telemetry-app-insights.png diff --git a/images/overview-sidecar-kubernetes.png b/daprdocs/static/images/overview-sidecar-kubernetes.png similarity index 100% rename from images/overview-sidecar-kubernetes.png rename to daprdocs/static/images/overview-sidecar-kubernetes.png diff --git a/images/overview-sidecar.png b/daprdocs/static/images/overview-sidecar.png similarity index 100% rename from images/overview-sidecar.png rename to 
daprdocs/static/images/overview-sidecar.png diff --git a/images/overview.png b/daprdocs/static/images/overview.png similarity index 100% rename from images/overview.png rename to daprdocs/static/images/overview.png diff --git a/images/overview_kubernetes.png b/daprdocs/static/images/overview_kubernetes.png similarity index 100% rename from images/overview_kubernetes.png rename to daprdocs/static/images/overview_kubernetes.png diff --git a/images/overview_standalone.png b/daprdocs/static/images/overview_standalone.png similarity index 100% rename from images/overview_standalone.png rename to daprdocs/static/images/overview_standalone.png diff --git a/images/programming_experience.png b/daprdocs/static/images/programming_experience.png similarity index 100% rename from images/programming_experience.png rename to daprdocs/static/images/programming_experience.png diff --git a/images/sample_trace.png b/daprdocs/static/images/sample_trace.png similarity index 100% rename from images/sample_trace.png rename to daprdocs/static/images/sample_trace.png diff --git a/images/secrets_azure_aks_keyvault.png b/daprdocs/static/images/secrets-overview-azure-aks-keyvault.png similarity index 100% rename from images/secrets_azure_aks_keyvault.png rename to daprdocs/static/images/secrets-overview-azure-aks-keyvault.png diff --git a/images/secrets_cloud_stores.png b/daprdocs/static/images/secrets-overview-cloud-stores.png similarity index 100% rename from images/secrets_cloud_stores.png rename to daprdocs/static/images/secrets-overview-cloud-stores.png diff --git a/images/secrets_kubernetes_store.png b/daprdocs/static/images/secrets-overview-kubernetes-store.png similarity index 100% rename from images/secrets_kubernetes_store.png rename to daprdocs/static/images/secrets-overview-kubernetes-store.png diff --git a/images/security-mTLS-dapr-system-services.png b/daprdocs/static/images/security-mTLS-dapr-system-services.png similarity index 100% rename from 
images/security-mTLS-dapr-system-services.png rename to daprdocs/static/images/security-mTLS-dapr-system-services.png diff --git a/images/security-mTLS-sentry-kubernetes.png b/daprdocs/static/images/security-mTLS-sentry-kubernetes.png similarity index 100% rename from images/security-mTLS-sentry-kubernetes.png rename to daprdocs/static/images/security-mTLS-sentry-kubernetes.png diff --git a/images/security-mTLS-sentry-selfhosted.png b/daprdocs/static/images/security-mTLS-sentry-selfhosted.png similarity index 100% rename from images/security-mTLS-sentry-selfhosted.png rename to daprdocs/static/images/security-mTLS-sentry-selfhosted.png diff --git a/images/threat_model.png b/daprdocs/static/images/security-threat-model.png similarity index 100% rename from images/threat_model.png rename to daprdocs/static/images/security-threat-model.png diff --git a/daprdocs/static/images/service-invocation-overview-example.png b/daprdocs/static/images/service-invocation-overview-example.png new file mode 100644 index 000000000..594b7875f Binary files /dev/null and b/daprdocs/static/images/service-invocation-overview-example.png differ diff --git a/daprdocs/static/images/service-invocation-overview.png b/daprdocs/static/images/service-invocation-overview.png new file mode 100644 index 000000000..f9af4d25c Binary files /dev/null and b/daprdocs/static/images/service-invocation-overview.png differ diff --git a/reference/dashboard/img/sidecar-dashboard.png b/daprdocs/static/images/sidecar-dashboard.png similarity index 100% rename from reference/dashboard/img/sidecar-dashboard.png rename to daprdocs/static/images/sidecar-dashboard.png diff --git a/daprdocs/static/images/state-management-overview.png b/daprdocs/static/images/state-management-overview.png new file mode 100644 index 000000000..5c8e181cb Binary files /dev/null and b/daprdocs/static/images/state-management-overview.png differ diff --git a/images/state_management.png b/daprdocs/static/images/state_management.png similarity 
index 100% rename from images/state_management.png rename to daprdocs/static/images/state_management.png diff --git a/reference/dashboard/img/system-service-dashboard.png b/daprdocs/static/images/system-service-dashboard.png similarity index 100% rename from reference/dashboard/img/system-service-dashboard.png rename to daprdocs/static/images/system-service-dashboard.png diff --git a/images/tracing.png b/daprdocs/static/images/tracing.png similarity index 100% rename from images/tracing.png rename to daprdocs/static/images/tracing.png diff --git a/images/vscode_remote_containers.png b/daprdocs/static/images/vscode_remote_containers.png similarity index 100% rename from images/vscode_remote_containers.png rename to daprdocs/static/images/vscode_remote_containers.png diff --git a/images/zipkin_ui.png b/daprdocs/static/images/zipkin_ui.png similarity index 100% rename from images/zipkin_ui.png rename to daprdocs/static/images/zipkin_ui.png diff --git a/daprdocs/themes/docsy b/daprdocs/themes/docsy new file mode 160000 index 000000000..4808e0ac5 --- /dev/null +++ b/daprdocs/themes/docsy @@ -0,0 +1 @@ +Subproject commit 4808e0ac5bb2df27021401fd33b2ef2756f751c2 diff --git a/getting-started/README.md b/getting-started/README.md deleted file mode 100644 index cf7325464..000000000 --- a/getting-started/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# Getting Started - -Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, microservice stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks. - -## Core Concepts - -* **Building blocks** are a collection of components that implement distributed system capabilities, such as pub/sub, state management, resource bindings, and distributed tracing. - -* **Components** encapsulate the implementation for a building block API. 
Example implementations for the state building block may include Redis, Azure Storage, Azure Cosmos DB, and AWS DynamoDB. Many of the components are pluggable so that one implementation can be swapped out for another. - -To learn more, see [Dapr Concepts](../concepts). - -## Setup the development environment - -Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally and on Kubernetes](./environment-setup.md). - -## Next steps - -1. Once Dapr is installed, continue to the [Hello World quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world). -2. Explore additional [quickstarts](https://github.com/dapr/quickstarts) for more advanced concepts, such as service invocation, pub/sub, and state management. -3. Follow [How To guides](../howto) to understand how Dapr solves specific problems, such as creating a [rate limited app](../howto/control-concurrency). diff --git a/getting-started/environment-setup.md b/getting-started/environment-setup.md deleted file mode 100644 index 01d66f7a4..000000000 --- a/getting-started/environment-setup.md +++ /dev/null @@ -1,241 +0,0 @@ -# Environment Setup - -Dapr can be run in either self hosted or Kubernetes modes. Running Dapr runtime in self hosted mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other Dapr supported environments. For example, you can develop Dapr applications in self hosted mode and then deploy them to any Kubernetes cluster. 
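To make the self-hosted-to-Kubernetes workflow concrete, here is a sketch of the typical command flow. The app id, port, run command, and manifest path are illustrative only; `dapr init`, `dapr run`, and `dapr init -k` are covered in the sections that follow.

```bash
# Develop locally in self-hosted mode (installs binaries plus the Docker-based dev environment)
$ dapr init
# Run your app with a Dapr sidecar; app id, port, and run command are hypothetical
$ dapr run --app-id myapp --app-port 3000 -- node app.js
# Later, install Dapr into a Kubernetes cluster and deploy the same app unchanged
$ dapr init -k
$ kubectl apply -f ./deploy/myapp.yaml
```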
- -## Contents - -- [Prerequisites](#prerequisites) -- [Installing Dapr CLI](#installing-dapr-cli) -- [Installing Dapr in self-hosted mode](#installing-dapr-in-self-hosted-mode) -- [Installing Dapr on Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster) - -## Prerequisites - -By default, Dapr installs with a developer environment using Docker containers to get you started easily. However, Dapr does not depend on Docker to run (see [here](https://github.com/dapr/cli/blob/master/README.md) for instructions on installing Dapr locally without Docker using slim init). This getting started guide assumes Dapr is installed along with this developer environment. - -- Install [Docker](https://docs.docker.com/install/) - -> For Windows users, ensure that `Docker Desktop For Windows` uses Linux containers. - -## Installing Dapr CLI - -### Using script to install the latest release - -**Windows** - -Install the latest Windows Dapr CLI to `c:\dapr` and add this directory to the user PATH environment variable. - -```powershell -powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex" -``` - -**Linux** - -Install the latest Linux Dapr CLI to `/usr/local/bin` - -```bash -wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash -``` - -**MacOS** - -Install the latest macOS (darwin) Dapr CLI to `/usr/local/bin` - -```bash -curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash -``` - -Or install via [Homebrew](https://brew.sh) - -```bash -brew install dapr/tap/dapr-cli -``` - -### From the Binary Releases - -Each Dapr CLI release includes binaries for various OSes and architectures. These binaries can be downloaded and installed manually. - -1. Download the [Dapr CLI](https://github.com/dapr/cli/releases) -2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip) -3. Move it to your desired location. 
- - For Linux/MacOS - `/usr/local/bin` - - For Windows, create a directory and add this to your System PATH. For example, create a directory called `c:\dapr` and add this directory to your path by editing your system environment variable. - -## Installing Dapr in self-hosted mode - -### Initialize Dapr using the CLI - -By default, during initialization the Dapr CLI installs the Dapr binaries and sets up a developer environment to help you get started easily with Dapr. This environment uses Docker containers, therefore Docker is listed as a prerequisite. - ->If you prefer to run Dapr without this environment and no dependency on Docker, see the CLI documentation for usage of the `--slim` flag with the init CLI command [here](https://github.com/dapr/cli/blob/master/README.md). Note, if you are a new user, it is strongly recommended to install Docker and use the regular init command. - -> For Linux users, if you run your Docker commands with sudo or the install path is `/usr/local/bin` (the default install path), you need to use "**sudo dapr init**" -> For Windows users, make sure that you run the cmd terminal in administrator mode -> **Note:** See [Dapr CLI](https://github.com/dapr/cli) for details on the usage of the Dapr CLI - -```bash -$ dapr init -⌛ Making the jump to hyperspace... -Downloading binaries and setting up components -✅ Success! Dapr is up and running. To get started, go here: https://aka.ms/dapr-getting-started -``` - -To verify that Dapr has been installed successfully, run the `docker ps` command from a command prompt and check that containers based on the `daprio/dapr:latest` and `redis` images are both running. - -### Install a specific runtime version - -You can install or upgrade to a specific version of the Dapr runtime using `dapr init --runtime-version`. You can find the list of versions in [Dapr Releases](https://github.com/dapr/dapr/releases). 
- -```bash -# Install v0.1.0 runtime -$ dapr init --runtime-version 0.1.0 - -# Check the versions of cli and runtime -$ dapr --version -cli version: v0.1.0 -runtime version: v0.1.0 -``` - -### Uninstall Dapr in self-hosted mode - -Uninstalling removes the Placement service container or the Placement service binary. - -```bash -$ dapr uninstall -``` - -> For Linux users, if you run your Docker commands with sudo or the install path is `/usr/local/bin` (the default install path), you need to use "**sudo dapr uninstall**" to remove dapr binaries and/or the containers. - -It won't remove the Redis or Zipkin containers by default, in case you were using them for other purposes. To remove the Redis, Zipkin and actor Placement containers, as well as the default Dapr directory located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\`, run: - -```bash -$ dapr uninstall --all -``` - -**You should always run `dapr uninstall` before running another `dapr init`.** - -## Installing Dapr on a Kubernetes cluster - -When setting up Kubernetes, you can do this either via the Dapr CLI or Helm. - -Dapr installs the following pods: - -* dapr-operator: Manages component updates and Kubernetes service endpoints for Dapr (state stores, pub-subs, etc.) -* dapr-sidecar-injector: Injects Dapr into annotated deployment pods -* dapr-placement: Used for actors only. Creates mapping tables that map actor instances to pods -* dapr-sentry: Manages mTLS between services and acts as a certificate authority - -### Setup Cluster - -You can install Dapr on any Kubernetes cluster. 
Here are some helpful links: - -- [Setup Minikube Cluster](./cluster/setup-minikube.md) -- [Setup Azure Kubernetes Service Cluster](./cluster/setup-aks.md) -- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart) -- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) - -> **Note:** Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes, but most users should not need to. -> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](../howto/windows-k8s/) - -### Using the Dapr CLI - -You can install Dapr to a Kubernetes cluster using the CLI. - -#### Install Dapr to Kubernetes - -*Note: The default namespace is dapr-system* - -``` -$ dapr init -k - -⌛ Making the jump to hyperspace... -ℹ️ Note: To install Dapr using Helm, see here: https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md#using-helm-advanced - -✅ Deploying the Dapr control plane to your cluster... -✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started -``` - -Install to a custom namespace: - -``` -dapr init -k -n mynamespace -``` - -Install in highly available mode: - -``` -dapr init -k --enable-ha=true -``` - -Disable mTLS: - -``` -dapr init -k --enable-mtls=false -``` - -#### Uninstall Dapr on Kubernetes - -```bash -$ dapr uninstall --kubernetes -``` - -### Using Helm (Advanced) - -You can install Dapr to a Kubernetes cluster using a Helm 3 chart. - -> **Note:** The latest Dapr Helm chart no longer supports Helm v2. Please migrate from Helm v2 to Helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/). - -#### Install Dapr to Kubernetes - -1. Make sure Helm 3 is installed on your machine - -2. 
Add the Dapr Helm repo - -```bash -helm repo add dapr https://dapr.github.io/helm-charts/ -helm repo update -``` - -3. Create the `dapr-system` namespace on your Kubernetes cluster - -```bash -kubectl create namespace dapr-system -``` - -4. Install the Dapr chart on your cluster in the `dapr-system` namespace. - -```bash -helm install dapr dapr/dapr --namespace dapr-system -``` - -#### Verify installation - -Once the chart installation is complete, verify that the dapr-dashboard, dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace: - -```bash -$ kubectl get pods -n dapr-system -w - -NAME READY STATUS RESTARTS AGE -dapr-dashboard-757c6999fb-f2rw5 1/1 Running 0 40s -dapr-operator-7bd6cbf5bf-xglsr 1/1 Running 0 40s -dapr-placement-7f8f76778f-6vhl2 1/1 Running 0 40s -dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s -dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s -``` - -#### Sidecar annotations - -To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this](../howto/configure-k8s/README.md) how-to guide. - -#### Uninstall Dapr on Kubernetes - -Helm 3 - -```bash -helm uninstall dapr -n dapr-system -``` - -> **Note:** See [here](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr Helm charts. - -### Installing Redis on Kubernetes -To install Redis as a state store or as a pub/sub message bus into your Kubernetes cluster, see [Configure Redis for state management or pub/sub](../howto/configure-redis/readme.md) diff --git a/howto/README.md b/howto/README.md deleted file mode 100644 index 8e4318712..000000000 --- a/howto/README.md +++ /dev/null @@ -1,111 +0,0 @@ -# How Tos - -Here you'll find a list of "How To" guides that walk you through accomplishing specific tasks. 
- -## Contents -- [Service invocation](#service-invocation) -- [State management](#state-management) -- [Pub/Sub](#pubsub) -- [Bindings](#bindings-and-triggers) -- [Actors](#actors) -- [Observability](#observability) -- [Security](#security) -- [Middleware](#middleware) -- [Components](#components) -- [Hosting platforms](#hosting-platforms) -- [Developer tooling](#developer-tooling) - -## Service invocation - -* [Invoke other services in your cluster or environment](./invoke-and-discover-services) -* [Create a gRPC enabled app, and invoke Dapr over gRPC](./create-grpc-app) - -## State Management - -* [Setup a state store](./setup-state-store) -* [Configuring Redis for state management ](./configure-redis) -* [Create a service that performs stateful CRUD operations](./create-stateful-service) -* [Query the underlying state store](./query-state-store) -* [Create a stateful, replicated service with different consistency/concurrency levels](./stateful-replicated-service) -* [Control your app's throttling using rate limiting features](./control-concurrency) - -## Pub/Sub - -* [Setup a Pub/Sub component](./setup-pub-sub-message-broker) -* [Configuring Redis for pub/sub](./configure-redis) -* [Use Pub/Sub to publish messages to a given topic](./publish-topic) -* [Use Pub/Sub to consume events from a topic](./consume-topic) -* [Use Pub/Sub across multiple namespaces](./pubsub-namespaces) -* [Limit the Pub/Sub topics used or scope them to one or more applications](./pubsub-scopes) - -## Bindings and Triggers -* [Implementing a new binding](https://github.com/dapr/docs/tree/master/reference/specs/bindings) -* [Trigger a service from different resources with input bindings](./trigger-app-with-input-binding) -* [Invoke different resources using output bindings](./send-events-with-output-bindings) - -## Actors -For Actors How Tos see the SDK documentation -* [.NET Actors](https://github.com/dapr/dotnet-sdk/blob/master/docs/get-started-dapr-actor.md) -* [Java 
Actors](https://github.com/dapr/java-sdk) - -## Observability - -### Metrics and Logs - -* [Set up Azure Monitor to search logs and collect metrics for Dapr](./setup-monitoring-tools/setup-azure-monitor.md) -* [Set up Fluentd, Elasticsearch, and Kibana in Kubernetes](./setup-monitoring-tools/setup-fluentd-es-kibana.md) -* [Set up Prometheus and Grafana in Kubernetes](./setup-monitoring-tools/setup-prometheus-grafana.md) -* [Observe metrics with Grafana](./setup-monitoring-tools/observe-metrics-with-grafana.md) - -### Distributed Tracing - -* [Diagnose your services with distributed tracing](./diagnose-with-tracing) -* [Use W3C Trace Context](./use-w3c-tracecontext) - -## Security - -### Dapr APIs Authentication - -* [Enable Dapr APIs token-based authentication](./enable-dapr-api-token-based-authentication) - -### Mutual Transport Layer Security (mTLS) - -* [Set up and configure mutual TLS between Dapr instances](./configure-mtls) - -### Secrets - -* [Configure component secrets using Dapr secret stores](./setup-secret-store) -* [Using the Secrets API to get application secrets](./get-secrets) - -## Middleware - -* [Configure API authorization with OAuth](./authorization-with-oauth) -* [Apply Open Policy Agent Policies](./policies-with-opa) - -## Components - -* [Limit components for one or more applications using scopes](./components-scopes) - -## Hosting Platforms -### Kubernetes Configuration - -* [Production deployment and upgrade guidelines](./deploy-k8s-prod) -* [Sidecar configuration on Kubernetes](./configure-k8s) -* [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda) -* [Deploy to hybrid Linux/Windows Kubernetes clusters](./windows-k8s) - -## Developer tooling -### Using Visual Studio Code - -* [Using Remote Containers for application development](./vscode-remote-containers) -* [Developing and debugging Dapr applications](./vscode-debugging-daprd) - -* [Set up a development environment for Dapr runtime development 
](https://github.com/dapr/dapr/blob/master/docs/development/setup-dapr-development-using-vscode.md) - -### Using IntelliJ - -* [Developing and debugging with daprd](./intellij-debugging-daprd) - -### SDKs - -* [Serialization in Dapr's SDKs](./serialize) diff --git a/howto/configure-k8s/README.md b/howto/configure-k8s/README.md deleted file mode 100644 index 72acc5644..000000000 --- a/howto/configure-k8s/README.md +++ /dev/null @@ -1,32 +0,0 @@ -# Configuring the Dapr sidecar on Kubernetes - -On Kubernetes, Dapr uses a sidecar injector pod that automatically injects the Dapr sidecar container into a pod that has the correct annotations. -The sidecar injector is an implementation of a Kubernetes [Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/). - -The following table shows all the pod spec annotations supported by Dapr. - -| Annotation | Description -| ----------------------------------- | -------------- | -| `dapr.io/enabled` | Setting this parameter to `true` injects the Dapr sidecar into the pod -| `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on -| `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID -| `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` -| `dapr.io/config` | Tells Dapr which Configuration CRD to use -| `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` -| `dapr.io/enable-profiling` | Setting this parameter to `true` starts the Dapr profiling server on port `7777`. Default is `false` -| `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http` -| `dapr.io/app-max-concurrency` | Limit the concurrency of your application. 
A valid value is any number larger than `0` -| `dapr.io/app-ssl` | Tells Dapr to invoke the app over an insecure SSL connection. Applies to both HTTP and gRPC. Default is `false`. -| `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` -| `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set -| `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set -| `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set -| `dapr.io/sidecar-memory-request` | Amount of Memory that the Dapr sidecar requests .See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set -| `dapr.io/sidecar-liveness-probe-delay-seconds` | Number of seconds after the sidecar container has started before liveness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` -| `dapr.io/sidecar-liveness-probe-timeout-seconds` | Number of seconds after which the sidecar liveness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` -| `dapr.io/sidecar-liveness-probe-period-seconds` | How often (in seconds) to perform the sidecar liveness probe. 
Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6` -| `dapr.io/sidecar-liveness-probe-threshold` | When the sidecar liveness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unhealthy. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` -| `dapr.io/sidecar-readiness-probe-delay-seconds` | Number of seconds after the sidecar container has started before readiness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` -| `dapr.io/sidecar-readiness-probe-timeout-seconds` | Number of seconds after which the sidecar readiness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` -| `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6` -| `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). 
Default is `3` diff --git a/howto/configure-redis/README.md b/howto/configure-redis/README.md deleted file mode 100644 index 93a721fb7..000000000 --- a/howto/configure-redis/README.md +++ /dev/null @@ -1,208 +0,0 @@ -# Configure Redis for state management or pub/sub - -Dapr can use Redis in two ways: - -1. As a state store component (state.redis) for persistence and restoration -2. As a pub/sub component (pubsub.redis) for async-style message delivery - -- [Option 1: Creating a Redis Cache in your Kubernetes cluster using Helm](#Option-1:-creating-a-Redis-Cache-in-your-Kubernetes-Cluster-using-Helm) -- [Option 2: Creating an Azure Cache for Redis service](#Option-2:-Creating-an-Azure-Cache-for-Redis-service) -- [Configuration](#configuration) - - -## Creating a Redis store - -Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [configuration](#configuration) section. - -### Option 1: Creating a Redis Cache in your Kubernetes Cluster using Helm - -We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [installing Helm v3](https://github.com/helm/helm#install). - -1. Install Redis into your cluster: - - ```bash - helm repo add bitnami https://charts.bitnami.com/bitnami - helm repo update - helm install redis bitnami/redis - ``` - - > Note that you need a Redis version greater than 5, which is what Dapr's pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), a lower version can be used. - -2. Run `kubectl get pods` to see the Redis containers now running in your cluster, or watch the rollout status: - - ```bash - kubectl rollout status statefulset.apps/redis-master - kubectl rollout status statefulset.apps/redis-slave - ``` - -3. Add `redis-master.default.svc.cluster.local:6379` as the `redisHost` in your [redis.yaml](#configuration) file. 
For example: - - ```yaml - metadata: - - name: redisHost - value: redis-master.default.svc.cluster.local:6379 - ``` - -4. Next, we'll get our Redis password using a `secretKeyRef` to a Kubernetes secret that was configured in your cluster when Redis was installed. You can see the name of the secret key with `kubectl describe secret redis`. - -Add `redis` with the key `redis-password` as the `redisPassword` secretKeyRef in your [redis.yaml](#configuration) file. For example: - - ```yaml - - name: redisPassword - secretKeyRef: - name: redis - key: redis-password - ``` -That's it! Now go to the [Configuration](#configuration) section. - -5. (Alternative) It's **not recommended**, but you can use a hard-coded password instead of a secretKeyRef. First we'll get the Redis password, which is slightly different depending on the OS you're using: - - - **Windows**: Run the commands below: - ```powershell - # Create a file with your encoded password. - kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64 - # Decode it into a text file called `password.txt`. - certutil -decode encoded.b64 password.txt - # Copy the password and delete the two files. - ``` - - - **Windows**: If you are using PowerShell, it is even easier: - ```powershell - PS C:\> $base64pwd=kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" - PS C:\> $redispassword=[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64pwd)) - PS C:\> $base64pwd="" - PS C:\> $redispassword - ``` - - **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the output password. - - Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. 
For example: - - ```yaml - metadata: - - name: redisPassword - value: lhDOkwTlp0 - ``` - > **Note:** The above example uses a secret in plain text; follow [these instructions](https://github.com/dapr/docs/blob/master/concepts/secrets/) to configure secrets securely in production. - -### Option 2: Creating an Azure Cache for Redis service - -> **Note**: This approach requires having an Azure Subscription. - -1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary. -1. Fill out the necessary information. -1. Click "Create" to kick off deployment of your Redis instance. -1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key. -1. We need the hostname of your Redis instance, which we can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`. -1. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). - - As the connection to Azure is encrypted, make sure to add the following block to the `metadata` section of your `redis.yaml` file: - - ```yaml - metadata: - - name: enableTLS - value: "true" - ``` - -> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0 and aren't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence. 
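Putting the Azure steps above together, a complete `redis.yaml` sketch for a state store backed by Azure Cache for Redis might look like the following (the host and key values are placeholders you must replace with your own instance's values):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: xxxxxx.redis.cache.windows.net:6380   # from the "Overview" page
  - name: redisPassword
    value: <your-access-key>                     # from "Access Keys" under "Settings"
  - name: enableTLS
    value: "true"                                # Azure connections are encrypted
```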
- -### Other options to create a Redis database - -- [AWS Redis](https://aws.amazon.com/redis/) -- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) - -## Configuration - -Dapr can use Redis as a `statestore` component for state persistence (`state.redis`) or as a `pubsub` component (`pubsub.redis`). The following YAML files demonstrate how to define each component using either a secretKey reference (which is preferred) or a plain text password. **Note:** In a production-grade application, follow the [secret management](../../concepts/secrets/README.md) instructions to securely manage your secrets. - -### Configuring Redis for state persistence using a secret key reference (preferred) - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: - - name: redisPassword - secretKeyRef: - name: redis - key: redis-password -``` - -### Configuring Redis for Pub/Sub using a secret key reference (preferred) - -Create a file called redis-pubsub.yaml, and paste the following: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub.redis - metadata: - - name: redisHost - value: - - name: redisPassword - secretKeyRef: - name: redis - key: redis-password -``` - -### Configuring Redis for state persistence using a hard-coded password (not recommended) - -Create a file called redis-state.yaml, and paste the following: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: - - name: redisPassword - value: -``` - -### Configuring Redis for Pub/Sub using a hard-coded password (not recommended) - -Create a file called redis-pubsub.yaml, and paste the following: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - 
 type: pubsub.redis - metadata: - - name: redisHost - value: - - name: redisPassword - value: - -## Apply the configuration - -### Kubernetes - -```bash -kubectl apply -f redis-state.yaml -kubectl apply -f redis-pubsub.yaml -``` - -### Self-hosted mode - -By default, the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a `components` directory containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. - -If you initialized Dapr using `dapr init --slim`, the Dapr CLI did not create a Redis instance or a default configuration file for it. Follow [these instructions](#Creating-a-Redis-Store) to create a Redis store. Create the `redis.yaml` following the configuration [instructions](#Configuration) in a `components` directory and provide the path to the `dapr run` command with the flag `--components-path`. diff --git a/howto/consume-topic/README.md b/howto/consume-topic/README.md deleted file mode 100644 index e5f6dd5f6..000000000 --- a/howto/consume-topic/README.md +++ /dev/null @@ -1,146 +0,0 @@ -# Use Pub/Sub to consume messages from topics - -Pub/Sub is a very common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging. -Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers. - -Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics. -Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc. - -Watch this [video](https://www.youtube.com/watch?v=NLWukkHEwGA&feature=youtu.be&t=1052) on how to consume messages from topics. - -## Set up the Pub/Sub component - -The first step is to set up the Pub/Sub component. 
-For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`. - -*Note: When running Dapr locally, a pub/sub component YAML is automatically created for you locally. To override, create a `components` directory containing the file and use the flag `--components-path` with the `dapr run` CLI command.* - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub.redis - metadata: - - name: redisHost - value: localhost:6379 - - name: redisPassword - value: "" -``` - -To deploy this into a Kubernetes cluster, fill in the `metadata` connection details in the YAML, and run `kubectl apply -f pubsub.yaml`. - -## Subscribe to topics - -Dapr allows two methods by which you can subscribe to topics: programmatically, where subscriptions are defined in user code, and declaratively, where subscriptions are defined in an external file. - -### Declarative subscriptions - -You can subscribe to a topic using the following Custom Resource Definition (CRD): - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Subscription -metadata: - name: myevent-subscription -spec: - topic: newOrder - route: /orders - pubsubname: kafka -scopes: -- app1 -- app2 -``` - -The example above shows an event subscription to the topic `newOrder`, for the pubsub component `kafka`. -The `route` field tells Dapr to send all topic messages to the `/orders` endpoint in the app. - -The `scopes` field enables this subscription for apps with IDs `app1` and `app2`. 
- -An example of a Node.js app that receives events from the subscription: - -```javascript -const express = require('express') -const bodyParser = require('body-parser') -const app = express() -app.use(bodyParser.json()) - -const port = 3000 - -app.post('/orders', (req, res) => { - res.sendStatus(200); -}); - -app.listen(port, () => console.log(`consumer app listening on port ${port}!`)) -``` - -#### Subscribing on Kubernetes - -In Kubernetes, save the CRD to a file and apply it to the cluster: - -``` -kubectl apply -f subscription.yaml -``` - -#### Subscribing in self-hosted mode - -When running Dapr in self-hosted mode, either locally or on a VM, put the CRD in your `./components` directory. -When Dapr starts up, it loads subscriptions along with components. - -The following example shows how to point the Dapr CLI to a components path: - -``` -dapr run --app-id myapp --components-path ./myComponents -- python3 myapp.py -``` - -*Note: By default, Dapr loads components from $HOME/.dapr/components on MacOS/Linux and %USERPROFILE%\.dapr\components on Windows. If you place the subscription in a custom components path, make sure the Pub/Sub component is also present.* - -### Programmatic subscriptions - -To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`. -The Dapr instance will call into your app and expect a JSON response for the topic subscriptions. - -*Note: The following example is written in Node.js, but can be in any programming language* - 
-```javascript
-const express = require('express')
-const bodyParser = require('body-parser')
-const app = express()
-app.use(bodyParser.json())
-
-const port = 3000
-
-app.get('/dapr/subscribe', (req, res) => {
-    res.json([
-        {
-            pubsubname: "pubsub",
-            topic: "newOrder",
-            route: "orders"
-        }
-    ]);
-})
-
-app.post('/orders', (req, res) => {
-    res.sendStatus(200);
-});
-
-app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
-```
- -In the payload returned to Dapr, `topic` tells Dapr which topic to subscribe to, `route` tells Dapr which endpoint to call when a message arrives on that topic, and `pubsubname` tells Dapr which pub/sub component it should use. In this example this is `pubsub`, as this is the name of the component we outlined above. - -The `/orders` endpoint matches the `route` defined in the subscription, and this is where Dapr sends all topic messages. - -### ACK-ing a message - -In order to tell Dapr that a message was processed successfully, return a `200 OK` response: - -```javascript -res.status(200).send() -``` - -### Schedule a message for redelivery - -If Dapr receives any return status code other than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics. diff --git a/howto/create-stateful-service/README.md b/howto/create-stateful-service/README.md deleted file mode 100644 index 873fcc4ac..000000000 --- a/howto/create-stateful-service/README.md +++ /dev/null @@ -1,64 +0,0 @@ -# Create a stateful service - -State management is one of the most common needs of any application: new or legacy, monolith or microservice. -Dealing with different database libraries, testing them, handling retries and faults can be time-consuming and hard. - -Dapr provides state management capabilities that include consistency and concurrency options. -In this guide we'll start off with the basics: using the key/value state API to allow an application to save, get and delete state. - -## 1. Set up a state store - -A state store component represents a resource that Dapr uses to communicate with a database. -For the purpose of this how-to, we'll use a Redis state store. 
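As a reference sketch, a minimal Redis state store component for self-hosted use might look like the following (the host and password values are placeholders for your own instance):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```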
- -See a list of supported state stores [here](../setup-state-store/supported-state-stores.md). - -### Using the Dapr CLI - -When using `dapr init` in standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML when running your app with `dapr run`. -To change the state store being used, replace the YAML under `/components` with the file of your choice. - -### Kubernetes - -See the instructions [here](../setup-state-store) on how to set up different state stores on Kubernetes. - -## 2. Save state - -The following example shows how to save two key/value pairs in a single call using the state management API, both of which are saved under the single `key1` name over HTTP. - -*The following example is written in Python, but is applicable to any programming language* - -```python -import requests - -stateReq = [{ "key": "k1", "value": "Some Data"}, { "key": "k2", "value": "Some More Data"}] -response = requests.post("http://localhost:3500/v1.0/state/key1", json=stateReq) -``` - -## 3. Get state - -The following example shows how to get an item by using a key with the state management API over HTTP: - -*The following example is written in Python, but is applicable to any programming language* - -```python -import requests - -response = requests.get("http://localhost:3500/v1.0/state/key1") -print(response.text) -``` - -## 4. 
Delete state - -The following example shows how to delete an item by using a key with the state management API over HTTP: - -*The following example is written in Python, but is applicable to any programming language* - -```python -import requests - -response = requests.delete("http://localhost:3500/v1.0/state/key1") -``` diff --git a/howto/diagnose-with-tracing/open-telemetry-collector.md b/howto/diagnose-with-tracing/open-telemetry-collector.md deleted file mode 100644 index 9d5792188..000000000 --- a/howto/diagnose-with-tracing/open-telemetry-collector.md +++ /dev/null @@ -1,124 +0,0 @@ -# Using OpenTelemetry Collector to collect traces - -Dapr can integrate with [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) using the OpenCensus API. This guide walks through an example that uses Dapr to push trace events to Azure Application Insights through the OpenTelemetry Collector. - -## Requirements -An installation of Dapr on Kubernetes. - -## How to configure distributed tracing with Application Insights - -### Setup Application Insights - -1. First, you'll need an Azure account. See instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account. -2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource. -3. Get the Application Insights Instrumentation key from your Application Insights page. - -### Run OpenTelemetry Collector to push to your Application Insights instance - -First, save your Application Insights Instrumentation Key in an environment variable: -``` -export APP_INSIGHTS_KEY= -``` - -Next, install the OpenTelemetry Collector to your Kubernetes cluster to push events to your Application Insights instance. - -1. 
Check out the file [open-telemetry-collector.yaml](open-telemetry-collector/open-telemetry-collector.yaml), and replace the `` placeholder with your APP_INSIGHTS_KEY. Or you can run the following: - -``` -# Download the file -wget https://raw.githubusercontent.com/dapr/docs/master/howto/diagnose-with-tracing/open-telemetry-collector/open-telemetry-collector.yaml - -# Update the instrumentation key. -sed -i '' "s//$APP_INSIGHTS_KEY/g" open-telemetry-collector.yaml -``` - -2. Apply the configuration with `kubectl apply -f open-telemetry-collector.yaml`. - -Next, set up both a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector. - -1. Create a collector-component.yaml file with this [content](open-telemetry-collector/collector-component.yaml): - -``` -wget https://raw.githubusercontent.com/dapr/docs/master/howto/diagnose-with-tracing/open-telemetry-collector/collector-component.yaml -``` - -2. Apply the configuration with `kubectl apply -f collector-component.yaml`. - -### Deploy your app with tracing - -When running in Kubernetes mode, apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example: - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - ... -spec: - ... - template: - metadata: - ... - annotations: - dapr.io/enabled: "true" - dapr.io/app-id: "MyApp" - dapr.io/app-port: "8080" - dapr.io/config: "appconfig" -``` - -Some of the quickstarts such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/distributed-calculator) already configure these settings, so if you are using those no additional settings are needed. - -That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you. 
- -> **NOTE**: You can register multiple tracing exporters at the same time, and -> the tracing logs are forwarded to all registered exporters. - -Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below: - -![Application map](../../images/open-telemetry-app-insights.png) - -> **NOTE**: Only operations going through the Dapr API exposed by the Dapr -> sidecar (e.g. service invocation or event publishing) are -> displayed in the Application Map topology. - -## Tracing configuration - -The `tracing` section under the `Configuration` spec contains the following properties: - -```yml -tracing: - samplingRate: "1" -``` - -The following table lists the different properties. - -Property | Type | Description -------------- | ------ | ----------- -samplingRate | string | Set sampling rate for tracing to be enabled or disabled. - - -`samplingRate` is used to enable or disable tracing. To disable tracing, set `samplingRate: "0"` in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span should be sampled or not based on its value. `samplingRate: "1"` will always sample the traces. By default, the sampling rate is 1 in 10,000. diff --git a/howto/publish-topic/README.md b/howto/publish-topic/README.md deleted file mode 100644 index 1ec13be74..000000000 --- a/howto/publish-topic/README.md +++ /dev/null @@ -1,50 +0,0 @@ -# Use Pub/Sub to publish a message to a topic - -Pub/Sub is a common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging. -Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers. - -Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics. 
-Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc. - -## Set up the Pub/Sub component - -The first step is to set up the Pub/Sub component. -For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`. - -*Note: When running Dapr locally, a pub/sub component YAML is automatically created for you locally. To override, create a `components` directory containing the file and use the flag `--components-path` with the `dapr run` CLI command.* - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub-name - namespace: default -spec: - type: pubsub.redis - metadata: - - name: redisHost - value: localhost:6379 - - name: redisPassword - value: "" - - name: allowedTopics - value: "deathStarStatus" -``` - -Using `allowedTopics`, you can specify that only the `deathStarStatus` topic should be supported. - -To deploy this into a Kubernetes cluster, fill in the `metadata` connection details in the YAML, and run `kubectl apply -f pubsub.yaml`. - -## Publish a topic - -To publish a message to a topic, invoke the following endpoint on a Dapr instance: - -```bash -curl -X POST http://localhost:3500/v1.0/publish/pubsub-name/deathStarStatus \ - -H "Content-Type: application/json" \ - -d '{ - "status": "completed" - }' -``` - -The above example publishes a JSON payload to the `deathStarStatus` topic. -Dapr wraps the user payload in a Cloud Events v1.0 compliant envelope. 
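As an illustrative sketch of what that wrapping means for a subscriber (the exact field values below, such as the `type` and `source` strings, are assumptions rather than output captured from a real sidecar), the envelope carries the required CloudEvents v1.0 attributes alongside the original payload:

```python
import json
import uuid

# Sketch of a CloudEvents v1.0 style envelope similar to what a pub/sub
# runtime produces; "type", "source", and the Dapr-specific fields
# ("topic", "pubsubname") are illustrative assumptions.
envelope = {
    "specversion": "1.0",                  # required by CloudEvents v1.0
    "id": str(uuid.uuid4()),               # unique event id
    "source": "publisher-app",             # the publishing app's id
    "type": "com.dapr.event.sent",         # event type
    "datacontenttype": "application/json",
    "topic": "deathStarStatus",
    "pubsubname": "pubsub-name",
    "data": {"status": "completed"},       # the original user payload
}

# A subscriber receives the envelope as the POST body and reads the payload:
payload = envelope["data"]
print(json.dumps(payload))  # → {"status": "completed"}
```

The practical consequence is that subscribers should read the business data from the `data` field rather than treating the whole request body as the published message.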
\ No newline at end of file diff --git a/howto/pubsub-scopes/README.md b/howto/pubsub-scopes/README.md deleted file mode 100644 index fae554e5c..000000000 --- a/howto/pubsub-scopes/README.md +++ /dev/null @@ -1,133 +0,0 @@ -# Limit the Pub/Sub topics used or scope them to one or more applications - -[Namespaces or component scopes](../components-scopes/README.md) can be used to limit component access to particular applications. These application scopes added to a component ensure that only applications with specific IDs are able to use the component. - -In addition to this general component scope, the following can be limited for pub/sub components: -- the topics which can be used (published or subscribed) -- which applications are allowed to publish to specific topics -- which applications are allowed to subscribe to specific topics - -This is called pub/sub topic scoping. - -Watch this [video](https://www.youtube.com/watch?v=7VdWBBGcbHQ&feature=youtu.be&t=513) on how to use pub/sub topic scoping. - -To use this topic scoping, three metadata properties can be set for a pub/sub component: -- ```spec.metadata.publishingScopes```: a semicolon-separated list of application-to-topic scopes that are allowed to publish. If an app is not specified in ```publishingScopes```, it's allowed to publish to all topics. -- ```spec.metadata.subscriptionScopes```: a semicolon-separated list of application-to-topic scopes that are allowed to subscribe. If an app is not specified in ```subscriptionScopes```, it's allowed to subscribe to all topics. -- ```spec.metadata.allowedTopics```: a comma-separated list of allowed topics for all applications. ```publishingScopes``` or ```subscriptionScopes``` can be used in addition to add granular limitations. If ```allowedTopics``` is not set, all topics are valid, and ```subscriptionScopes``` and ```publishingScopes``` apply if present. - -These metadata properties can be used for all pub/sub components. 
The following examples use Redis as the pub/sub component. - -## Scenario 1: Limit which application can publish or subscribe to topics - -This can be useful if you have topics which contain sensitive information and only a subset of your applications are allowed to publish or subscribe to these. - -It can also be used for all topics to always have a "ground truth" for which applications are using which topics as publishers/subscribers. - -Here is an example of three applications and three topics: -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub.redis - metadata: - - name: redisHost - value: "localhost:6379" - - name: redisPassword - value: "" - - name: publishingScopes - value: "app1=topic1;app2=topic2,topic3;app3=" - - name: subscriptionScopes - value: "app2=;app3=topic1" -``` - -The table below shows which application is allowed to publish into the topics: -| Publishing | app1 | app2 | app3 | -|------------|------|------|------| -| topic1 | X | | | -| topic2 | | X | | -| topic3 | | X | | - -The table below shows which application is allowed to subscribe to the topics: -| Subscription | app1 | app2 | app3 | -|--------------|------|------|------| -| topic1 | X | | X | -| topic2 | X | | | -| topic3 | X | | | - -> Note: If an application is not listed (e.g. app1 in subscriptionScopes), it is allowed to subscribe to all topics. Because ```allowedTopics``` (see examples below) is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above. - -## Scenario 2: Limit which topics can be used by all applications without granular limitations - -A topic is created if a Dapr application sends a message to it. In some scenarios this topic creation should be governed. 
For example: -- a bug in a Dapr application generating topic names can lead to an unlimited number of topics being created -- you may want to streamline topic names and total count and prevent unlimited growth of topics - -In these situations, ```allowedTopics``` can be used. - -Here is an example of three allowed topics: -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub.redis - metadata: - - name: redisHost - value: "localhost:6379" - - name: redisPassword - value: "" - - name: allowedTopics - value: "topic1,topic2,topic3" -``` - -All applications can use these topics, but only these topics; no others are allowed. - -## Scenario 3: Combine allowed topics and the applications that can publish and subscribe - -Sometimes you want to combine both scopes, thus only having a fixed set of allowed topics and specifying scoping to certain applications. - -Here is an example of three applications and two topics: -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub.redis - metadata: - - name: redisHost - value: "localhost:6379" - - name: redisPassword - value: "" - - name: allowedTopics - value: "A,B" - - name: publishingScopes - value: "app1=A" - - name: subscriptionScopes - value: "app1=;app2=A" -``` - -> Note: The third application is not listed, because if an app is not specified inside the scopes, it is allowed to use all topics. - -The table below shows which application is allowed to publish into the topics: -| Publishing | app1 | app2 | app3 | -|------------|------|------|------| -| A | X | X | X | -| B | | X | X | - -The table below shows which application is allowed to subscribe to the topics: -| Subscription | app1 | app2 | app3 | -|--------------|------|------|------| -| A | | X | X | -| B | | | X | - -No other topics can be used, only A and B. - -Pub/sub scopes are per pub/sub component. 
You may have a pub/sub component named `pubsub` that has one set of scopes, and another `pubsub2` with a different set. The name is the `metadata.name` field in the YAML. diff --git a/howto/setup-monitoring-tools/observe-metrics-with-grafana.md b/howto/setup-monitoring-tools/observe-metrics-with-grafana.md deleted file mode 100644 index 38aee9b7c..000000000 --- a/howto/setup-monitoring-tools/observe-metrics-with-grafana.md +++ /dev/null @@ -1,95 +0,0 @@ -# Observe metrics with Grafana - -This document describes how to view Dapr metrics in a Grafana dashboard. You can see example screenshots of the Dapr dashboards [here](../../reference/dashboard/img/). - -Or watch this [video](https://www.youtube.com/watch?v=8W-iBDNvCUM&feature=youtu.be&t=2580) for a demonstration of the Grafana Dapr metrics dashboard. - -## Prerequisites - -- [Set up Prometheus and Grafana](./setup-prometheus-grafana.md) - -## Steps to view metrics - -- [Configure Prometheus as Data Source](#configure-prometheus-as-data-source) - [Import Dashboards in Grafana](#import-dashboards-in-grafana) - -### Configure Prometheus as data source -First you need to connect Prometheus as a data source to Grafana. - -1. Port-forward to svc/grafana - -``` -$ kubectl port-forward svc/grafana 8080:80 -n dapr-monitoring -Forwarding from 127.0.0.1:8080 -> 3000 -Forwarding from [::1]:8080 -> 3000 -Handling connection for 8080 -Handling connection for 8080 -``` - -2. Browse to `http://localhost:8080` - -3. Log in with the admin username and password - -4. Click Configuration Settings -> Data Sources - - ![data source](./img/grafana-datasources.png) - -5. Add Prometheus as a data source. - - ![add data source](./img/grafana-datasources.png) - -6. Enter the Prometheus server address in your cluster. - - You can get the Prometheus server address by running the following command. 
- -```bash -kubectl get svc -n dapr-monitoring - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -dapr-prom-kube-state-metrics ClusterIP 10.0.174.177 8080/TCP 7d9h -dapr-prom-prometheus-alertmanager ClusterIP 10.0.255.199 80/TCP 7d9h -dapr-prom-prometheus-node-exporter ClusterIP None 9100/TCP 7d9h -dapr-prom-prometheus-pushgateway ClusterIP 10.0.190.59 9091/TCP 7d9h -dapr-prom-prometheus-server ClusterIP 10.0.172.191 80/TCP 7d9h -elasticsearch-master ClusterIP 10.0.36.146 9200/TCP,9300/TCP 7d10h -elasticsearch-master-headless ClusterIP None 9200/TCP,9300/TCP 7d10h -grafana ClusterIP 10.0.15.229 80/TCP 5d5h -kibana-kibana ClusterIP 10.0.188.224 5601/TCP 7d10h - -``` - -In this Howto, the server is `dapr-prom-prometheus-server`. - -You now need to set up Prometheus data source with the following settings: - -- Name: `Dapr` -- HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring` -- Default: On - -![prometheus server](./img/grafana-prometheus-dapr-server-url.png) - -7. Click `Save & Test` button to verify that the connection succeeded. - -### Import dashboards in Grafana -Next you import the Dapr dashboards into Grafana. - -In the upper left, click the "+" then "Import". - -You can now import built-in [Grafana dashboard templates](https://github.com/dapr/dapr/tree/master/grafana). - -The Grafana dashboards are part of [release assets](https://github.com/dapr/dapr/releases) with this URL https://github.com/dapr/dapr/releases/ -You can find `grafana-actor-dashboard.json`, `grafana-sidecar-dashboard.json` and `grafana-system-services-dashboard.json` in release assets location. - -![upload json](./img/grafana-uploadjson.png) - -8. Find the dashboard that you imported and enjoy! 
- -![upload json](../../reference/dashboard/img/system-service-dashboard.png) - -# References - -* [Set up Prometheus and Grafana](./setup-prometheus-grafana.md) -* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator) -* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus) -* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator) -* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/howto/setup-monitoring-tools/observe-metrics-with-prometheus-locally.md b/howto/setup-monitoring-tools/observe-metrics-with-prometheus-locally.md deleted file mode 100644 index d1e79af5a..000000000 --- a/howto/setup-monitoring-tools/observe-metrics-with-prometheus-locally.md +++ /dev/null @@ -1,58 +0,0 @@ -# Observe Metrics with Prometheus locally - -Dapr exposes a Prometheus metrics endpoint you can use to collect time-series -data relating to the execution of the Dapr runtime itself. - -## Setup Prometheus Locally -To run Prometheus on your local machine, you can either [install and run it as a process](#install) or run it as a [Docker container](#Run-as-Container). - -### Install -> You don't need to install Prometheus if you plan to run it as a Docker container. Please refer to the [Container](#Run-as-Container) instructions. - -To install Prometheus, follow the steps outlined [here](https://prometheus.io/docs/prometheus/latest/getting_started/) for your OS. - -### Configure -Now you've installed Prometheus, you need to create a configuration. - -Below is an example Prometheus configuration, save this to a file i.e. `/tmp/prometheus.yml` -```yaml -global: - scrape_interval: 15s # By default, scrape targets every 15 seconds. - -# A scrape configuration containing exactly one endpoint to scrape: -# Here it's Prometheus itself. 
-scrape_configs: - - job_name: 'dapr' - - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - - static_configs: - - targets: ['localhost:9090'] # Replace with Dapr metrics port if not default -``` - -### Run as Process -Run Prometheus with your configuration to start it collecting metrics from the specified targets. -```bash -./prometheus --config.file=/tmp/prometheus.yml --web.listen-address=:8080 -``` -> We change the port so it doesn't conflict with Dapr's own metrics endpoint. - -If you are not currently running a Dapr application, the target will show as offline. In order to start -collecting metrics, you must start Dapr with the metrics port matching the one provided as the target in the configuration. - -Once Prometheus is running, you'll be able to view its dashboard by visiting `http://localhost:8080`. - -### Run as Container -To run Prometheus as a Docker container on your local machine, first ensure you have [Docker](https://docs.docker.com/install/) installed and running. - -Then you can run Prometheus as a Docker container using: -```bash -docker run \ - --net=host \ - -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \ - prom/prometheus --config.file=/etc/prometheus/prometheus.yml --web.listen-address=:8080 -``` -`--net=host` ensures that the Prometheus instance will be able to connect to any Dapr instances running on the host machine. If you plan to run your Dapr apps in containers as well, you'll need to run them on a shared Docker network and update the configuration with the correct target address. - -Once Prometheus is running, you'll be able to view its dashboard by visiting `http://localhost:8080`.
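
The endpoint Prometheus scrapes serves samples in the Prometheus text exposition format. As a rough, illustrative sketch of what a scraped line looks like (the metric name and labels below are hypothetical samples, not taken from a real Dapr sidecar), a minimal stdlib-only parser might be:

```python
import re

# One sample line of the Prometheus text exposition format:
#   metric_name{label="value",...} <number>
# Note: this toy regex does not handle label values containing commas or escapes.
LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>[-+0-9.eE]+)$'
)

def parse_metric(line):
    """Parse a single exposition-format sample into (name, labels, value)."""
    m = LINE_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a sample line: {line!r}")
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            k, v = pair.split("=", 1)
            labels[k] = v.strip('"')
    return m.group("name"), labels, float(m.group("value"))

# Hypothetical sample such as a sidecar might expose on its metrics port
sample = 'dapr_http_server_request_count{app_id="myapp",method="GET"} 42'
print(parse_metric(sample))
```

In practice you would let Prometheus (or its client libraries) do this parsing; the sketch is only meant to make the scrape payload concrete.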
diff --git a/howto/setup-monitoring-tools/setup-prometheus-grafana.md b/howto/setup-monitoring-tools/setup-prometheus-grafana.md deleted file mode 100644 index 22abf569e..000000000 --- a/howto/setup-monitoring-tools/setup-prometheus-grafana.md +++ /dev/null @@ -1,105 +0,0 @@ -# Set up Prometheus and Grafana in Kubernetes - -This document describes how to install Prometheus and Grafana on a Kubernetes cluster which can then be used to view the Dapr metrics dashboards. - -Watch this [video](https://www.youtube.com/watch?v=8W-iBDNvCUM&feature=youtu.be&t=2580) for a demonstration of the Grafana Dapr metrics dashboards. - -## Prerequisites - -- Kubernetes (> 1.14) -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -- [Helm 3](https://helm.sh/) - -## Contents - - - [Install Prometheus](#install-prometheus) - - [Install Grafana](#install-grafana) - -## Install Prometheus - -1. First, create a namespace that will be used to deploy the Grafana and Prometheus monitoring tools - -```bash -kubectl create namespace dapr-monitoring -``` - -2. Install Prometheus - -```bash -helm repo add stable https://kubernetes-charts.storage.googleapis.com -helm repo update -helm install dapr-prom stable/prometheus -n dapr-monitoring -``` - -If you are a Minikube user or want to disable persistent volumes for development purposes, you can do so with the following command. - -```bash -helm install dapr-prom stable/prometheus -n dapr-monitoring --set alertmanager.persistentVolume.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false -``` - -3. Validation - -Ensure Prometheus is running in your cluster.
- -```bash -kubectl get pods -n dapr-monitoring - -NAME READY STATUS RESTARTS AGE -dapr-prom-kube-state-metrics-9849d6cc6-t94p8 1/1 Running 0 4m58s -dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8 2/2 Running 0 4m58s -dapr-prom-prometheus-node-exporter-5jh8p 1/1 Running 0 4m58s -dapr-prom-prometheus-node-exporter-88gbg 1/1 Running 0 4m58s -dapr-prom-prometheus-node-exporter-bjp9f 1/1 Running 0 4m58s -dapr-prom-prometheus-pushgateway-688665d597-h4xx2 1/1 Running 0 4m58s -dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 4m58s -``` - -## Install Grafana - -1. Install Grafana - -```bash -helm install grafana stable/grafana -n dapr-monitoring -``` - -If you are a Minikube user or want to disable persistent volumes for development purposes, you can do so with the following command. - -```bash -helm install grafana stable/grafana -n dapr-monitoring --set persistence.enabled=false -``` - -2. Retrieve the admin password for Grafana login - -> Note: Remove the `%` character from the password that this command returns; it is the shell's end-of-output marker, not part of the password. For example, the admin password is `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1`. - -``` -kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode -cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1% -``` - -3.
Validation - -Ensure Grafana is running in your cluster (see the last line below) - -```bash -kubectl get pods -n dapr-monitoring - -NAME READY STATUS RESTARTS AGE -dapr-prom-kube-state-metrics-9849d6cc6-t94p8 1/1 Running 0 4m58s -dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8 2/2 Running 0 4m58s -dapr-prom-prometheus-node-exporter-5jh8p 1/1 Running 0 4m58s -dapr-prom-prometheus-node-exporter-88gbg 1/1 Running 0 4m58s -dapr-prom-prometheus-node-exporter-bjp9f 1/1 Running 0 4m58s -dapr-prom-prometheus-pushgateway-688665d597-h4xx2 1/1 Running 0 4m58s -dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 4m58s -grafana-c49889cff-x56vj 1/1 Running 0 5m10s -``` -The next step is to [observe metrics with Grafana](./observe-metrics-with-grafana.md) - -# References - -* [Observe metrics with Grafana](./observe-metrics-with-grafana.md) -* [Prometheus Installation](https://github.com/helm/charts/tree/master/stable/prometheus-operator) -* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus) -* [Prometheus Kubernetes Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator) -* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/howto/setup-pub-sub-message-broker/README.md deleted file mode 100644 index 72d699f16..000000000 --- a/howto/setup-pub-sub-message-broker/README.md +++ /dev/null @@ -1,57 +0,0 @@ -# Setup a pub/sub component - -Dapr integrates with pub/sub message buses to provide apps with the ability to create event-driven, loosely coupled architectures where producers send events to consumers via topics. - -Dapr supports the configuration of multiple, named, pub/sub components *per application*. Each pub/sub component has a name, and this name is used when publishing to a message topic. - -Pub/Sub message buses are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib).
- -A pub/sub in Dapr is described using a `Component` file: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: pubsub - namespace: default -spec: - type: pubsub. - metadata: - - name: - value: - - name: - value: -... -``` - -The type of pub/sub is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section. -Even though you can put plain text secrets in there, it is recommended you use a [secret store](../../concepts/secrets) using a `secretKeyRef` - -## Running locally - -When running locally with the Dapr CLI, a component file for a Redis pub/sub is created in a `components` directory, which for Linux/MacOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`. See [Environment Setup](../getting-started/environment-setup.md#installing-dapr-in-self-hosted-mode) - -You can make changes to this file the way you see fit, whether to change connection values or replace it with a different pub/sub. - -## Running in Kubernetes - -Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components. 
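
As a concrete instance of the template above, a Redis Streams pub/sub component might look like the following sketch. The `redisHost` and `redisPassword` metadata keys are assumed here to mirror the Redis components shown elsewhere in these docs; adjust the host and credentials to your environment:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
```

The `metadata.name` (`pubsub` here) is the name applications use when publishing or subscribing.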
-To setup a pub/sub in Kubernetes, use `kubectl` to apply the component file: - -```bash -kubectl apply -f pubsub.yaml -``` - -## Related links - -- [Setup Redis Streams](./setup-redis.md) -- [Setup NATS Streaming](./setup-nats-streaming.md) -- [Setup Azure Service bus](./setup-azure-servicebus.md) -- [Setup RabbitMQ](./setup-rabbitmq.md) -- [Setup GCP Pubsub](./setup-gcp.md) -- [Setup Hazelcast Pubsub](./setup-hazelcast.md) -- [Setup Azure Event Hubs](./setup-azure-eventhubs.md) -- [Setup SNS/SQS](./setup-snssqs.md) -- [Setup MQTT](./setup-mqtt.md) -- [Setup Apache Pulsar](./setup-pulsar.md) -- [Setup Kafka](./setup-kafka.md) diff --git a/howto/setup-secret-store/README.md b/howto/setup-secret-store/README.md deleted file mode 100644 index e3c1630d2..000000000 --- a/howto/setup-secret-store/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# How To: Setup Secret Stores - -The following list shows the supported secret stores by Dapr. The links here will walk you through setting up and using the secret store. - -* [AWS Secret Manager](./aws-secret-manager.md) -* [Azure Key Vault](./azure-keyvault.md) -* [Azure Key Vault with Managed Identity](./azure-keyvault-managed-identity.md) -* [GCP Secret Manager](./gcp-secret-manager.md) -* [Hashicorp Vault](./hashicorp-vault.md) -* [Kubernetes](./kubernetes.md) -* For Development -* [JSON file secret store](./file-secret-store.md) -* [Environment variable secret store](./envvar-secret-store.md) diff --git a/howto/setup-secret-store/aws-secret-manager.md b/howto/setup-secret-store/aws-secret-manager.md deleted file mode 100644 index 52173ae2a..000000000 --- a/howto/setup-secret-store/aws-secret-manager.md +++ /dev/null @@ -1,60 +0,0 @@ -# Secret store for AWS Secret Manager - -This document shows how to enable AWS Secret Manager secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for self hosted and Kubernetes mode. 
- -## Create an AWS Secret Manager instance - -Set up AWS Secret Manager using the AWS documentation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html. - -## Create the component - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: awssecretmanager - namespace: default -spec: - type: secretstores.aws.secretmanager - metadata: - - name: region - value: [aws_region] # Required. - - name: accessKey # Required. - value: "[aws_access_key]" - - name: secretKey # Required. - value: "[aws_secret_key]" - - name: sessionToken # Required. - value: "[aws_session_token]" -``` - -To deploy in Kubernetes, save the file above to `aws_secret_manager.yaml` and then run: - -```bash -kubectl apply -f aws_secret_manager.yaml -``` - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. - -## AWS Secret Manager reference example - -This example shows you how to set the Redis password from the AWS Secret Manager secret store. -Here, you created a secret named `redisPassword` in AWS Secret Manager. Note: it's important to set it both as the `name` and `key` properties. - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: "[redis]:6379" - - name: redisPassword - secretKeyRef: - name: redisPassword - key: redisPassword -auth: - secretStore: awssecretmanager -``` diff --git a/howto/setup-secret-store/azure-keyvault.md deleted file mode 100644 index 292fcf5d8..000000000 --- a/howto/setup-secret-store/azure-keyvault.md +++ /dev/null @@ -1,292 +0,0 @@ -# Secret Store for Azure Key Vault - -This document shows how to enable the Azure Key Vault secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Standalone and Kubernetes modes.
The Dapr secret store component uses a service principal with certificate authorization to authenticate to Key Vault. - -> **Note:** Find the Managed Identity for Azure Key Vault instructions [here](azure-keyvault-managed-identity.md). - -## Contents - -- [Prerequisites](#prerequisites) -- [Create Azure Key Vault and Service principal](#create-azure-key-vault-and-service-principal) -- [Use Azure Key Vault secret store in Standalone mode](#use-azure-key-vault-secret-store-in-standalone-mode) -- [Use Azure Key Vault secret store in Kubernetes mode](#use-azure-key-vault-secret-store-in-kubernetes-mode) -- [References](#references) - -## Prerequisites - -- [Azure Subscription](https://azure.microsoft.com/en-us/free/) -- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) - -## Create an Azure Key Vault and a service principal - -This creates a new service principal and grants it permission to access the key vault. - -1. Log in to Azure and set the default subscription - -```bash -# Log in to Azure -az login - -# Set your subscription to the default subscription -az account set -s [your subscription id] -``` - -2. Create an Azure Key Vault in a region - -```bash -az keyvault create --location [region] --name [your_keyvault] --resource-group [your resource group] -``` - -3. Create a service principal - -Create a service principal with a new certificate and store the 1-year certificate inside [your keyvault]'s certificate vault.
- -> **Note:** you can skip this step if you want to use an existing service principal for the keyvault instead of creating a new one - -```bash -az ad sp create-for-rbac --name [your_service_principal_name] --create-cert --cert [certificate_name] --keyvault [your_keyvault] --skip-assignment --years 1 - -{ - "appId": "a4f90000-0000-0000-0000-00000011d000", - "displayName": "[your_service_principal_name]", - "name": "http://[your_service_principal_name]", - "password": null, - "tenant": "34f90000-0000-0000-0000-00000011d000" -} -``` - -**Save both the appId and tenant from the output; they will be used in the next steps** - -4. Get the Object Id for [your_service_principal_name] - -```bash -az ad sp show --id [service_principal_app_id] - -{ - ... - "objectId": "[your_service_principal_object_id]", - "objectType": "ServicePrincipal", - ... -} -``` - -5. Grant the service principal the GET permission to your Azure Key Vault - -```bash -az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get -``` - -Now that your service principal has access to your keyvault, you are ready to configure the secret store component to use secrets stored in your keyvault to access other components securely. - -6. Download the certificate in PFX format from your Azure Key Vault either using the Azure portal or the Azure CLI: - -- **Using the Azure portal:** - - Go to your key vault on the Azure portal and navigate to the *Certificates* tab under *Settings*. Find the certificate that was created during the service principal creation, named [certificate_name], and click on it. - - Click *Download in PFX/PEM format* to download the certificate.
- -- **Using the Azure CLI:** - - ```bash - az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --encoding base64 --file [certificate_name].pfx - ``` - -## Use Azure Key Vault secret store in Standalone mode - -This section walks you through how to enable an Azure Key Vault secret store to store a password to securely access a Redis state store in Standalone mode. - -1. Create a components directory in your application root - -```bash -mkdir components -``` - -2. Copy the downloaded PFX cert from your Azure Keyvault Certificate Vault into `./components` or a secure location on your local disk - -3. Create a file called azurekeyvault.yaml in the components directory - -Now create a Dapr azurekeyvault component: create a file called azurekeyvault.yaml in the components directory with the content below - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: azurekeyvault - namespace: default -spec: - type: secretstores.azure.keyvault - metadata: - - name: vaultName - value: [your_keyvault_name] - - name: spnTenantId - value: "[your_service_principal_tenant_id]" - - name: spnClientId - value: "[your_service_principal_app_id]" - - name: spnCertificateFile - value: "[pfx_certificate_file_local_path]" -``` - -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. - -4. Store the redisPassword secret in your keyvault - -```bash -az keyvault secret set --name redisPassword --vault-name [your_keyvault_name] --value "your redis passphrase" -``` - -5. Create redis.yaml in the components directory with the content below - -Create a statestore component file. This Redis component yaml shows how to use the `redisPassword` secret stored in an Azure Key Vault called azurekeyvault as a Redis connection password.
- -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: "[redis]:6379" - - name: redisPassword - secretKeyRef: - name: redisPassword - key: redisPassword -auth: - secretStore: azurekeyvault -``` - -6. Run your app - -You can check that `secretstores.azure.keyvault` component is loaded and redis server connects successfully by looking at the log output when using the dapr `run` command - -Here is the log when you run [HelloWorld sample](https://github.com/dapr/quickstarts/tree/master/hello-world) with Azure Key Vault secret store. - -```bash -$ dapr run --app-id mynode --app-port 3000 --dapr-http-port 3500 node app.js - -ℹ️ Starting Dapr with id mynode on port 3500 -✅ You're up and running! Both Dapr and your app logs will appear here. - -... -== DAPR == time="2019-09-25T17:57:37-07:00" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)" -== APP == Node App listening on port 3000! -== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component statestore (state.redis)" -== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component messagebus (pubsub.redis)" -... -== DAPR == 2019/09/25 17:57:38 redis: connecting to [redis]:6379 -== DAPR == 2019/09/25 17:57:38 redis: connected to [redis]:6379 (localAddr: x.x.x.x:62137, remAddr: x.x.x.x:6379) -... -``` - -## Use Azure Key Vault secret store in Kubernetes mode - -In Kubernetes mode, you store the certificate for the service principal into the Kubernetes Secret Store and then enable Azure Key Vault secret store with this certificate in Kubernetes secretstore. - -1. 
Create a Kubernetes secret using the following command - -- **[pfx_certificate_file_local_path]** is the path of the PFX cert file you downloaded from [Create Azure Key Vault and Service principal](#create-azure-key-vault-and-service-principal) - -- **[your_k8s_spn_secret_name]** is the secret name in the Kubernetes secret store - -```bash -kubectl create secret generic [your_k8s_spn_secret_name] --from-file=[pfx_certificate_file_local_path] -``` - -2. Create azurekeyvault.yaml component file - -The component yaml refers to the Kubernetes secretstore using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store. - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: azurekeyvault - namespace: default -spec: - type: secretstores.azure.keyvault - metadata: - - name: vaultName - value: [your_keyvault_name] - - name: spnTenantId - value: "[your_service_principal_tenant_id]" - - name: spnClientId - value: "[your_service_principal_app_id]" - - name: spnCertificate - secretKeyRef: - name: [your_k8s_spn_secret_name] - key: [pfx_certificate_file_local_name] -auth: - secretStore: kubernetes -``` - -3. Apply azurekeyvault.yaml component - -```bash -kubectl apply -f azurekeyvault.yaml -``` - -4. Store the redisPassword as a secret in your keyvault - -```bash -az keyvault secret set --name redisPassword --vault-name [your_keyvault_name] --value "your redis passphrase" -``` - -5. Create redis.yaml state store component - -This redis state store component refers to the `azurekeyvault` component as a secretstore and uses the secret for `redisPassword` stored in Azure Key Vault.
- -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: "[redis_url]:6379" - - name: redisPassword - secretKeyRef: - name: redisPassword - key: redisPassword -auth: - secretStore: azurekeyvault -``` - -6. Apply redis statestore component - - ```bash - kubectl apply -f redis.yaml - ``` - -7. Deploy your app to Kubernetes - -Make sure that `secretstores.azure.keyvault` is loaded successfully in `daprd` sidecar log - -Here is the nodeapp log of [HelloWorld Kubernetes sample](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes). Note: use the nodeapp name for your deployed container instance. - -```bash -$ kubectl logs nodeapp-f7b7576f4-4pjrj daprd - -time="2019-09-26T20:34:23Z" level=info msg="starting Dapr Runtime -- version 0.4.0-alpha.4 -- commit 876474b-dirty" -time="2019-09-26T20:34:23Z" level=info msg="log level set to: info" -time="2019-09-26T20:34:23Z" level=info msg="kubernetes mode configured" -time="2019-09-26T20:34:23Z" level=info msg="app id: nodeapp" -time="2019-09-26T20:34:24Z" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)" -time="2019-09-26T20:34:25Z" level=info msg="loaded component statestore (state.redis)" -... -2019/09/26 20:34:25 redis: connecting to redis-master:6379 -2019/09/26 20:34:25 redis: connected to redis-master:6379 (localAddr: 10.244.3.67:42686, remAddr: 10.0.1.26:6379) -... 
-``` - -## References - -- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create) - -- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest) -- [Secrets Component](../../concepts/secrets/README.md) diff --git a/howto/setup-secret-store/envvar-secret-store.md deleted file mode 100644 index ff0dcc102..000000000 --- a/howto/setup-secret-store/envvar-secret-store.md +++ /dev/null @@ -1,69 +0,0 @@ -# Local secret store using environment variable (for Development) - -This document shows how to enable an [environment variable](https://en.wikipedia.org/wiki/Environment_variable) secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Development scenarios in Standalone mode. This Dapr secret store component uses locally defined environment variables and does not use authentication. - -> Note: this approach to secret management is not recommended for production environments.
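
Under the hood there is little more to this store than an environment lookup. A minimal, illustrative sketch of the behavior (not the real implementation, which lives in components-contrib):

```python
import os

def get_secret(name: str) -> str:
    """Resolve a secret the way the env-var store does: straight from the process environment."""
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} not found in environment")
    return value

# Illustrative only: define a variable such as REDIS_PASSWORD, then resolve it like the store would
os.environ["REDIS_PASSWORD"] = "your redis passphrase"
print(get_secret("REDIS_PASSWORD"))
```

Because the secret is just an environment variable of the Dapr process, anything that can read that process's environment can read the secret, which is why this store is for development only.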
- -## How to enable environment variable secret store - -To enable the environment variable secret store, create a file with the following content in your components directory: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: envvar-secret-store - namespace: default -spec: - type: secretstores.local.env - metadata: -``` - -## How to use the environment variable secret store in other components - -To use the environment variable secrets in other components, you can replace the `value` with `secretKeyRef` containing the name of your local environment variable like this: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: "[redis]:6379" - - name: redisPassword - secretKeyRef: - name: REDIS_PASSWORD -auth: - secretStore: envvar-secret-store -``` - -## How to confirm the secrets are being used - -To confirm the secrets are being used, you can check the logs output by the `dapr run` command. Here is the log when you run the [HelloWorld sample](https://github.com/dapr/quickstarts/tree/master/hello-world) with the environment variable secret store. - -```bash -$ dapr run --app-id mynode --app-port 3000 --dapr-http-port 3500 node app.js - -ℹ️ Starting Dapr with id mynode on port 3500 -✅ You're up and running! Both Dapr and your app logs will appear here. - -... -== DAPR == time="2019-09-25T17:57:37-07:00" level=info msg="loaded component envvar-secret-store (secretstores.local.env)" -== APP == Node App listening on port 3000! -== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component statestore (state.redis)" -== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component messagebus (pubsub.redis)" -... -== DAPR == 2019/09/25 17:57:38 redis: connecting to [redis]:6379 -== DAPR == 2019/09/25 17:57:38 redis: connected to [redis]:6379 (localAddr: x.x.x.x:62137, remAddr: x.x.x.x:6379) -...
-``` - -## Related Links - -- [Secrets Component](../../concepts/secrets/README.md) -- [Secrets API](../../reference/api/secrets_api.md) -- [Secrets API Samples](https://github.com/dapr/quickstarts/blob/master/secretstore/README.md) \ No newline at end of file diff --git a/howto/setup-secret-store/file-secret-store.md deleted file mode 100644 index 2c1f62c2a..000000000 --- a/howto/setup-secret-store/file-secret-store.md +++ /dev/null @@ -1,115 +0,0 @@ -# Local secret store using file (for Development) - -This document shows how to enable a file-based secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Development scenarios in Standalone mode. This Dapr secret store component reads plain text JSON from a given file and does not use authentication. - -> Note: this approach to secret management is not recommended for production environments. - -## Contents - -- [Create JSON file to hold the secrets](#create-json-file-to-hold-the-secrets) -- [Use Local secret store in Standalone mode](#use-local-secret-store-in-standalone-mode) -- [References](#references) - -## Create JSON file to hold the secrets - -This creates a new JSON file to hold the secrets. - -1. Create a json file (e.g. secrets.json) with the following contents: - -```json -{ - "redisPassword": "your redis passphrase" -} -``` - -## Use file-based secret store in Standalone mode - -This section walks you through how to enable a Local secret store to store a password to access a Redis state store in Standalone mode. - -1. Create a component file in the components directory with the following content.
- -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: local-secret-store - namespace: default -spec: - type: secretstores.local.file - metadata: - - name: secretsFile - value: [path to the JSON file] - - name: nestedSeparator - value: ":" -``` - -The `nestedSeparator` parameter is not required (the default value is ':') and is used by the store when flattening the JSON hierarchy to a map. So given the following JSON: - -```json -{ - "redisPassword": "your redis password", - "connectionStrings": { - "sql": "your sql connection string", - "mysql": "your mysql connection string" - } -} -``` - -the store will load the file and create a map with the following key value pairs: - -| flattened key | value | -| --- | --- | -|"redisPassword" | "your redis password" | -|"connectionStrings:sql" | "your sql connection string" | -|"connectionStrings:mysql"| "your mysql connection string" | - -> Use the flattened key to access the secret. - -2. Use secrets in other components - -To use the previously created secrets, for example in a Redis state store component, you can replace the `value` with `secretKeyRef` and a nested name of the key like this: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state.redis - metadata: - - name: redisHost - value: "[redis]:6379" - - name: redisPassword - secretKeyRef: - name: redisPassword -auth: - secretStore: local-secret-store -``` - -## Confirm the secrets are being used - -To confirm the secrets are being used, you can check the logs output by the `dapr run` command. Here is the log when you run the [HelloWorld sample](https://github.com/dapr/quickstarts/tree/master/hello-world) with the local file secret store. - -```bash -$ dapr run --app-id mynode --app-port 3000 --dapr-http-port 3500 node app.js - -ℹ️ Starting Dapr with id mynode on port 3500 -✅ You're up and running! Both Dapr and your app logs will appear here. - -...
-== DAPR == time="2019-09-25T17:57:37-07:00" level=info msg="loaded component local-secret-store (secretstores.local.file)" -== APP == Node App listening on port 3000! -== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component statestore (state.redis)" -== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component messagebus (pubsub.redis)" -... -== DAPR == 2019/09/25 17:57:38 redis: connecting to [redis]:6379 -== DAPR == 2019/09/25 17:57:38 redis: connected to [redis]:6379 (localAddr: x.x.x.x:62137, remAddr: x.x.x.x:6379) -... -``` - -## Related Links - -- [Secrets Component](../../concepts/secrets/README.md) -- [Secrets API](../../reference/api/secrets_api.md) -- [Secrets API Samples](https://github.com/dapr/quickstarts/blob/master/secretstore/README.md) \ No newline at end of file diff --git a/howto/setup-secret-store/kubernetes.md b/howto/setup-secret-store/kubernetes.md deleted file mode 100644 index 671bfca0c..000000000 --- a/howto/setup-secret-store/kubernetes.md +++ /dev/null @@ -1,6 +0,0 @@ -# Secret Store for Kubernetes - -Kubernetes has a built-in state store which Dapr components can use to fetch secrets from. -No special configuration is needed to setup the Kubernetes state store. - -Please refer to [this](../../concepts/secrets/README.md) document for information and examples on how to fetch secrets from Kubernetes using Dapr. diff --git a/howto/setup-state-store/README.md b/howto/setup-state-store/README.md deleted file mode 100644 index f5374f99c..000000000 --- a/howto/setup-state-store/README.md +++ /dev/null @@ -1,67 +0,0 @@ -# Setup a state store component - -Dapr integrates with existing databases to provide apps with state management capabilities for CRUD operations, transactions and more. - -Dapr supports the configuration of multiple, named, state store components *per application*. - -State stores are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib). 
- -A state store in Dapr is described using a `Component` file: - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: statestore - namespace: default -spec: - type: state. - metadata: - - name: - value: - - name: - value: -... -``` - -The type of database is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section. -Even though you can put plain text secrets in there, it is recommended you use a [secret store](../../concepts/secrets/README.md). - -## Running locally - -When running locally with the Dapr CLI, a component file for a Redis state store will be automatically created in a `components` directory in your current working directory. - -You can make changes to this file the way you see fit, whether to change connection values or replace it with a different store. - -## Running in Kubernetes - -Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components. 
-To setup a state store in Kubernetes, use `kubectl` to apply the component file: - -```bash -kubectl apply -f statestore.yaml -``` - ## Related Topics -* [State management concepts](../../concepts/state-management/README.md) -* [State management API specification](../../reference/api/state_api.md) - -## Reference - -* [Setup Aerospike](./setup-aerospike.md) -* [Setup Cassandra](./setup-cassandra.md) -* [Setup Cloudstate](./setup-cloudstate.md) -* [Setup Couchbase](./setup-couchbase.md) -* [Setup etcd](./setup-etcd.md) -* [Setup Hashicorp Consul](./setup-consul.md) -* [Setup Hazelcast](./setup-hazelcast.md) -* [Setup Memcached](./setup-memcached.md) -* [Setup MongoDB](./setup-mongodb.md) -* [Setup PostgreSQL](./setup-postgresql.md) -* [Setup Redis](./setup-redis.md) -* [Setup Zookeeper](./setup-zookeeper.md) -* [Setup Azure CosmosDB](./setup-azure-cosmosdb.md) -* [Setup Azure SQL Server](./setup-sqlserver.md) -* [Setup Azure Table Storage](./setup-azure-tablestorage.md) -* [Setup Azure Blob Storage](./setup-azure-blobstorage.md) -* [Setup Google Cloud Firestore (Datastore mode)](./setup-firestore.md) -* [Supported State Stores](./supported-state-stores.md) diff --git a/howto/setup-state-store/supported-state-stores.md b/howto/setup-state-store/supported-state-stores.md deleted file mode 100644 index 8b10e32ab..000000000 --- a/howto/setup-state-store/supported-state-stores.md +++ /dev/null @@ -1,22 +0,0 @@ -# Supported state stores - -| Name | CRUD | Transactional -| ------------- | -------|------ | -| Aerospike | :white_check_mark: | :x: | -| Cassandra | :white_check_mark: | :x: | -| Cloudstate | :white_check_mark: | :x: | -| Couchbase | :white_check_mark: | :x: | -| etcd | :white_check_mark: | :x: | -| Hashicorp Consul | :white_check_mark: | :x: | -| Hazelcast | :white_check_mark: | :x: | -| Memcached | :white_check_mark: | :x: | -| MongoDB | :white_check_mark: | :white_check_mark: | -| PostgreSQL | :white_check_mark: | :white_check_mark: | -| Redis | 
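
Once a state store component is configured, an application saves and reads state through the Dapr state API listed under Related Topics. As a rough sketch of the request an app would build — the URL shape and payload follow the state API, `3500` is Dapr's default HTTP port, and `statestore` matches the component name from the example above; the actual HTTP call is omitted to keep the snippet self-contained:

```python
import json

DAPR_HTTP_PORT = 3500      # default Dapr HTTP port
STORE_NAME = "statestore"  # must match metadata.name in the component file

# POST /v1.0/state/<store> accepts a JSON list of key/value pairs
url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0/state/{STORE_NAME}"
payload = json.dumps([{"key": "order_1", "value": {"status": "shipped"}}])

print(url)
print(payload)
```

An app would send this payload with any HTTP client (e.g. `urllib.request` or `requests`) to the sidecar, which writes it to whichever store the component file configures.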
:white_check_mark: | :white_check_mark: | -| Zookeeper | :white_check_mark: | :x: | -| Azure CosmosDB | :white_check_mark: | :white_check_mark: | -| Azure SQL Server | :white_check_mark: | :white_check_mark: | -| Azure Table Storage | :white_check_mark: | :x: | -| Azure Blob Storage | :white_check_mark: | :x: | -| Google Cloud Firestore | :white_check_mark: | :x: | - diff --git a/images/service-invocation-example.png b/images/service-invocation-example.png deleted file mode 100644 index 69b350cc6..000000000 Binary files a/images/service-invocation-example.png and /dev/null differ diff --git a/images/service-invocation.png b/images/service-invocation.png deleted file mode 100644 index b8231df82..000000000 Binary files a/images/service-invocation.png and /dev/null differ diff --git a/reference/README.md b/reference/README.md deleted file mode 100644 index eea062f97..000000000 --- a/reference/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# Dapr references - -- **[Dapr CLI](https://github.com/dapr/cli)**: The Dapr CLI allows you to set up Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, and launches and manages Dapr instances. -- **[Dapr APIs](./api)**: Dapr provides a variety of APIs to allow developers to access building block capabilities: - - [Actors](./api/actors_api.md) - - [Bindings](./api/bindings_api.md) - - [Publish/Subscribe Messaging](./api/pubsub_api.md) - - [Secrets](./api/secrets_api.md) - - [Service Invocation](./api/service_invocation_api.md) - - [State Management](./api/state_api.md) -- [Error Codes](./api/error_codes.md) returned by Dapr APIs. 
-- **[Dapr Binding Specs](./specs/bindings)**: Bindings provide triggers and interactions with external resources and services. -- **[Monitoring Dashboard templates](./dashboard/README.md)**: Monitoring dashboard templates help Dapr users monitor Dapr services by importing the templates into their monitoring tools. diff --git a/reference/api/README.md b/reference/api/README.md deleted file mode 100644 index 978db69ac..000000000 --- a/reference/api/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# Dapr API reference - -Dapr is language agnostic and provides a RESTful HTTP API as well as a gRPC API. - -The documents in this folder outline each API, the associated endpoints, and what capabilities are available. - -- [Actors](./actors_api.md) -- [Bindings](./bindings_api.md) -- [PubSub](./pubsub_api.md) -- [Secrets](./secrets_api.md) -- [Service Invocation](./service_invocation_api.md) -- [State](./state_api.md) -- [Health](./health_api.md) diff --git a/reference/api/error_codes.md b/reference/api/error_codes.md deleted file mode 100644 index 62c419111..000000000 --- a/reference/api/error_codes.md +++ /dev/null @@ -1,39 +0,0 @@ -# Error Codes returned by APIs - -When an HTTP call made to the Dapr runtime encounters an error, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, e.g. -```json -{ - "errorCode": "ERR_STATE_GET", - "message": "Requested state key does not exist in state store." -} -``` - -The following table lists the error codes returned by the Dapr runtime: - -Error Code | Description ---------- | ----------- -ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means the actor is now hosted in some other service replica. -ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance. -ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. -ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. -ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. 
-ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. -ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. -ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. -ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. -ERR_ACTOR_STATE_GET | Error getting the state for an actor. -ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. -ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in the Dapr runtime. -ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. -ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing the Pub/Sub event envelope. -ERR_STATE_STORE_NOT_FOUND | Error referencing a state store that was not found. -ERR_STATE_GET | Error getting a state from the state store. -ERR_STATE_DELETE | Error deleting a state from the state store. -ERR_STATE_SAVE | Error saving a state in the state store. -ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. -ERR_MALFORMED_REQUEST | Error with a malformed request. -ERR_DIRECT_INVOKE | Error in direct invocation. -ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body. -ERR_SECRET_STORE_NOT_CONFIGURED | Error that no secret store is configured. -ERR_SECRET_STORE_NOT_FOUND | Error that the specified secret store is not found. -ERR_HEALTH_NOT_READY | Error that Dapr is not ready. \ No newline at end of file diff --git a/reference/dashboard/README.md b/reference/dashboard/README.md deleted file mode 100644 index b224d1d4a..000000000 --- a/reference/dashboard/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# Monitoring Dashboard - -This includes dashboard templates to monitor Dapr system services and sidecars. For more detailed information, please read [Dapr Observability](../../concepts/observability/README.md). - -## Grafana - -You can set up [Prometheus and Grafana](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md) and import the templates into your Grafana dashboard to monitor Dapr. - -1. 
[Dapr System Service Dashboard](./grafana/system-services-dashboard.json) - - [Shows Dapr system component status](./img/system-service-dashboard.png) - dapr-operator, dapr-sidecar-injector, dapr-sentry, and dapr-placement - -2. [Dapr Sidecar Dashboard](./grafana/sidecar-dashboard.json) - - [Shows Dapr Sidecar status](./img/sidecar-dashboard.png) - sidecar health/resources, throughput/latency of HTTP and gRPC, Actor, mTLS, etc. - -3. [Dapr Actor Dashboard](./grafana/actor-dashboard.json) - - [Shows Dapr Actor status](./img/actor-dashboard.png) - actor invocation throughput/latency, timer/reminder triggers, and turn-based concurrency. diff --git a/reference/specs/bindings/README.md b/reference/specs/bindings/README.md deleted file mode 100644 index 6d3d29ae8..000000000 --- a/reference/specs/bindings/README.md +++ /dev/null @@ -1,10 +0,0 @@ -# Binding component specs - -The files within this directory contain the detailed information for each binding component, including the applicable fields, structure, and explanations. 
- -## Quick links - -- [Concept: Bindings](../../../concepts/bindings/README.md) -- [Implementing a new binding](https://github.com/dapr/components-contrib/blob/master/bindings/Readme.md#implementing-a-new-binding) -- [How-to: Trigger app with input binding](../../../howto/trigger-app-with-input-binding/README.md) -- [How-to: Send events with output bindings](../../../howto/send-events-with-output-bindings/README.md) diff --git a/reference/specs/bindings/cosmosdb.md b/reference/specs/bindings/cosmosdb.md deleted file mode 100644 index 5e1465197..000000000 --- a/reference/specs/bindings/cosmosdb.md +++ /dev/null @@ -1,34 +0,0 @@ -# Azure CosmosDB Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.azure.cosmosdb - metadata: - - name: url - value: https://******.documents.azure.com:443/ - - name: masterKey - value: ***** - - name: database - value: db - - name: collection - value: collection - - name: partitionKey - value: message -``` - -- `url` is the CosmosDB URL. -- `masterKey` is the CosmosDB account master key. -- `database` is the name of the CosmosDB database. -- `collection` is the name of the collection inside the database. -- `partitionKey` is the name of the partition key to extract from the payload. - -> **Note:** In production never place passwords or secrets within Dapr components. 
For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create \ No newline at end of file diff --git a/reference/specs/bindings/dynamodb.md b/reference/specs/bindings/dynamodb.md deleted file mode 100644 index c37076374..000000000 --- a/reference/specs/bindings/dynamodb.md +++ /dev/null @@ -1,31 +0,0 @@ -# AWS DynamoDB Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.aws.dynamodb - metadata: - - name: region - value: us-west-2 - - name: accessKey - value: ***************** - - name: secretKey - value: ***************** - - name: table - value: items -``` - -- `region` is the AWS region. -- `accessKey` is the AWS access key. -- `secretKey` is the AWS secret key. -- `table` is the DynamoDB table name. - -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create \ No newline at end of file diff --git a/reference/specs/bindings/http.md b/reference/specs/bindings/http.md deleted file mode 100644 index f15c34a5a..000000000 --- a/reference/specs/bindings/http.md +++ /dev/null @@ -1,23 +0,0 @@ -# HTTP Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.http - metadata: - - name: url - value: http://something.com - - name: method - value: GET -``` - -- `url` is the HTTP URL to invoke. -- `method` is the HTTP verb to use for the request. 
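Once a component like the HTTP binding above is registered, an app can trigger it through the Dapr sidecar's bindings endpoint. A minimal sketch of how the request target and payload are composed (the component name `myhttpbinding` is a hypothetical placeholder, and port 3500 is only a fallback for when `dapr run` has not injected `DAPR_HTTP_PORT`):

```javascript
// Sketch: composing an output-binding invocation for the Dapr HTTP API.
// "myhttpbinding" is a hypothetical component name; adjust to your component.
const daprPort = process.env.DAPR_HTTP_PORT || "3500";
const bindingName = "myhttpbinding";
const url = `http://localhost:${daprPort}/v1.0/bindings/${bindingName}`;

// The bindings endpoint expects an operation (here "create") plus optional data.
const payload = JSON.stringify({ operation: "create", data: { hello: "world" } });

console.log(`POST ${url}`);
console.log(payload);
```

An actual call would POST this payload to the URL (for example with an HTTP client of your choice), and the sidecar would forward it to the component's configured `url` using the configured `method`.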
- -## Output Binding Supported Operations - -* create diff --git a/reference/specs/bindings/influxdb.md b/reference/specs/bindings/influxdb.md deleted file mode 100644 index f38ca564a..000000000 --- a/reference/specs/bindings/influxdb.md +++ /dev/null @@ -1,31 +0,0 @@ -# InfluxDB Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.influx - metadata: - - name: url # Required - value: - - name: token # Required - value: - - name: org # Required - value: - - name: bucket # Required - value: -``` - -- `url` is the URL for the InfluxDB instance, e.g. `http://localhost:8086`. -- `token` is the authorization token for InfluxDB. -- `org` is the InfluxDB organization. -- `bucket` is the bucket name to write to. - -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create diff --git a/reference/specs/bindings/mqtt.md b/reference/specs/bindings/mqtt.md deleted file mode 100644 index f4b200541..000000000 --- a/reference/specs/bindings/mqtt.md +++ /dev/null @@ -1,25 +0,0 @@ -# MQTT Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.mqtt - metadata: - - name: url - value: mqtt[s]://[username][:password]@host.domain[:port] - - name: topic - value: topic1 -``` - -- `url` is the MQTT broker URL. -- `topic` is the topic to listen on or send events to. - -> **Note:** In production never place passwords or secrets within Dapr components. 
For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create \ No newline at end of file diff --git a/reference/specs/bindings/redis.md b/reference/specs/bindings/redis.md deleted file mode 100644 index 0e5e703d6..000000000 --- a/reference/specs/bindings/redis.md +++ /dev/null @@ -1,28 +0,0 @@ -# Redis Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.redis - metadata: - - name: redisHost - value:
:6379 - - name: redisPassword - value: ************** - - name: enableTLS - value: -``` - -- `redisHost` is the Redis host address. -- `redisPassword` is the Redis password. -- `enableTLS` - If the Redis instance supports TLS with public certificates, this can be set to enable or disable TLS. - -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create diff --git a/reference/specs/bindings/s3.md b/reference/specs/bindings/s3.md deleted file mode 100644 index d2b985ff9..000000000 --- a/reference/specs/bindings/s3.md +++ /dev/null @@ -1,31 +0,0 @@ -# AWS S3 Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.aws.s3 - metadata: - - name: region - value: us-west-2 - - name: accessKey - value: ***************** - - name: secretKey - value: ***************** - - name: bucket - value: mybucket -``` - -- `region` is the AWS region. -- `accessKey` is the AWS access key. -- `secretKey` is the AWS secret key. -- `bucket` is the name of the S3 bucket to write to. - -> **Note:** In production never place passwords or secrets within Dapr components. 
For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create diff --git a/reference/specs/bindings/sns.md b/reference/specs/bindings/sns.md deleted file mode 100644 index 68aec9830..000000000 --- a/reference/specs/bindings/sns.md +++ /dev/null @@ -1,31 +0,0 @@ -# AWS SNS Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.aws.sns - metadata: - - name: region - value: us-west-2 - - name: accessKey - value: ***************** - - name: secretKey - value: ***************** - - name: topicArn - value: mytopic -``` - -- `region` is the AWS region. -- `accessKey` is the AWS access key. -- `secretKey` is the AWS secret key. -- `topicArn` is the ARN of the SNS topic. - -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create diff --git a/reference/specs/bindings/sqs.md b/reference/specs/bindings/sqs.md deleted file mode 100644 index d5991efaf..000000000 --- a/reference/specs/bindings/sqs.md +++ /dev/null @@ -1,31 +0,0 @@ -# AWS SQS Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.aws.sqs - metadata: - - name: region - value: us-west-2 - - name: accessKey - value: ***************** - - name: secretKey - value: ***************** - - name: queueName - value: items -``` - -- `region` is the AWS region. -- `accessKey` is the AWS access key. -- `secretKey` is the AWS secret key. -- `queueName` is the SQS queue name. - -> **Note:** In production never place passwords or secrets within Dapr components. 
For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create diff --git a/reference/specs/bindings/twilio.md b/reference/specs/bindings/twilio.md deleted file mode 100644 index 260709288..000000000 --- a/reference/specs/bindings/twilio.md +++ /dev/null @@ -1,31 +0,0 @@ -# Twilio SMS Binding Spec - -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Component -metadata: - name: - namespace: -spec: - type: bindings.twilio.sms - metadata: - - name: toNumber # required. - value: 111-111-1111 - - name: fromNumber # required. - value: 222-222-2222 - - name: accountSid # required. - value: ***************** - - name: authToken # required. - value: ***************** -``` - -- `toNumber` is the target number to send the SMS to. -- `fromNumber` is the sender phone number. -- `accountSid` is the Twilio account SID. -- `authToken` is the Twilio auth token. - -> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store) - -## Output Binding Supported Operations - -* create diff --git a/walkthroughs/daprrun.md b/walkthroughs/daprrun.md deleted file mode 100644 index 2509a4e95..000000000 --- a/walkthroughs/daprrun.md +++ /dev/null @@ -1,54 +0,0 @@ -# Sequence of Events on a dapr run in Self Hosting Mode - -This doc describes the sequence of events that occur when `dapr run` is executed in self-hosting mode. It uses [sample 1](https://github.com/dapr/quickstarts/tree/master/hello-world) as an example. - -Terminology used below: - -- Dapr CLI - the Dapr command line tool. The binary name is dapr (dapr.exe on Windows) -- Dapr runtime - this runs alongside each app. 
The binary name is daprd (daprd.exe on Windows) - -In self-hosting mode, running `dapr init` copies the Dapr runtime onto your machine and starts the Placement service (used for actor placement), Redis, and Zipkin in containers. The Redis and Placement services must be present before running `dapr run`. The Dapr CLI also creates the default components directory, if it does not already exist, which is `$HOME/.dapr/components` on Linux/MacOS and `%USERPROFILE%\.dapr\components` on Windows. The CLI creates a default `config.yaml` in `$HOME/.dapr` on Linux/MacOS or `%USERPROFILE%\.dapr` on Windows to enable tracing by default. - -What happens when `dapr run` is executed? - -```bash -dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 node app.js -``` - -First, the Dapr CLI loads the components from the default directory (specified above) for the state store and pub/sub: `statestore.yaml` and `pubsub.yaml`, respectively. [Code](https://github.com/dapr/cli/blob/51b99a988c4d1545fdc04909d6308be121a7fe0c/pkg/standalone/run.go#L196-L266). - -You can either add components to the default directory or create your own `components` directory and provide the path to the CLI using the `--components-path` flag. - -To switch components, simply replace or add the YAML files in the components directory and run `dapr run` again. -For example, by default Dapr uses the Redis state store in the default components dir. You can either override it with a different YAML, or supply your own components path. - -Then, the Dapr CLI [launches](https://github.com/dapr/cli/blob/d585612185a4a525c05fb62b86e288ccad510006/pkg/standalone/run.go#L290) two processes: the Dapr runtime and your app (in this sample `node app.js`). 
- -If you inspect the command line of the Dapr runtime and the app, observe that the Dapr runtime has these args: - -```bash -daprd.exe --app-id nodeapp --dapr-http-port 3500 --dapr-grpc-port 43693 --log-level info --app-max-concurrency -1 --app-protocol http --app-port 3000 --placement-host-address localhost:50005 -``` - -And the app has these args, which are not modified from what was passed in via the CLI: - -```bash -node app.js -``` - -### Dapr runtime - -The daprd process is started with the args above. The `--app-id` value, "nodeapp", which is the Dapr app id, is forwarded from the Dapr CLI into `daprd` as the `--app-id` arg. Similarly: - -- the `--app-port` from the CLI, which represents the port on the app that `daprd` will use to communicate with it, is passed into the `--app-port` arg. -- the `--dapr-http-port` arg from the CLI, which represents the HTTP port that daprd is listening on, is passed into the `--dapr-http-port` arg. (Note: to specify the gRPC port instead, use `--dapr-grpc-port`.) If it's not specified, it will be -1, which means the Dapr CLI will choose a random free port. Below, it's 43693; yours will vary. -### The app - -The Dapr CLI doesn't change the command line for the app itself. Since `node app.js` was specified, this will be the command it runs with. However, two environment variables are added, which the app can use to determine the ports the Dapr runtime is listening on. -The two ports below match the ports passed to the Dapr runtime above: - -```ini -DAPR_GRPC_PORT=43693 -DAPR_HTTP_PORT=3500 -```
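The environment variables above can be read directly by the app's code to locate its sidecar. A small sketch of this in Node (the state store name `statestore` matches the default local component, but the fallback ports are assumptions used only when the variables are absent):

```javascript
// Sketch: an app discovering its Dapr sidecar ports from the environment.
// `dapr run` injects DAPR_HTTP_PORT and DAPR_GRPC_PORT; fall back to common defaults.
const daprHttpPort = process.env.DAPR_HTTP_PORT || "3500";
const daprGrpcPort = process.env.DAPR_GRPC_PORT || "50001";

// e.g. the state API endpoint for a component named "statestore"
const stateUrl = `http://localhost:${daprHttpPort}/v1.0/state/statestore`;
console.log(stateUrl, daprGrpcPort);
```

Because the ports are read at startup, the same app code works regardless of which free port the CLI picked for gRPC.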