Initial configuration of hugo and docsy

This commit is contained in:
Aaron Crawfis 2020-09-14 15:15:29 -07:00
parent 1ba5968481
commit 10fe786380
239 changed files with 30852 additions and 2830 deletions

2
.gitignore vendored
View File

@@ -1,2 +1,4 @@
# Visual Studio 2015/2017/2019 cache/options directory
.vs/
package-lock.json
node_modules/

3
.gitmodules vendored Normal file
View File

@@ -0,0 +1,3 @@
[submodule "daprdocs/themes/docsy"]
	path = daprdocs/themes/docsy
	url = https://github.com/google/docsy.git

View File

@@ -1,50 +1,50 @@
# FAQ
- **[Networking and service meshes](#networking-and-service-meshes)**
- **[Actors](#actors)**
- **[Developer language SDKs and frameworks](#developer-language-sdks-and-frameworks)**
## Networking and service meshes
### Understanding how Dapr works with service meshes
Dapr is a distributed application runtime. Unlike a service mesh, which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric, whereas service meshes are infrastructure-centric.
Dapr can be used alongside any service mesh such as Istio or Linkerd. A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. A service mesh doesn't introduce new functionality to an application.
That is where Dapr comes in. Dapr is a language-agnostic programming model built on HTTP and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors, and distributed tracing. Dapr introduces new functionality to an app's runtime. Both service meshes and Dapr run as sidecars alongside your application, one providing network features and the other distributed application capabilities.
Watch this [video](https://www.youtube.com/watch?v=xxU68ewRmz8&feature=youtu.be&t=140) on how Dapr and service meshes work together.
### Understanding how Dapr interoperates with the service mesh interface (SMI)
SMI is an abstraction layer that provides a common API surface across different service mesh technologies. Dapr can leverage any service mesh technology, including SMI.
### Differences between Dapr, Istio and Linkerd
Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) for more background. Istio is an open source service mesh implementation that focuses on Layer 7 routing, traffic flow management, and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on that traffic.
Istio is not a programming model and does not focus on application level features such as state management, pub-sub, bindings etc. That is where Dapr comes in.
## Actors
### Relationship between Dapr, Orleans and Service Fabric Reliable Actors
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premises environments.
Dapr is also about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview](https://github.com/dapr/docs/blob/master/overview.md).
### Differences between Dapr and an actor framework
Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. Because Dapr is programming-language agnostic with an HTTP/gRPC API, its actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.
Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors/<actorType>/<actorId>/…`, for example `http://localhost:3500/v1.0/actors/myactor/50/method/getData` to call the `getData` method on the newly created `myactor` with id `50`.
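For instance, a minimal sketch of that call with curl (assuming a locally running sidecar on the default port 3500 and a registered actor type named `myactor`):
```bash
# Call the getData method on the actor of type "myactor" with id "50";
# the sidecar activates the actor if it isn't already active
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/method/getData
```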
The Dapr runtime SDKs have language-specific actor frameworks. The .NET SDK, for example, has C# actors. The goal is for all the Dapr language SDKs to have an actor framework.
## Developer language SDKs and frameworks
### Does Dapr have any SDKs if I want to work with a particular programming language or framework?
To make using Dapr more natural for different languages, it includes language-specific SDKs for Go, Java, JavaScript, .NET, and Python. These SDKs expose the functionality of the Dapr building blocks, such as saving state, publishing an event, or creating an actor, through a typed language API rather than calling the HTTP/gRPC API directly. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and function support.
Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.

View File

@@ -1,187 +1,187 @@
# Configure API authorization with OAuth
Dapr OAuth 2.0 [middleware](../../concepts/middleware/README.md) allows you to enable [OAuth](https://oauth.net/2/)
authorization on Dapr endpoints for your web APIs,
using the [Authorization Code Grant flow](https://tools.ietf.org/html/rfc6749#section-4.1).
It can also inject authorization tokens into your APIs, which can be used for authorization towards external APIs
called by your APIs,
using the [Client Credentials Grant flow](https://tools.ietf.org/html/rfc6749#section-4.4).
When the middleware is enabled,
any method invocation through Dapr needs to be authorized before getting passed to the user code.
The main difference between the two flows is that the
`Authorization Code Grant flow` needs user interaction and authorizes a user, while
the `Client Credentials Grant flow` doesn't need user interaction and authorizes a service/application.
## Register your application with an authorization server
Different authorization servers provide different application registration experiences. Here are some samples:
* [Azure AAD](https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-protocols-oauth-code)
* [Facebook](https://developers.facebook.com/apps)
* [Fitbit](https://dev.fitbit.com/build/reference/web-api/oauth2/)
* [GitHub](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/)
* [Google APIs](https://console.developers.google.com/apis/credentials/consen)
* [Slack](https://api.slack.com/docs/oauth)
* [Twitter](http://apps.twitter.com/)
To configure the Dapr OAuth middleware, you'll need to collect the following information:
* Client ID (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
* Client secret (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
* Scopes (see [here](https://oauth.net/2/scope/))
* Authorization URL
* Token URL
Authorization/Token URLs of some of the popular authorization servers:
|Server|Authorization URL|Token URL|
|--------|--------|--------|
|Azure AAD|<https://login.microsoftonline.com/{tenant}/oauth2/authorize>|<https://login.microsoftonline.com/{tenant}/oauth2/token>|
|GitHub|<https://github.com/login/oauth/authorize>|<https://github.com/login/oauth/access_token>|
|Google|<https://accounts.google.com/o/oauth2/v2/auth>|<https://accounts.google.com/o/oauth2/token> <https://www.googleapis.com/oauth2/v4/token>|
|Twitter|<https://api.twitter.com/oauth/authorize>|<https://api.twitter.com/oauth2/token>|
## Define the middleware component definition
### Define an Authorization Code Grant component
An OAuth middleware (Authorization Code) is defined by a component:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
  namespace: default
spec:
  type: middleware.http.oauth2
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "<comma-separated scope names>"
  - name: authURL
    value: "<authorization URL>"
  - name: tokenURL
    value: "<token exchange URL>"
  - name: redirectURL
    value: "<redirect URL>"
  - name: authHeaderName
    value: "<header name under which the secret token is saved>"
```
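For illustration only, here is a hypothetical component filled in with GitHub's endpoints from the table above; the client ID/secret values are placeholders, and `X-OAuth-Token` is an arbitrary header name chosen for this sketch:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
  namespace: default
spec:
  type: middleware.http.oauth2
  metadata:
  - name: clientId
    value: "your-github-client-id"       # placeholder
  - name: clientSecret
    value: "your-github-client-secret"   # placeholder
  - name: scopes
    value: "read:user"
  - name: authURL
    value: "https://github.com/login/oauth/authorize"
  - name: tokenURL
    value: "https://github.com/login/oauth/access_token"
  - name: redirectURL
    value: "http://localhost:3500"       # placeholder redirect
  - name: authHeaderName
    value: "X-OAuth-Token"               # arbitrary header name for this sketch
```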
### Define a custom pipeline for an Authorization Code Grant
To use the OAuth middleware (Authorization Code), you should create a [custom pipeline](../../concepts/middleware/README.md)
using [Dapr configuration](../../concepts/configuration/README.md), as shown in the following sample:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2
```
### Define a Client Credentials Grant component
An OAuth (Client Credentials) middleware is defined by a component:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myComponent
spec:
  type: middleware.http.oauth2clientcredentials
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: scopes
    value: "<comma-separated scope names>"
  - name: tokenURL
    value: "<token issuing URL>"
  - name: headerName
    value: "<header name under which the secret token is saved>"
  - name: endpointParamsQuery
    value: "<list of additional key=value settings separated by ampersands or semicolons forwarded to the token issuing service>"
  # authStyle:
  # "0" means to auto-detect which authentication
  # style the provider wants by trying both ways and caching
  # the successful way for the future.
  # "1" sends the "client_id" and "client_secret"
  # in the POST body as application/x-www-form-urlencoded parameters.
  # "2" sends the client_id and client_password
  # using HTTP Basic Authorization. This is an optional style
  # described in the OAuth2 RFC 6749 section 2.3.1.
  - name: authStyle
    value: "<see comment>"
```
### Define a custom pipeline for a Client Credentials Grant
To use the OAuth middleware (Client Credentials), you should create a [custom pipeline](../../concepts/middleware/README.md)
using [Dapr configuration](../../concepts/configuration/README.md), as shown in the following sample:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: myComponent
      type: middleware.http.oauth2clientcredentials
```
## Apply the configuration
To apply the above configuration (regardless of grant type)
to your Dapr sidecar, add a ```dapr.io/config``` annotation to your pod spec:
```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        ...
        dapr.io/config: "pipeline"
        ...
```
## Accessing the access token
### Authorization Code Grant
Once everything is in place, whenever a client tries to invoke an API method through the Dapr sidecar
(such as calling the *v1.0/invoke/* endpoint),
it will be redirected to the authorization server's consent page if an access token is not found.
Otherwise, the access token is written to the **authHeaderName** header and made available to the app code.
### Client Credentials Grant
Once everything is in place, whenever a client tries to invoke an API method through the Dapr sidecar
(such as calling the *v1.0/invoke/* endpoint),
it will retrieve a new access token if an existing valid one is not found.
The access token is written to the **headerName** header and made available to the app code.
This way the app can forward the token in the authorization header when calling an external API that requires it.
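For example, a minimal sketch of such an invocation (the app ID `myapp` and method `mymethod` are placeholders):
```bash
# Invoke a method on "myapp" through the local Dapr sidecar.
# With the OAuth middleware enabled, the request only reaches the app
# after a valid access token has been obtained and written to the
# configured header (authHeaderName / headerName).
curl http://localhost:3500/v1.0/invoke/myapp/method/mymethod
```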

View File

@@ -1,188 +1,188 @@
# Set up Application Insights for distributed tracing
Dapr integrates with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).
> Note: The local forwarder is still in preview but is being deprecated. The Application Insights team recommends using the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) (which is in an alpha state) going forward, so we're working on moving from the local forwarder to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
- [How to configure distributed tracing with Application insights](#how-to-configure-distributed-tracing-with-application-insights)
- [Tracing configuration](#tracing-configuration)
## How to configure distributed tracing with Application insights
The following steps show you how to configure Dapr to send distributed tracing data to Application insights.
### Setup Application Insights
1. First, you'll need an Azure account. Please see instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights instrumentation key from your Application Insights page
4. On the Application Insights side menu, go to `Configure -> API Access`
5. Click `Create API Key`
6. Select all checkboxes under `Choose what this API key will allow apps to do:`
- Read telemetry
- Write annotations
- Authenticate SDK control channel
7. Generate the key and copy the API key
### Setup the Local Forwarder
#### Self-hosted environment
This is for running the local forwarder on your machine.
1. Run the local forwarder
```bash
docker run -e APPINSIGHTS_INSTRUMENTATIONKEY=<Your Instrumentation Key> -e APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY=<Your API Key> -d -p 55678:55678 daprio/dapr-localforwarder:latest
```
> Note: [dapr-localforwarder](https://github.com/dapr/ApplicationInsights-LocalForwarder) is a forked version of the [ApplicationInsights Local Forwarder](https://github.com/microsoft/ApplicationInsights-LocalForwarder/) that includes minor changes for Dapr. We're working on migrating to the [OpenTelemetry SDK and OpenTelemetry Collector](https://opentelemetry.io/).
2. Create the following YAML files. Copy the native.yaml component file and tracing.yaml configuration file to the *components/* sub-folder under the same folder where you run your application.
* native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  metadata:
  - name: enabled
    value: "true"
  - name: agentEndpoint
    value: "localhost:55678"
```
* tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
```
3. When running in local self-hosted mode, you need to launch Dapr with the `--config` parameter:
```bash
dapr run --app-id mynode --app-port 3000 --config ./components/tracing.yaml node app.js
```
#### Kubernetes environment
1. Download [dapr-localforwarder.yaml](./localforwarder/dapr-localforwarder.yaml)
2. In the file, replace `<APPINSIGHT INSTRUMENTATIONKEY>` with your instrumentation key and `<APPINSIGHT API KEY>` with the generated API key
```yaml
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
- name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
  value: <APPINSIGHT API KEY> # Replace with your generated api key
```
3. Deploy dapr-localforwarder.yaml
```bash
kubectl apply -f ./dapr-localforwarder.yaml
```
4. Create the following YAML files
* native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  metadata:
  - name: enabled
    value: "true"
  - name: agentEndpoint
    value: "<Local forwarder address, e.g. dapr-localforwarder.default.svc.cluster.local:55678>"
```
* tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
```
5. Use kubectl to apply the above CRD files:
```bash
kubectl apply -f tracing.yaml
kubectl apply -f native.yaml
```
6. Deploy your app with tracing
When running in Kubernetes mode, apply the configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "calculator-front-end"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"
```
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
> **NOTE**: You can register multiple exporters at the same time, and the tracing logs are forwarded to all registered exporters.
Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
![Application map](../../images/azure-monitor.png)
> **NOTE**: Only operations going through the Dapr API exposed by the Dapr sidecar (e.g. service invocation or event publishing) are displayed in the Application Map topology. Direct service invocations (not going through the Dapr API) are not shown.
## Tracing configuration
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
  samplingRate: "1"
```
The following table lists the different properties.
Property | Type | Description
---- | ------- | -----------
samplingRate | string | Sampling rate for tracing; used to enable or disable tracing.
`samplingRate` is used to enable or disable tracing. To disable tracing,
set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on its value. `samplingRate: "1"` samples all traces. By default, the sampling rate is 1 in 10,000.
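For example, a configuration that disables tracing entirely (following the same `Configuration` shape shown earlier) sets the sampling rate to zero:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "0"   # "0" disables trace sampling
```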
## References
* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md)

View File

@@ -1,106 +1,106 @@
# Using PubSub across multiple namespaces
In some scenarios, applications can be spread across namespaces and share a queue or topic via PubSub. In this case, the PubSub component must be provisioned in each namespace.
In this example, we use the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). The Redis installation and the subscribers are in `namespace-a`, while the publisher UI is in `namespace-b`. This solution also works if Redis is installed in another namespace, or if you use a managed cloud service like Azure Service Bus.
The table below shows which resources are deployed to which namespaces:
| Resource | namespace-a | namespace-b |
|-|-|-|
| Redis master | X ||
| Redis slave | X ||
| Dapr's PubSub component | X | X |
| Node subscriber | X ||
| Python subscriber | X ||
| React UI publisher | | X|
## Prerequisites
* [Dapr installed](https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md) on any namespace, since Dapr works at the cluster level.
* Check out and cd into the directory for the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub).
## Setup `namespace-a`
Create namespace and switch kubectl to use it.
```
kubectl create namespace namespace-a
kubectl config set-context --current --namespace=namespace-a
```
Install Redis (master and slave) on `namespace-a`, following [these instructions](https://github.com/dapr/docs/blob/master/howto/setup-pub-sub-message-broker/setup-redis.md).
Now, configure `deploy/redis.yaml`, paying attention to the hostname containing `namespace-a`.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: "redisHost"
    value: "redis-master.namespace-a.svc:6379"
  - name: "redisPassword"
    value: "YOUR_PASSWORD"
```
Deploy resources to `namespace-a`:
```
kubectl apply -f deploy/redis.yaml
kubectl apply -f deploy/node-subscriber.yaml
kubectl apply -f deploy/python-subscriber.yaml
```
## Setup `namespace-b`
Create namespace and switch kubectl to use it.
```
kubectl create namespace namespace-b
kubectl config set-context --current --namespace=namespace-b
```
Deploy resources to `namespace-b`, including the Redis component:
```
kubectl apply -f deploy/redis.yaml
kubectl apply -f deploy/react-form.yaml
```
Now, find the IP address for react-form, open it in your browser, and publish messages to each topic (A, B, and C).
```
kubectl get service -A
```
## Confirm subscribers received the messages
Switch back to `namespace-a`:
```
kubectl config set-context --current --namespace=namespace-a
```
Find the pod names:
```
kubectl get pod # Copy POD names and use in the next commands.
```
Display logs:
```
kubectl logs node-subscriber-XYZ node-subscriber
kubectl logs python-subscriber-XYZ python-subscriber
```
The messages published in the browser should show up in the corresponding subscriber's logs. The Node.js subscriber receives messages of type "A" and "B", while the Python subscriber receives messages of type "A" and "C".
## Clean up
```
kubectl delete -f deploy/redis.yaml --namespace namespace-a
kubectl delete -f deploy/node-subscriber.yaml --namespace namespace-a
kubectl delete -f deploy/python-subscriber.yaml --namespace namespace-a
kubectl delete -f deploy/react-form.yaml --namespace namespace-b
kubectl delete -f deploy/redis.yaml --namespace namespace-b
kubectl config set-context --current --namespace=default
kubectl delete namespace namespace-a
kubectl delete namespace namespace-b
```

View File

@@ -1,133 +1,133 @@
# Limit the Pub/Sub topics used or scope them to one or more applications
[Namespaces or component scopes](../components-scopes/README.md) can be used to limit component access to particular applications. Application scopes added to a component restrict its use to the applications with the specified IDs.
In addition to this general component scope, the following can be limited for pub/sub components:
- the topics which can be used (published or subscribed)
- which applications are allowed to publish to specific topics
- which applications are allowed to subscribe to specific topics
This is called pub/sub topic scoping.
Watch this [video](https://www.youtube.com/watch?v=7VdWBBGcbHQ&feature=youtu.be&t=513) on how to use pub/sub topic scoping.
To use this topic scoping, three metadata properties can be set for a pub/sub component:
- ```spec.metadata.publishingScopes```: the list of application-to-topic scopes to allow publishing, separated by semicolons. If an app is not specified in ```publishingScopes```, it's allowed to publish to all topics.
- ```spec.metadata.subscriptionScopes```: the list of application-to-topic scopes to allow subscription, separated by semicolons. If an app is not specified in ```subscriptionScopes```, it's allowed to subscribe to all topics.
- ```spec.metadata.allowedTopics```: a comma-separated list of allowed topics for all applications. ```publishingScopes``` or ```subscriptionScopes``` can be used in addition to add granular limitations. If ```allowedTopics``` is not set, all topics are valid, and ```subscriptionScopes``` and ```publishingScopes``` take effect if present.
These metadata properties can be used for all pub/sub components. The following examples use Redis as the pub/sub component.
## Scenario 1: Limit which application can publish or subscribe to topics
This can be useful if you have topics which contain sensitive information and only a subset of your applications are allowed to publish or subscribe to them.
It can also be applied to all topics to always have a "ground truth" for which applications use which topics as publishers/subscribers.
Here is an example of three applications and three topics:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: publishingScopes
    value: "app1=topic1;app2=topic2,topic3;app3="
  - name: subscriptionScopes
    value: "app2=;app3=topic1"
```
The table below shows which application is allowed to publish into the topics:
| Publishing | app1 | app2 | app3 |
|------------|------|------|------|
| topic1 | X | | |
| topic2 | | X | |
| topic3 | | X | |
The table below shows which application is allowed to subscribe to the topics:
| Subscription | app1 | app2 | app3 |
|--------------|------|------|------|
| topic1 | X | | X |
| topic2 | X | | |
| topic3 | X | | |
> Note: If an application is not listed (e.g. app1 in subscriptionScopes), it is allowed to subscribe to all topics. Because ```allowedTopics``` (see the examples below) is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above.
## Scenario 2: Limit which topics can be used by all applications without granular limitations
A topic is created when a Dapr application sends a message to it. In some scenarios this topic creation should be governed. For example:
- a bug in a Dapr application that generates topic names can lead to an unlimited number of topics being created
- you may want to streamline topic names and their total count and prevent unlimited growth of topics
In these situations, ```allowedTopics``` can be used.
Here is an example of three allowed topics:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "topic1,topic2,topic3"
```
All applications can use these topics, but only these topics; no others are allowed.
## Scenario 3: Combine allowed topics and applications that can publish and subscribe
Sometimes you want to combine both scopes, having a fixed set of allowed topics while also scoping certain applications.
Here is an example of three applications and two topics:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: redisPassword
    value: ""
  - name: allowedTopics
    value: "A,B"
  - name: publishingScopes
    value: "app1=A"
  - name: subscriptionScopes
    value: "app1=;app2=A"
```
> Note: The third application is not listed, because if an app is not specified inside the scopes, it is allowed to use all topics.
The table below shows which application is allowed to publish into the topics:
| Publishing | app1 | app2 | app3 |
|------------|------|------|------|
| A | X | X | X |
| B | | X | X |
The table below shows which application is allowed to subscribe to the topics:
| Subscription | app1 | app2 | app3 |
|--------------|------|------|------|
| A | | X | X |
| B | | | X |
No other topics can be used, only A and B.
Pub/sub scopes are per pub/sub component. You may have a pub/sub component named `pubsub` that has one set of scopes and another, `pubsub2`, with a different set. The name is the `metadata.name` field in the YAML.
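As an illustrative sketch of Scenario 3 (assuming the sidecar listens on the default port 3500 and an HTTP publish endpoint of the form `/v1.0/publish/<pubsub-name>/<topic>`), publishing from `app1` would behave like this:
```bash
# Allowed: app1 is scoped to publish to topic A
curl -X POST http://localhost:3500/v1.0/publish/pubsub/A \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'

# Denied: topic B is not in app1's publishingScopes, so the
# sidecar rejects this request with a permission error
curl -X POST http://localhost:3500/v1.0/publish/pubsub/B \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'
```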

View File

@@ -1,53 +1,53 @@
# Query Azure Cosmos DB state store
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
> **NOTE:** Azure Cosmos DB is a multi-model database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).
## 1. Connect to Azure Cosmos DB
The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).
> **NOTE:** The following samples use the Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB state store for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states".
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'myapp||')
```
The above query returns all documents with an id containing "myapp||", which is the prefix of the state keys.
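For context, here is a sketch of how such keys come about; it assumes a state store component named `statestore` and the Dapr HTTP state API on the default port 3500:
```bash
# Hypothetical: app "myapp" saves a state entry under the key "balance".
# Dapr prefixes the key with the app ID, producing the document id "myapp||balance".
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "balance", "value": 100 }]'
```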
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE states.id = 'myapp||balance'
```
Then, read the **value** field of the returned document.
To get the state version/ETag, use the command:
```sql
SELECT states._etag FROM states WHERE states.id = 'myapp||balance'
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||')
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food'
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.

# Query Redis state store
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
>**NOTE:** The following examples use the Redis CLI against a Redis store with the default Dapr state store implementation.
## 1. Connect to Redis
You can use the official [redis-cli](https://redis.io/topics/rediscli) or any other Redis-compatible tool to connect to the Redis state store and directly query Dapr states. If you are running Redis in a container, the easiest way to use redis-cli is to run it in another container:
```bash
docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container>
```
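If Redis is listening directly on the host (for example, the default local Dapr setup on port 6379), you can connect without Docker:
```bash
redis-cli -h localhost -p 6379
```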
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the command:
```bash
KEYS myapp*
```
The above command returns a list of existing keys, for example:
```bash
1) "myapp||balance"
2) "myapp||amount"
```
## 3. Get specific state data
Dapr saves state values as hash values. Each hash value contains a "data" field, which holds the state data, and a "version" field, which holds an ever-incrementing version serving as the ETag.
For example, to get the state data by a key "balance" for the application "myapp", use the command:
```bash
HGET myapp||balance data
```
To get the state version/ETag, use the command:
```bash
HGET myapp||balance version
```
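You can also inspect both fields of the hash at once with `HGETALL` (illustrative output; the actual data and version depend on your state):
```bash
HGETALL myapp||balance
1) "data"
2) "100"
3) "version"
4) "1"
```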
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```bash
KEYS mypets||cat||leroy*
```
And to get a specific actor state such as "food", use the command:
```bash
HGET mypets||cat||leroy||food value
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.

# Query SQL Server state store
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
## 1. Connect to SQL Server
The easiest way to connect to your SQL Server instance is to use the [Azure Data Studio](https://docs.microsoft.com/sql/azure-data-studio/download-azure-data-studio) (Windows, macOS, Linux) or [SQL Server Management Studio](https://docs.microsoft.com/sql/ssms/download-sql-server-management-studio-ssms) (Windows).
> **NOTE:** The following samples use Azure SQL. When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The following samples assume you've already connected to the right database with a table named "states".
## 2. List keys by App ID
To get all state keys associated with application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] LIKE 'myapp-%'
```
The above query returns all rows whose key starts with "myapp-", which is the prefix of the state keys.
## 3. Get specific state data
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE [Key] = 'myapp-balance'
```
Then, read the **Data** field of the returned row.
To get the state version/ETag, use the command:
```sql
SELECT [RowVersion] FROM states WHERE [Key] = 'myapp-balance'
```
## 4. Get filtered state data
To get all state data where the "color" property in the JSON data equals "blue", use the query:
```sql
SELECT * FROM states WHERE JSON_VALUE([Data], '$.color') = 'blue'
```
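The JSON filter can be combined with the key prefix to scope the query to a single application (a sketch using the same table and key format as above):
```sql
SELECT * FROM states WHERE [Key] LIKE 'myapp-%' AND JSON_VALUE([Data], '$.color') = 'blue'
```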
## 5. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE [Key] LIKE 'mypets-cat-leroy-%'
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE [Key] = 'mypets-cat-leroy-food'
```
> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.

# Serialization in Dapr's SDKs
An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization.
## Service invocation
```java
DaprClient client = (new DaprClientBuilder()).build();
client.invokeService(Verb.POST, "myappid", "saySomething", "My Message", null).block();
```
In the example above, the app will receive a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON.
```text
POST /saySomething HTTP/1.1
Host: localhost
Content-Type: text/plain
Content-Length: 12
"My Message"
```
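On the receiving side, a minimal Spring-style handler sees the quoted JSON payload (a sketch; the handler wiring mirrors the subscriber examples later in this document, and the method name is an assumption):
```java
@PostMapping(path = "/saySomething")
public String saySomething(@RequestBody(required = false) byte[] body) {
    // The body holds the JSON-serialized payload, e.g. "My Message" (including quotes).
    String message = new String(body, java.nio.charset.StandardCharsets.UTF_8);
    System.out.println(message);
    return "OK";
}
```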
## State management
```java
DaprClient client = (new DaprClientBuilder()).build();
client.saveState("MyStateStore", "MyKey", "My Message").block();
```
In this example, `My Message` will be saved. It is not quoted because Dapr's API will internally parse the JSON request object before saving it.
```JSON
[
{
"key": "MyKey",
"value": "My Message"
}
]
```
## PubSub
```java
DaprClient client = (new DaprClientBuilder()).build();
client.publishEvent("TopicName", "My Message").block();
```
The event is published with its content serialized to `byte[]` and sent to the Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec), which defines `data` as a String. The Dapr SDK also provides a built-in deserializer for the `CloudEvent` object.
```java
@PostMapping(path = "/TopicName")
public void handleMessage(@RequestBody(required = false) byte[] body) {
// Dapr's event is compliant to CloudEvent.
CloudEvent event = CloudEvent.deserialize(body);
}
```
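Once deserialized, the payload can be read from the event's data attribute (a sketch; it assumes the SDK's `CloudEvent` type exposes a `getData()` accessor returning the String payload):
```java
@PostMapping(path = "/TopicName")
public void handleMessage(@RequestBody(required = false) byte[] body) {
    CloudEvent event = CloudEvent.deserialize(body);
    // Assumption: getData() returns the CloudEvent "data" attribute as a String.
    String message = event.getData();
    System.out.println(message);
}
```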
## Bindings
For bindings, the object is serialized to `byte[]` as well; the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type.
* Output binding:
```java
DaprClient client = (new DaprClientBuilder()).build();
client.invokeBinding("sample", "My Message").block();
```
* Input binding:
```java
@PostMapping(path = "/sample")
public void handleInputBinding(@RequestBody(required = false) byte[] body) {
String message = (new DefaultObjectSerializer()).deserialize(body, String.class);
System.out.println(message);
}
```
It should print:
```
My Message
```
## Actor Method invocation
Object serialization and deserialization for invocation of an Actor's methods work the same as for service method invocation. The only difference is that the application does not need to deserialize the request or serialize the response, since that is all done transparently by the SDK.
For Actor methods, the SDK only supports methods with zero or one parameter.
* Invoking an Actor's method:
```java
public static void main(String[] args) {
    ActorProxyBuilder builder = new ActorProxyBuilder("DemoActor");
    // Build a proxy for a specific actor instance; the ID "100" is illustrative.
    ActorProxy actor = builder.build(new ActorId("100"));
    String result = actor.invokeActorMethod("say", "My Message", String.class).block();
}
```
* Implementing an Actor's method:
```java
public String say(String something) {
System.out.println(something);
return "OK";
}
```
It should print:
```
My Message
```
## Actor's state management
Actors can also have state. In this case, the state manager serializes and deserializes the objects using the state serializer and handles this transparently for the application.
```java
public String actorMethod(String message) {
// Reads a state from key and deserializes it to String.
String previousMessage = super.getActorStateManager().get("lastmessage", String.class).block();
// Sets the new state for the key after serializing it.
super.getActorStateManager().set("lastmessage", message).block();
return previousMessage;
}
```
## Default serializer
The default serializer for Dapr is a JSON serializer with the following expectations:
1. Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, boolean, null, and nested JSON objects. Every complex property type in an application's serializable objects (DateTime, for example) should be represented as one of JSON's basic types.
2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The example below shows how a string and a JSON object look in a Redis store.
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message"
"This is a message to be saved and retrieved."
```
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata"
{"value":"My data value."}
```
3. Custom serializers must serialize objects to `byte[]` (see the sketch after this list).
4. Custom serializers must deserialize `byte[]` to objects.
5. When the user provides a custom serializer, data should be transferred or persisted as `byte[]`. When persisting, also encode it as a Base64 string. This is done natively by most JSON libraries.
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message"
"VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4="
```
```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata"
"eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0="
```
6. When serializing an object that is already a `byte[]`, the serializer should just pass it through, since `byte[]` should already be handled internally by the SDK. The same applies when deserializing to `byte[]`.
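As a concrete illustration of expectations 3 and 4, here is a minimal custom serializer built on Jackson (a sketch; it is a standalone class, not tied to any particular SDK interface):
```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;

public class JacksonObjectSerializer {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Expectation 3: serialize any object to byte[].
    public byte[] serialize(Object value) throws IOException {
        return MAPPER.writeValueAsBytes(value);
    }

    // Expectation 4: deserialize byte[] back into the requested type.
    public <T> T deserialize(byte[] data, Class<T> clazz) throws IOException {
        return MAPPER.readValue(data, clazz);
    }
}
```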
*As of now, the [Java SDK](https://github.com/dapr/java-sdk/) is the only Dapr SDK that implements this specification. In the near future, other SDKs will also implement the same.*
