Merge branch 'v1.7' into clean_up_how_tos

This commit is contained in:
greenie-msft 2022-03-31 10:07:32 -07:00 committed by GitHub
commit 79365a57a5
75 changed files with 2520 additions and 663 deletions

View File

@ -6,12 +6,8 @@ weight: 30
description: Learn more about actor reentrancy
---
{{% alert title="Preview feature" color="warning" %}}
Actor reentrancy is currently in [preview]({{< ref preview-features.md >}}).
{{% /alert %}}
## Actor reentrancy
A core tenet of the virtual actor pattern is the single-threaded nature of actor execution. Before reentrancy, this caused the Dapr runtime to lock an actor on any given request. A second request could not start until the first had completed. This behavior means an actor cannot call itself, or have another actor call into it even if it is part of the same chain. Reentrancy solves this by allowing requests from the same chain or context to re-enter into an already locked actor. Examples of chains that reentrancy allows can be seen below:
A core tenet of the virtual actor pattern is the single-threaded nature of actor execution. Without reentrancy, the Dapr runtime locks on all actor requests, even those in the same call chain: a second request cannot start until the first has completed. This means an actor cannot call itself, or have another actor call into it, even as part of the same chain. Reentrancy solves this by allowing requests from the same chain, or context, to re-enter an already locked actor. This is especially useful when an actor wants to call a method on itself, or in workflows where worker actors call back into the coordinating actor. Examples of chains that reentrancy allows are shown below:
```
Actor A -> Actor A
@ -20,25 +16,68 @@ ActorA -> Actor B -> Actor A
With reentrancy, there can be more complex actor calls without sacrificing the single-threaded behavior of virtual actors.
## Enabling actor reentrancy
Actor reentrancy is currently in preview, so enabling it is a two step process.
<img src="/images/actor-reentrancy.png" width=1000 height=500 alt="Diagram showing reentrancy for a coordinator workflow actor calling worker actors or an actor calling a method on itself">
### Preview feature configuration
Before using reentrancy, the feature must be enabled in Dapr. For more information on preview configurations, see [the full guide on opting into preview features in Dapr]({{< ref preview-features.md >}}). Below is an example of the configuration for actor reentrancy:
The `maxStackDepth` parameter sets a value that controls how many reentrant calls can be made to the same actor. By default this is set to 32, which is more than sufficient in most cases.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: reentrantconfig
spec:
  features:
    - name: Actor.Reentrancy
      enabled: true
```
## Enable Actor Reentrancy with Actor Configuration
The actor that will be reentrant must provide configuration to use reentrancy. This is done via the actor's `GET /dapr/config` endpoint, similar to other actor configuration elements.
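For reference, a minimal sketch of the JSON an actor host might return from `GET /dapr/config` with reentrancy enabled (the actor type name and values shown are illustrative; other configuration fields are omitted):
```json
{
  "entities": ["DemoActor"],
  "reentrancy": {
    "enabled": true,
    "maxStackDepth": 32
  }
}
```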
{{< tabs Dotnet Python Go >}}
{{% codetab %}}
```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<BankService>();
        services.AddActors(options =>
        {
            options.Actors.RegisterActor<DemoActor>();
            options.ReentrancyConfig = new Dapr.Actors.ActorReentrancyConfig()
            {
                Enabled = true,
                MaxStackDepth = 32,
            };
        });
    }
}
```
### Actor runtime configuration
Once actor reentrancy is enabled as an opt-in preview feature, the actor that will be reentrant must also provide the appropriate configuration to use reentrancy. This is done by the actor's endpoint for `GET /dapr/config`, similar to other actor configuration elements. Here is a snippet of an actor written in Golang providing the configuration:
{{% /codetab %}}
{{% codetab %}}
```python
from fastapi import FastAPI
from dapr.ext.fastapi import DaprActor
from dapr.actor.runtime.config import ActorRuntimeConfig, ActorReentrancyConfig
from dapr.actor.runtime.runtime import ActorRuntime
from demo_actor import DemoActor

reentrancyConfig = ActorReentrancyConfig(enabled=True)
config = ActorRuntimeConfig(reentrancy=reentrancyConfig)
ActorRuntime.set_actor_config(config)

app = FastAPI(title=f'{DemoActor.__name__}Service')
actor = DaprActor(app)

@app.on_event("startup")
async def startup_event():
    # Register DemoActor
    await actor.register_actor(DemoActor)

@app.get("/MakeExampleReentrantCall")
def do_something_reentrant():
    # invoke another actor here, reentrancy will be handled automatically
    return
```
{{% /codetab %}}
{{% codetab %}}
Here is a snippet of an actor written in Golang providing the reentrancy configuration via the HTTP API. Reentrancy has not yet been included in the Go SDK.
```go
type daprConfig struct {
@ -69,7 +108,7 @@ func configHandler(w http.ResponseWriter, r *http.Request) {
### Handling reentrant requests
The key to a reentrant request is the `Dapr-Reentrancy-Id` header. The value of this header is used to match requests to their call chain and allow them to bypass the actor's lock.
The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below is a snippet of code from an actor handling this in Golang:
The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below are the snippets of code from an actor handling this:
```go
func reentrantCallHandler(w http.ResponseWriter, r *http.Request) {
@ -91,4 +130,11 @@ func reentrantCallHandler(w http.ResponseWriter, r *http.Request) {
}
```
Currently, no SDK supports actor reentrancy. In the future, the method for handling the reentrancy id may be different based on the SDK that is being used.
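As an illustrative sketch only (the actor type, ID, method name, and reentrancy ID below are placeholders), an application calling the actors HTTP API directly would forward the header it received on the incoming request when calling back into an actor in the same chain:
```bash
# Forward the reentrancy ID from the incoming request so the call
# re-enters the already locked actor instead of blocking on it.
curl -X POST http://localhost:3500/v1.0/actors/DemoActor/1/method/doWork \
  -H "Content-Type: application/json" \
  -H "Dapr-Reentrancy-Id: <id-from-incoming-request>" \
  -d '{}'
```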
{{% /codetab %}}
{{< /tabs >}}
Watch this [video](https://www.youtube.com/watch?v=QADHQ5v-gww&list=PLcip_LgkYwzuF-OV6zKRADoiBvUvGhkao&t=674s) on how to use actor reentrancy.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/QADHQ5v-gww?start=674" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

View File

@ -50,5 +50,5 @@ Read the [Use output bindings to interface with external resources]({{< ref howt
* Follow these guides on:
* [How-To: Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
* [How-To: Use output bindings to interface with external resources]({{< ref howto-bindings.md >}})
* Try out the [bindings quickstart](https://github.com/dapr/quickstarts/tree/master/bindings/README.md) which shows how to bind to a Kafka queue
* Try out the [bindings quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/bindings/README.md) which shows how to bind to a Kafka queue
* Read the [bindings API specification]({{< ref bindings_api.md >}})

View File

@ -294,4 +294,4 @@ You can now correlate the calls in your app and across services with Dapr using
- [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
- [How to set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C trace context specification](https://www.w3.org/TR/trace-context/)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)

View File

@ -108,4 +108,4 @@ In the gRPC API calls, trace context is passed through `grpc-trace-bin` header.
- [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
- [How To set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C trace context specification](https://www.w3.org/TR/trace-context/)
- [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability)
- [Observability sample](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)

View File

@ -112,7 +112,7 @@ The publish/subscribe API is located in the [API reference]({{< ref pubsub_api.m
* Follow these guides on:
* [How-To: Publish a message and subscribe to a topic]({{< ref howto-publish-subscribe.md >}})
* [How-To: Configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
* Try out the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
* Try out the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/tutorials/pub-sub)
* Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
* Learn about [message time-to-live (TTL)]({{< ref pubsub-message-ttl.md >}})
* Learn about [pubsub without CloudEvent]({{< ref pubsub-raw.md >}})

View File

@ -237,4 +237,5 @@ main();
- [Configure a secret store]({{<ref setup-secret-store>}})
- [Supported secrets]({{<ref supported-secret-stores>}})
- [Using secrets in components]({{<ref component-secrets>}})
- [Secret stores quickstart](https://github.com/dapr/quickstarts/tree/master/secretstore)
- [Secret stores tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/secretstore)

View File

@ -5,9 +5,6 @@ linkTitle: "How-To: Invoke with gRPC"
description: "Call between services using service invocation"
weight: 3000
---
{{% alert title="Preview feature" color="warning" %}}
gRPC proxying is currently in [preview]({{< ref preview-features.md >}}).
{{% /alert %}}
This article describes how to use Dapr to connect services using gRPC.
By using Dapr's gRPC proxying capability, you can use your existing proto based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr service invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
@ -70,27 +67,8 @@ This Go app implements the Greeter proto service and exposes a `SayHello` method
### Run the gRPC server using the Dapr CLI
Since gRPC proxying is currently a preview feature, you need to opt-in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: serverconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
  features:
    - name: proxy.grpc
      enabled: true
```
Run the sidecar and the Go server:
```bash
dapr run --app-id server --app-port 50051 --config config.yaml -- go run main.go
dapr run --app-id server --app-port 50051 -- go run main.go
```
Using the Dapr CLI, we're assigning a unique id to the app, `server`, using the `--app-id` flag.
@ -183,7 +161,7 @@ response = stub.SayHello(request={ name: 'Darth Revan' }, metadata=metadata)
const metadata = new grpc.Metadata();
metadata.add('dapr-app-id', 'server');
client.sayHello({ name: "Darth Malgus", metadata })
client.sayHello({ name: "Darth Malgus" }, metadata)
```
{{% /codetab %}}
@ -205,25 +183,8 @@ context.AddMetadata("dapr-app-id", "Darth Sidious");
### Run the client using the Dapr CLI
Since gRPC proxying is currently a preview feature, you need to opt-in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: serverconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
  features:
    - name: proxy.grpc
      enabled: true
```
```bash
dapr run --app-id client --dapr-grpc-port 50007 --config config.yaml -- go run main.go
dapr run --app-id client --dapr-grpc-port 50007 -- go run main.go
```
### View telemetry
@ -232,28 +193,7 @@ If you're running Dapr locally with Zipkin installed, open the browser at `http:
## Deploying to Kubernetes
### Step 1: Apply the following configuration YAML using `kubectl`
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: serverconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
  features:
    - name: proxy.grpc
      enabled: true
```
```bash
kubectl apply -f config.yaml
```
### Step 2: set the following Dapr annotations on your pod
Set the following Dapr annotations on your deployment:
```yaml
apiVersion: apps/v1
@ -277,13 +217,11 @@ spec:
dapr.io/app-id: "server"
dapr.io/app-protocol: "grpc"
dapr.io/app-port: "50051"
dapr.io/config: "serverconfig"
...
```
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref arguments-annotations-overview.md >}}))*
The `dapr.io/app-protocol: "grpc"` annotation tells Dapr to invoke the app using gRPC.
The `dapr.io/config: "serverconfig"` annotation tells Dapr to use the configuration applied above that enables gRPC proxying.
### Namespaces

View File

@ -6,18 +6,30 @@ weight: 1000
description: "Call between services deployed to different namespaces"
---
This article describes how you can call between services deployed to different namespaces. By default, you can invoke services within the *same* namespace by simply referencing the app ID (`nodeapp`):
In this article, you'll learn how you can call between services deployed to different namespaces. By default, service invocation supports invoking services within the *same* namespace by simply referencing the app ID (`nodeapp`):
```sh
localhost:3500/v1.0/invoke/nodeapp/method/neworder
```
Service invocation also supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace. You can therefore specify both the app ID (`nodeapp`) and the namespace the app runs in (`production`). For example, calling the `neworder` method on `nodeapp` in the `production` namespace would look like this:
Service invocation also supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace. You can specify both:
- The app ID (`nodeapp`), and
- The namespace the app runs in (`production`).
**Example 1**
Call the `neworder` method on the `nodeapp` in the `production` namespace:
```sh
localhost:3500/v1.0/invoke/nodeapp.production/method/neworder
```
When using service invocation to call an application in a namespace you qualify it with the namespace. This is especially useful in cross namespace calls in a Kubernetes cluster. As another example, calling the `ping` method on `myapp` which is scoped to the `production` namespace would look like this:
When calling an application in a namespace using service invocation, you qualify it with the namespace. This proves useful in cross-namespace calls in a Kubernetes cluster.
**Example 2**
Call the `ping` method on `myapp` scoped to the `production` namespace:
```bash
https://localhost:3500/v1.0/invoke/myapp.production/method/ping
@ -28,6 +40,7 @@ https://localhost:3500/v1.0/invoke/myapp.production/method/ping
Call the same `ping` method as example 2 using a curl command from an external DNS address (in this case, `api.demo.dapr.team`) and supply the Dapr API token for authentication:
MacOS/Linux:
```
curl -i -d '{ "message": "hello" }' \
-H "Content-type: application/json" \

View File

@ -45,7 +45,7 @@ Applications can be scoped to namespaces for deployment and security, and you ca
### Service-to-service security
All calls between Dapr applications can be made secure with mutual (mTLS) authentication on hosted platforms, including automatic certificate rollover, via the Dapr Sentry service. The diagram below shows this for self hosted applications.
All calls between Dapr applications can be made secure with mutual (mTLS) authentication on hosted platforms, including automatic certificate rollover, via the Dapr Sentry service.
For more information read the [service-to-service security]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}) article.
@ -95,7 +95,7 @@ Dapr allows users to keep their own proto services and work natively with gRPC.
## Example
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a python app invokes a node.js app. In such a scenario, the python app would be "Service A" , and a Node.js app would be "Service B".
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/tutorials/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A", and the Node.js app would be "Service B".
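For instance, in that quickstart the Python app periodically calls the Node.js app through its own Dapr sidecar. A minimal sketch of that call, assuming the sidecar's default HTTP port of 3500 and the quickstart's `nodeapp` app ID and `neworder` method:
```bash
# Service A (Python app) invoking Service B (nodeapp) via the Dapr sidecar
curl -X POST http://localhost:3500/v1.0/invoke/nodeapp/method/neworder \
  -H "Content-Type: application/json" \
  -d '{"data": {"orderId": "42"}}'
```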
The diagram below shows sequence 1-7 again on a local machine showing the API calls:
@ -115,6 +115,6 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
- [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
- [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
- [How-to: Invoke services using gRPC]({{< ref howto-invoke-services-grpc.md >}})
- Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
- Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/tutorials/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
- Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
- Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers

View File

@ -7,35 +7,18 @@ description: "Automatically encrypt state and manage key rotations"
---
{{% alert title="Preview feature" color="warning" %}}
State store encryption is currently in [preview]({{< ref preview-features.md >}}).
{{% /alert %}}
## Introduction
Application state often needs to get encrypted at rest to provide stronger security in enterprise workloads or regulated environments. Dapr offers automatic client side encryption based on [AES256](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard).
Application state often needs to get encrypted at rest to provide stronger security in enterprise workloads or regulated environments. Dapr offers automatic client side encryption based on [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode), supporting keys of 128, 192, and 256-bits.
In addition to automatic encryption, Dapr supports primary and secondary encryption keys to make it easier for developers and ops teams to enable a key rotation strategy.
This feature is supported by all Dapr state stores.
The encryption keys are fetched from a secret, and cannot be supplied as plaintext values on the `metadata` section.
The encryption keys are always fetched from a secret, and cannot be supplied as plaintext values on the `metadata` section.
## Enabling automatic encryption
1. Enable the state encryption preview feature using a standard [Dapr Configuration]({{< ref configuration-overview.md >}}):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: stateconfig
spec:
  features:
    - name: State.Encryption
      enabled: true
```
2. Add the following `metadata` section to any Dapr supported state store:
1. Add the following `metadata` section to any Dapr supported state store:
```yaml
metadata:
@ -67,7 +50,15 @@ spec:
```
You now have a Dapr state store that's configured to fetch the encryption key from a secret named `mysecret`, containing the actual encryption key in a key named `mykey`.
The actual encryption key *must* be an AES256 encryption key. Dapr will error and exit if the encryption key is invalid.
The actual encryption key *must* be a valid, hex-encoded encryption key. We recommend using 128-bit encryption keys; 192-bit and 256-bit keys are supported too. Dapr errors and exits if the encryption key is invalid.
> As an example, you can generate a random, hex-encoded 128-bit (16-byte) key with:
>
> ```sh
> openssl rand 16 | hexdump -v -e '/1 "%02x"'
> # Result will be similar to "cb321007ad11a9d23f963bff600d58e0"
> ```
*Note that the secret store does not have to support keys*
@ -89,8 +80,9 @@ metadata:
When Dapr starts, it will fetch the secrets containing the encryption keys listed in the `metadata` section. Dapr knows which state item has been encrypted with which key automatically, as it appends the `secretKeyRef.name` field to the end of the actual state key.
To rotate a key, simply change the `primaryEncryptionKey` to point to a secret containing your new key, and move the old primary encryption key to the `secondaryEncryptionKey`. New data will be encrypted using the new key, and old data that's retrieved will be decrypted using the secondary key. Any updates to data items encrypted using the old key will be re-encrypted using the new key.
To rotate a key, change the `primaryEncryptionKey` to point to a secret containing your new key, and move the old primary encryption key to the `secondaryEncryptionKey`. New data will be encrypted using the new key, and old data that's retrieved will be decrypted using the secondary key. Any updates to data items encrypted using the old key will be re-encrypted using the new key. Note that when you rotate a key, data encrypted with the old key is not automatically re-encrypted unless your application writes it again. If you remove the rotated key (the now-secondary encryption key), you will not be able to access data that was encrypted with it.
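As a sketch of what a rotated configuration could look like (the secret and key names below are hypothetical), the state store's `metadata` would reference the new key as primary and the previous key as secondary:
```yaml
# hypothetical secret and key names shown for illustration
metadata:
- name: primaryEncryptionKey
  secretKeyRef:
    name: newsecret
    key: newkey
- name: secondaryEncryptionKey
  secretKeyRef:
    name: mysecret
    key: mykey
```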
## Related links
- [Security overview]({{< ref "security-concept.md" >}})
- [State store query API implementation guide](https://github.com/dapr/components-contrib/blob/master/state/Readme.md#implementing-state-query-api)
- [State store components]({{< ref "supported-state-stores.md" >}})

View File

@ -128,7 +128,7 @@ The state management API can be found in the [state management API reference]({{
* [How-To: Query state]({{< ref howto-state-query-api.md >}})
* [How-To: Encrypt application state]({{< ref howto-encrypt-state.md >}})
* [State Time-to-Live]({{< ref state-store-ttl.md >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/tutorials/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* List of [state store components]({{< ref supported-state-stores.md >}})
* Read the [state management API reference]({{< ref state_api.md >}})
* Read the [actors API reference]({{< ref actors_api.md >}})

View File

@ -12,7 +12,7 @@ Bridge to Kubernetes allows you to run and debug code on your development comput
## Debug Dapr apps
Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/distributed-calculator):
Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator):
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/rxwg-__otso" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

View File

@ -111,4 +111,4 @@ All done. Now you can point to port 40000 and start a remote debug session from
- [Overview of Dapr on Kubernetes]({{< ref kubernetes-overview >}})
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)

View File

@ -42,7 +42,7 @@ Then step into 'dapr' directory from your cloned [dapr/dapr repository](https://
helm install dapr charts/dapr --namespace dapr-system --values values.yml --wait
```
To enable debug mode for daprd, you need to put an extra annotation `dapr.io/enable-debug` in your application's deployment file. Let's use [quickstarts/hello-kubernetes](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) as an example. Modify 'deploy/node.yaml' like below:
To enable debug mode for daprd, you need to put an extra annotation `dapr.io/enable-debug` in your application's deployment file. Let's use [quickstarts/hello-kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) as an example. Modify 'deploy/node.yaml' like below:
```diff
diff --git a/hello-kubernetes/deploy/node.yaml b/hello-kubernetes/deploy/node.yaml
@ -61,7 +61,7 @@ index 23185a6..6cdb0ae 100644
The `dapr.io/enable-debug` annotation hints the Dapr injector to inject the Dapr sidecar in debug mode. You can also specify the debug port with the `dapr.io/debug-port` annotation; otherwise, the default port is "40000".
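For reference, the relevant annotations in the pod template end up looking something like this (a sketch; the port shown is the default mentioned above):
```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/enable-debug: "true"
  # optional: override the default debug port of 40000
  dapr.io/debug-port: "40000"
```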
Deploy the application with the following command. For the complete guide refer to the [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes):
Deploy the application with the following command. For the complete guide refer to the [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes):
```bash
kubectl apply -f ./deploy/node.yaml
@ -92,4 +92,4 @@ All done. Now you can point to port 40000 and start a remote debug session to da
- [Overview of Dapr on Kubernetes]({{< ref kubernetes-overview >}})
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
- [Debug Dapr services on Kubernetes]({{< ref debug-dapr-services >}})
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)

View File

@ -18,12 +18,12 @@ dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
One approach to attaching the debugger to your service is to first run daprd with the correct arguments from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps and some instruction to developers who might want to clone your repo and hit the "play" button to begin debugging.
If your application is a collection of microservices, each with a Dapr sidecar, it will be useful to debug them together in Visual Studio Code. This page will use the [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world) to showcase how to configure VSCode to debug multiple Dapr application using [VSCode debugging](https://code.visualstudio.com/Docs/editor/debugging).
If your application is a collection of microservices, each with a Dapr sidecar, it will be useful to debug them together in Visual Studio Code. This page will use the [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) to showcase how to configure VSCode to debug multiple Dapr applications using [VSCode debugging](https://code.visualstudio.com/Docs/editor/debugging).
## Prerequisites
- Install the [Dapr extension]({{< ref vscode-dapr-extension.md >}}). You will be using the [tasks](https://code.visualstudio.com/docs/editor/tasks) it offers later on.
- Optionally clone the [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world)
- Optionally clone the [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world)
## Step 1: Configure launch.json

View File

@ -13,7 +13,9 @@ Dapr has pre-built Docker remote containers for NodeJS and C#. You can pick the
### Setup a remote dev container
#### Prerequisites
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
- [Visual Studio Code](https://code.visualstudio.com/)
- [VSCode Remote Development extension pack](https://aka.ms/vscode-remote/download/extension)

View File

@ -124,7 +124,7 @@ import (
"github.com/golang/protobuf/ptypes/any"
"github.com/golang/protobuf/ptypes/empty"
commonv1pb "github.com/dapr/go-sdk/dapr/proto/common/v1"
commonv1pb "github.com/dapr/dapr/pkg/proto/common/v1"
pb "github.com/dapr/go-sdk/dapr/proto/runtime/v1"
"google.golang.org/grpc"
)

View File

@ -3,24 +3,18 @@ type: docs
title: "Getting started with Dapr"
linkTitle: "Getting started"
weight: 20
description: "How to get up and running with Dapr in minutes"
no_list: true
description: "Get up and running with Dapr in minutes"
---
Welcome to the Dapr getting started guide!
{{% alert title="Dapr Concepts" color="primary" %}}
If you are looking for an introductory overview of Dapr and learn more about basic Dapr terminology, it is recommended to visit the [concepts section]({{<ref concepts>}}).
If you are looking for an introductory overview of Dapr and want to learn more about basic Dapr terminology, we recommend starting with the [concepts section]({{<ref concepts>}}).
{{% /alert %}}
This guide will walk you through a series of steps to install, initialize and start using Dapr. The recommended way to get started with Dapr is to setup a local development environment (also referred to as [_self-hosted_ mode]({{< ref self-hosted >}})) which includes the Dapr CLI, Dapr sidecar binaries, and some default components that can help you start using Dapr quickly.
Our getting started guide will walk you through a series of steps to install, initialize, experiment with, and start using Dapr.
The following steps in this guide are:
1. Install the Dapr CLI
1. Initialize Dapr
1. Use the Dapr API
1. Configure a component
1. Explore Dapr quickstarts
<br>
{{< button text="First step: Install the Dapr CLI >>" page="install-dapr-cli" >}}
<br><br>

View File

@ -1,243 +0,0 @@
---
type: docs
title: "How-To: Configure state store and pub/sub message broker"
linkTitle: "(optional) Configure state & pub/sub"
weight: 80
description: "Configure state store and pub/sub message broker components for Dapr"
aliases:
- /getting-started/configure-redis/
---
In order to get up and running with the state and pub/sub building blocks, two components are needed:
1. A state store component for persistence and restoration
2. A pub/sub message broker component for async-style message delivery
A full list of supported components can be found here:
- [Supported state stores]({{< ref supported-state-stores >}})
- [Supported pub/sub message brokers]({{< ref supported-pubsub >}})
The rest of this page describes how to get up and running with Redis.
{{% alert title="Self-hosted mode" color="warning" %}}
When initialized in self-hosted mode, Dapr automatically runs a Redis container and sets up the required component yaml files. You can skip this page and go to [next steps](#next-steps)
{{% /alert %}}
## Create a Redis store
Dapr can use any Redis instance - either containerized on your local dev machine or a managed cloud service. If you already have a Redis store, move on to the [configuration](#configure-dapr-components) section.
{{< tabs "Self-Hosted" "Kubernetes" "Azure" "AWS" "GCP" >}}
{{% codetab %}}
Redis is automatically installed in self-hosted environments by the Dapr CLI as part of the initialization process. You are all set and can skip to the [next steps](#next-steps)
{{% /codetab %}}
{{% codetab %}}
You can use [Helm](https://helm.sh/) to quickly create a Redis instance in your Kubernetes cluster. This approach requires [Installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis
```
Note that Dapr's pub/sub functionality requires Redis version 5 or later. If you intend to use Redis only as a state store (and not for pub/sub), an earlier version can be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-0 1/1 Running 0 69s
redis-replicas-0 1/1 Running 0 69s
redis-replicas-1 1/1 Running 0 22s
```
Note that the hostname is `redis-master.default.svc.cluster.local:6379`, and a Kubernetes secret, `redis`, is created automatically.
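If you want to confirm the generated password, here is a sketch using standard `kubectl` (the secret key name follows the Bitnami chart's convention):
```bash
kubectl get secret redis -o jsonpath="{.data.redis-password}" | base64 --decode
```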
{{% /codetab %}}
{{% codetab %}}
This method requires having an Azure Subscription.
1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
1. Fill out the necessary information
- Dapr pub/sub uses [Redis streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0. If you would like to use Azure Redis Cache for pub/sub, make sure to set the version to (PREVIEW) 6.
1. Click "Create" to kickoff deployment of your Redis instance.
1. You'll need the hostname of your Redis instance, which you can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`. Note this for later.
1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{% codetab %}}
1. Visit [AWS Redis](https://aws.amazon.com/redis/) to deploy a Redis instance
1. Note the Redis hostname in the AWS portal for use later
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{% codetab %}}
1. Visit [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) to deploy a MemoryStore instance
1. Note the Redis hostname in the GCP portal for use later
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{< /tabs >}}
## Configure Dapr components
Dapr uses components to define what resources to use for building block functionality. These steps go through how to connect the resources you created above to Dapr for state and pub/sub.
In self-hosted mode, component files are automatically created under:
- **Windows**: `%USERPROFILE%\.dapr\components\`
- **Linux/MacOS**: `$HOME/.dapr/components`
For Kubernetes, files can be created in any directory, as they are applied with `kubectl`.
### Create State store component
Create a file named `redis-state.yaml`, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: <REPLACE WITH HOSTNAME FROM ABOVE - for Redis on Kubernetes it is redis-master.default.svc.cluster.local:6379>
  - name: redisPassword
    secretKeyRef:
      name: redis
      key: redis-password
  # uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
  # - name: enableTLS
  #   value: true
```
This example uses the kubernetes secret that was created when setting up a cluster with the above instructions.
{{% alert title="Other stores" color="primary" %}}
If using a state store other than Redis, refer to the [supported state stores]({{< ref supported-state-stores >}}) for information on what options to set.
{{% /alert %}}
### Create Pub/sub message broker component
Create a file called `redis-pubsub.yaml`, and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: <REPLACE WITH HOSTNAME FROM ABOVE - for Redis on Kubernetes it is redis-master.default.svc.cluster.local:6379>
  - name: redisPassword
    secretKeyRef:
      name: redis
      key: redis-password
  # uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
  # - name: enableTLS
  #   value: true
```
This example uses the kubernetes secret that was created when setting up a cluster with the above instructions.
{{% alert title="Other stores" color="primary" %}}
If using a pub/sub message broker other than Redis, refer to the [supported pub/sub message brokers]({{< ref supported-pubsub >}}) for information on what options to set.
{{% /alert %}}
### Hard coded passwords (not recommended)
For development purposes only, you can skip creating Kubernetes secrets and place passwords directly into the Dapr component file:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: <HOST>
  - name: redisPassword
    value: <PASSWORD>
  # uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
  # - name: enableTLS
  #   value: true
```
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: <HOST>
  - name: redisPassword
    value: <PASSWORD>
  # uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
  # - name: enableTLS
  #   value: true
```
## Apply the configuration
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance you can either:
- Update the existing component files or create new ones in the default components directory
- **Linux/MacOS:** `$HOME/.dapr/components`
- **Windows:** `%USERPROFILE%\.dapr\components`
- Create a new `components` directory in your app folder containing the YAML files and provide the path to the `dapr run` command with the flag `--components-path`
{{% alert title="Self-hosted slim mode" color="primary" %}}
If you initialized Dapr in [slim mode]({{< ref self-hosted-no-docker.md >}}) (without Docker) you need to manually create the default directory, or always specify a components directory using `--components-path`.
{{% /alert %}}
{{% /codetab %}}
{{% codetab %}}
Run `kubectl apply -f <FILENAME>` for both state and pubsub files:
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})

View File

@ -3,27 +3,40 @@ type: docs
title: "Use the Dapr API"
linkTitle: "Use the Dapr API"
weight: 30
description: "Run a Dapr sidecar and try out the state API"
---
After running the `dapr init` command in the [previous step]({{<ref install-dapr-selfhost.md>}}), your local environment has the Dapr sidecar binaries as well as default component definitions for both state management and a message broker (both using Redis). You can now try out some of what Dapr has to offer by using the Dapr CLI to run a Dapr sidecar and try out the state API that will allow you to store and retrieve a state. You can learn more about the state building block and how it works in [these docs]({{< ref state-management >}}).
Running [`dapr init`]({{<ref install-dapr-selfhost.md>}}) loads your local environment with:
You will now run the sidecar and call the API directly (simulating what an application would do).
- The Dapr sidecar binaries.
- Default Redis component definitions for both:
  - State management, and
  - A message broker.
## Step 1: Run the Dapr sidecar
With this setup, run Dapr using the Dapr CLI and try out the state API to store and retrieve a state. [Learn more about the state building block and how it works in our concept docs]({{< ref state-management >}}).
One of the most useful Dapr CLI commands is [`dapr run`]({{< ref dapr-run.md >}}). This command launches an application together with a sidecar. For the purpose of this tutorial you'll run the sidecar without an application.
In this guide, you will simulate an application by running the sidecar and calling the API directly. For the purpose of this tutorial you'll run the sidecar without an application.
Run the following command to launch a Dapr sidecar that will listen on port 3500 for a blank application named myapp:
### Step 1: Run the Dapr sidecar
One of the most useful Dapr CLI commands is [`dapr run`]({{< ref dapr-run.md >}}). This command launches an application, together with a sidecar.
Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`:
```bash
dapr run --app-id myapp --dapr-http-port 3500
```
With this command, no custom component folder was defined, so Dapr uses the default component definitions that were created during the init flow (these can be found under `$HOME/.dapr/components` on Linux or MacOS and under `%USERPROFILE%\.dapr\components` on Windows). These tell Dapr to use the local Redis Docker container as a state store and message broker.
Since no custom component folder was defined with the above command, Dapr uses the default component definitions created during the [`dapr init` flow]({{< ref install-dapr-selfhost.md >}}), found:
## Step 2: Save state
- On Windows, under `%UserProfile%\.dapr\components`
- On Linux/MacOS, under `~/.dapr/components`
We will now update the state with an object. The new state will look like this:
These tell Dapr to use the local Docker container for Redis as a state store and message broker.
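If you're curious, you can inspect one of these definitions. Here is a sketch assuming the default Linux/MacOS path and the `statestore.yaml` file name created by `dapr init`:
```bash
cat ~/.dapr/components/statestore.yaml
```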
### Step 2: Save state
Update the state with an object. The new state will look like this:
```json
[
@ -36,7 +49,7 @@ We will now update the state with an object. The new state will look like this:
Notice that the object contained in the state has a `key` assigned with the value `name`. You will use the key in the next step.
Run the command shown below to store the new state.
Store the new state using the following command:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
@ -55,45 +68,46 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{ "key":
{{< /tabs >}}
## Step 3: Get state
### Step 3: Get state
Now get the object you just stored in the state by using the state management API with the key `name`:
Retrieve the object you just stored in the state by using the state management API with the key `name`. Run the following code with the same Dapr instance you ran earlier:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
With the same Dapr instance running from above run:
```bash
curl http://localhost:3500/v1.0/state/statestore/name
```
{{% /codetab %}}
{{% codetab %}}
With the same Dapr instance running from above run:
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/state/statestore/name'
```
{{% /codetab %}}
{{< /tabs >}}
## Step 4: See how the state is stored in Redis
### Step 4: See how the state is stored in Redis
You can look in the Redis container and verify Dapr is using it as a state store. Run the following to use the Redis CLI:
Look in the Redis container and verify Dapr is using it as a state store. Use the Redis CLI with the following command:
```bash
docker exec -it dapr_redis redis-cli
```
List the redis keys to see how Dapr created a key value pair (with the app-id you provided to `dapr run` as a prefix to the key):
List the Redis keys to see how Dapr created a key value pair with the app-id you provided to `dapr run` as the key's prefix:
```bash
keys *
```
```
1) "myapp||name"
```
**Output:**
`1) "myapp||name"`
View the state value by running:
@ -101,12 +115,11 @@ View the state value by running:
hgetall "myapp||name"
```
```
1) "data"
2) "\"Bruce Wayne\""
3) "version"
4) "1"
```
**Output:**
`1) "data"`
`2) "\"Bruce Wayne\""`
`3) "version"`
`4) "1"`
Exit the redis-cli with:
@ -114,4 +127,4 @@ Exit the redis-cli with:
exit
```
{{< button text="Next step: Define a component >>" page="get-started-component" >}}
{{< button text="Next step: Dapr Quickstarts >>" page="getting-started/quickstarts" >}}

View File

@ -1,95 +0,0 @@
---
type: docs
title: "Define a component"
linkTitle: "Define a component"
weight: 40
---
In the [previous step]({{<ref get-started-api.md>}}) you called the Dapr HTTP API to store and retrieve a state from a Redis backed state store. Dapr knew to use the Redis instance that was configured locally on your machine through default component definition files that were created when Dapr was initialized.
When building an app, you most likely would create your own component file definitions depending on the building block and specific component that you'd like to use.
As an example of how to define custom components for your application, you will now create a component definition file to interact with the [secrets building block]({{< ref secrets >}}).
In this guide you will:
- Create a local JSON secret store
- Register the secret store with Dapr using a component definition file
- Obtain the secret using the Dapr HTTP API
## Step 1: Create a JSON secret store
While Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}}), the easiest way to get started is to use a local JSON file with your secret (note that this secret store is meant for development purposes and is not recommended for production use, as it is not secured).
Begin by saving the following JSON contents into a file named `mysecrets.json`:
```json
{
"my-secret" : "I'm Batman"
}
```
## Step 2: Create a secret store Dapr component
Create a new directory named `my-components` to hold the new component file:
```bash
mkdir my-components
```
Inside this directory create a new file `localSecretStore.yaml` with the following contents:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-secret-store
  namespace: default
spec:
  type: secretstores.local.file
  version: v1
  metadata:
  - name: secretsFile
    value: <PATH TO SECRETS FILE>/mysecrets.json
  - name: nestedSeparator
    value: ":"
```
You can see that the above file definition has a `type: secretstores.local.file` which tells Dapr to use the local file component as a secret store. The metadata fields provide component specific information needed to work with this component (in this case, the path to the secret store JSON is relative to where you call `dapr run` from.)
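As a brief, hypothetical illustration of the `nestedSeparator` setting: with a nested JSON secrets file like the one below, the flattened secret would be addressed by the key `connectionStrings:sql`.
```json
{
  "connectionStrings": {
    "sql": "example-connection-string"
  }
}
```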
## Step 3: Run the Dapr sidecar
Run the following command to launch a Dapr sidecar that will listen on port 3500 for a blank application named myapp:
```bash
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components
```
> If you encounter an error message stating the app ID is already in use, it may be that the sidecar you ran in the previous step is still running. Make sure you stop that sidecar with `Ctrl+C` or by running `dapr stop --app-id myapp` before running the above command.
## Step 4: Get a secret
In a separate terminal run:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
```bash
curl http://localhost:3500/v1.0/secrets/my-secret-store/my-secret
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/secrets/my-secret-store/my-secret'
```
{{% /codetab %}}
{{< /tabs >}}
You should see output with the secret you stored in the JSON file.
```json
{"my-secret":"I'm Batman"}
```
{{< button text="Next step: Explore Dapr quickstarts >>" page="quickstarts" >}}

View File

@ -3,100 +3,131 @@ type: docs
title: "Install the Dapr CLI"
linkTitle: "Install Dapr CLI"
weight: 10
description: "Install the Dapr CLI as the main tool for running Dapr-related tasks"
---
The Dapr CLI is the main tool you'll be using for various Dapr related tasks. You can use it to run an application with a Dapr sidecar, as well as review sidecar logs, list running services, and run the Dapr dashboard. The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments.
You'll use the Dapr CLI as the main tool for various Dapr-related tasks. You can use it to:
Begin by downloading and installing the Dapr CLI:
- Run an application with a Dapr sidecar.
- Review sidecar logs.
- List running services.
- Run the Dapr dashboard.
The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments.
### Step 1: Install the Dapr CLI
{{< tabs Linux Windows MacOS Binaries>}}
{{% codetab %}}
### Install from Terminal
This command installs the latest linux Dapr CLI to `/usr/local/bin`:
#### Install from Terminal
Install the latest Linux Dapr CLI to `/usr/local/bin`:
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
#### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file, you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
### Install from Command Prompt
This Command Prompt command installs the latest windows Dapr cli to `C:\dapr` and adds this directory to User PATH environment variable.
#### Install from Command Prompt
Install the latest Windows Dapr CLI to `C:\dapr` and add this directory to the User PATH environment variable:
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
### Install without administrative rights
If you do not have admin rights you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
#### Install without administrative rights
If you do not have admin rights, you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```powershell
$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList "", "$HOME/dapr"
```
{{% /codetab %}}
{{% codetab %}}
### Install from Terminal
This command installs the latest darwin Dapr CLI to `/usr/local/bin`:
Install the latest Darwin Dapr CLI to `/usr/local/bin`:
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
#### Note for ARM64 Macs
Support for ARM64 Macs is available as a Preview feature. When installing from the terminal, native ARM64 binaries are downloaded when available. For older releases, AMD64 binaries are downloaded, which must be run with Rosetta2 emulation enabled. To install Rosetta emulation:
**For ARM64 Macs:**
Support for ARM64 Macs is available as a *preview feature*. When installing from the terminal, native ARM64 binaries are downloaded when available. For older releases, AMD64 binaries are downloaded and must be run with Rosetta2 emulation enabled.
To install Rosetta emulation:
```bash
softwareupdate --install-rosetta
```
### Install from Homebrew
You can install via [Homebrew](https://brew.sh):
#### Install from Homebrew
Install via [Homebrew](https://brew.sh):
```bash
brew install dapr/tap/dapr-cli
```
#### Note for ARM64 Macs
**For ARM64 Macs:**
For ARM64 Macs, only Homebrew 3.0 and higher versions are supported. Please update Homebrew to 3.0.0 or higher and then run the command below:
```bash
arch -arm64 brew install dapr/tap/dapr-cli
```
### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
#### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file, you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
Each release of Dapr CLI includes various OSes and architectures. These binary versions can be manually downloaded and installed.
Each release of Dapr CLI includes various OSes and architectures. You can manually download and install these binary versions.
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases).
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip).
3. Move it to your desired location.
- For Linux/MacOS `/usr/local/bin` is recommended.
- For Windows, create a directory and add this to your System PATH. For example create a directory called `C:\dapr` and add this directory to your User PATH, by editing your system environment variable.
{{% /codetab %}}
{{< /tabs >}}
   - For Linux/MacOS, we recommend `/usr/local/bin`.
   - For Windows, create a directory and add this to your System PATH. For example:
     - Create a directory called `C:\dapr`.
     - Add your newly created directory to your User PATH, by editing your system environment variable.
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Verify the installation
You can verify the CLI is installed by restarting your terminal/command prompt and running the following:
Verify the CLI is installed by restarting your terminal/command prompt and running the following:
```bash
dapr
```
The output should look like this:
**Output:**
```md
__

View File

@ -3,38 +3,52 @@ type: docs
title: "Initialize Dapr in your local environment"
linkTitle: "Init Dapr locally"
weight: 20
description: "Fetch the Dapr sidecar binaries and install them locally using `dapr init`"
aliases:
- /getting-started/install-dapr/
- /getting-started/set-up-dapr/install-dapr/
---
Now that you have the [Dapr CLI installed]({{<ref install-dapr-cli.md>}}), it's time to initialize Dapr on your local machine using the CLI.
Now that you've [installed the Dapr CLI]({{<ref install-dapr-cli.md>}}), use the CLI to initialize Dapr on your local machine.
Dapr runs as a sidecar alongside your application, and in self-hosted mode this means it is a process on your local machine. Therefore, initializing Dapr includes fetching the Dapr sidecar binaries and installing them locally.
Dapr runs as a sidecar alongside your application. In self-hosted mode, this means it is a process on your local machine. By initializing Dapr, you:
In addition, the default initialization process also creates a development environment that helps streamline application development with Dapr. This includes the following steps:
- Fetch and install the Dapr sidecar binaries locally.
- Create a development environment that streamlines application development with Dapr.
1. Running a **Redis container instance** to be used as a local state store and message broker
1. Running a **Zipkin container instance** for observability
1. Creating a **default components folder** with component definitions for the above
1. Running a **Dapr placement service container instance** for local actor support
Dapr initialization includes:
1. Running a **Redis container instance** to be used as a local state store and message broker.
1. Running a **Zipkin container instance** for observability.
1. Creating a **default components folder** with component definitions for the above.
1. Running a **Dapr placement service container instance** for local actor support.
{{% alert title="Docker" color="primary" %}}
This recommended development environment requires [Docker](https://docs.docker.com/install/). It is possible to initialize Dapr without a dependency on Docker (see [this guidance]({{<ref self-hosted-no-docker.md>}})) but next steps in this guide assume the recommended development environment.
The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{<ref self-hosted-no-docker.md>}}), the next steps in this guide assume the recommended Docker development environment.
{{% /alert %}}
### Step 1: Open an elevated terminal
{{< tabs "Linux/MacOS" "Windows">}}
{{% codetab %}}
You will need to use `sudo` for this quickstart if:
- You run your Docker commands with `sudo`, or
- The install path is `/usr/local/bin` (default install path).
{{% /codetab %}}
{{% codetab %}}
Run Windows Terminal or command prompt as administrator.
1. Right click on the Windows Terminal or command prompt icon.
1. Select **Run as administrator**.
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Run the init CLI command
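In the elevated terminal you opened in step 1, run the init command. The default, Docker-based initialization described above is simply:

```bash
dapr init
```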
### Step 3: Verify Dapr version

```bash
dapr --version
```
**Output:**
`CLI version: {{% dapr-latest-version cli="true" %}}` <br>
`Runtime version: {{% dapr-latest-version long="true" %}}`
### Step 4: Verify containers are running
As mentioned earlier, the `dapr init` command launches several containers that will help you get started with Dapr. Verify you have container instances with `daprio/dapr`, `openzipkin/zipkin`, and `redis` images running:
```bash
docker ps
```
**Output:**
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0dda6684dc2e openzipkin/zipkin "/busybox/sh run.sh" 2 minutes ago Up 2 minutes 9410/tcp, 0.0.0.0:9411->9411/tcp dapr_zipkin
9bf6ef339f50 redis "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 0.0.0.0:6379->6379/tcp dapr_redis
8d993e514150 daprio/dapr "./placement" 2 minutes ago Up 2 minutes 0.0.0.0:6050->50005/tcp dapr_placement
```
<img src="/images/install-dapr-selfhost/docker-containers.png" width=800>
### Step 5: Verify components directory has been initialized
On `dapr init`, the CLI also creates a default components folder that contains several YAML files with definitions for a state store, Pub/sub, and Zipkin. The Dapr sidecar will read these components and use:
- The Redis container for state management and messaging.
- The Zipkin container for collecting traces.
Verify by opening your components directory:
- On Windows, under `%UserProfile%\.dapr`
- On Linux/MacOS, under `~/.dapr`
{{< tabs "Linux/MacOS" "Windows">}}
{{% codetab %}}
Run:
```bash
ls $HOME/.dapr
```
**Output:**
`bin components config.yaml`
<br>
{{% /codetab %}}
{{% codetab %}}
Using Command Prompt (not PowerShell), open `%USERPROFILE%\.dapr\` in file explorer:
```cmd
explorer "%USERPROFILE%\.dapr\"
```
**Result:**
<img src="/images/install-dapr-selfhost/windows-view-components.png" width=600>
<img src="/images/install-dapr-selfhost-windows.png" width=500>
{{% /codetab %}}
{{< /tabs >}}
{{< button text="Next step: Use the Dapr API >>" page="get-started-api" >}}
<br>
{{< button text="Next step: Try Dapr quickstarts >>" page="getting-started/_index.md" >}}
---
type: docs
title: "Dapr Quickstarts"
linkTitle: "Dapr Quickstarts"
weight: 70
description: "Try out Dapr quickstarts with code samples that are aimed to get you started quickly with Dapr"
no_list: true
---
Hit the ground running with our Dapr quickstarts, complete with code samples aimed to get you started quickly with Dapr.
{{% alert title="Note" color="primary" %}}
We are actively working on adding to our quickstart library. In the meantime, you can explore Dapr through our [tutorials]({{< ref "getting-started/tutorials/_index.md" >}}).
{{% /alert %}}
#### Before you begin
- [Set up your local Dapr environment]({{< ref "install-dapr-cli.md" >}}).
## Quickstarts
| Quickstarts | Description |
| ----------- | ----------- |
| [Service Invocation]({{< ref serviceinvocation-quickstart.md >}}) | Invoke a method using HTTP proxying with Dapr's Service Invocation building block. |
| [Publish and Subscribe]({{< ref pubsub-quickstart.md >}}) | Get started with Dapr's Publish and Subscribe building block. |
| State Management | Coming soon. |
| Bindings | Coming soon. |
| Actors | Coming soon. |
| Observability | Coming soon. |
| Secrets Management | Coming soon. |
| Configuration | Coming soon. |
---
type: docs
title: "Quickstart: Publish and Subscribe"
linkTitle: "Publish and Subscribe"
weight: 70
description: "Get started with Dapr's Publish and Subscribe building block"
---
Let's take a look at Dapr's [Publish and Subscribe (Pub/sub) building block]({{< ref pubsub >}}). In this quickstart, you will run a publisher microservice and a subscriber microservice to demonstrate how Dapr enables a Pub/sub pattern.
1. Using a publisher service, developers can repeatedly publish messages to a topic.
1. [A Pub/sub component](https://docs.dapr.io/concepts/components-concept/#pubsub-brokers) queues or brokers those messages. Our example below uses Redis; you can also use RabbitMQ, Kafka, etc.
1. The subscriber to that topic pulls messages from the queue and processes them.
<img src="/images/pubsub-quickstart/pubsub-diagram.png" width=800 style="padding-bottom:15px;">
Select your preferred language-specific Dapr SDK before proceeding with the quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" "Go" >}}
<!-- Python -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Step 2: Publish a topic
In a terminal window, navigate to the `checkout` directory.
```bash
cd pub_sub/python/sdk/checkout
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --components-path ../../../components/ -- python3 app.py
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
```python
with DaprClient() as client:
# Publish an event/message using Dapr PubSub
result = client.publish_event(
pubsub_name='order_pub_sub',
topic_name='orders',
data=json.dumps(order),
data_content_type='application/json',
)
```
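If you'd like to publish a test message without going through the SDK, you can also post directly to the sidecar's pub/sub endpoint with `curl`. This is a sketch; it assumes a sidecar listening on HTTP port 3500 (for example, by adding `--dapr-http-port 3500` to the `dapr run` command above):

```bash
# Publish one test message to the 'orders' topic of the 'order_pub_sub' component
curl -X POST http://localhost:3500/v1.0/publish/order_pub_sub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": 100}'
```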
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
```bash
cd pub_sub/python/sdk/order-processor
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ --app-port 5001 -- python3 app.py
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
```py
# Register Dapr pub/sub subscriptions
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
subscriptions = [{
'pubsubname': 'order_pub_sub',
'topic': 'orders',
'route': 'orders'
}]
print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions))
return jsonify(subscriptions)
# Dapr subscription in /dapr/subscribe sets up this route
@app.route('/orders', methods=['POST'])
def orders_subscriber():
event = from_http(request.headers, request.get_data())
print('Subscriber received : ' + event.data['orderid'], flush=True)
return json.dumps({'success': True}), 200, {
'ContentType': 'application/json'}
app.run(port=5001)
```
### Step 4: View the Pub/sub outputs
As shown in the code above, the publisher pushes a numbered `orderId` message to the Dapr sidecar and the subscriber receives it.
Publisher output:
```
== APP == INFO:root:Published data: {"orderId": 1}
== APP == INFO:root:Published data: {"orderId": 2}
== APP == INFO:root:Published data: {"orderId": 3}
== APP == INFO:root:Published data: {"orderId": 4}
== APP == INFO:root:Published data: {"orderId": 5}
== APP == INFO:root:Published data: {"orderId": 6}
== APP == INFO:root:Published data: {"orderId": 7}
== APP == INFO:root:Published data: {"orderId": 8}
== APP == INFO:root:Published data: {"orderId": 9}
== APP == INFO:root:Published data: {"orderId": 10}
```
Subscriber output:
```
== APP == INFO:root:Subscriber received: {"orderId": 1}
== APP == INFO:root:Subscriber received: {"orderId": 2}
== APP == INFO:root:Subscriber received: {"orderId": 3}
== APP == INFO:root:Subscriber received: {"orderId": 4}
== APP == INFO:root:Subscriber received: {"orderId": 5}
== APP == INFO:root:Subscriber received: {"orderId": 6}
== APP == INFO:root:Subscriber received: {"orderId": 7}
== APP == INFO:root:Subscriber received: {"orderId": 8}
== APP == INFO:root:Subscriber received: {"orderId": 9}
== APP == INFO:root:Subscriber received: {"orderId": 10}
```
#### `pubsub.yaml` component file
When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` component and runs a Redis container on your local machine. The component file is located:
- On Windows, under `%UserProfile%\.dapr\components\pubsub.yaml`
- On Linux/MacOS, under `~/.dapr/components/pubsub.yaml`
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order_pub_sub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
In the YAML file:
- `metadata/name` is how your application talks to the component.
- `spec/metadata` defines the connection to the instance of the component.
- `scopes` (optional) specify which applications can use the component.
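To confirm that a running sidecar has actually loaded this component, you can query its metadata endpoint, which lists the registered components (again assuming a sidecar on HTTP port 3500):

```bash
# The response lists loaded components; 'order_pub_sub' should be among them
curl -s http://localhost:3500/v1.0/metadata
```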
{{% /codetab %}}
<!-- JavaScript -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest Node.js installed](https://nodejs.org/download/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've set up:
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Step 2: Publish a topic
In a terminal window, navigate to the `checkout` directory.
```bash
cd pub_sub/javascript/sdk/checkout
```
Install dependencies, which will include the `dapr-client` package from the JavaScript SDK:
```bash
npm install
```
Verify you have the following files included in the service directory:
- `package.json`
- `package-lock.json`
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --components-path ../../../components -- npm run start
```
In the `checkout` publisher service, we're publishing the orderId message to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
```js
await client.pubsub.publish(PUBSUB_NAME, PUBSUB_TOPIC, order);
console.log("Published data: " + JSON.stringify(order));
```
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
```bash
cd pub_sub/javascript/sdk/order-processor
```
Install dependencies, which will include the `dapr-client` package from the JavaScript SDK:
```bash
npm install
```
Verify you have the following files included in the service directory:
- `package.json`
- `package-lock.json`
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-port 5001 --app-id order-processing --app-protocol http --dapr-http-port 3501 --components-path ../../../components -- npm run start
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
```js
server.pubsub.subscribe("order_pub_sub", "orders", (data) => console.log("Subscriber received: " + JSON.stringify(data)));
```
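With the subscriber running, you can also push a one-off message from the Dapr CLI instead of the publisher app. This sketch assumes the `checkout` app from step 2 is still running, so its sidecar can be used to publish:

```bash
# Publish a single message through the checkout app's sidecar
dapr publish --publish-app-id checkout --pubsub order_pub_sub --topic orders --data '{"orderId": 100}'
```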
### Step 4: View the Pub/sub outputs
As shown in the code above, the publisher pushes a numbered `orderId` message to the Dapr sidecar and the subscriber receives it.
Publisher output:
```cli
== APP == Published data: {"orderId":1}
== APP == Published data: {"orderId":2}
== APP == Published data: {"orderId":3}
== APP == Published data: {"orderId":4}
== APP == Published data: {"orderId":5}
== APP == Published data: {"orderId":6}
== APP == Published data: {"orderId":7}
== APP == Published data: {"orderId":8}
== APP == Published data: {"orderId":9}
== APP == Published data: {"orderId":10}
```
Subscriber output:
```cli
== APP == Subscriber received: {"orderId":1}
== APP == Subscriber received: {"orderId":2}
== APP == Subscriber received: {"orderId":3}
== APP == Subscriber received: {"orderId":4}
== APP == Subscriber received: {"orderId":5}
== APP == Subscriber received: {"orderId":6}
== APP == Subscriber received: {"orderId":7}
== APP == Subscriber received: {"orderId":8}
== APP == Subscriber received: {"orderId":9}
== APP == Subscriber received: {"orderId":10}
```
#### `pubsub.yaml` component file
When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` component and runs a Redis container on your local machine. The component file is located:
- On Windows, under `%UserProfile%\.dapr\components\pubsub.yaml`
- On Linux/MacOS, under `~/.dapr/components/pubsub.yaml`
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order_pub_sub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
In the YAML file:
- `metadata/name` is how your application talks to the component.
- `spec/metadata` defines the connection to the instance of the component.
- `scopes` (optional) specify which applications can use the component.
{{% /codetab %}}
<!-- .NET -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've set up:
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Step 2: Publish a topic
In a terminal window, navigate to the `checkout` directory.
```bash
cd pub_sub/csharp/sdk/checkout
```
Restore the NuGet packages:
```bash
dotnet restore
dotnet build
```
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --components-path ../../../components -- dotnet run
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
```cs
using var client = new DaprClientBuilder().Build();
await client.PublishEventAsync("order_pub_sub", "orders", order);
Console.WriteLine("Published data: " + order);
```
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
```bash
cd pub_sub/csharp/sdk/order-processor
```
Restore the NuGet packages:
```bash
dotnet restore
dotnet build
```
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components --app-port 7001 -- dotnet run
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
```cs
// Dapr subscription in [Topic] routes orders topic to this route
app.MapPost("/orders", [Topic("order_pub_sub", "orders")] (Order order) => {
Console.WriteLine("Subscriber received : " + order);
return Results.Ok(order);
});
public record Order([property: JsonPropertyName("orderId")] int OrderId);
```
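With both the `checkout` and `order-processor` apps running, you can list the Dapr-managed applications and the ports their sidecars are using as a quick sanity check:

```bash
# Shows app IDs, HTTP/gRPC ports, and app ports for running Dapr applications
dapr list
```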
### Step 4: View the Pub/sub outputs
As shown in the code above, the publisher pushes a numbered `orderId` message to the Dapr sidecar and the subscriber receives it.
Publisher output:
```dotnetcli
== APP == Published data: Order { OrderId = 1 }
== APP == Published data: Order { OrderId = 2 }
== APP == Published data: Order { OrderId = 3 }
== APP == Published data: Order { OrderId = 4 }
== APP == Published data: Order { OrderId = 5 }
== APP == Published data: Order { OrderId = 6 }
== APP == Published data: Order { OrderId = 7 }
== APP == Published data: Order { OrderId = 8 }
== APP == Published data: Order { OrderId = 9 }
== APP == Published data: Order { OrderId = 10 }
```
Subscriber output:
```dotnetcli
== APP == Subscriber received: Order { OrderId = 1 }
== APP == Subscriber received: Order { OrderId = 2 }
== APP == Subscriber received: Order { OrderId = 3 }
== APP == Subscriber received: Order { OrderId = 4 }
== APP == Subscriber received: Order { OrderId = 5 }
== APP == Subscriber received: Order { OrderId = 6 }
== APP == Subscriber received: Order { OrderId = 7 }
== APP == Subscriber received: Order { OrderId = 8 }
== APP == Subscriber received: Order { OrderId = 9 }
== APP == Subscriber received: Order { OrderId = 10 }
```
#### `pubsub.yaml` component file
When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` component and runs a Redis container on your local machine. The component file is located:
- On Windows, under `%UserProfile%\.dapr\components\pubsub.yaml`
- On Linux/MacOS, under `~/.dapr/components/pubsub.yaml`
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order_pub_sub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
```
In the YAML file:
- `metadata/name` is how your application talks to the component.
- `spec/metadata` defines the connection to the instance of the component.
- `scopes` (optional) specify which applications can use the component.
{{% /codetab %}}
<!-- Java -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
- [OpenJDK](https://jdk.java.net/13/)
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Step 2: Publish a topic
In a terminal window, navigate to the `checkout` directory.
```bash
cd pub_sub/java/sdk/checkout
```
Install the dependencies:
```bash
mvn clean install
```
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --components-path ../../../components -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
```java
DaprClient client = new DaprClientBuilder().build();
client.publishEvent(
PUBSUB_NAME,
TOPIC_NAME,
order).block();
logger.info("Published data: " + order.getOrderId());
```
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
```bash
cd pub_sub/java/sdk/order-processor
```
Install the dependencies:
```bash
mvn clean install
```
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-port 8080 --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
```java
@Topic(name = "orders", pubsubName = "order_pub_sub")
@PostMapping(path = "/orders", consumes = MediaType.ALL_VALUE)
public Mono<ResponseEntity> getCheckout(@RequestBody(required = false) CloudEvent<Order> cloudEvent) {
return Mono.fromSupplier(() -> {
try {
logger.info("Subscriber received: " + cloudEvent.getData().getOrderId());
return ResponseEntity.ok("SUCCESS");
} catch (Exception e) {
throw new RuntimeException(e);
}
});
}
```
### Step 4: View the Pub/sub outputs
As shown in the code above, the publisher pushes a numbered `orderId` message to the Dapr sidecar and the subscriber receives it.
Publisher output:
```
== APP == 7194 [main] INFO com.service.CheckoutServiceApplication - Published data: 1
== APP == 12213 [main] INFO com.service.CheckoutServiceApplication - Published data: 2
== APP == 17233 [main] INFO com.service.CheckoutServiceApplication - Published data: 3
== APP == 22252 [main] INFO com.service.CheckoutServiceApplication - Published data: 4
== APP == 27276 [main] INFO com.service.CheckoutServiceApplication - Published data: 5
== APP == 32320 [main] INFO com.service.CheckoutServiceApplication - Published data: 6
== APP == 37340 [main] INFO com.service.CheckoutServiceApplication - Published data: 7
== APP == 42356 [main] INFO com.service.CheckoutServiceApplication - Published data: 8
== APP == 47386 [main] INFO com.service.CheckoutServiceApplication - Published data: 9
== APP == 52410 [main] INFO com.service.CheckoutServiceApplication - Published data: 10
```
Subscriber output:
```
== APP == 2022-03-07 13:31:19.551 INFO 43512 --- [nio-8080-exec-5] c.s.c.OrderProcessingServiceController : Subscriber received: 1
== APP == 2022-03-07 13:31:19.552 INFO 43512 --- [nio-8080-exec-9] c.s.c.OrderProcessingServiceController : Subscriber received: 2
== APP == 2022-03-07 13:31:19.551 INFO 43512 --- [nio-8080-exec-6] c.s.c.OrderProcessingServiceController : Subscriber received: 3
== APP == 2022-03-07 13:31:19.552 INFO 43512 --- [nio-8080-exec-2] c.s.c.OrderProcessingServiceController : Subscriber received: 4
== APP == 2022-03-07 13:31:19.553 INFO 43512 --- [nio-8080-exec-2] c.s.c.OrderProcessingServiceController : Subscriber received: 5
== APP == 2022-03-07 13:31:19.553 INFO 43512 --- [nio-8080-exec-9] c.s.c.OrderProcessingServiceController : Subscriber received: 6
== APP == 2022-03-07 13:31:22.849 INFO 43512 --- [nio-8080-exec-3] c.s.c.OrderProcessingServiceController : Subscriber received: 7
== APP == 2022-03-07 13:31:27.866 INFO 43512 --- [nio-8080-exec-6] c.s.c.OrderProcessingServiceController : Subscriber received: 8
== APP == 2022-03-07 13:31:32.895 INFO 43512 --- [nio-8080-exec-6] c.s.c.OrderProcessingServiceController : Subscriber received: 9
== APP == 2022-03-07 13:31:37.919 INFO 43512 --- [nio-8080-exec-2] c.s.c.OrderProcessingServiceController : Subscriber received: 10
```
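When you're finished experimenting, you can stop both applications and their sidecars from another terminal (app IDs as used in the `dapr run` commands above):

```bash
# Stop the publisher and subscriber started with 'dapr run'
dapr stop --app-id checkout
dapr stop --app-id order-processor
```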
#### `pubsub.yaml` component file
When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` component and runs a Redis container on your local machine. The component file is located:
- On Windows, under `%UserProfile%\.dapr\components\pubsub.yaml`
- On Linux/MacOS, under `~/.dapr/components/pubsub.yaml`
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order_pub_sub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
scopes:
- orderprocessing
- checkout
```
In the YAML file:
- `metadata/name` is how your application talks to the component.
- `spec/metadata` defines the connection to the instance of the component.
- `scopes` specify which application can use the component.
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Step 2: Publish a topic
In a terminal window, navigate to the `checkout` directory.
```bash
cd pub_sub/go/sdk/checkout
```
Install the dependencies:
```bash
go build app.go
```
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --components-path ../../../components -- go run app.go
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
```go
if err := client.PublishEvent(ctx, PUBSUB_NAME, PUBSUB_TOPIC, []byte(order)); err != nil {
panic(err)
}
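// Note: fmt.Sprintf only formats the string and does not print it, which is why
// the publisher output below shows only the client initialization line.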
fmt.Sprintf("Published data: ", order)
```
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
```bash
cd pub_sub/go/sdk/order-processor
```
Install the dependencies:
```bash
go build app.go
```
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-port 6001 --app-id order-processor --app-protocol http --dapr-http-port 3501 --components-path ../../../components -- go run app.go
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `order_pub_sub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
```go
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
fmt.Println("Subscriber received: ", e.Data)
return false, nil
}
```
### Step 4: View the Pub/sub outputs
Notice, as specified in the code above, the publisher pushes a numbered message to the Dapr sidecar while the subscriber receives it.
Publisher output:
```
== APP == dapr client initializing for: 127.0.0.1:63293
```
Subscriber output:
```
== APP == Subscriber received: {"orderId":1}
== APP == Subscriber received: {"orderId":2}
== APP == Subscriber received: {"orderId":3}
== APP == Subscriber received: {"orderId":4}
== APP == Subscriber received: {"orderId":5}
== APP == Subscriber received: {"orderId":6}
== APP == Subscriber received: {"orderId":7}
== APP == Subscriber received: {"orderId":8}
== APP == Subscriber received: {"orderId":9}
== APP == Subscriber received: {"orderId":10}
```
#### `pubsub.yaml` component file
When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` component and runs a Redis container on your local machine. The component file is located:
- On Windows, under `%UserProfile%\.dapr\components\pubsub.yaml`
- On Linux/MacOS, under `~/.dapr/components/pubsub.yaml`
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: order_pub_sub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
scopes:
- orderprocessing
- checkout
```
In the YAML file:
- `metadata/name` is how your application talks to the component.
- `spec/metadata` defines the connection to the instance of the component.
- `scopes` specify which application can use the component.
{{% /codetab %}}
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [Discord channel](https://discord.gg/22ZtJrNe).
## Next steps
- Set up Pub/sub using HTTP instead of an SDK.
- [Python](https://github.com/dapr/quickstarts/tree/master/pub_sub/python/http)
- [JavaScript](https://github.com/dapr/quickstarts/tree/master/pub_sub/javascript/http)
- [.NET](https://github.com/dapr/quickstarts/tree/master/pub_sub/csharp/http)
- [Java](https://github.com/dapr/quickstarts/tree/master/pub_sub/java/http)
- [Go](https://github.com/dapr/quickstarts/tree/master/pub_sub/go/http)
- Learn more about [Pub/sub as a Dapr building block]({{< ref pubsub-overview >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
---
type: docs
title: "Quickstart: Service Invocation"
linkTitle: "Service Invocation"
weight: 70
description: "Get started with Dapr's Service Invocation building block"
---
With [Dapr's Service Invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation), your application can communicate reliably and securely with other applications.
<img src="/images/serviceinvocation-quickstart/service-invocation-overview.png" width=800 alt="Diagram showing the steps of service invocation" style="padding-bottom:25px;">
Dapr offers several methods for service invocation, which you can choose depending on your scenario. For this quickstart, you'll enable the `checkout` service to invoke a method in the `order-processor` service using HTTP proxying.
Learn more about Dapr's methods for service invocation in the [overview article]({{< ref service-invocation-overview.md >}}).
Select your preferred language before proceeding with the quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" "Go" >}}
<!-- Python -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Run `checkout` service
In a terminal window, navigate to the `checkout` directory.
```bash
cd service_invocation/python/http/checkout
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
Run the `checkout` service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- python3 app.py
```
In the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```python
headers = {'dapr-app-id': 'order-processor'}
result = requests.post(
url='%s/orders' % (base_url),
data=json.dumps(order),
headers=headers
)
```
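The same call can be made from the command line with `curl`, which makes it clear that the only Dapr-specific part is the `dapr-app-id` header. This is a sketch; it assumes the `checkout` sidecar above (HTTP port 3500) and that the `order-processor` service from the next section is running:

```bash
# Invoke the order-processor service through the Dapr sidecar using HTTP proxying
curl -X POST http://localhost:3500/orders \
  -H "Content-Type: application/json" \
  -H "dapr-app-id: order-processor" \
  -d '{"orderId": 100}'
```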
### Run `order-processor` service
In a new terminal window, navigate to `order-processor` directory.
```bash
cd service_invocation/python/http/order-processor
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-port 6001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- python3 app.py
```
```py
@app.route('/orders', methods=['POST'])
def getOrder():
data = request.json
print('Order received : ' + json.dumps(data), flush=True)
return json.dumps({'success': True}), 200, {
'ContentType': 'application/json'}
app.run(port=5001)
```
### View the Service Invocation outputs
Dapr invokes an application on any Dapr instance. In the code, the sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances then discover and communicate with one another.
`checkout` service output:
```
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
```
`order-processor` service output:
```
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
```
{{% /codetab %}}
<!-- JavaScript -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest Node.js installed](https://nodejs.org/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Run `checkout` service
In a terminal window, navigate to the `checkout` directory.
```bash
cd service_invocation/javascript/http/checkout
```
Install the dependencies:
```bash
npm install
```
Run the `checkout` service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- npm start
```
In the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```javascript
let axiosConfig = {
headers: {
"dapr-app-id": "order-processor"
}
};
const res = await axios.post(`${DAPR_HOST}:${DAPR_HTTP_PORT}/orders`, order , axiosConfig);
console.log("Order passed: " + res.config.data);
```
### Run `order-processor` service
In a new terminal window, navigate to `order-processor` directory.
```bash
cd service_invocation/javascript/http/order-processor
```
Install the dependencies:
```bash
npm install
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-port 6001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- npm start
```
```javascript
app.post('/orders', (req, res) => {
console.log("Order received:", req.body);
res.sendStatus(200);
});
```
### View the Service Invocation outputs
Dapr invokes an application on any Dapr instance. In the code, the sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances then discover and communicate with one another.
`checkout` service output:
```
== APP == Order passed: {"orderId": 1}
== APP == Order passed: {"orderId": 2}
== APP == Order passed: {"orderId": 3}
== APP == Order passed: {"orderId": 4}
== APP == Order passed: {"orderId": 5}
== APP == Order passed: {"orderId": 6}
== APP == Order passed: {"orderId": 7}
== APP == Order passed: {"orderId": 8}
== APP == Order passed: {"orderId": 9}
== APP == Order passed: {"orderId": 10}
```
`order-processor` service output:
```
== APP == Order received: {"orderId": 1}
== APP == Order received: {"orderId": 2}
== APP == Order received: {"orderId": 3}
== APP == Order received: {"orderId": 4}
== APP == Order received: {"orderId": 5}
== APP == Order received: {"orderId": 6}
== APP == Order received: {"orderId": 7}
== APP == Order received: {"orderId": 8}
== APP == Order received: {"orderId": 9}
== APP == Order received: {"orderId": 10}
```
{{% /codetab %}}
<!-- .NET -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Run `checkout` service
In a terminal window, navigate to the `checkout` directory.
```bash
cd service_invocation/csharp/http/checkout
```
Install the dependencies:
```bash
dotnet restore
dotnet build
```
Run the `checkout` service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet run
```
In the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```csharp
var client = new HttpClient();
client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Add("dapr-app-id", "order-processor");
var response = await client.PostAsync($"{baseURL}/orders", content);
Console.WriteLine("Order passed: " + order);
```
### Run `order-processor` service
In a new terminal window, navigate to `order-processor` directory.
```bash
cd service_invocation/csharp/http/order-processor
```
Install the dependencies:
```bash
dotnet restore
dotnet build
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-port 7001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- dotnet run
```
```csharp
app.MapPost("/orders", async context => {
var data = await context.Request.ReadFromJsonAsync<Order>();
Console.WriteLine("Order received : " + data);
await context.Response.WriteAsync(data.ToString());
});
```
### View the Service Invocation outputs
Dapr invokes an application on any Dapr instance. In the code, the sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances then discover and communicate with one another.
`checkout` service output:
```
== APP == Order passed: Order { OrderId: 1 }
== APP == Order passed: Order { OrderId: 2 }
== APP == Order passed: Order { OrderId: 3 }
== APP == Order passed: Order { OrderId: 4 }
== APP == Order passed: Order { OrderId: 5 }
== APP == Order passed: Order { OrderId: 6 }
== APP == Order passed: Order { OrderId: 7 }
== APP == Order passed: Order { OrderId: 8 }
== APP == Order passed: Order { OrderId: 9 }
== APP == Order passed: Order { OrderId: 10 }
```
`order-processor` service output:
```
== APP == Order received: Order { OrderId: 1 }
== APP == Order received: Order { OrderId: 2 }
== APP == Order received: Order { OrderId: 3 }
== APP == Order received: Order { OrderId: 4 }
== APP == Order received: Order { OrderId: 5 }
== APP == Order received: Order { OrderId: 6 }
== APP == Order received: Order { OrderId: 7 }
== APP == Order received: Order { OrderId: 8 }
== APP == Order received: Order { OrderId: 9 }
== APP == Order received: Order { OrderId: 10 }
```
{{% /codetab %}}
<!-- Java -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
- [OpenJDK](https://jdk.java.net/13/)
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Run `checkout` service
In a terminal window, navigate to the `checkout` directory.
```bash
cd service_invocation/java/http/checkout
```
Install the dependencies:
```bash
mvn clean install
```
Run the `checkout` service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
```
In the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```java
.header("Content-Type", "application/json")
.header("dapr-app-id", "order-processor")
HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println("Order passed: " + orderId);
```
### Run `order-processor` service
In a new terminal window, navigate to `order-processor` directory.
```bash
cd service_invocation/java/http/order-processor
```
Install the dependencies:
```bash
mvn clean install
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --app-port 6001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
```java
public String processOrders(@RequestBody Order body) {
System.out.println("Order received: "+ body.getOrderId());
return "CID" + body.getOrderId();
}
```
### View the Service Invocation outputs
Dapr invokes an application on any Dapr instance. In the code, the sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances then discover and communicate with one another.
`checkout` service output:
```
== APP == Order passed: 1
== APP == Order passed: 2
== APP == Order passed: 3
== APP == Order passed: 4
== APP == Order passed: 5
== APP == Order passed: 6
== APP == Order passed: 7
== APP == Order passed: 8
== APP == Order passed: 9
== APP == Order passed: 10
```
`order-processor` service output:
```
== APP == Order received: 1
== APP == Order received: 2
== APP == Order received: 3
== APP == Order received: 4
== APP == Order received: 5
== APP == Order received: 6
== APP == Order received: 7
== APP == Order received: 8
== APP == Order received: 9
== APP == Order received: 10
```
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the sample we've provided.
```bash
git clone https://github.com/dapr/quickstarts.git
```
### Run `checkout` service
In a terminal window, navigate to the `checkout` directory.
```bash
cd service_invocation/go/http/checkout
```
Install the dependencies:
```bash
go build app.go
```
Run the `checkout` service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- go run app.go
```
In the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```go
req.Header.Add("dapr-app-id", "order-processor")
response, err := client.Do(req)
if err != nil {
fmt.Print(err.Error())
os.Exit(1)
}
result, err := ioutil.ReadAll(response.Body)
if err != nil {
log.Fatal(err)
}
log.Println("Order passed: ", string(result))
```
### Run `order-processor` service
In a new terminal window, navigate to `order-processor` directory.
```bash
cd service_invocation/go/http/order-processor
```
Install the dependencies:
```bash
go build app.go
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-port 6001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- go run app.go
```
```go
func getOrder(w http.ResponseWriter, r *http.Request) {
data, err := ioutil.ReadAll(r.Body)
if err != nil {
log.Fatal(err)
}
log.Printf("Order received : %s", string(data))
```
### View the Service Invocation outputs
Dapr invokes an application on any Dapr instance. In the code, the sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances then discover and communicate with one another.
`checkout` service output:
```
== APP == Order passed: "{\"orderId\":1}"
== APP == Order passed: "{\"orderId\":2}"
== APP == Order passed: "{\"orderId\":3}"
== APP == Order passed: "{\"orderId\":4}"
== APP == Order passed: "{\"orderId\":5}"
== APP == Order passed: "{\"orderId\":6}"
== APP == Order passed: "{\"orderId\":7}"
== APP == Order passed: "{\"orderId\":8}"
== APP == Order passed: "{\"orderId\":9}"
== APP == Order passed: "{\"orderId\":10}"
```
`order-processor` service output:
```
== APP == Order received : {"orderId":1}
== APP == Order received : {"orderId":2}
== APP == Order received : {"orderId":3}
== APP == Order received : {"orderId":4}
== APP == Order received : {"orderId":5}
== APP == Order received : {"orderId":6}
== APP == Order received : {"orderId":7}
== APP == Order received : {"orderId":8}
== APP == Order received : {"orderId":9}
== APP == Order received : {"orderId":10}
```
{{% /codetab %}}
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [Discord channel](https://discord.gg/22ZtJrNe).
## Next Steps
- Learn more about [Service Invocation as a Dapr building block]({{< ref service-invocation-overview.md >}})
- Learn more about how to invoke Dapr's Service Invocation with:
- [HTTP]({{< ref howto-invoke-discover-services.md >}}), or
- [gRPC]({{< ref howto-invoke-services-grpc.md >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
---
type: docs
title: "Dapr Tutorials"
linkTitle: "Dapr Tutorials"
weight: 70
description: "Walk through in-depth examples to learn more about how to work with Dapr concepts"
no_list: true
---
Now that you've initialized Dapr and experimented with some of Dapr's building blocks, walk through our more detailed tutorials.
#### Before you begin
- [Set up your local Dapr environment]({{< ref "install-dapr-cli.md" >}}).
- [Explore one of Dapr's building blocks via our quickstarts]({{< ref "getting-started/quickstarts/_index.md" >}}).
## Tutorials
Thanks to our expansive Dapr community, we offer tutorials hosted both on Dapr Docs and on our [GitHub repository](https://github.com/dapr/quickstarts).
| Dapr Docs tutorials | Description |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Define a component]({{< ref get-started-component.md >}}) | Create a component definition file to interact with the Secrets building block. |
| [Configure State & Pub/sub]({{< ref configure-state-pubsub.md >}}) | Configure State Store and Pub/sub message broker components for Dapr. |
| GitHub tutorials | Description |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world) | *Recommended* <br> Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
| [Hello World Kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) | *Recommended* <br> Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
| [Pub/Sub](https://github.com/dapr/quickstarts/tree/master/tutorials/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
| [Bindings](https://github.com/dapr/quickstarts/tree/master/tutorials/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
| [Observability](https://github.com/dapr/quickstarts/tree/master/tutorials/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
| [Secret Store](https://github.com/dapr/quickstarts/tree/master/tutorials/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |
---
type: docs
title: "Tutorial: Configure state store and pub/sub message broker"
linkTitle: "Configure state & pub/sub"
weight: 80
description: "Configure state store and pub/sub message broker components for Dapr"
aliases:
- /getting-started/tutorials/configure-redis/
---
To get up and running with the state and Pub/sub building blocks, you'll need two components:
- A state store component for persistence and restoration.
- A pub/sub message broker component for async-style message delivery.
A full list of supported components can be found here:
- [Supported state stores]({{< ref supported-state-stores >}})
- [Supported pub/sub message brokers]({{< ref supported-pubsub >}})
For this tutorial, we describe how to get up and running with Redis.
### Step 1: Create a Redis store
Dapr can use any Redis instance, either:
- Containerized on your local dev machine, or
- A managed cloud service.
If you already have a Redis store, move on to the [configuration](#configure-dapr-components) section.
{{< tabs "Self-Hosted" "Kubernetes" "Azure" "AWS" "GCP" >}}
{{% codetab %}}
Redis is automatically installed in self-hosted environments by the Dapr CLI as part of the initialization process. You are all set! Skip ahead to the [next steps](#next-steps).
{{% /codetab %}}
{{% codetab %}}
You can use [Helm](https://helm.sh/) to create a Redis instance in your Kubernetes cluster. Before beginning, [install Helm v3](https://github.com/helm/helm#install).
Install Redis into your cluster:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis
```
For Dapr's Pub/sub functionality, you'll need at least Redis version 5. For state store, you can use a lower version.
Run `kubectl get pods` to see the Redis containers now running in your cluster:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-0 1/1 Running 0 69s
redis-replicas-0 1/1 Running 0 69s
redis-replicas-1 1/1 Running 0 22s
```
For Kubernetes:
- The hostname is `redis-master.default.svc.cluster.local:6379`
- The secret, `redis`, is created automatically.
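If you ever need the generated password itself (for example, to connect with `redis-cli` for debugging), one way to read it back from that secret is:

```bash
# Decode the auto-generated Redis password from the Kubernetes secret
kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode
```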
{{% /codetab %}}
{{% codetab %}}
Verify you have an [Azure subscription](https://azure.microsoft.com/free/).
1. Open and log into the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow.
1. Fill out the necessary information.
- Dapr Pub/sub uses [Redis streams](https://redis.io/topics/streams-intro) introduced by Redis 5.0. To use Azure Redis Cache for Pub/sub, set the version to *(PREVIEW) 6*.
1. Click **Create** to kick off deployment of your Redis instance.
1. Make note of the Redis instance hostname from the **Overview** page in Azure portal for later.
- It should look like `xxxxxx.redis.cache.windows.net:6380`.
1. Once your instance is created, grab your access key:
1. Navigate to **Access Keys** under **Settings**.
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{% codetab %}}
1. Deploy a Redis instance from [AWS Redis](https://aws.amazon.com/redis/).
1. Note the Redis hostname in the AWS portal for later.
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{% codetab %}}
1. Deploy a MemoryStore instance from [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/).
1. Note the Redis hostname in the GCP portal for later.
1. Create a Kubernetes secret to store your Redis password:
```bash
kubectl create secret generic redis --from-literal=redis-password=*********
```
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Configure Dapr components
Dapr uses components to define the resources used for building block functionality. These steps show how to connect the resources you created above to Dapr for state and pub/sub.
#### Locate your component files
{{< tabs "Self-Hosted" "Kubernetes" >}}
{{% codetab %}}
In self-hosted mode, component files are automatically created under:
- **Windows**: `%USERPROFILE%\.dapr\components\`
- **Linux/MacOS**: `$HOME/.dapr/components`
{{% /codetab %}}
{{% codetab %}}
Since Kubernetes files are applied with `kubectl`, they can be created in any directory.
{{% /codetab %}}
{{< /tabs >}}
#### Create State store component
Create a file named `redis-state.yaml`, and paste the following:
{{< tabs "Self-Hosted" "Kubernetes" >}}
{{% codetab %}}
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
{{% /codetab %}}
{{% codetab %}}
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <REPLACE WITH HOSTNAME FROM ABOVE - for Redis on Kubernetes it is redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
Note the above code example uses the Kubernetes secret you created earlier when setting up a cluster.
{{% /codetab %}}
{{< /tabs >}}
{{% alert title="Other stores" color="primary" %}}
If using a state store other than Redis, refer to the [supported state stores]({{< ref supported-state-stores >}}) for information on options to set.
{{% /alert %}}
#### Create Pub/sub message broker component
Create a file called `redis-pubsub.yaml`, and paste the following:
{{< tabs "Self-Hosted" "Kubernetes" >}}
{{% codetab %}}
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
{{% /codetab %}}
{{% codetab %}}
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: <REPLACE WITH HOSTNAME FROM ABOVE - for Redis on Kubernetes it is redis-master.default.svc.cluster.local:6379>
- name: redisPassword
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
Note the above code example uses the Kubernetes secret you created earlier when setting up a cluster.
{{% /codetab %}}
{{< /tabs >}}
{{% alert title="Other stores" color="primary" %}}
If using a pub/sub message broker other than Redis, refer to the [supported pub/sub message brokers]({{< ref supported-pubsub >}}) for information on options to set.
{{% /alert %}}
#### Hard coded passwords (not recommended)
For development purposes *only*, you can skip creating Kubernetes secrets and place passwords directly into the Dapr component file:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
### Step 3: Apply the configuration
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
When you run `dapr init`, Dapr creates a default redis `pubsub.yaml` on your local machine. Verify by opening your components directory:
- On Windows, under `%UserProfile%\.dapr\components\pubsub.yaml`
- On Linux/MacOS, under `~/.dapr/components/pubsub.yaml`
For new component files:
1. Create a new `components` directory in your app folder containing the YAML files.
1. Provide the path to the `dapr run` command with the `--components-path` flag, as shown in the example below.
If you initialized Dapr in [slim mode]({{< ref self-hosted-no-docker.md >}}) (without Docker), you need to manually create the default directory, or always specify a components directory using `--components-path`.
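For example, assuming your component YAML files live in a local `./my-components` directory and a hypothetical Node.js app, the command might look like:
```bash
dapr run --app-id myapp --components-path ./my-components -- node app.js
```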
{{% /codetab %}}
{{% codetab %}}
Run `kubectl apply -f <FILENAME>` for both state and pubsub files:
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
[Try out a Dapr quickstart]({{< ref quickstarts.md >}})

View File

@ -0,0 +1,107 @@
---
type: docs
title: "Quickstart: Define a component"
linkTitle: "Define a component"
weight: 70
description: "Create a component definition file to interact with the Secrets building block"
---
When building an app, you'd most likely create your own component file definitions, depending on the building block and specific component that you'd like to use.
In this quickstart, you will create a component definition file to interact with the [Secrets building block]({{< ref secrets >}}):
- Create a local JSON secret store.
- Register the secret store with Dapr using a component definition file.
- Obtain the secret using the Dapr HTTP API.
## Step 1: Create a JSON secret store
Dapr supports [many types of secret stores]({{< ref supported-secret-stores >}}), but for this quickstart, create a local JSON file named `mysecrets.json` with the following secret:
```json
{
"my-secret" : "I'm Batman"
}
```
## Step 2: Create a secret store Dapr component
1. Create a new directory named `my-components` to hold the new component file:
```bash
mkdir my-components
```
1. Navigate into this directory.
```bash
cd my-components
```
1. Create a new file `localSecretStore.yaml` with the following contents:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: my-secret-store
namespace: default
spec:
type: secretstores.local.file
version: v1
metadata:
- name: secretsFile
value: <PATH TO SECRETS FILE>/mysecrets.json
- name: nestedSeparator
value: ":"
```
In the above file definition:
- `type: secretstores.local.file` tells Dapr to use the local file component as a secret store.
- The metadata fields provide component-specific information needed to work with this component. In this case, the secret store JSON path is relative to where you call `dapr run`.
## Step 3: Run the Dapr sidecar
Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`:
```bash
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components
```
{{% alert title="Tip" color="primary" %}}
If an error message occurs, stating the `app-id` is already in use, you may need to stop any currently running Dapr sidecars. Stop the sidecar before running the next `dapr run` command by either:
- Pressing Ctrl+C or Command+C.
- Running the `dapr stop` command in the terminal.
{{% /alert %}}
## Step 4: Get a secret
In a separate terminal, run:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
```bash
curl http://localhost:3500/v1.0/secrets/my-secret-store/my-secret
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Uri 'http://localhost:3500/v1.0/secrets/my-secret-store/my-secret'
```
{{% /codetab %}}
{{< /tabs >}}
**Output:**
```json
{"my-secret":"I'm Batman"}
```
{{< button text="Next step: Set up a Pub/sub broker >>" page="pubsub-quickstart" >}}

View File

@ -0,0 +1,29 @@
---
type: docs
title: "Updating components"
linkTitle: "Updating components"
weight: 250
description: "Updating deployed components used by applications"
---
When making an update to an existing deployed component used by an application, Dapr does not update the component automatically. The Dapr sidecar needs to be restarted in order to pick up the latest version of the component. How this is done depends on the hosting environment.
## Kubernetes
When running in Kubernetes, the process of updating a component involves two steps:
1. Applying the new component YAML to the desired namespace
2. Performing a [rollout restart operation](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources) on your deployments to pick up the latest component (see the example below)
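For example, assuming a component file named `redis-state.yaml` and a deployment named `myapp` in a `production` namespace (all hypothetical names), the two steps might look like this:
```bash
# 1. Apply the updated component YAML to the namespace your application runs in
kubectl apply -f redis-state.yaml -n production

# 2. Restart the deployment so its Dapr sidecar picks up the latest component
kubectl rollout restart deployment/myapp -n production
```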
## Self Hosted
When running in Self Hosted mode, the process of updating a component involves a single step of stopping the `daprd` process and starting it again to pick up the latest component.
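For example, if you started your app through the Dapr CLI, a sketch of the restart (assuming a hypothetical app ID `myapp`, components directory, and Node.js app) is:
```bash
# Stop the app and its daprd sidecar
dapr stop --app-id myapp

# Start it again; daprd loads the component files on startup
dapr run --app-id myapp --components-path ./components -- node app.js
```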
## Further reading
- [Components concept]({{< ref components-concept.md >}})
- [Reference secrets in component definitions]({{< ref component-secrets.md >}})
- [Supported state stores]({{< ref supported-state-stores >}})
- [Supported pub/sub brokers]({{< ref supported-pubsub >}})
- [Supported secret stores]({{< ref supported-secret-stores >}})
- [Supported bindings]({{< ref supported-bindings >}})
- [Set component scopes]({{< ref component-scopes.md >}})

View File

@ -47,7 +47,7 @@ Visit [this guide]({{< ref "howto-publish-subscribe.md#step-3-publish-a-topic" >
## Related links
- Overview of the Dapr [Pub/Sub building block]({{< ref pubsub-overview.md >}})
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/tutorials/pub-sub)
- Read the [guide on publishing and subscribing]({{< ref howto-publish-subscribe.md >}})
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})

View File

@ -12,7 +12,7 @@ In some scenarios, applications can be spread across namespaces and share a queu
Namespaces are a Dapr concept used for scoping applications and components. This example uses Kubernetes namespaces, however the Dapr component namespace scoping can be used on any supported platform. Read [How-To: Scope components to one or more applications]({{< ref "component-scopes.md" >}}) for more information on scoping components.
{{% /alert %}}
This example uses the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/pub-sub). The Redis installation and the subscribers are in `namespace-a` while the publisher UI is in `namespace-b`. This solution will also work if Redis is installed on another namespace or if you use a managed cloud service like Azure ServiceBus, AWS SNS/SQS or GCP PubSub.
This example uses the [PubSub sample](https://github.com/dapr/quickstarts/tree/master/tutorials/pub-sub). The Redis installation and the subscribers are in `namespace-a` while the publisher UI is in `namespace-b`. This solution will also work if Redis is installed on another namespace or if you use a managed cloud service like Azure ServiceBus, AWS SNS/SQS or GCP PubSub.
This is a diagram of the example using namespaces.
@ -33,7 +33,7 @@ The table below shows which resources are deployed to which namespaces:
## Pre-requisites
* [Dapr installed on Kubernetes]({{< ref "kubernetes-deploy.md" >}}) in any namespace since Dapr works at the cluster level.
* Checkout and cd into the directory for [PubSub quickstart](https://github.com/dapr/quickstarts/tree/master/pub-sub).
* Checkout and cd into the directory for [PubSub quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/pub-sub).
## Setup `namespace-a`
@ -43,7 +43,7 @@ kubectl create namespace namespace-a
kubectl config set-context --current --namespace=namespace-a
```
Install Redis (master and slave) on `namespace-a`, following [these instructions]({{< ref "configure-state-pubsub.md" >}}).
Install Redis (master and slave) on `namespace-a`, following [these instructions]({{< ref "getting-started/tutorials/configure-state-pubsub.md" >}}).
Now, configure `deploy/redis.yaml`, paying attention to the hostname containing `namespace-a`.
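For reference, with the Helm installation above, the `redisHost` entry in `deploy/redis.yaml` typically points at the Redis service in `namespace-a`; a sketch of what that metadata entry might look like:
```yaml
  - name: redisHost
    value: redis-master.namespace-a.svc.cluster.local:6379
```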

View File

@ -258,7 +258,7 @@ spec:
```
### Self-hosted mode
This example uses the [hello world](https://github.com/dapr/quickstarts/tree/master/hello-world/README.md) quickstart.
This example uses the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) quickstart.
The following steps run the Sentry service locally with mTLS enabled, set up necessary environment variables to access certificates, and then launch both the node app and python app each referencing the Sentry service to apply the ACLs.
@ -341,7 +341,7 @@ The following steps run the Sentry service locally with mTLS enabled, set up nec
8. You should see the calls to the node app fail in the python app command prompt based due to the **deny** operation action in the nodeappconfig file. Change this action to **allow** and re-run the apps and you should then see this call succeed.
### Kubernetes mode
This example uses the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes/README.md) quickstart.
This example uses the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes/README.md) quickstart.
You can create and apply the above configuration files `nodeappconfig.yaml` and `pythonappconfig.yaml` as described in the [configuration]({{< ref "configuration-concept.md" >}}) to the Kubernetes deployments.

View File

@ -173,4 +173,4 @@ dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s
## Next steps
- [Configure state store & pubsub message broker]({{< ref configure-state-pubsub.md >}})
- [Configure state store & pubsub message broker]({{< ref "getting-started/tutorials/configure-state-pubsub.md" >}})

View File

@ -42,7 +42,7 @@ For information about pulling your application images from a private registry, r
## Quickstart
You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) in the Kubernetes getting started quickstart.
You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) in the Kubernetes getting started quickstart.
## Supported versions
Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy).
@ -52,5 +52,5 @@ Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](htt
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
- [Upgrade Dapr on a Kubernetes cluster]({{< ref kubernetes-upgrade >}})
- [Production guidelines for Dapr on Kubernetes]({{< ref kubernetes-production.md >}})
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
- [Use Bridge to Kubernetes to debug Dapr apps locally, while connected to your Kubernetes cluster]({{< ref bridge-to-kubernetes >}})

View File

@ -84,6 +84,10 @@ spec:
...
```
## API Logging
API logging enables you to see the API calls from your application to the Dapr sidecar to debug issues. You can combine Dapr API logging with Dapr log events. See [configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}}) and [configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}}) for more information.
## Log collectors
If you run Dapr in a Kubernetes cluster, [Fluentd](https://www.fluentd.org/) is a popular container log collector. You can use Fluentd with a [json parser plugin](https://docs.fluentd.org/parser/json) to parse Dapr JSON-formatted logs. This [how-to]({{< ref fluentd.md >}}) shows how to configure Fluentd in your cluster.
@ -100,3 +104,5 @@ If you are using the Azure Kubernetes Service, you can use [Azure monitor for co
- [How-to : Set up Fluentd, Elasticsearch, and Kibana]({{< ref fluentd.md >}})
- [How-to : Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}})
- [Configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}})
- [Configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}})

View File

@ -55,7 +55,7 @@ spec:
dapr.io/config: "appconfig"
```
Some of the quickstarts such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/distributed-calculator) already configure these settings, so if you are using those no additional settings are needed.
Some of the quickstarts such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator) already configure these settings, so if you are using those no additional settings are needed.
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
@ -68,5 +68,5 @@ Deploy and run some applications. After a few minutes, you should see tracing lo
> **NOTE**: Only operations going through Dapr API exposed by Dapr sidecar (e.g. service invocation or event publishing) are displayed in Application Map topology.
## Related links
* Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability/README.md)
* Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability/README.md)
* How to set [tracing configuration options]({{< ref "configuration-overview.md#tracing" >}})

View File

@ -57,7 +57,7 @@ spec:
dapr.io/config: "appconfig"
```
Some of the quickstarts such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/distributed-calculator) already configure these settings, so if you are using those no additional settings are needed.
Some of the quickstarts such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator) already configure these settings, so if you are using those no additional settings are needed.
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
@ -66,6 +66,6 @@ That's it! There's no need include any SDKs or instrument your application code.
Deploy and run some applications. Wait for the trace to propagate to your tracing backend and view them there.
## Related links
* Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability/README.md)
* Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability/README.md)
* How to set [tracing configuration options]({{< ref "configuration-overview.md#tracing" >}})

View File

@ -11,9 +11,6 @@ Preview features in Dapr are considered experimental when they are first release
## Current preview features
| Feature | Description | Setting | Documentation |
| ---------- |-------------|---------|---------------|
| **Actor reentrancy** | Enables actors to be called multiple times in the same call chain allowing call backs between actors. | `Actor.Reentrancy` | [Actor reentrancy]({{<ref actor-reentrancy>}}) |
| **Partition actor reminders** | Allows actor reminders to be partitioned across multiple keys in the underlying statestore in order to improve scale and performance. | `Actor.TypeMetadata` | [How-To: Partition Actor Reminders]({{< ref "howto-actors.md#partitioning-reminders" >}}) |
| **gRPC proxying** | Enables calling endpoints using service invocation on gRPC services through Dapr via gRPC proxying, without requiring the use of Dapr SDKs. | `proxy.grpc` | [How-To: Invoke services using gRPC]({{<ref howto-invoke-services-grpc>}}) |
| **State store encryption** | Enables automatic client side encryption for state stores | `State.Encryption` | [How-To: Encrypt application state]({{<ref howto-encrypt-state>}}) |
| **Pub/Sub routing** | Allow the use of expressions to route cloud events to different URIs/paths and event handlers in your application. | `PubSub.Routing` | [How-To: Publish a message and subscribe to a topic]({{<ref howto-route-messages>}}) |
| **ARM64 Mac Support** | Dapr CLI, sidecar, and Dashboard are now natively compiled for ARM64 Macs, along with Dapr CLI installation via Homebrew. | N/A | [Install the Dapr CLI]({{<ref install-dapr-cli>}}) |

View File

@ -0,0 +1,79 @@
---
type: docs
title: "Dapr API Logs"
linkTitle: "API Logs"
weight: 3000
description: "Understand how API logging works in Dapr and how to view logs"
---
API logging enables you to see the API calls your application makes to the Dapr sidecar, which helps you debug issues. You can also combine Dapr API logging with Dapr log events (see [configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}})) in the output if you want to use the logging capabilities together.
## Overview
API logging is disabled by default.
To enable API logging, use the `--enable-api-logging` command-line option. For example:
```bash
./daprd --enable-api-logging
```
This starts the Dapr runtime with API logging.
## Configuring API logging in self hosted mode
To enable API logging when running your app with the Dapr CLI, pass the `--enable-api-logging` flag:
```bash
dapr run --enable-api-logging node myapp.js
```
### Viewing API logs in self hosted mode
When running Dapr with the Dapr CLI, both your app's log output and the Dapr runtime log output are redirected to the same session, for easy debugging.
The example below shows some API logs:
```bash
dapr run --enable-api-logging -- node myapp.js
Starting Dapr with id order-processor on port 56730
✅ You are up and running! Both Dapr and your app logs will appear here.
.....
INFO[0000] HTTP API Called: POST /v1.0/state/statestore app_id=order-processor instance=QTM-SWATHIKIL-1.redmond.corp.microsoft.com scope=dapr.runtime.http type=log ver=edge
== APP == INFO:root:Saving Order: {'orderId': '483'}
INFO[0000] HTTP API Called: GET /v1.0/state/statestore/483 app_id=order-processor instance=QTM-SWATHIKIL-1.redmond.corp.microsoft.com scope=dapr.runtime.http type=log ver=edge
== APP == INFO:root:Getting Order: {'orderId': '483'}
INFO[0000] HTTP API Called: DELETE /v1.0/state/statestore app_id=order-processor instance=QTM-SWATHIKIL-1.redmond.corp.microsoft.com scope=dapr.runtime.http type=log ver=edge
== APP == INFO:root:Deleted Order: {'orderId': '483'}
INFO[0000] HTTP API Called: PUT /v1.0/metadata/cliPID app_id=order-processor instance=QTM-SWATHIKIL-1.redmond.corp.microsoft.com scope=dapr.runtime.http type=log ver=edge
```
## Configuring API logging in Kubernetes
You can enable the API logs for every sidecar by providing the following annotation in your pod spec template:
```yml
annotations:
  dapr.io/enable-api-logging: "true"
```
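For context, this annotation sits alongside the other Dapr annotations in the deployment's pod template. A minimal sketch, using hypothetical app values, is:
```yml
# Excerpt from a Deployment manifest (app ID is a hypothetical value)
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/enable-api-logging: "true"
```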
### Viewing API logs on Kubernetes
Dapr API logs are written to stdout and stderr, so you can view them on Kubernetes.
View the API logs by executing the command below:
```bash
kubectl logs <pod_name> daprd -n <name_space>
```
The example below shows `info` level API logging in Kubernetes.
```bash
time="2022-03-16T18:32:02.487041454Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
time="2022-03-16T18:32:02.698387866Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
time="2022-03-16T18:32:02.917629403Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
time="2022-03-16T18:32:03.137830112Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
time="2022-03-16T18:32:03.359097916Z" level=info msg="HTTP API Called: GET /v1.0/invoke/invoke-receiver/method/my-method" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http type=log ver=edge
```

View File

@ -461,6 +461,9 @@ actorIdleTimeout | Specifies how long to wait before deactivating an idle actor.
actorScanInterval | A duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than the actorIdleTimeout will be deactivated.
drainOngoingCallTimeout | A duration used when in the process of draining rebalanced actors. This specifies how long to wait for the current active actor method to finish. If there is no current actor method call, this is ignored.
drainRebalancedActors | A bool. If true, Dapr will wait for `drainOngoingCallTimeout` to allow a current actor call to complete before trying to deactivate an actor. If false, do not wait.
reentrancy | A configuration object that holds the options for actor reentrancy.
enabled | A flag in the reentrancy configuration that is needed to enable reentrancy.
maxStackDepth | A value in the reentrancy configuration that controls how many reentrant calls can be made to the same actor.
```json
{
@ -468,7 +471,11 @@ drainRebalancedActors | A bool. If true, Dapr will wait for `drainOngoingCallTi
"actorIdleTimeout": "1h",
"actorScanInterval": "30s",
"drainOngoingCallTimeout": "30s",
"drainRebalancedActors": true
"drainRebalancedActors": true,
"reentrancy": {
"enabled": true,
"maxStackDepth": 32
}
}
```

View File

@ -168,4 +168,4 @@ Dapr Pub/Sub adheres to version 1.0 of CloudEvents.
## Related links
* [How to publish to and consume topics]({{< ref howto-publish-subscribe.md >}})
* [Sample for pub/sub](https://github.com/dapr/quickstarts/tree/master/pub-sub)
* [Sample for pub/sub](https://github.com/dapr/quickstarts/tree/master/tutorials/pub-sub)

View File

@ -30,6 +30,7 @@ This table is meant to help users understand the equivalent options for running
| `--unix-domain-socket` | `--unix-domain-socket` | `-u` | not supported | On Linux, when communicating with the Dapr sidecar, use unix domain sockets for lower latency and greater throughput compared to TCP ports. Not available on Windows OS |
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | Enables API logging for the Dapr sidecar |
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime mode for Dapr (default "standalone") |

View File

@ -36,6 +36,7 @@ dapr run [flags] [command]
| `--help`, `-h` | | | Print this help message |
| `--image` | | | The image to build the code in. Input is: `repository/image` |
| `--log-level` | | `info` | The log verbosity. Valid values are: `debug`, `info`, `warn`, `error`, `fatal`, or `panic` |
| `--enable-api-logging` | | `false` | Enable the logging of all API calls from application to Dapr |
| `--metrics-port` | `DAPR_METRICS_PORT` | `9090` | The port that Dapr sends its metrics information to |
| `--profile-port` | | `7777` | The port for the profile server to listen on |
| `--unix-domain-socket`, `-u` | | | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows OS |
@ -64,4 +65,7 @@ dapr run --app-id myapp
# Run a gRPC application written in Go (listening on port 3000)
dapr run --app-id myapp --app-port 5000 --app-protocol grpc -- go run main.go
# Run a NodeJs application that listens to port 3000 with API logging enabled
dapr run --app-id myapp --app-port 3000 --enable-api-logging -- node myapp.js
```

View File

@ -114,10 +114,11 @@ The payload format is documented [here](https://developer.apple.com/documentatio
"operation": "create"
}
```
<!-- IGNORE_LINKS -->
The `data` object contains a complete push notification specification as described in the [Apple documentation](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/generating_a_remote_notification). The `data` object will be sent directly to the APNs service.
Besides the `device-token` value, the HTTP headers specified in the [Apple documentation](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/sending_notification_requests_to_apns) can be sent as metadata fields and will be included in the HTTP request to the APNs service.
<!-- END_IGNORE -->
### Response format

View File

@ -43,7 +43,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `"username"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| failover | N | Output | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| sentinelMasterName | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | Output | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | Output | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | Output | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`

View File

@ -9,9 +9,9 @@ aliases:
## Component format
To setup AWS S3 binding create a component of type `bindings.aws.s3`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
To setup an AWS S3 binding create a component of type `bindings.aws.s3`. This binding works with other S3-compatible services, such as Minio. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes.
```yaml
apiVersion: dapr.io/v1alpha1
@ -43,6 +43,8 @@ spec:
value: <bool>
- name: disableSSL
value: <bool>
- name: insecureSSL
value: <bool>
```
{{% alert title="Warning" color="warning" %}}
@ -63,7 +65,16 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| encodeBase64 | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| disableSSL | N | Output | Allows to connect to non `https://` endpoints. Defaults to `false` | `true`, `false` |
| insecureSSL | N | Output | When connecting to `https://` endpoints, accepts invalid or self-signed certificates. Defaults to `false` | `true`, `false` |
### Using with Minio
[Minio](https://min.io/) is a service that exposes local storage as S3-compatible object storage, and it's a popular alternative to S3, especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks (a sample component sketch follows this list):
1. Set `endpoint` to the address of the Minio server, including protocol (`http://` or `https://`) and the optional port at the end. For example, `http://minio.local:9000` (the values depend on your environment).
2. `forcePathStyle` must be set to `true`
3. The value for `region` is not important; you can set it to `us-east-1`.
4. Depending on your environment, you may need to set `disableSSL` to `true` if you're connecting to Minio using a non-secure connection (using the `http://` protocol). If you are using a secure connection (`https://` protocol) but with a self-signed certificate, you may need to set `insecureSSL` to `true`.
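Putting these tweaks together, a minimal (non-production) sketch of an S3 binding component pointing at a local Minio server might look like the following; the endpoint, credentials, and bucket name are placeholders for your environment:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: minio-binding
  namespace: default
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "mybucket"                   # placeholder bucket name
  - name: region
    value: "us-east-1"                  # value is not important for Minio
  - name: endpoint
    value: "http://minio.local:9000"    # address of your Minio server
  - name: accessKey
    value: "minio-access-key"           # placeholder credentials
  - name: secretKey
    value: "minio-secret-key"
  - name: forcePathStyle
    value: "true"                       # required for Minio
  - name: disableSSL
    value: "true"                       # only when connecting over http://
```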
## Binding support

View File

@ -52,8 +52,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinalMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
## Setup Redis

View File

@ -185,20 +185,20 @@ In order to run in AWS, you should create or assign an IAM user with permissions
"Sid": "YOUR_POLICY_NAME",
"Effect": "Allow",
"Action": [
"sqs:CreateQueue",
"sqs:DeleteMessage",
"sqs:ReceiveMessage",
"sqs:ChangeMessageVisibility",
"sqs:GetQueueUrl",
"sqs:GetQueueAttributes",
"sqs:SetQueueAttributes",
"sns:CreateTopic",
"sns:GetTopicAttributes",
"sns:ListSubscriptionsByTopic",
"sns:Publish",
"sns:Subscribe",
"sns:ListSubscriptionsByTopic",
"sns:GetTopicAttributes"
"sns:TagResource",
"sqs:ChangeMessageVisibility",
"sqs:CreateQueue",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SetQueueAttributes",
"sqs:TagQueue"
],
"Resource": [
"arn:aws:sns:AWS_REGION:AWS_ACCOUNT_ID:*",

View File

@ -48,6 +48,10 @@ spec:
value: "false"
- name: enableMessageOrdering
value: "false"
- name: maxReconnectionAttempts # Optional
value: 30
- name: connectionRecoveryInSec # Optional
value: 2
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -70,6 +74,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com`
| disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| enableMessageOrdering | N | When set to `"true"`, subscribed messages will be received in order, depending on publishing and permissions configuration. | `"true"`, `"false"`
| maxReconnectionAttempts | N | Defines the maximum number of reconnect attempts. Default: `30` | `30`
| connectionRecoveryInSec | N | Time in seconds to wait between connection recovery attempts. Default: `2` | `2`
{{% alert title="Warning" color="warning" %}}
If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly.

View File

@ -63,7 +63,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| maxLenApprox | N | Maximum number of items inside a stream. The old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. Defaults to unlimited. | `"10000"`
## Create a Redis instance

View File

@ -23,21 +23,19 @@ spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
- name: vaultName # Required
value: [your_keyvault_name]
- name: spnTenantId
- name: azureEnvironment # Optional, defaults to AZUREPUBLICCLOUD
value: "AZUREPUBLICCLOUD"
# See authentication section below for all options
- name: azureTenantId
value: "[your_service_principal_tenant_id]"
- name: spnClientId
- name: azureClientId
value: "[your_service_principal_app_id]"
value : "[pfx_certificate_contents]"
- name: spnCertificateFile
- name: azureCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage.
{{% /alert %}}
## Authenticating with Azure AD
The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component, make sure you've read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document and created an Azure AD application (also called Service Principal). Alternatively, make sure you have created a managed identity for your application platform.
@ -48,10 +46,11 @@ The Azure Key Vault secret store component supports authentication with Azure AD
|--------------------|:--------:|---------|---------|
| `vaultName` | Y | The name of the Azure Key Vault | `"mykeyvault"` |
| `azureEnvironment` | N | Optional name for the Azure environment if using a different Azure cloud | `"AZUREPUBLICCLOUD"` (default value), `"AZURECHINACLOUD"`, `"AZUREUSGOVERNMENTCLOUD"`, `"AZUREGERMANCLOUD"` |
| Auth metadata | | See [Authenticating to Azure]({{< ref authenticating-azure.md >}}) for more information
Additionally, you must provide the authentication fields as explained in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
## Create the Azure Key Vault and authorize the Service Principal
## Example: Create an Azure Key Vault and authorize a Service Principal
### Prerequisites
@ -111,7 +110,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
```
## Configure the component
### Configure the component
{{< tabs "Self-Hosted" "Kubernetes">}}

View File

@ -61,7 +61,7 @@ If `multiValued` is `"false"`, the store will load the file and create a map wit
| flattened key | value |
| --- | --- |
|"redis" | "your redis password" |
|"redisPassword" | "your redis password" |
|"connectionStrings:sql" | "your sql connection string" |
|"connectionStrings:mysql"| "your mysql connection string" |

View File

@ -30,6 +30,7 @@ The following stores are supported, at various levels, by the Dapr state managem
|----------------------------------------------------|----|-------------|----|----|----|----|-------|----|-----|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [CockroachDB]({{< ref setup-cockroachdb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | Alpha | v1 | 1.7 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |

View File

@ -0,0 +1,69 @@
---
type: docs
title: "CockroachDB"
linkTitle: "CockroachDB"
description: Detailed information on the CockroachDB state store component
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-cockroachdb/"
---
## Create a Dapr component
Create a file called `cockroachdb.yaml`, paste the following, and replace the `<CONNECTION STRING>` value with your connection string. The connection string for CockroachDB follows the same standard as PostgreSQL connection strings. For example, `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`. See the CockroachDB [documentation on database connections](https://www.cockroachlabs.com/docs/stable/connect-to-the-database.html) for information on how to define a connection string.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.cockroachdb
version: v1
metadata:
- name: connectionString
value: "<CONNECTION STRING>"
```
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| connectionString | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
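If you want CockroachDB to also act as an actor state store, a sketch of the additional metadata entry (appended to the component above) is:
```yaml
  - name: actorStateStore
    value: "true"
```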
## Setup CockroachDB
{{< tabs "Self-Hosted" "Kubernetes" >}}
{{% codetab %}}
1. Run an instance of CockroachDB. You can run a local instance of CockroachDB in Docker CE with the following command:
This example does not describe a production configuration because it sets up a single-node cluster; it is only recommended for local environments.
```bash
docker run --name roach1 -p 26257:26257 cockroachdb/cockroach:v21.2.3 start-single-node --insecure
```
2. Create a database for state data.
To create a new database in CockroachDB, run the following SQL command inside the container:
```bash
docker exec -it roach1 ./cockroach sql --insecure -e 'create database dapr_test'
```
{{% /codetab %}}
{{% codetab %}}
The easiest way to install CockroachDB on Kubernetes is by using the [CockroachDB Operator](https://github.com/cockroachdb/cockroach-operator).
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})

View File

@ -40,6 +40,8 @@ spec:
value: <REPLACE-WITH-READ-CONCERN> # Optional.
- name: operationTimeout
value: <REPLACE-WITH-OPERATION-TIMEOUT> # Optional. default: "5s"
- name: params
value: <REPLACE-WITH-ADDITIONAL-PARAMETERS> # Optional. Example: "?authSource=daprStore&ssl=true"
```
{{% alert title="Warning" color="warning" %}}
@ -67,9 +69,12 @@ If you wish to use MongoDB as an actor store, append the following to the yaml.
| writeconcern | N | The write concern to use | `"majority"`
| readconcern | N | The read concern to use | `"majority"`, `"local"`,`"available"`, `"linearizable"`, `"snapshot"`
| operationTimeout | N | The timeout for the operation. Defaults to `"5s"` | `"5s"`
| params | N<sup>**</sup> | Additional parameters to use | `"?authSource=daprStore&ssl=true"`
> <sup>[*]</sup> The `server` and `host` fields are mutually exclusive. If neither or both are set, Dapr will return an error.
> <sup>[**]</sup> The `params` field accepts a query string that specifies connection-specific options as `<name>=<value>` pairs, separated by `"&"` and prefixed with `"?"`. For example, to use the "daprStore" database as the authentication database and enable SSL/TLS for the connection, specify params as `"?authSource=daprStore&ssl=true"`. See [the mongodb manual](https://docs.mongodb.com/manual/reference/connection-string/#std-label-connections-connection-options) for the list of available options and their use cases.
## Setup MongoDB
{{< tabs "Self-Hosted" "Kubernetes" >}}

View File

@ -64,8 +64,8 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinalMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`

Binary file not shown.

After

Width:  |  Height:  |  Size: 788 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 21 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.5 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 116 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 36 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 45 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 58 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 31 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 26 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 23 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 93 KiB

@ -1 +1 @@
Subproject commit b47c63ac140845b178a1b29b1e988e30e4c7b579
Subproject commit 2ffbb113e7b5186a96ee38426a2c08526e83b0e0

@ -1 +1 @@
Subproject commit db33b48fd4af80f638d4fa8713b557e43cabec49
Subproject commit d3df194bad3826069b7c9cda5178196e92dacad1

@ -1 +1 @@
Subproject commit 18a72819a6b620e889ae4b5beecba100ee65ee34
Subproject commit 1e23f32eafdebe571db6e19717cf5317f09a5402

@ -1 +1 @@
Subproject commit 6d7d9400736d2c58901c7b49f666a159f987e789
Subproject commit 058cfcf4d603823c5916bb5ae533bb9f5bb862fd