Merge branch 'v1.11' into kitex

commit d03dc15b75
Hannah Hunter, 2023-05-17 16:01:36 -04:00, committed by GitHub
5 changed files with 234 additions and 216 deletions

Watch these recordings from the Dapr community calls showing presentations on running Dapr together with different service meshes:
- General overview and a demo of [Dapr and Linkerd](https://youtu.be/xxU68ewRmz8?t=142)
- Demo of running [Dapr and Istio](https://youtu.be/ngIDOQApx8g?t=335)
## When to use Dapr or a service mesh or both
Should you be using Dapr, a service mesh, or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub, and you are considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and that a service mesh is not required.

---
type: docs
title: "Autoscaling a Dapr app with KEDA"
linkTitle: "Autoscale with KEDA"
title: "How to: Autoscale a Dapr app with KEDA"
linkTitle: "How to: Autoscale with KEDA"
description: "How to configure your Dapr application to autoscale using KEDA"
weight: 2000
weight: 3000
---
Dapr, with its building-block API approach, along with the many [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (for example, VMs, bare-metal, Cloud, or Edge Kubernetes), the autoscaling of Dapr applications is managed by the hosting layer.

For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by KEDA, so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.

In this guide, you configure a scalable Dapr application, along with the back pressure on a Kafka topic. However, you can apply this approach to _any_ [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
{{% alert title="Note" color="primary" %}}
If you're working with Azure Container Apps, refer to the official Azure documentation for [scaling Dapr applications using KEDA scalers](https://learn.microsoft.com/azure/container-apps/dapr-keda-scaling).
{{% /alert %}}
## Install KEDA
To install KEDA, follow the [Deploying KEDA](https://keda.sh/docs/latest/deploy/) instructions on the KEDA website.
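Alternatively, if you use Helm, a typical installation looks like the following sketch; the `keda` namespace and release name are conventional defaults rather than requirements:

```shell
# Add the KEDA Helm repository and install KEDA into its own namespace.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
```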
## Install and deploy Kafka
If you don't have access to a Kafka service, you can install it into your Kubernetes cluster for this example by using Helm:
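The commands below are a sketch based on the Confluent Platform Helm chart, which matches the `kafka-cp-*` resource names used in this guide; the release name and the disabled sub-charts are assumptions:

```shell
helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
helm repo update
kubectl create ns kafka
# Install only the pieces this guide needs (Kafka and ZooKeeper).
helm install kafka confluentinc/cp-helm-charts -n kafka \
  --set cp-schema-registry.enabled=false \
  --set cp-kafka-rest.enabled=false \
  --set cp-kafka-connect.enabled=false
```

Then wait for the Kafka and ZooKeeper StatefulSets to roll out: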
```shell
kubectl rollout status statefulset.apps/kafka-cp-kafka -n kafka
kubectl rollout status statefulset.apps/kafka-cp-zookeeper -n kafka
```
Once installed, deploy the Kafka client and wait until it's ready:
```shell
kubectl apply -n kafka -f deployment/kafka-client.yaml
kubectl wait -n kafka --for=condition=ready pod kafka-client --timeout=120s
```
## Create the Kafka topic

Create the topic used in this example (`demo-topic`). The ZooKeeper address and partition count shown below are representative values; adjust the partition count to the maximum number of replicas you want KEDA to create:

```shell
kubectl -n kafka exec -it kafka-client -- kafka-topics \
  --zookeeper kafka-cp-zookeeper-headless:2181 \
  --topic demo-topic \
  --create \
  --partitions 10 \
  --replication-factor 1 \
  --if-not-exists
```
> The number of topic `partitions` is related to the maximum number of replicas KEDA creates for your deployments.

## Deploy a Dapr pub/sub component

Deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named `kafka-pubsub.yaml`:
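The manifest below is a sketch of the full component; the component name and broker address are example values that match the Helm install above, while `autoscaling-subscriber` is the consumer ID this guide relies on:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: autoscaling-pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      # Example broker address for the Helm install above; replace with your own.
      value: kafka-cp-kafka.kafka.svc.cluster.local:9092
    - name: authRequired
      value: "false"
    - name: consumerID
      value: autoscaling-subscriber
```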
The above YAML defines the pub/sub component that your application subscribes to and that [you created earlier (`demo-topic`)]({{< ref "#create-the-kafka-topic" >}}).

If you used the [Kafka Helm install instructions]({{< ref "#install-and-deploy-kafka" >}}), you can leave the `brokers` value as-is. Otherwise, change this value to the connection string to your Kafka brokers.

Notice the `autoscaling-subscriber` value set for `consumerID`. This value is used later to ensure that KEDA and your deployment use the same [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.).
Now, deploy the component to the cluster:
```bash
kubectl apply -f kafka-pubsub.yaml
```
## Deploy KEDA autoscaler for Kafka
Deploy the KEDA scaling object that:

- Monitors the lag on the specified Kafka topic
- Configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out

Paste the following into a file named `kafka_scaler.yaml`, and configure your Dapr deployment in the required places:
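The manifest below is a sketch assembled from the values described in the table that follows; the object name, the `scaleTargetRef` deployment name, and the polling interval are example values:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: subscriber-scaler
spec:
  scaleTargetRef:
    # Example value: set this to the Dapr ID of your deployment.
    name: subscriber
  # Example polling interval in seconds.
  pollingInterval: 15
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
  - type: kafka
    metadata:
      topic: demo-topic
      bootstrapServers: kafka-cp-kafka.kafka.svc.cluster.local:9092
      consumerGroup: autoscaling-subscriber
      lagThreshold: "5"
```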
Let's review a few metadata values in the file above:
| Values | Description |
| ------ | ----------- |
| `scaleTargetRef`/`name` | The Dapr ID of your app defined in the Deployment (the value of the `dapr.io/app-id` annotation). |
| `pollingInterval` | The frequency in seconds with which KEDA checks Kafka for current topic partition offset. |
| `minReplicaCount` | The minimum number of replicas KEDA creates for your deployment. If your application takes a long time to start, it may be better to set this to `1` to ensure at least one replica of your deployment is always running. Otherwise, set to `0` and KEDA creates the first replica for you. |
| `maxReplicaCount` | The maximum number of replicas for your deployment. Given how [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.) works, you shouldn't set that value higher than the total number of topic partitions. |
| `triggers`/`metadata`/`topic` | Should be set to the same topic to which your Dapr deployment subscribed (in this example, `demo-topic`). |
| `triggers`/`metadata`/`bootstrapServers` | Should be set to the same broker connection string used in the `kafka-pubsub.yaml` file. |
| `triggers`/`metadata`/`consumerGroup` | Should be set to the same value as the `consumerID` in the `kafka-pubsub.yaml` file. |
{{% alert title="Important" color="warning" %}}
Setting the connection string, topic, and consumer group to the *same* values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly.
{{% /alert %}}

Deploy the KEDA scaler to Kubernetes:
```bash
kubectl apply -f kafka_scaler.yaml
```
All done!
Now that the `ScaledObject` KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. [Learn more about configuring KEDA for Kafka topics](https://keda.sh/docs/2.0/scalers/apache-kafka/).
## See the KEDA scaler work
As defined in the KEDA scaler manifest, you can now start publishing messages to your Kafka topic `demo-topic` and watch the pods autoscale once the lag rises above the threshold of `5` messages. Publish messages to the Kafka Dapr component by using the Dapr [Publish]({{< ref dapr-publish >}}) CLI command.
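For example, a single test message might be published with the sketch below; the `--publish-app-id` value is an assumption and must match a running Dapr app:

```bash
# App ID and pub/sub name are example values; match them to your deployment.
dapr publish --publish-app-id subscriber --pubsub autoscaling-pubsub --topic demo-topic --data '{"message": "ping"}'
```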
## Next steps
[Learn about scaling your Dapr pub/sub or binding application with KEDA in Azure Container Apps](https://learn.microsoft.com/azure/container-apps/dapr-keda-scaling)

---
type: docs
title: "How to: Use the gRPC interface in your Dapr application"
linkTitle: "How to: gRPC interface"
weight: 6000
description: "Use the Dapr gRPC API in your application"
---
Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high-performance scenarios and has language integration using the proto clients.
[Find a list of auto-generated clients in the Dapr SDK documentation]({{< ref sdks >}}).
The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC.
In addition to calling Dapr via gRPC, Dapr supports service-to-service calls with gRPC by acting as a proxy. [Learn more in the gRPC service invocation how-to guide]({{< ref howto-invoke-services-grpc.md >}}).
This guide demonstrates configuring and invoking Dapr with gRPC using a Go SDK application.

## Configure Dapr to communicate with an app via gRPC

{{< tabs "Self-hosted" "Kubernetes">}}

<!--selfhosted-->
{{% codetab %}}

When running in self-hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app.
```bash
dapr run --app-protocol grpc --app-port 5005 node app.js
```
This tells Dapr to communicate with your app via gRPC over port `5005`.
{{% /codetab %}}
<!--k8s-->
{{% codetab %}}
On Kubernetes, set the following annotations in your deployment YAML:
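A minimal sketch of the relevant pod template section is shown below; the deployment name and app ID are example values, while the `dapr.io/*` annotation names are standard:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-protocol: "grpc"
        dapr.io/app-port: "5005"
    spec:
      ...
```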
{{% /codetab %}}

{{< /tabs >}}

## Invoke Dapr with gRPC

The following steps show how to create a Dapr client and call the `SaveState` operation on it.
1. Import the package:
dapr "github.com/dapr/go-sdk/client"
)
```
```go
package main
import (
"context"
"log"
"os"
dapr "github.com/dapr/go-sdk/client"
)
```
2. Create the client:
```go
// just for this demo
ctx := context.Background()
data := []byte("ping")
// create the client
client, err := dapr.NewClient()
if err != nil {
log.Panic(err)
}
defer client.Close()
```
3. Invoke the `SaveState` method:
```go
// save state with the key key1
err = client.SaveState(ctx, "statestore", "key1", data)
if err != nil {
log.Panic(err)
}
log.Println("data saved")
```
Now you can explore all the different methods on the Dapr client.
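For example, reading the value back is a small extension of the same client; this sketch assumes a recent go-sdk, where `GetState` takes an optional metadata map:

```go
// retrieve the value saved under key1; nil means no extra metadata
item, err := client.GetState(ctx, "statestore", "key1", nil)
if err != nil {
	log.Panic(err)
}
log.Printf("data retrieved: %s", string(item.Value))
```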
## Create a gRPC app with Dapr

The following steps show how to create an app that exposes a server with which Dapr can communicate.
1. Import the package:
```go
package main
import (
"context"
"fmt"
"log"
"net"
"github.com/golang/protobuf/ptypes/any"
"github.com/golang/protobuf/ptypes/empty"
commonv1pb "github.com/dapr/dapr/pkg/proto/common/v1"
pb "github.com/dapr/go-sdk/dapr/proto/runtime/v1"
"google.golang.org/grpc"
)
```
2. Implement the interface:
"github.com/golang/protobuf/ptypes/any"
"github.com/golang/protobuf/ptypes/empty"
```go
// server is our user app
type server struct {
pb.UnimplementedAppCallbackServer
}
// EchoMethod is a simple demo method to invoke
func (s *server) EchoMethod() string {
return "pong"
}
// This method gets invoked when a remote service has called the app through Dapr
// The payload carries a Method to identify the method, a set of metadata properties and an optional payload
func (s *server) OnInvoke(ctx context.Context, in *commonv1pb.InvokeRequest) (*commonv1pb.InvokeResponse, error) {
var response string
switch in.Method {
case "EchoMethod":
response = s.EchoMethod()
}
return &commonv1pb.InvokeResponse{
ContentType: "text/plain; charset=UTF-8",
Data: &any.Any{Value: []byte(response)},
}, nil
}
// Dapr will call this method to get the list of topics the app wants to subscribe to. In this example, we are telling Dapr
// To subscribe to a topic named TopicA
func (s *server) ListTopicSubscriptions(ctx context.Context, in *empty.Empty) (*pb.ListTopicSubscriptionsResponse, error) {
return &pb.ListTopicSubscriptionsResponse{
Subscriptions: []*pb.TopicSubscription{
{Topic: "TopicA"},
},
}, nil
}
// Dapr will call this method to get the list of bindings the app will get invoked by. In this example, we are telling Dapr
// To invoke our app with a binding named storage
func (s *server) ListInputBindings(ctx context.Context, in *empty.Empty) (*pb.ListInputBindingsResponse, error) {
return &pb.ListInputBindingsResponse{
Bindings: []string{"storage"},
}, nil
}
// This method gets invoked every time a new event is fired from a registered binding. The message carries the binding name, a payload and optional metadata
func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest) (*pb.BindingEventResponse, error) {
fmt.Println("Invoked from binding")
return &pb.BindingEventResponse{}, nil
}
// This method is fired whenever a message has been published to a topic that has been subscribed. Dapr sends published messages in a CloudEvents 0.3 envelope.
func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*pb.TopicEventResponse, error) {
fmt.Println("Topic message arrived")
return &pb.TopicEventResponse{}, nil
}
```
3. Create the server:
```go
func main() {
// create listener
lis, err := net.Listen("tcp", ":50001")
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
// create grpc server
s := grpc.NewServer()
pb.RegisterAppCallbackServer(s, &server{})
fmt.Println("Client starting...")
// and start...
if err := s.Serve(lis); err != nil {
log.Fatalf("failed to serve: %v", err)
}
}
```
This creates a gRPC server for your app on port 50001.
## Run the application
{{< tabs "Self-hosted" "Kubernetes">}}
<!--selfhosted-->
{{% codetab %}}
To run locally, use the Dapr CLI:
```bash
dapr run --app-id goapp --app-port 50001 --app-protocol grpc go run main.go
```
{{% /codetab %}}
<!--k8s-->
{{% codetab %}}
On Kubernetes, set the required `dapr.io/app-protocol: "grpc"` and `dapr.io/app-port: "50001"` annotations in your pod spec template, as mentioned above.
{{% /codetab %}}
{{< /tabs >}}
## Other languages
You can use Dapr with any language supported by Protobuf, and not just with the currently available generated SDKs.
Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool, you can generate the Dapr clients for other languages like Ruby, C++, Rust, and others.
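For example, generating a Ruby client could look like the sketch below; the paths are illustrative and assume a local checkout of the Dapr proto files:

```bash
# Point --proto_path at your checkout of the Dapr .proto files.
protoc --proto_path=. \
  --ruby_out=./generated \
  dapr/proto/runtime/v1/dapr.proto
```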
## Related topics
- [Service invocation building block]({{< ref service-invocation >}})

---
type: docs
weight: 5000
title: "Use the Dapr CLI in a GitHub Actions workflow"
linkTitle: "GitHub Actions"
title: "How to: Use the Dapr CLI in a GitHub Actions workflow"
linkTitle: "How to: GitHub Actions"
description: "Add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments."
---
Dapr can be integrated with GitHub Actions via the [Dapr tool installer](https://github.com/marketplace/actions/dapr-tool-installer) available in the GitHub Marketplace. This installer adds the Dapr CLI to your workflow, allowing you to deploy, manage, and upgrade Dapr across your environments.
## Install the Dapr CLI via the Dapr tool installer

Copy and paste the following installer snippet into your application's YAML file:
```yaml
- name: Dapr tool installer
  uses: dapr/setup-dapr@v1
```

Refer to the [`action.yml` metadata file](https://github.com/dapr/setup-dapr/blob/main/action.yml) for details.
## Example
For example, for an application using the [Dapr extension for Azure Kubernetes Service (AKS)]({{< ref azure-kubernetes-service-extension.md >}}), your application YAML will look like the following:
```yaml
- name: Install Dapr
  uses: dapr/setup-dapr@v1
```
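Putting the pieces together, a minimal workflow job might look like the sketch below; the Dapr version and the verification step are illustrative, and the `version` input follows the inputs documented in `action.yml`:

```yaml
jobs:
  install-dapr:
    runs-on: ubuntu-latest
    steps:
      - name: Install Dapr
        uses: dapr/setup-dapr@v1
        with:
          version: '1.10.0'
      - name: Verify the Dapr CLI
        run: dapr version
```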
## Next steps
- Learn more about [GitHub Actions](https://docs.github.com/en/actions).
- Follow the tutorial to learn how [GitHub Actions works with your Dapr container app (Azure Container Apps)](https://learn.microsoft.com/azure/container-apps/dapr-github-actions?tabs=azure-cli)
