From b0b7016ba34a535a9eccb207fdc783a3127d82ef Mon Sep 17 00:00:00 2001
From: Hannah Hunter
Date: Wed, 10 May 2023 14:14:23 -0400
Subject: [PATCH 1/5] freshness pass on keda doc for k8s

Signed-off-by: Hannah Hunter
---
 .../integrations/autoscale-keda.md            | 75 ++++++++++++-------
 1 file changed, 49 insertions(+), 26 deletions(-)

diff --git a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
index 7e685010f..5087fac1d 100644
--- a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
+++ b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
@@ -1,22 +1,27 @@
 ---
 type: docs
-title: "Autoscaling a Dapr app with KEDA"
-linkTitle: "Autoscale with KEDA"
+title: "How to: Autoscale a Dapr app with KEDA"
+linkTitle: "How to: Autoscale with KEDA"
 description: "How to configure your Dapr application to autoscale using KEDA"
-weight: 2000
+weight: 3000
 ---
 
 Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), make it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge) the autoscaling of Dapr applications is managed by the hosting layer.
 
-For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda) so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on the back pressure using KEDA.
+For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda), so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on the back pressure using KEDA.
 
-This how-to walks through the configuration of a scalable Dapr application along with the back pressure on Kafka topic, however you can apply this approach to any [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
+In this guide, you configure a scalable Dapr application, along with the back pressure on a Kafka topic. However, you can apply this approach to _any_ [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
+
+{{% alert title="Note" color="primary" %}}
+ If you're working with Azure Container Apps, refer to the official Azure documentation for [scaling Dapr applications using KEDA scalers](https://learn.microsoft.com/en-us/azure/container-apps/dapr-keda-scaling).
+
+{{% /alert %}}
 
 ## Install KEDA
 
 To install KEDA, follow the [Deploying KEDA](https://keda.sh/docs/latest/deploy/) instructions on the KEDA website.
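+
+As a quick sanity check (this assumes KEDA was installed into its default `keda` namespace; adjust the namespace if you installed it elsewhere), confirm the KEDA operator pods are running before you continue:
+
+```shell
+kubectl get pods -n keda
+```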
-## Install Kafka (optional)
+## Install and deploy Kafka
 
 If you don't have access to a Kafka service, you can install it into your Kubernetes cluster for this example by using Helm:
 
@@ -39,16 +44,16 @@ kubectl rollout status statefulset.apps/kafka-cp-kafka -n kafka
 kubectl rollout status statefulset.apps/kafka-cp-zookeeper -n kafka
 ```
 
-When done, also deploy the Kafka client and wait until it's ready:
+Once installed, deploy the Kafka client and wait until it's ready:
 
 ```shell
 kubectl apply -n kafka -f deployment/kafka-client.yaml
 kubectl wait -n kafka --for=condition=ready pod kafka-client --timeout=120s
 ```
 
-Next, create the topic which is used in this example (for example `demo-topic`):
+## Create the Kafka topic
 
-> The number of topic partitions is related to the maximum number of replicas KEDA creates for your deployments
+Create the topic used in this example (`demo-topic`):
 
 ```shell
 kubectl -n kafka exec -it kafka-client -- kafka-topics \
@@ -60,9 +65,11 @@ kubectl -n kafka exec -it kafka-client -- kafka-topics \
 		--if-not-exists
 ```
 
-## Deploy a Dapr Pub/Sub component
+> The number of topic `partitions` is related to the maximum number of replicas KEDA creates for your deployments.
 
-Next, we'll deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named `kafka-pubsub.yaml`:
+## Deploy a Dapr pub/sub component
+
+Deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named `kafka-pubsub.yaml`:
 
 ```yaml
 apiVersion: dapr.io/v1alpha1
@@ -81,9 +88,11 @@ spec:
     value: autoscaling-subscriber
 ```
 
-The above YAML defines the pub/sub component that your application subscribes to, the `demo-topic` we created above. If you used the Kafka Helm install instructions above you can leave the `brokers` value as is. Otherwise, change this to the connection string to your Kafka brokers.
+The above YAML defines the pub/sub component that your application subscribes to and that [you created earlier (`demo-topic`)]({{< ref "#create-the-kafka-topic" >}}).
 
-Also notice the `autoscaling-subscriber` value set for `consumerID` which is used later to make sure that KEDA and your deployment use the same [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.).
+If you used the [Kafka Helm install instructions]({{< ref "#install-and-deploy-kafka" >}}), you can leave the `brokers` value as-is. Otherwise, change this value to the connection string for your Kafka brokers.
+
+Notice the `autoscaling-subscriber` value set for `consumerID`. This value is used later to ensure that KEDA and your deployment use the same [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.).
 
 Now, deploy the component to the cluster:
 
 ```bash
@@ -93,7 +102,9 @@ kubectl apply -f kafka-pubsub.yaml
 ```
 
 ## Deploy KEDA autoscaler for Kafka
 
-Next, we will deploy the KEDA scaling object that monitors the lag on the specified Kafka topic and configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out.
+Deploy the KEDA scaling object that:
+- Monitors the lag on the specified Kafka topic
+- Configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out
 
 Paste the following into a file named `kafka_scaler.yaml`, and configure your Dapr deployment in the required place:
 
@@ -117,19 +128,25 @@ spec:
     lagThreshold: "5"
 ```
 
-A few things to review here in the above file:
+Let's review a few metadata values in the file above:
 
-* `name` in the `scaleTargetRef` section in the `spec:` is the Dapr ID of your app defined in the Deployment (The value of the `dapr.io/id` annotation)
-* `pollingInterval` is the frequency in seconds with which KEDA checks Kafka for current topic partition offset
-* `minReplicaCount` is the minimum number of replicas KEDA creates for your deployment. (Note, if your application takes a long time to start it may be better to set that to `1` to ensure at least one replica of your deployment is always running. Otherwise, set that to `0` and KEDA creates the first replica for you)
-* `maxReplicaCount` is the maximum number of replicas for your deployment. Given how [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.) works, you shouldn't set that value higher than the total number of topic partitions
-* `topic` in the Kafka `metadata` section which should be set to the same topic to which your Dapr deployment subscribe (In this example `demo-topic`)
-* Similarly the `bootstrapServers` should be set to the same broker connection string used in the `kafka-pubsub.yaml` file
-* The `consumerGroup` should be set to the same value as the `consumerID` in the `kafka-pubsub.yaml` file
 
+| Values | Description |
+| ------ | ----------- |
+| `scaleTargetRef`/`name` | The Dapr ID of your app defined in the Deployment (the value of the `dapr.io/app-id` annotation). |
+| `pollingInterval` | The frequency in seconds with which KEDA checks Kafka for current topic partition offset. |
+| `minReplicaCount` | The minimum number of replicas KEDA creates for your deployment. If your application takes a long time to start, it may be better to set this to `1` to ensure at least one replica of your deployment is always running. Otherwise, set to `0` and KEDA creates the first replica for you. |
+| `maxReplicaCount` | The maximum number of replicas for your deployment. Given how [Kafka partition offset](http://cloudurable.com/blog/kafka-architecture-topics/index.html#:~:text=Kafka%20continually%20appended%20to%20partitions,fit%20on%20a%20single%20server.) works, you shouldn't set that value higher than the total number of topic partitions. |
+| `triggers`/`metadata`/`topic` | Should be set to the same topic to which your Dapr deployment subscribed (in this example, `demo-topic`). |
+| `triggers`/`metadata`/`bootstrapServers` | Should be set to the same broker connection string used in the `kafka-pubsub.yaml` file. |
+| `triggers`/`metadata`/`consumerGroup` | Should be set to the same value as the `consumerID` in the `kafka-pubsub.yaml` file. |
 
-> Note: setting the connection string, topic, and consumer group to the *same* values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly.
+{{% alert title="Important" color="primary" %}} + Setting the connection string, topic, and consumer group to the *same* values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly. -Next, deploy the KEDA scaler to Kubernetes: +{{% /alert %}} + + +Deploy the KEDA scaler to Kubernetes: ```bash kubectl apply -f kafka_scaler.yaml @@ -137,6 +154,12 @@ kubectl apply -f kafka_scaler.yaml All done! -Now, that the `ScaledObject` KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. More information on configuring KEDA for Kafka topics is available [here](https://keda.sh/docs/2.0/scalers/apache-kafka/). +## See the KEDA scaler work -You can now start publishing messages to your Kafka topic `demo-topic` and watch the pods autoscale when the lag threshold is higher than `5` topics, as we have defined in the KEDA scaler manifest. You can publish messages to the Kafka Dapr component by using the Dapr [Publish]({{< ref dapr-publish >}}) CLI command +Now that the `ScaledObject` KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. [Learn more about configuring KEDA for Kafka topics](https://keda.sh/docs/2.0/scalers/apache-kafka/). + +As defined in the KEDA scaler manifest, you can now start publishing messages to your Kafka topic `demo-topic` and watch the pods autoscale when the lag threshold is higher than `5` topics. Publish messages to the Kafka Dapr component by using the Dapr [Publish]({{< ref dapr-publish >}}) CLI command. + +## Next steps + +[Learn about scaling your Dapr pub/sub or binding application with KEDA in Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/dapr-keda-scaling) \ No newline at end of file From 5821c3e9e97360d1f2568f4d2b806c2865d8908b Mon Sep 17 00:00:00 2001 From: Hannah Hunter Date: Wed, 10 May 2023 14:47:32 -0400 Subject: [PATCH 2/5] freshness pass on grpc and gh actions docs Signed-off-by: Hannah Hunter --- .../integrations/autoscale-keda.md | 2 +- .../integrations/gRPC-integration.md | 328 ++++++++++-------- .../integrations/github_actions.md | 13 +- 3 files changed, 185 insertions(+), 158 deletions(-) diff --git a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md index 5087fac1d..b242fc3eb 100644 --- a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md +++ b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md @@ -140,7 +140,7 @@ Let's review a few metadata values in the file above: | `triggers`/`metadata`/`bootstrapServers` | Should be set to the same broker connection string used in the `kafka-pubsub.yaml` file. | | `triggers`/`metadata`/`consumerGroup` | Should be set to the same value as the `consumerID` in the `kafka-pubsub.yaml` file. | -{{% alert title="Important" color="primary" %}} +{{% alert title="Important" color="warning" %}} Setting the connection string, topic, and consumer group to the *same* values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly. 
 {{% /alert %}}

diff --git a/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md b/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md
index 40b53b073..c7999a637 100644
--- a/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md
+++ b/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md
@@ -1,35 +1,40 @@
 ---
 type: docs
-title: "Dapr's gRPC Interface"
-linkTitle: "gRPC interface"
+title: "How to: Use the gRPC interface in your Dapr application"
+linkTitle: "How to: gRPC interface"
 weight: 6000
 description: "Use the Dapr gRPC API in your application"
 type: docs
 ---
 
-# Dapr and gRPC
+Dapr implements both an HTTP and a gRPC API for local calls. [gRPC](https://grpc.io/) is useful for low-latency, high-performance scenarios and has language integration using the proto clients.
 
-Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients.
-
-You can find a list of auto-generated clients [here](https://github.com/dapr/docs#sdks).
+[Find a list of auto-generated clients in the Dapr SDK documentation]({{< ref sdks >}}).
 
 The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC.
 
-In addition to calling Dapr via gRPC, Dapr supports service to service calls with gRPC by acting as a proxy. See more information [here]({{< ref howto-invoke-services-grpc.md >}}).
+In addition to calling Dapr via gRPC, Dapr supports service-to-service calls with gRPC by acting as a proxy. [Learn more in the gRPC service invocation how-to guide]({{< ref howto-invoke-services-grpc.md >}}).
 
-## Configuring Dapr to communicate with an app via gRPC
+This guide demonstrates configuring and invoking Dapr with gRPC using a Go SDK application.
 
-### Self hosted
+## Configure Dapr to communicate with an app via gRPC
 
-When running in self hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app:
+{{< tabs "Self-hosted" "Kubernetes">}}
+
+{{% codetab %}}
+
+When running in self-hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app.
 
 ```bash
 dapr run --app-protocol grpc --app-port 5005 node app.js
 ```
+
 This tells Dapr to communicate with your app via gRPC over port `5005`.
 
+{{% /codetab %}}
 
-### Kubernetes
+
+{{% codetab %}}
 
 On Kubernetes, set the following annotations in your deployment YAML:
 
@@ -58,178 +63,195 @@ spec:
 ...
 ```
 
-## Invoking Dapr with gRPC - Go example
+{{% /codetab %}}
 
-The following steps show you how to create a Dapr client and call the `SaveStateData` operation on it:
+{{< /tabs >}}
 
-1. Import the package
+## Invoke Dapr with gRPC
 
-```go
-package main
+The following steps show how to create a Dapr client and call the `SaveState` operation on it.
 
-import (
-	"context"
-	"log"
-	"os"
+1. Import the package:
 
-	dapr "github.com/dapr/go-sdk/client"
-)
-```
+    ```go
+    package main
+
+    import (
+        "context"
+        "log"
+        "os"
+
+        dapr "github.com/dapr/go-sdk/client"
+    )
+    ```
 
-2. Create the client
+1. Create the client:
 
-```go
-// just for this demo
-ctx := context.Background()
-data := []byte("ping")
-
-// create the client
-client, err := dapr.NewClient()
-if err != nil {
-	log.Panic(err)
-}
-defer client.Close()
-```
-
-3. Invoke the Save State method
-
-```go
-// save state with the key key1
-err = client.SaveState(ctx, "statestore", "key1", data)
-if err != nil {
-	log.Panic(err)
-}
-log.Println("data saved")
-```
-
-Hooray!
+    ```go
+    // just for this demo
+    ctx := context.Background()
+    data := []byte("ping")
+
+    // create the client
+    client, err := dapr.NewClient()
+    if err != nil {
+        log.Panic(err)
+    }
+    defer client.Close()
+    ```
+
+1. Invoke the `SaveState` method:
+
+    ```go
+    // save state with the key key1
+    err = client.SaveState(ctx, "statestore", "key1", data)
+    if err != nil {
+        log.Panic(err)
+    }
+    log.Println("data saved")
+    ```
 
 Now you can explore all the different methods on the Dapr client.
 
-## Creating a gRPC app with Dapr
+## Create a gRPC app with Dapr
 
-The following steps will show you how to create an app that exposes a server for Dapr to communicate with.
+The following steps will show how to create an app that exposes a server with which Dapr can communicate.
 
-1. Import the package
+1. Import the package:
 
-```go
-package main
-
-import (
-	"context"
-	"fmt"
-	"log"
-	"net"
-
-	"github.com/golang/protobuf/ptypes/any"
-	"github.com/golang/protobuf/ptypes/empty"
-
-	commonv1pb "github.com/dapr/dapr/pkg/proto/common/v1"
-	pb "github.com/dapr/go-sdk/dapr/proto/runtime/v1"
-	"google.golang.org/grpc"
-)
-```
+    ```go
+    package main
+
+    import (
+        "context"
+        "fmt"
+        "log"
+        "net"
+
+        "github.com/golang/protobuf/ptypes/any"
+        "github.com/golang/protobuf/ptypes/empty"
+
+        commonv1pb "github.com/dapr/dapr/pkg/proto/common/v1"
+        pb "github.com/dapr/go-sdk/dapr/proto/runtime/v1"
+        "google.golang.org/grpc"
+    )
+    ```
 
-2. Implement the interface
+1. Implement the interface:
 
-```go
-// server is our user app
-type server struct {
-	pb.UnimplementedAppCallbackServer
-}
-
-// EchoMethod is a simple demo method to invoke
-func (s *server) EchoMethod() string {
-	return "pong"
-}
-
-// This method gets invoked when a remote service has called the app through Dapr
-// The payload carries a Method to identify the method, a set of metadata properties and an optional payload
-func (s *server) OnInvoke(ctx context.Context, in *commonv1pb.InvokeRequest) (*commonv1pb.InvokeResponse, error) {
-	var response string
-
-	switch in.Method {
-	case "EchoMethod":
-		response = s.EchoMethod()
-	}
-
-	return &commonv1pb.InvokeResponse{
-		ContentType: "text/plain; charset=UTF-8",
-		Data:        &any.Any{Value: []byte(response)},
-	}, nil
-}
-
-// Dapr will call this method to get the list of topics the app wants to subscribe to. In this example, we are telling Dapr
-// To subscribe to a topic named TopicA
-func (s *server) ListTopicSubscriptions(ctx context.Context, in *empty.Empty) (*pb.ListTopicSubscriptionsResponse, error) {
-	return &pb.ListTopicSubscriptionsResponse{
-		Subscriptions: []*pb.TopicSubscription{
-			{Topic: "TopicA"},
-		},
-	}, nil
-}
-
-// Dapr will call this method to get the list of bindings the app will get invoked by. In this example, we are telling Dapr
-// To invoke our app with a binding named storage
-func (s *server) ListInputBindings(ctx context.Context, in *empty.Empty) (*pb.ListInputBindingsResponse, error) {
-	return &pb.ListInputBindingsResponse{
-		Bindings: []string{"storage"},
-	}, nil
-}
-
-// This method gets invoked every time a new event is fired from a registered binding. The message carries the binding name, a payload and optional metadata
-func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest) (*pb.BindingEventResponse, error) {
-	fmt.Println("Invoked from binding")
-	return &pb.BindingEventResponse{}, nil
-}
-
-// This method is fired whenever a message has been published to a topic that has been subscribed. Dapr sends published messages in a CloudEvents 0.3 envelope.
-func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*pb.TopicEventResponse, error) {
-	fmt.Println("Topic message arrived")
-	return &pb.TopicEventResponse{}, nil
-}
-
-```
+    ```go
+    // server is our user app
+    type server struct {
+        pb.UnimplementedAppCallbackServer
+    }
+
+    // EchoMethod is a simple demo method to invoke
+    func (s *server) EchoMethod() string {
+        return "pong"
+    }
+
+    // This method gets invoked when a remote service has called the app through Dapr
+    // The payload carries a Method to identify the method, a set of metadata properties and an optional payload
+    func (s *server) OnInvoke(ctx context.Context, in *commonv1pb.InvokeRequest) (*commonv1pb.InvokeResponse, error) {
+        var response string
+
+        switch in.Method {
+        case "EchoMethod":
+            response = s.EchoMethod()
+        }
+
+        return &commonv1pb.InvokeResponse{
+            ContentType: "text/plain; charset=UTF-8",
+            Data:        &any.Any{Value: []byte(response)},
+        }, nil
+    }
+
+    // Dapr will call this method to get the list of topics the app wants to subscribe to. In this example, we are telling Dapr
+    // to subscribe to a topic named TopicA
+    func (s *server) ListTopicSubscriptions(ctx context.Context, in *empty.Empty) (*pb.ListTopicSubscriptionsResponse, error) {
+        return &pb.ListTopicSubscriptionsResponse{
+            Subscriptions: []*pb.TopicSubscription{
+                {Topic: "TopicA"},
+            },
+        }, nil
+    }
+
+    // Dapr will call this method to get the list of bindings the app will get invoked by. In this example, we are telling Dapr
+    // to invoke our app with a binding named storage
+    func (s *server) ListInputBindings(ctx context.Context, in *empty.Empty) (*pb.ListInputBindingsResponse, error) {
+        return &pb.ListInputBindingsResponse{
+            Bindings: []string{"storage"},
+        }, nil
+    }
+
+    // This method gets invoked every time a new event is fired from a registered binding. The message carries the binding name, a payload and optional metadata
+    func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest) (*pb.BindingEventResponse, error) {
+        fmt.Println("Invoked from binding")
+        return &pb.BindingEventResponse{}, nil
+    }
+
+    // This method is fired whenever a message has been published to a topic that has been subscribed. Dapr sends published messages in a CloudEvents 0.3 envelope.
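+    // The incoming TopicEventRequest carries the CloudEvents attributes (id, source, type, data) plus the Dapr-specific topic and pubsub component name.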
+    func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*pb.TopicEventResponse, error) {
+        fmt.Println("Topic message arrived")
+        return &pb.TopicEventResponse{}, nil
+    }
+    ```
 
-3. Create the server
+1. Create the server:
 
-```go
-func main() {
-	// create listener
-	lis, err := net.Listen("tcp", ":50001")
-	if err != nil {
-		log.Fatalf("failed to listen: %v", err)
-	}
-
-	// create grpc server
-	s := grpc.NewServer()
-	pb.RegisterAppCallbackServer(s, &server{})
-
-	fmt.Println("Client starting...")
-
-	// and start...
-	if err := s.Serve(lis); err != nil {
-		log.Fatalf("failed to serve: %v", err)
-	}
-}
-```
-
-This creates a gRPC server for your app on port 50001.
-
-4. Run your app
+    ```go
+    func main() {
+        // create listener
+        lis, err := net.Listen("tcp", ":50001")
+        if err != nil {
+            log.Fatalf("failed to listen: %v", err)
+        }
+
+        // create grpc server
+        s := grpc.NewServer()
+        pb.RegisterAppCallbackServer(s, &server{})
+
+        fmt.Println("Client starting...")
+
+        // and start...
+        if err := s.Serve(lis); err != nil {
+            log.Fatalf("failed to serve: %v", err)
+        }
+    }
+    ```
+
+    This creates a gRPC server for your app on port 50001.
+
+## Run the application
+
+{{< tabs "Self-hosted" "Kubernetes">}}
+
+{{% codetab %}}
 
 To run locally, use the Dapr CLI:
 
 ```
 dapr run --app-id goapp --app-port 50001 --app-protocol grpc go run main.go
 ```
 
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
-On Kubernetes, set the required `dapr.io/app-protocol: "grpc"` and `dapr.io/app-port: "50001` annotations in your pod spec template as mentioned above.
+On Kubernetes, set the required `dapr.io/app-protocol: "grpc"` and `dapr.io/app-port: "50001"` annotations in your pod spec template, as mentioned above.
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
 
 ## Other languages
 
 You can use Dapr with any language supported by Protobuf, and not just with the currently available generated SDKs.
-Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool you can generate the Dapr clients for other languages like Ruby, C++, Rust and others.
+
+Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool, you can generate the Dapr clients for other languages like Ruby, C++, Rust, and others.
 
 ## Related Topics
 - [Service invocation building block]({{< ref service-invocation >}})

diff --git a/daprdocs/content/en/developing-applications/integrations/github_actions.md b/daprdocs/content/en/developing-applications/integrations/github_actions.md
index 781ed49a9..15f87ffb4 100644
--- a/daprdocs/content/en/developing-applications/integrations/github_actions.md
+++ b/daprdocs/content/en/developing-applications/integrations/github_actions.md
@@ -1,14 +1,16 @@
 ---
 type: docs
 weight: 5000
-title: "Use the Dapr CLI in a GitHub Actions workflow"
-linkTitle: "GitHub Actions"
+title: "How to: Use the Dapr CLI in a GitHub Actions workflow"
+linkTitle: "How to: GitHub Actions"
 description: "Add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments."
 ---
 
 Dapr can be integrated with GitHub Actions via the [Dapr tool installer](https://github.com/marketplace/actions/dapr-tool-installer) available in the GitHub Marketplace. This installer adds the Dapr CLI to your workflow, allowing you to deploy, manage, and upgrade Dapr across your environments.
 
-Copy and paste the following installer snippet into your applicatin's YAML file to get started:
+## Install the Dapr CLI via the Dapr tool installer
+
+Copy and paste the following installer snippet into your application's YAML file:
 
 ```yaml
 - name: Dapr tool installer
   uses: dapr/setup-dapr@v1
 ```
 
Refer to the [`action.yml` metadata file](https://github.com/dapr/setup-dapr/blo
 
 ## Example
 
+For example, for an application using the [Dapr extension for Azure Kubernetes Service (AKS)]({{< ref azure-kubernetes-service-extension.md >}}), your application YAML will look like the following:
+
 ```yaml
 - name: Install Dapr
   uses: dapr/setup-dapr@v1
@@ -45,4 +49,5 @@
 
 ## Next steps
 
-Learn more about [GitHub Actions](https://docs.github.com/en/actions).
\ No newline at end of file
+- Learn more about [GitHub Actions](https://docs.github.com/en/actions).
+- Follow the tutorial to learn how [GitHub Actions works with your Dapr container app (Azure Container Apps)](https://learn.microsoft.com/azure/container-apps/dapr-github-actions?tabs=azure-cli) \ No newline at end of file From 75aae3d92e686bc74eeb5110758bfbcccc871892 Mon Sep 17 00:00:00 2001 From: Hannah Hunter Date: Wed, 10 May 2023 14:55:48 -0400 Subject: [PATCH 3/5] osm freshness and fix link Signed-off-by: Hannah Hunter --- .../integrations/autoscale-keda.md | 2 +- .../integrations/open-service-mesh.md | 24 +++++++------------ 2 files changed, 10 insertions(+), 16 deletions(-) diff --git a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md index b242fc3eb..ffaf9bb67 100644 --- a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md +++ b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md @@ -162,4 +162,4 @@ As defined in the KEDA scaler manifest, you can now start publishing messages to ## Next steps -[Learn about scaling your Dapr pub/sub or binding application with KEDA in Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/dapr-keda-scaling) \ No newline at end of file +[Learn about scaling your Dapr pub/sub or binding application with KEDA in Azure Container Apps](https://learn.microsoft.com/azure/container-apps/dapr-keda-scaling) \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md b/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md index 7df6b35f2..3001682e3 100644 --- a/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md +++ b/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md @@ -1,31 +1,25 @@ --- type: docs -title: "Running Dapr and Open Service Mesh together" -linkTitle: "Open Service Mesh" +title: "How to: Run Dapr and Open Service Mesh together" +linkTitle: "How to: Open Service Mesh" weight: 4000 description: "Learn how to run both Open Service Mesh and Dapr on the same Kubernetes cluster" --- -## Overview - -[Open Service Mesh (OSM)](https://openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. - -{{< button text="Learn more" link="https://openservicemesh.io/" >}} +With [Open Service Mesh (OSM)](https://openservicemesh.io/), you can uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments. ## Dapr integration -Users are able to leverage both OSM SMI traffic policies and Dapr capabilities on the same Kubernetes cluster. Visit [this guide](https://docs.openservicemesh.io/docs/integrations/demo_dapr/) to get started. +You can leverage _both_ OSM SMI traffic policies and Dapr capabilities on the same Kubernetes cluster. Refer to the official OSM documentation to get started. 
-{{< button text="Deploy OSM and Dapr" link="https://docs.openservicemesh.io/docs/integrations/demo_dapr/" >}} +{{< button text="Get started with deploying Dapr and OSM" link="https://docs.openservicemesh.io/docs/integrations/demo_dapr/" >}} -## Example +## Demo -Watch the OSM team present the OSM and Dapr integration in the 05/18/2021 community call: +Watch the OSM team present the OSM and Dapr integration during [Dapr's Community Call 38](https://youtu.be/LSYyTL0nS8Y?t=1916): -
-
-## Additional resources
+## Related links
 
-- [Dapr and service meshes]({{< ref service-mesh.md >}})
\ No newline at end of file
+Learn more about [Dapr and service meshes]({{< ref service-mesh.md >}}).
\ No newline at end of file

From 830026d88912b014d7c871e2301fe307dfa31fd5 Mon Sep 17 00:00:00 2001
From: Hannah Hunter
Date: Wed, 10 May 2023 18:05:22 -0400
Subject: [PATCH 4/5] fix other localized link

Signed-off-by: Hannah Hunter
---
 .../en/developing-applications/integrations/autoscale-keda.md   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
index ffaf9bb67..c241e37b3 100644
--- a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
+++ b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
@@ -13,7 +13,7 @@ For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), a
 In this guide, you configure a scalable Dapr application, along with the back pressure on a Kafka topic. However, you can apply this approach to _any_ [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
 
 {{% alert title="Note" color="primary" %}}
- If you're working with Azure Container Apps, refer to the official Azure documentation for [scaling Dapr applications using KEDA scalers](https://learn.microsoft.com/en-us/azure/container-apps/dapr-keda-scaling).
+ If you're working with Azure Container Apps, refer to the official Azure documentation for [scaling Dapr applications using KEDA scalers](https://learn.microsoft.com/azure/container-apps/dapr-keda-scaling).
 
 {{% /alert %}}

From a36dea92c4cd07221c6b88a3b02159877ee1d377 Mon Sep 17 00:00:00 2001
From: Hannah Hunter
Date: Thu, 11 May 2023 15:47:36 -0400
Subject: [PATCH 5/5] updates per Mark

Signed-off-by: Hannah Hunter
---
 daprdocs/content/en/concepts/service-mesh.md  |  1 -
 .../integrations/autoscale-keda.md            |  2 +-
 .../integrations/open-service-mesh.md         | 25 -------------------
 3 files changed, 1 insertion(+), 27 deletions(-)
 delete mode 100644 daprdocs/content/en/developing-applications/integrations/open-service-mesh.md

diff --git a/daprdocs/content/en/concepts/service-mesh.md b/daprdocs/content/en/concepts/service-mesh.md
index 96613a319..0e3c4cf9e 100644
--- a/daprdocs/content/en/concepts/service-mesh.md
+++ b/daprdocs/content/en/concepts/service-mesh.md
@@ -34,7 +34,6 @@ Dapr does work with service meshes. In the case where both are deployed together
 Watch these recordings from the Dapr community calls showing presentations on running Dapr together with different service meshes:
 - General overview and a demo of [Dapr and Linkerd](https://youtu.be/xxU68ewRmz8?t=142)
 - Demo of running [Dapr and Istio](https://youtu.be/ngIDOQApx8g?t=335)
-- Learn more about [running Dapr with Open Service Mesh (OSM)]({{< ref open-service-mesh.md >}}).
 
 ## When to use Dapr or a service mesh or both
 Should you be using Dapr, a service mesh, or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub, and you are considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and that a service mesh is not required.
diff --git a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
index c241e37b3..b0b70928c 100644
--- a/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
+++ b/daprdocs/content/en/developing-applications/integrations/autoscale-keda.md
@@ -6,7 +6,7 @@ description: "How to configure your Dapr application to autoscale using KEDA"
 weight: 3000
 ---
 
-Dapr, with its modular building-block approach, along with the 10+ different [pub/sub components]({{< ref pubsub >}}), make it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare-metal, Cloud, or Edge) the autoscaling of Dapr applications is managed by the hosting layer.
+Dapr, with its building-block API approach, along with the many [pub/sub components]({{< ref pubsub >}}), makes it easy to write message processing applications. Since Dapr can run in many environments (for example, VMs, bare-metal, Cloud, or Edge Kubernetes), the autoscaling of Dapr applications is managed by the hosting layer.
 
 For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by [KEDA](https://github.com/kedacore/keda), so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on the back pressure using KEDA.
 
 In this guide, you configure a scalable Dapr application, along with the back pressure on a Kafka topic. However, you can apply this approach to _any_ [pub/sub components]({{< ref pubsub >}}) offered by Dapr.

diff --git a/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md b/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md
deleted file mode 100644
index 3001682e3..000000000
--- a/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-type: docs
-title: "How to: Run Dapr and Open Service Mesh together"
-linkTitle: "How to: Open Service Mesh"
-weight: 4000
-description: "Learn how to run both Open Service Mesh and Dapr on the same Kubernetes cluster"
----
-
-With [Open Service Mesh (OSM)](https://openservicemesh.io/), you can uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
-
-## Dapr integration
-
-You can leverage _both_ OSM SMI traffic policies and Dapr capabilities on the same Kubernetes cluster. Refer to the official OSM documentation to get started.
-
-{{< button text="Get started with deploying Dapr and OSM" link="https://docs.openservicemesh.io/docs/integrations/demo_dapr/" >}}
-
-## Demo
-
-Watch the OSM team present the OSM and Dapr integration during [Dapr's Community Call 38](https://youtu.be/LSYyTL0nS8Y?t=1916):
-
-
-
-## Related links
-
-Learn more about [Dapr and service meshes]({{< ref service-mesh.md >}}).
\ No newline at end of file