diff --git a/.github/workflows/dapr-bot.yml b/.github/workflows/dapr-bot.yml new file mode 100644 index 000000000..3e8665eb2 --- /dev/null +++ b/.github/workflows/dapr-bot.yml @@ -0,0 +1,32 @@ +name: dapr-bot + +on: + issue_comment: {types: created} + +jobs: + daprbot: + name: bot-processor + runs-on: ubuntu-latest + steps: + - name: Comment analyzer + uses: actions/github-script@v1 + with: + github-token: ${{secrets.DAPR_BOT_TOKEN}} + script: | + const payload = context.payload; + const issue = context.issue; + const isFromPulls = !!payload.issue.pull_request; + const commentBody = payload.comment.body; + + if (!isFromPulls && commentBody && commentBody.indexOf("/assign") == 0) { + if (!issue.assignees || issue.assignees.length === 0) { + await github.issues.addAssignees({ + owner: issue.owner, + repo: issue.repo, + issue_number: issue.number, + assignees: [context.actor], + }) + } + + return; + } \ No newline at end of file diff --git a/daprdocs/content/en/concepts/overview.md b/daprdocs/content/en/concepts/overview.md index 93ce1e4c6..328a5399e 100644 --- a/daprdocs/content/en/concepts/overview.md +++ b/daprdocs/content/en/concepts/overview.md @@ -9,7 +9,9 @@ description: > Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks. - +
## Any language, any framework, anywhere @@ -118,4 +120,4 @@ Dapr is designed for [operations]({{< ref operations >}}) and security. The Dapr The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars. -The [monitoring tools support]({{< ref monitoring >}}) provides deeper visibility into the Dapr system services and side-cars and the [observability capabilities]({{}}) of Dapr provide insights into your application such as tracing and metrics. +The [monitoring tools support]({{< ref monitoring >}}) provides deeper visibility into the Dapr system services and sidecars, and the [observability capabilities]({{}}) of Dapr provide insights into your application, such as tracing and metrics. \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors.md b/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors.md index 4dd39d18c..249fe28a3 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors.md +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/howto-actors.md @@ -137,7 +137,9 @@ The number of repetitions i.e. the number of times the reminder is run should be Watch this [video](https://www.youtube.com/watch?v=B_vkXqptpXY&t=1002s) for more information on using ISO 8601 for Reminders + #### Retrieve actor reminder @@ -385,4 +387,4 @@ For production scenarios, there are some points to be considered before enabling * Number of partitions can only be increased and not decreased. This allows Dapr to automatically redistribute the data on a rolling restart where one or more partition configurations might be active. 
#### Demo -* [Actor reminder partitioning community call video](https://youtu.be/ZwFOEUYe1WA?t=1493) +* [Actor reminder partitioning community call video](https://youtu.be/ZwFOEUYe1WA?t=1493) \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md index 7a025f749..ea45ababf 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md +++ b/daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md @@ -10,8 +10,10 @@ Output bindings enable you to invoke external resources without taking dependenc For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings). Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings. - + ## 1. Create a binding @@ -93,4 +95,4 @@ You can check [here]({{< ref supported-bindings >}}) which operations are suppor - [Binding API]({{< ref bindings_api.md >}}) - [Binding components]({{< ref bindings >}}) -- [Binding detailed specifications]({{< ref supported-bindings >}}) +- [Binding detailed specifications]({{< ref supported-bindings >}}) \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md index e3a055d20..20c233341 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-scopes.md @@ -158,7 +158,9 @@ The table below shows which application is allowed to subscribe to the topics: ## Demo + ## Related links diff --git a/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md 
b/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md index c476e5188..83a933e54 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md +++ b/daprdocs/content/en/developing-applications/building-blocks/secrets/secrets-scopes.md @@ -14,7 +14,10 @@ To limit the secrets to which the Dapr application has access to, you can define The secret scoping policy applies to any [secret store]({{< ref supported-secret-stores.md >}}), whether that is a local secret store, a Kubernetes secret store, or a public cloud secret store. For details on how to set up a [secret store]({{< ref setup-secret-store.md >}}), read [How To: Retrieve a secret]({{< ref howto-secrets.md >}}). Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to use secret scoping with your application. + + ## Scenario 1: Deny access to all secrets for a secret store diff --git a/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md b/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md index b7c336628..a7408a91e 100644 --- a/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md +++ b/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md @@ -14,7 +14,9 @@ Bridge to Kubernetes allows you to run and debug code on your development comput Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/distributed-calculator): + {{% alert title="Isolation mode" color="warning" %}} [Isolation mode](https://aka.ms/bridge-isolation-vscode-dapr) is currently not supported with Dapr apps. Make sure to launch Bridge to Kubernetes mode without isolation. 
diff --git a/daprdocs/content/en/developing-applications/ides/vscode/vscode-dapr-extension.md b/daprdocs/content/en/developing-applications/ides/vscode/vscode-dapr-extension.md index 802f292f3..e80b80239 100644 --- a/daprdocs/content/en/developing-applications/ides/vscode/vscode-dapr-extension.md +++ b/daprdocs/content/en/developing-applications/ides/vscode/vscode-dapr-extension.md @@ -63,4 +63,7 @@ Using the VS Code extension, you can debug multiple Dapr applications at the sam ### Community call demo Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=85) on how to use the Dapr VS Code extension: + + \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/ides/vscode/vscode-remote-dev-containers.md b/daprdocs/content/en/developing-applications/ides/vscode/vscode-remote-dev-containers.md index 74fbdd533..aaf6a583c 100644 --- a/daprdocs/content/en/developing-applications/ides/vscode/vscode-remote-dev-containers.md +++ b/daprdocs/content/en/developing-applications/ides/vscode/vscode-remote-dev-containers.md @@ -28,4 +28,7 @@ Dapr has pre-built Docker remote containers for NodeJS and C#. You can pick the #### Example Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application. 
- \ No newline at end of file + + \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md b/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md index cf78c7613..772bf929b 100644 --- a/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md +++ b/daprdocs/content/en/developing-applications/integrations/open-service-mesh.md @@ -22,7 +22,9 @@ Users are able to leverage both OSM SMI traffic policies and Dapr capabilities o Watch the OSM team present the OSM and Dapr integration in the 05/18/2021 community call: + ## Additional resources diff --git a/daprdocs/content/en/developing-applications/integrations/workflows.md b/daprdocs/content/en/developing-applications/integrations/workflows.md index 5dc904208..572899c53 100644 --- a/daprdocs/content/en/developing-applications/integrations/workflows.md +++ b/daprdocs/content/en/developing-applications/integrations/workflows.md @@ -221,7 +221,9 @@ Prerequisites: Watch an example from the Dapr community call: + ## Additional resources diff --git a/daprdocs/content/en/operations/components/component-scopes.md b/daprdocs/content/en/operations/components/component-scopes.md index 1822f8f3a..29834b032 100644 --- a/daprdocs/content/en/operations/components/component-scopes.md +++ b/daprdocs/content/en/operations/components/component-scopes.md @@ -119,7 +119,9 @@ scopes: ## Example + ## Related links diff --git a/daprdocs/content/en/operations/configuration/control-concurrency.md b/daprdocs/content/en/operations/configuration/control-concurrency.md index 741533356..ed16bdac3 100644 --- a/daprdocs/content/en/operations/configuration/control-concurrency.md +++ b/daprdocs/content/en/operations/configuration/control-concurrency.md @@ -14,7 +14,10 @@ Using Dapr, you can control how many requests and events will invoke your applic *Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** 
middleware. However, there is an important difference between the two approaches. The rate limit middleware is time-bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point in time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}). * Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting. + + ## Setting app-max-concurrency @@ -58,4 +61,4 @@ To set app-max-concurrency with the Dapr CLI for running on your local dev machi dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py ``` -The above examples will effectively turn your app into a single concurrent service. +The above examples will effectively turn your app into a single concurrent service. \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/invoke-allowlist.md b/daprdocs/content/en/operations/configuration/invoke-allowlist.md index bdae45310..24aaf6809 100644 --- a/daprdocs/content/en/operations/configuration/invoke-allowlist.md +++ b/daprdocs/content/en/operations/configuration/invoke-allowlist.md @@ -11,7 +11,10 @@ Access control enables the configuration of policies that restrict what operatio An access control policy is specified in configuration and applied to the Dapr sidecar for the *called* application. Example access policies are shown below, and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications, and if no access control policy is specified, the default behavior is to allow all calling applications access to the called app. Watch this [video](https://youtu.be/j99RN_nxExA?t=1108) on how to apply an access control list for service invocation. 
+ + ## Concepts @@ -353,4 +356,4 @@ spec: containers: - name: python image: dapriosamples/hello-k8s-python:edge - ``` + ``` \ No newline at end of file diff --git a/daprdocs/content/en/operations/monitoring/logging/fluentd.md b/daprdocs/content/en/operations/monitoring/logging/fluentd.md index 35a14b738..0c15c1584 100644 --- a/daprdocs/content/en/operations/monitoring/logging/fluentd.md +++ b/daprdocs/content/en/operations/monitoring/logging/fluentd.md @@ -12,16 +12,15 @@ description: "How to install Fluentd, Elastic Search, and Kibana to search logs - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) - [Helm 3](https://helm.sh/) - ## Install Elastic search and Kibana -1. Create namespace for monitoring tool and add Helm repo for Elastic Search +1. Create a Kubernetes namespace for monitoring tools ```bash kubectl create namespace dapr-monitoring ``` -2. Add Elastic helm repo +2. Add the helm repo for Elastic Search ```bash helm repo add elastic https://helm.elastic.co @@ -30,23 +29,23 @@ description: "How to install Fluentd, Elastic Search, and Kibana to search logs 3. Install Elastic Search using Helm -By default the chart creates 3 replicas which must be on different nodes. If your cluster has less than 3 nodes, specify a lower number of replicas. For example, this sets it to 1: + By default, the chart creates 3 replicas which must be on different nodes. If your cluster has fewer than 3 nodes, specify a smaller number of replicas. 
For example, this sets the number of replicas to 1: -```bash -helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1 -``` + ```bash + helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1 + ``` -Otherwise: + Otherwise: -```bash -helm install elasticsearch elastic/elasticsearch -n dapr-monitoring -``` + ```bash + helm install elasticsearch elastic/elasticsearch -n dapr-monitoring + ``` -If you are using minikube or want to disable persistent volumes for development purposes, you can disable it by using the following command: + If you are using minikube or simply want to disable persistent volumes for development purposes, you can do so by using the following command: -```bash -helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false,replicas=1 -``` + ```bash + helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false,replicas=1 + ``` 4. Install Kibana @@ -54,12 +53,10 @@ helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persis helm install kibana elastic/kibana -n dapr-monitoring ``` -5. Validation - - Ensure Elastic Search and Kibana are running in your Kubernetes cluster. +5. Ensure that Elastic Search and Kibana are running in your Kubernetes cluster ```bash - kubectl get pods -n dapr-monitoring + $ kubectl get pods -n dapr-monitoring NAME READY STATUS RESTARTS AGE elasticsearch-master-0 1/1 Running 0 6m58s kibana-kibana-95bc54b89-zqdrk 1/1 Running 0 4m21s @@ -69,30 +66,29 @@ helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persis 1. 
Install config map and Fluentd as a daemonset -Download these config files: -- [fluentd-config-map.yaml](/docs/fluentd-config-map.yaml) -- [fluentd-dapr-with-rbac.yaml](/docs/fluentd-dapr-with-rbac.yaml) + Download these config files: + - [fluentd-config-map.yaml](/docs/fluentd-config-map.yaml) + - [fluentd-dapr-with-rbac.yaml](/docs/fluentd-dapr-with-rbac.yaml) -> Note: If you already have Fluentd running in your cluster, please enable the nested json parser to parse JSON formatted log from Dapr. + > Note: If you already have Fluentd running in your cluster, please enable the nested json parser so that it can parse JSON-formatted logs from Dapr. -Apply the configurations to your cluster: + Apply the configurations to your cluster: -```bash -kubectl apply -f ./fluentd-config-map.yaml -kubectl apply -f ./fluentd-dapr-with-rbac.yaml -``` + ```bash + kubectl apply -f ./fluentd-config-map.yaml + kubectl apply -f ./fluentd-dapr-with-rbac.yaml + ``` -2. Ensure that Fluentd is running as a daemonset; the number of instances should be the same as the number of cluster nodes. In the example below we only have 1 node. - -```bash -kubectl get pods -n kube-system -w -NAME READY STATUS RESTARTS AGE -coredns-6955765f44-cxjxk 1/1 Running 0 4m41s -coredns-6955765f44-jlskv 1/1 Running 0 4m41s -etcd-m01 1/1 Running 0 4m48s -fluentd-sdrld 1/1 Running 0 14s -``` +2. Ensure that Fluentd is running as a daemonset. The number of FluentD instances should be the same as the number of cluster nodes. In the example below, there is only one node in the cluster: + ```bash + $ kubectl get pods -n kube-system -w + NAME READY STATUS RESTARTS AGE + coredns-6955765f44-cxjxk 1/1 Running 0 4m41s + coredns-6955765f44-jlskv 1/1 Running 0 4m41s + etcd-m01 1/1 Running 0 4m48s + fluentd-sdrld 1/1 Running 0 14s + ``` ## Install Dapr with JSON formatted logs @@ -106,80 +102,83 @@ fluentd-sdrld 1/1 Running 0 14s 2. 
Enable JSON formatted log in Dapr sidecar -Add `dapr.io/log-as-json: "true"` annotation to your deployment yaml. + Add the `dapr.io/log-as-json: "true"` annotation to your deployment yaml. For example: -Example: -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: pythonapp - namespace: default - labels: - app: python -spec: - replicas: 1 - selector: - matchLabels: - app: python - template: + ```yaml + apiVersion: apps/v1 + kind: Deployment metadata: + name: pythonapp + namespace: default labels: app: python - annotations: - dapr.io/enabled: "true" - dapr.io/app-id: "pythonapp" - dapr.io/log-as-json: "true" -... -``` + spec: + replicas: 1 + selector: + matchLabels: + app: python + template: + metadata: + labels: + app: python + annotations: + dapr.io/enabled: "true" + dapr.io/app-id: "pythonapp" + dapr.io/log-as-json: "true" + ... + ``` ## Search logs > Note: Elastic Search takes a time to index the logs that Fluentd sends. -1. Port-forward to svc/kibana-kibana +1. Port-forward from localhost to `svc/kibana-kibana` -``` -$ kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring -Forwarding from 127.0.0.1:5601 -> 5601 -Forwarding from [::1]:5601 -> 5601 -Handling connection for 5601 -Handling connection for 5601 -``` + ```bash + $ kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring + Forwarding from 127.0.0.1:5601 -> 5601 + Forwarding from [::1]:5601 -> 5601 + Handling connection for 5601 + Handling connection for 5601 + ``` -2. Browse `http://localhost:5601` +2. Browse to `http://localhost:5601` -3. Click Management -> Index Management +3. Expand the drop-down menu and click **Management → Stack Management** - +  -4. Wait until dapr-* is indexed. +4. On the Stack Management page, select **Data → Index Management** and wait until `dapr-*` is indexed. - +  -5. Once dapr-* indexed, click Kibana->Index Patterns and Create Index Pattern +5. 
Once `dapr-*` is indexed, click on **Kibana → Index Patterns** and then the **Create index pattern** button. - +  -6. Define index pattern - type `dapr*` in index pattern +6. Define a new index pattern by typing `dapr*` into the **Index Pattern name** field, then click the **Next step** button to continue. - +  -7. Select time stamp filed: `@timestamp` +7. Configure the primary time field to use with the new index pattern by selecting the `@timestamp` option from the **Time field** drop-down. Click the **Create index pattern** button to complete creation of the index pattern. - +  -8. Confirm that `scope`, `type`, `app_id`, `level`, etc are being indexed. +8. The newly created index pattern should be shown. Confirm that the fields of interest such as `scope`, `type`, `app_id`, `level`, etc. are being indexed by using the search box in the **Fields** tab. -> Note: if you cannot find the indexed field, please wait. it depends on the volume of data and resource size where elastic search is running. + > Note: If you cannot find the indexed field, please wait. The time it takes to search across all indexed fields depends on the volume of data and size of the resource that the elastic search is running on. - +  -9. Click `discover` icon and search `scope:*` +9. To explore the indexed data, expand the drop-down menu and click **Analytics → Discover**. -> Note: it would take some time to make log searchable based on the data volume and resource. +  - +10. In the search box, type in a query string such as `scope:*` and click the **Refresh** button to view the results. + + > Note: This can take a long time. The time it takes to return all results depends on the volume of data and size of the resource that the elastic search is running on. 
+ +  ## References diff --git a/daprdocs/content/en/operations/monitoring/logging/newrelic.md b/daprdocs/content/en/operations/monitoring/logging/newrelic.md index cb0ab5d2b..b7885bc7c 100644 --- a/daprdocs/content/en/operations/monitoring/logging/newrelic.md +++ b/daprdocs/content/en/operations/monitoring/logging/newrelic.md @@ -24,7 +24,7 @@ This document explains how to install it in your cluster, either using a Helm ch 2. Add the New Relic official Helm chart repository following these instructions -3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/install-new-relic/account-setup/license-key): +3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key/): - Helm 3 ```bash @@ -74,5 +74,5 @@ By default, tailing is set to /var/log/containers/*.log. 
To change this setting, * [New Relic Account Signup](https://newrelic.com/signup) * [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform) * [New Relic Logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) -* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys) +* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/) * [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence) diff --git a/daprdocs/content/en/operations/monitoring/metrics/grafana.md b/daprdocs/content/en/operations/monitoring/metrics/grafana.md index 0b64a7316..509adb3b3 100644 --- a/daprdocs/content/en/operations/monitoring/metrics/grafana.md +++ b/daprdocs/content/en/operations/monitoring/metrics/grafana.md @@ -173,4 +173,7 @@ First you need to connect Prometheus as a data source to Grafana. * [Supported Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) ## Example + + \ No newline at end of file diff --git a/daprdocs/content/en/operations/monitoring/metrics/newrelic.md b/daprdocs/content/en/operations/monitoring/metrics/newrelic.md index 567d7dcd1..219c92306 100644 --- a/daprdocs/content/en/operations/monitoring/metrics/newrelic.md +++ b/daprdocs/content/en/operations/monitoring/metrics/newrelic.md @@ -22,7 +22,7 @@ This document explains how to install it in your cluster, either using a Helm ch 2. Add the New Relic official Helm chart repository following [these instructions](https://github.com/newrelic/helm-charts/blob/master/README.md#installing-charts) -3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/install-new-relic/account-setup/license-key): +3. 
Run the following command to install the New Relic Prometheus OpenMetrics integration via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key): ```bash helm install nri-prometheus newrelic/nri-prometheus --set licenseKey=YOUR_LICENSE_KEY @@ -39,5 +39,5 @@ This document explains how to install it in your cluster, either using a Helm ch * [New Relic Account Signup](https://newrelic.com/signup) * [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform) * [New Relic Prometheus OpenMetrics Integration](https://github.com/newrelic/helm-charts/tree/master/charts/nri-prometheus) -* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys) +* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/) * [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence) diff --git a/daprdocs/content/en/operations/monitoring/metrics/prometheus.md b/daprdocs/content/en/operations/monitoring/metrics/prometheus.md index 0318f3235..b81f1f539 100644 --- a/daprdocs/content/en/operations/monitoring/metrics/prometheus.md +++ b/daprdocs/content/en/operations/monitoring/metrics/prometheus.md @@ -111,9 +111,12 @@ dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 ``` ## Example - + + ## References * [Prometheus Installation](https://github.com/prometheus-community/helm-charts) -* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/) +* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/) \ No newline at end of file diff --git a/daprdocs/content/en/operations/monitoring/tracing/supported-tracing-backends/newrelic.md b/daprdocs/content/en/operations/monitoring/tracing/supported-tracing-backends/newrelic.md index d6ddd905d..9b98303d1 --- 
a/daprdocs/content/en/operations/monitoring/tracing/supported-tracing-backends/newrelic.md +++ b/daprdocs/content/en/operations/monitoring/tracing/supported-tracing-backends/newrelic.md @@ -14,7 +14,7 @@ description: "Set-up New Relic for distributed tracing" Dapr natively captures metrics and traces that can be sent directly to New Relic. The easiest way to export these is by configuring Dapr to send the traces to [New Relic's Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/report-zipkin-format-traces-trace-api/) using the Zipkin trace format. -In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key). +In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#insights-insert-key). ```yaml apiVersion: dapr.io/v1alpha1 @@ -39,7 +39,7 @@ New Relic Distributed Tracing details ## (optional) New Relic Instrumentation -In order for the integrations to send data to New Relic Telemetry Data Platform, you either need a [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key) or [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key). +In order for the integrations to send data to New Relic Telemetry Data Platform, you either need a [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key) or [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#insights-insert-key). 
### OpenTelemetry instrumentation @@ -47,13 +47,13 @@ Leverage the different language specific OpenTelemetry implementations, for exam ### New Relic Language agent -Similarly to the OpenTelemetry instrumentation, you can also leverage a New Relic language agent. As an example, the [New Relic agent instrumentation for .NET Core](https://docs.newrelic.com/docs/agents/net-agent/installation/install-docker-container) is part of the Dockerfile. See example [here](https://github.com/harrykimpel/quickstarts/blob/master/distributed-calculator/csharp/Dockerfile). +Similarly to the OpenTelemetry instrumentation, you can also leverage a New Relic language agent. As an example, the [New Relic agent instrumentation for .NET Core](https://docs.newrelic.com/docs/agents/net-agent/other-installation/install-net-agent-docker-container) is part of the Dockerfile. See example [here](https://github.com/harrykimpel/quickstarts/blob/master/distributed-calculator/csharp/Dockerfile). ## (optional) Enable New Relic Kubernetes integration In case Dapr and your applications run in the context of a Kubernetes environment, you can enable additional metrics and logs. -The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/new-relic-prometheus-openmetrics-integration-kubernetes), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs). 
+The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/send-prometheus-metric-data-new-relic/), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs). ### New Relic Kubernetes Cluster Explorer @@ -107,8 +107,8 @@ All the data that is collected from Dapr, Kubernetes or any services that run on * [New Relic Account Signup](https://newrelic.com/signup) * [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform) -* [Distributed Tracing](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/get-started/introduction-distributed-tracing) +* [Distributed Tracing](https://docs.newrelic.com/docs/distributed-tracing/concepts/introduction-distributed-tracing/) * [New Relic Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/introduction-trace-api/) -* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys) +* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/) * [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/) * [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence) diff --git a/daprdocs/static/docs/fluentd-dapr-with-rbac.yaml b/daprdocs/static/docs/fluentd-dapr-with-rbac.yaml index d6c06ff5a..f6b12fae5 100644 --- 
a/daprdocs/static/docs/fluentd-dapr-with-rbac.yaml +++ b/daprdocs/static/docs/fluentd-dapr-with-rbac.yaml @@ -6,7 +6,7 @@ metadata: namespace: kube-system --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: fluentd @@ -24,7 +24,7 @@ rules: --- kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 metadata: name: fluentd namespace: default diff --git a/daprdocs/static/images/kibana-1.png b/daprdocs/static/images/kibana-1.png index 2ff76b0d0..738e88b53 100644 Binary files a/daprdocs/static/images/kibana-1.png and b/daprdocs/static/images/kibana-1.png differ diff --git a/daprdocs/static/images/kibana-2.png b/daprdocs/static/images/kibana-2.png index eb1b0835d..e2cbd3bf3 100644 Binary files a/daprdocs/static/images/kibana-2.png and b/daprdocs/static/images/kibana-2.png differ diff --git a/daprdocs/static/images/kibana-3.png b/daprdocs/static/images/kibana-3.png index 151de4a99..fa4184e6a 100644 Binary files a/daprdocs/static/images/kibana-3.png and b/daprdocs/static/images/kibana-3.png differ diff --git a/daprdocs/static/images/kibana-4.png b/daprdocs/static/images/kibana-4.png index c11a7a9a6..600be9c0a 100644 Binary files a/daprdocs/static/images/kibana-4.png and b/daprdocs/static/images/kibana-4.png differ diff --git a/daprdocs/static/images/kibana-5.png b/daprdocs/static/images/kibana-5.png index b29ad2749..c04128dd1 100644 Binary files a/daprdocs/static/images/kibana-5.png and b/daprdocs/static/images/kibana-5.png differ diff --git a/daprdocs/static/images/kibana-6.png b/daprdocs/static/images/kibana-6.png index 09b8bbb4d..377e23742 100644 Binary files a/daprdocs/static/images/kibana-6.png and b/daprdocs/static/images/kibana-6.png differ diff --git a/daprdocs/static/images/kibana-7.png b/daprdocs/static/images/kibana-7.png index d8c7cd7f3..7d1c5081f 100644 Binary files a/daprdocs/static/images/kibana-7.png and 
b/daprdocs/static/images/kibana-7.png differ diff --git a/daprdocs/static/images/kibana-8.png b/daprdocs/static/images/kibana-8.png new file mode 100644 index 000000000..c2c9cc34d Binary files /dev/null and b/daprdocs/static/images/kibana-8.png differ
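Reviewer note: the comment-analyzer script added in `dapr-bot.yml` boils down to one guard condition — act only on plain issues (not PRs), only when the comment starts with `/assign`, and only when the issue has no assignees yet. A minimal sketch of that decision logic, outside the workflow, is shown below; `shouldAssign` is a hypothetical helper name for illustration and is not part of the workflow, which runs inside `actions/github-script` with the real `github` client.

```javascript
// Illustrative sketch of the /assign guard in dapr-bot.yml (not workflow code).
// `payload` mirrors the shape of the GitHub issue_comment webhook payload.
function shouldAssign(payload) {
  const isFromPulls = !!payload.issue.pull_request;      // comments on PRs are ignored
  const body = payload.comment && payload.comment.body;  // the comment text
  const unassigned =
    !payload.issue.assignees || payload.issue.assignees.length === 0;
  // Act only on plain issues, "/assign" commands, and issues with no assignee.
  return !isFromPulls && !!body && body.indexOf("/assign") === 0 && unassigned;
}

// Example payloads:
console.log(shouldAssign({ issue: { assignees: [] }, comment: { body: "/assign" } }));
// → true: plain issue, /assign command, no assignees
console.log(shouldAssign({ issue: { pull_request: {}, assignees: [] }, comment: { body: "/assign" } }));
// → false: the comment is on a pull request
```

One design point worth noting in review: because the script returns early after assigning, a `/assign` on an already-assigned issue is silently ignored rather than reported back to the commenter.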