mirror of https://github.com/dapr/docs.git
Integrating final readme files from OLD
This commit is contained in:
parent 515d26d35a
commit 3fcb15a93a
@@ -1,25 +0,0 @@
---
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  metadata:
  - name: enabled
    value: "true"
  - name: agentEndpoint
    value: dapr-localforwarder.default.svc.cluster.local:55678
---
@@ -1,44 +0,0 @@
kind: Service
apiVersion: v1
metadata:
  name: dapr-localforwarder
  namespace: default
  labels:
    app: dapr-localforwarder
spec:
  selector:
    app: dapr-localforwarder
  ports:
  - protocol: TCP
    port: 55678
    targetPort: 55678
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dapr-localforwarder
  namespace: default
  labels:
    app: dapr-localforwarder
spec:
  replicas: 3 # Adjust replica # based on your telemetry volume
  selector:
    matchLabels:
      app: dapr-localforwarder
  template:
    metadata:
      labels:
        app: dapr-localforwarder
    spec:
      containers:
      - name: dapr-localforwarder
        image: docker.io/daprio/dapr-localforwarder:latest
        ports:
        - containerPort: 55678
        imagePullPolicy: Always
        env:
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
        - name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
          value: <APPINSIGHT API KEY> # Replace with your generated api key
@@ -0,0 +1,7 @@
---
title: "W3C trace context"
linkTitle: "W3C trace context"
weight: 1000
description: Background and scenarios for using W3C tracing with Dapr
type: docs
---
@@ -1,3 +1,11 @@
---
title: "How-To: Use W3C trace context with Dapr"
linkTitle: "Overview"
weight: 20000
description: Using W3C tracing standard with Dapr
type: docs
---

# How to use trace context
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does all the heavy lifting of generating and propagating the trace context information; there are very few cases where you need to either propagate or create a trace context yourself. First, read the scenarios in the [W3C trace context for distributed tracing](../../concepts/observability/W3C-traces.md) article to understand whether you need to propagate or create a trace context.
@@ -1,8 +1,9 @@
---
title: "W3C trace context for distributed tracing"
linkTitle: "W3C Traces"
weight: 2000
description: Using W3C tracing standard with Dapr
title: "W3C trace context overview"
linkTitle: "Overview"
weight: 10000
description: Background and scenarios for using W3C tracing with Dapr
type: docs
---

## Introduction
@@ -1,7 +1,7 @@
---
title: "Autoscaling a Dapr app with KEDA"
linkTitle: "Autoscale"
weight: 3000
weight: 2000
---

Dapr, with its modular building-block approach and 10+ different [pub/sub components](../../concepts/publish-subscribe-messaging), makes it easy to write message processing applications. Since Dapr can run in many environments (e.g. VM, bare metal, cloud, or edge), the autoscaling of Dapr applications is managed by the hosting layer.
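In Kubernetes, that hosting-layer autoscaling is commonly handled by [KEDA](https://github.com/kedacore/keda). The following is a hedged sketch only: the deployment name, broker address, topic, and consumer group are placeholders rather than values from this commit, and KEDA's apiVersion has changed across releases. A `ScaledObject` that scales a Dapr-enabled subscriber on Kafka consumer lag could look roughly like this:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: subscriber-scaler            # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    name: my-subscriber              # hypothetical Dapr-enabled deployment
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka.default.svc.cluster.local:9092   # placeholder broker address
      consumerGroup: my-subscriber                             # placeholder consumer group
      topic: orders                                            # placeholder topic
      lagThreshold: "5"
```

Because the Dapr sidecar is injected per pod, each replica that KEDA adds gets its own sidecar and shares the same consumer group.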
@@ -1,12 +1,14 @@
---
title: "Guide: Use gRPC Interface"
linkTitle: "Referencing Secrets"
weight: 5000
title: "Dapr's gRPC Interface"
linkTitle: "gRPC"
weight: 1000
description: "Use the Dapr gRPC API in your application"
type: docs
---

# Dapr and gRPC

Dapr implements both an HTTP and a gRPC API for local calls.gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients.
Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients.

You can find a list of auto-generated clients [here](https://github.com/dapr/docs#sdks).
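In Kubernetes, the sidecar is told to talk to a gRPC app through Dapr's pod annotations. The snippet below is a hedged sketch and not part of this commit: the app name, image, and port are placeholders, and the exact annotation names have varied across Dapr releases.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-grpc-app                  # hypothetical app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-grpc-app
  template:
    metadata:
      labels:
        app: my-grpc-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "my-grpc-app"
        dapr.io/app-port: "50051"
        dapr.io/app-protocol: "grpc"   # ask the sidecar to call the app over gRPC
    spec:
      containers:
      - name: my-grpc-app
        image: example.com/my-grpc-app:latest   # placeholder image
        ports:
        - containerPort: 50051
```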
@@ -2,24 +2,6 @@
title: "Getting started with Dapr"
linkTitle: "Getting started"
weight: 20
description: "Get up and running with Dapr to start Daperizing your apps"
---

Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, and embraces the diversity of languages and developer frameworks.

## Core concepts

* **Building blocks** are a collection of components that implement distributed system capabilities, such as pub/sub, state management, resource bindings, and distributed tracing.

* **Components** encapsulate the implementation for a building block API. Example implementations for the state building block may include Redis, Azure Storage, Azure Cosmos DB, and AWS DynamoDB. Many of the components are pluggable so that one implementation can be swapped out for another.

To learn more, see [Dapr Concepts](/docs/concepts).

## Set up the development environment

Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally and on Kubernetes](/docs/concepts/getting-started/install-dapr).

## Next steps

1. Once Dapr is installed, continue to the [Hello World quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world).
2. Explore additional [quickstarts](https://github.com/dapr/quickstarts) for more advanced concepts, such as service invocation, pub/sub, and state management.
description: "Get up and running with Dapr"
type: docs
---
@@ -1,3 +1,11 @@
---
title: "How-To: Setup Redis"
linkTitle: "How-To: Setup Redis"
weight: 30
description: "Configure Redis for Dapr state management or Pub/Sub"
type: docs
---

# Configure Redis for state management or pub/sub

Dapr can use Redis in two ways:
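As the title and description above indicate, those two ways are as a state store and as a pub/sub broker. For the first, a minimal Redis state store component looks roughly like the following sketch; the component name, host, and password values are placeholders:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # placeholder component name
  namespace: default
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379     # placeholder Redis address
  - name: redisPassword
    value: ""                 # placeholder password
```

A pub/sub component is declared the same way, using the `pubsub.redis` type instead.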
@@ -0,0 +1,25 @@
---
title: "Getting started guide"
linkTitle: "Guide"
weight: 10
description: "Instructions for getting started with Dapr"
---

Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, and embraces the diversity of languages and developer frameworks.

## Core concepts

* **Building blocks** are a collection of components that implement distributed system capabilities, such as pub/sub, state management, resource bindings, and distributed tracing.

* **Components** encapsulate the implementation for a building block API. Example implementations for the state building block may include Redis, Azure Storage, Azure Cosmos DB, and AWS DynamoDB. Many of the components are pluggable so that one implementation can be swapped out for another.

To learn more, see [Dapr Concepts](/docs/concepts).

## Set up the development environment

Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally and on Kubernetes](/docs/concepts/getting-started/install-dapr).

## Next steps

1. Once Dapr is installed, continue to the [Hello World quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world).
2. Explore additional [quickstarts](https://github.com/dapr/quickstarts) for more advanced concepts, such as service invocation, pub/sub, and state management.
@@ -1,9 +1,9 @@
---
title: "Setup Dapr Environment"
linkTitle: "Install Dapr"
weight: 10
description: >
  How to setup Dapr in a local environment or in a Kubernetes cluster
title: "How-To: Setup Dapr environment"
linkTitle: "How-To: Setup environment"
weight: 20
description: "Setup Dapr in a local environment or in a Kubernetes cluster"
type: docs
---

Dapr can be run in either self-hosted or Kubernetes modes. Running the Dapr runtime in self-hosted mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other Dapr-supported environments. For example, you can develop Dapr applications in self-hosted mode and then deploy them to any Kubernetes cluster.
@@ -0,0 +1,7 @@
---
title: "Bindings components"
linkTitle: "Bindings"
description: "Guidance on setting up Dapr bindings components"
weight: 4000
type: docs
---
@@ -1,4 +1,10 @@
# How to track RethinkDB state store changes
---
title: "RethinkDB binding"
linkTitle: "RethinkDB"
description: "Use bindings to RethinkDB for tracking state store changes"
weight: 4000
type: docs
---

The RethinkDB state store supports transactions, which means it can be used to support Dapr actors. Dapr persists only the actor's current state, which doesn't allow users to track how an actor's state may have changed over time.
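The doc above introduces a RethinkDB binding for surfacing those state changes. Purely as a hedged illustration — the component type and the metadata field names below are assumptions and are not confirmed by this commit — such a binding component might be declared like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: changes                            # hypothetical binding name
  namespace: default
spec:
  type: bindings.rethinkdb.statechange     # assumed type name for the state-change binding
  metadata:
  - name: address
    value: localhost:28015                 # assumed RethinkDB address field
  - name: database
    value: dapr                            # assumed database field
```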
@@ -2,6 +2,6 @@
title: "Secret store components"
linkTitle: "Secret stores"
description: "Guidance on setting up different secret store components"
weight: 2000
weight: 3000
type: docs
---
@@ -1,4 +1,10 @@
# Set up Application Insights for distributed tracing
---
title: "Set up Application Insights for distributed tracing"
linkTitle: "Application Insights"
weight: 3000
description: "Enable Application Insights to visualize Dapr tracing and application map"
type: docs
---

Dapr integrates with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).
@@ -78,7 +84,55 @@ dapr run --app-id mynode --app-port 3000 --config ./components/tracing.yaml node

#### Kubernetes environment

1. Download [dapr-localforwarder.yaml](./localforwarder/dapr-localforwarder.yaml)
1. Create a file named `dapr-localforwarder.yaml` with the following contents:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: dapr-localforwarder
  namespace: default
  labels:
    app: dapr-localforwarder
spec:
  selector:
    app: dapr-localforwarder
  ports:
  - protocol: TCP
    port: 55678
    targetPort: 55678
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dapr-localforwarder
  namespace: default
  labels:
    app: dapr-localforwarder
spec:
  replicas: 3 # Adjust replica # based on your telemetry volume
  selector:
    matchLabels:
      app: dapr-localforwarder
  template:
    metadata:
      labels:
        app: dapr-localforwarder
    spec:
      containers:
      - name: dapr-localforwarder
        image: docker.io/daprio/dapr-localforwarder:latest
        ports:
        - containerPort: 55678
        imagePullPolicy: Always
        env:
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
        - name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
          value: <APPINSIGHT API KEY> # Replace with your generated api key
```

2. Replace `<APPINSIGHT INSTRUMENTATIONKEY>` with your Instrumentation Key and `<APPINSIGHT API KEY>` with the generated key in the file

```yaml
@@ -1,4 +1,10 @@
# Set up Zipkin for distributed tracing
---
title: "Set up Zipkin for distributed tracing"
linkTitle: "Zipkin"
weight: 3000
description: "Set up Zipkin for distributed tracing"
type: docs
---

- [Configure self hosted mode](#Configure-self-hosted-mode)
- [Configure Kubernetes](#Configure-Kubernetes)
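For orientation, exporting Dapr traces to Zipkin follows the same exporter-component pattern as the native exporter at the top of this commit. The sketch below is an assumption-laden illustration: the `exporters.zipkin` type, the `exporterAddress` field, and the Zipkin endpoint are inferred by analogy and are not taken from this commit.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zipkin                # placeholder component name
  namespace: default
spec:
  type: exporters.zipkin      # assumed exporter type, by analogy with exporters.native above
  metadata:
  - name: enabled
    value: "true"
  - name: exporterAddress
    value: http://zipkin.default.svc.cluster.local:9411/api/v2/spans   # assumed Zipkin collector endpoint
```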