Merge branch 'v1.12' into issue_3567

This commit is contained in:
Hannah Hunter 2023-09-18 14:58:14 -04:00 committed by GitHub
commit f476d575d9
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
28 changed files with 716 additions and 545 deletions

View File

@@ -176,7 +176,7 @@ Below are the supported parameters for VS Code tasks. These parameters are equiv
| `appProtocol` | Tells Dapr which protocol your application is using. Valid options are `http`, `grpc`, `https`, `grpcs`, `h2c`. Default is `http`. | No | `"appProtocol": "http"`
| `args` | Sets a list of arguments to pass on to the Dapr app | No | `"args": []`
| `componentsPath` | Path for components directory. If empty, components will not be loaded. | No | `"componentsPath": "./components"`
| `config` | Tells Dapr which Configuration CRD to use | No | `"config": "./config"`
| `config` | Tells Dapr which Configuration resource to use | No | `"config": "./config"`
| `controlPlaneAddress` | Address for a Dapr control plane | No | `"controlPlaneAddress": "http://localhost:1366/"`
| `enableProfiling` | Enable profiling | No | `"enableProfiling": false`
| `enableMtls` | Enables automatic mTLS for daprd to daprd communication channels | No | `"enableMtls": false`
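Taken together, a `tasks.json` entry combining several of the parameters above might look like the following sketch. This assumes the `daprd` task type described in this guide; the `appId`, port, and path values are illustrative:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "daprd",
      "label": "daprd-debug",
      "appId": "myapp",
      "appProtocol": "http",
      "appPort": 3000,
      "componentsPath": "./components",
      "config": "./config",
      "enableProfiling": false,
      "enableMtls": false
    }
  ]
}
```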

View File

@@ -18,13 +18,13 @@ A Dapr sidecar can also apply a configuration by using a `--config` flag to the
#### Kubernetes sidecar
In Kubernetes mode the Dapr configuration is a Configuration CRD, that is applied to the cluster. For example:
In Kubernetes mode, the Dapr configuration is a Configuration resource that is applied to the cluster. For example:
```bash
kubectl apply -f myappconfig.yaml
```
You can use the Dapr CLI to list the Configuration CRDs
You can use the Dapr CLI to list the Configuration resources
```bash
dapr configurations -k
@@ -269,11 +269,11 @@ spec:
action: allow
```
## Control-plane configuration
## Control plane configuration
There is a single configuration file called `daprsystem` installed with the Dapr control plane system services that applies global settings. This is only set up when Dapr is deployed to Kubernetes.
### Control-plane configuration settings
### Control plane configuration settings
A Dapr control plane configuration contains the following sections:
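As a hedged illustration of such a configuration (the field values shown are examples, not guaranteed defaults), a `daprsystem` Configuration with an `mtls` section could look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "24h"
    allowedClockSkew: "15m"
```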

View File

@@ -3,12 +3,12 @@ type: docs
title: "How-To: Limit the secrets that can be read from secret stores"
linkTitle: "Limit secret store access"
weight: 3000
description: "To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration CRD with restrictive permissions."
description: "To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting the existing configuration resource with restrictive permissions."
---
In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components]({{< ref "component-scopes.md">}})), a named secret store component itself can be scoped to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to access only specific secrets.
Follow [these instructions]({{< ref "configuration-overview.md" >}}) to define a configuration CRD.
Follow [these instructions]({{< ref "configuration-overview.md" >}}) to define a configuration resource.
## Configure secrets access
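For example, a Configuration resource that denies access to every secret in a store named `vault` except two named secrets might look like this sketch (the store and secret names are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      - storeName: vault
        defaultAccess: deny
        allowedSecrets: ["secret1", "secret2"]
```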

View File

@@ -1,56 +1,63 @@
---
type: docs
title: "Setup an Azure Kubernetes Service (AKS) cluster"
title: "Set up an Azure Kubernetes Service (AKS) cluster"
linkTitle: "Azure Kubernetes Service (AKS)"
weight: 2000
description: >
How to setup Dapr on an Azure Kubernetes Cluster.
Learn how to set up an Azure Kubernetes Cluster
---
# Set up an Azure Kubernetes Service cluster
This guide walks you through installing an Azure Kubernetes Service (AKS) cluster. If you need more information, refer to [Quickstart: Deploy an AKS cluster using the Azure CLI](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough)
## Prerequisites
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest)
- Install:
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
## Deploy an Azure Kubernetes Service cluster
## Deploy an AKS cluster
This guide walks you through installing an Azure Kubernetes Service cluster. If you need more information, refer to [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI](https://docs.microsoft.com/azure/aks/kubernetes-walkthrough)
1. In the terminal, log into Azure.
1. Login to Azure
```bash
az login
```
```bash
az login
```
1. Set your default subscription:
2. Set the default subscription
```bash
az account set -s [your_subscription_id]
```
```bash
az account set -s [your_subscription_id]
```
1. Create a resource group.
3. Create a resource group
```bash
az group create --name [your_resource_group] --location [region]
```
```bash
az group create --name [your_resource_group] --location [region]
```
1. Create an AKS cluster. To use a specific version of Kubernetes, use `--kubernetes-version` (1.13.x or newer version required).
4. Create an Azure Kubernetes Service cluster
```bash
az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --node-count 2 --enable-addons http_application_routing --generate-ssh-keys
```
> **Note:** To use a specific version of Kubernetes use `--kubernetes-version` (1.13.x or newer version required)
1. Get the access credentials for the AKS cluster.
```bash
az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --node-count 2 --enable-addons http_application_routing --generate-ssh-keys
```
```bash
az aks get-credentials -n [your_aks_cluster_name] -g [your_resource_group]
```
5. Get the access credentials for the Azure Kubernetes cluster
## AKS Edge Essentials
To create a single-machine K8s/K3s Linux-only cluster using Azure Kubernetes Service (AKS) Edge Essentials, you can follow the quickstart guide available at [AKS Edge Essentials quickstart guide](https://learn.microsoft.com/azure/aks/hybrid/aks-edge-quickstart).
```bash
az aks get-credentials -n [your_aks_cluster_name] -g [your_resource_group]
```
{{% alert title="Note" color="primary" %}}
AKS Edge Essentials does not come with a default storage class, which may cause issues when deploying Dapr. To avoid this, make sure to enable the **local-path-provisioner** storage class on the cluster before deploying Dapr. If you need more information, refer to [Local Path Provisioner on AKS EE](https://learn.microsoft.com/azure/aks/hybrid/aks-edge-howto-use-storage-local-path).
{{% /alert %}}
## Next steps
## Related links
{{< button text="Install Dapr using the AKS Dapr extension >>" page="azure-kubernetes-service-extension" >}}
- Learn more about [the Dapr extension for AKS]({{< ref azure-kubernetes-service-extension >}})
- [Install the Dapr extension for AKS](https://learn.microsoft.com/azure/aks/dapr)
- [Configure the Dapr extension for AKS](https://learn.microsoft.com/azure/aks/dapr-settings)
- [Deploy and run workflows with the Dapr extension for AKS](https://learn.microsoft.com/azure/aks/dapr-workflow)

View File

@@ -1,55 +1,86 @@
---
type: docs
title: "Setup a Google Kubernetes Engine (GKE) cluster"
title: "Set up a Google Kubernetes Engine (GKE) cluster"
linkTitle: "Google Kubernetes Engine (GKE)"
weight: 3000
description: "Setup a Google Kubernetes Engine cluster"
description: "Set up a Google Kubernetes Engine cluster"
---
### Prerequisites
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Google Cloud SDK](https://cloud.google.com/sdk)
- Install:
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Google Cloud SDK](https://cloud.google.com/sdk)
## Create a new cluster
Create a GKE cluster by running the following:
```bash
$ gcloud services enable container.googleapis.com && \
gcloud container clusters create $CLUSTER_NAME \
--zone $ZONE \
--project $PROJECT_ID
```
For more options refer to the [Google Cloud SDK docs](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create), or instead create a cluster through the [Cloud Console](https://console.cloud.google.com/kubernetes) for a more interactive experience.
For more options:
- Refer to the [Google Cloud SDK docs](https://cloud.google.com/sdk/gcloud/reference/container/clusters/create).
- Create a cluster through the [Cloud Console](https://console.cloud.google.com/kubernetes) for a more interactive experience.
{{% alert title="For private GKE clusters" color="warning" %}}
Sidecar injection will not work for private clusters without extra steps. An automatically created firewall rule for master access does not open port 4000. This is needed for Dapr sidecar injection.
## Sidecar injection for private GKE clusters
_**Sidecar injection for private clusters requires extra steps.**_
In private GKE clusters, an automatically created firewall rule for master access doesn't open port 4000, which Dapr needs for sidecar injection.
Review the relevant firewall rule:
To review the relevant firewall rule:
```bash
$ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"
```
To replace the existing rule and allow kubernetes master access to port 4000:
Replace the existing rule and allow Kubernetes master access to port 4000:
```bash
$ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:4000
```
{{% /alert %}}
## Retrieve your credentials for `kubectl`
Run the following command to retrieve your credentials:
```bash
$ gcloud container clusters get-credentials $CLUSTER_NAME \
--zone $ZONE \
--project $PROJECT_ID
```
## (optional) Install Helm v3
## Install Helm v3 (optional)
1. [Install Helm v3 client](https://helm.sh/docs/intro/install/)
If you are using Helm, install the [Helm v3 client](https://helm.sh/docs/intro/install/).
> **Note:** The latest Dapr helm chart no longer supports Helm v2. Please migrate from helm v2 to helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
{{% alert title="Important" color="warning" %}}
The latest Dapr Helm chart no longer supports Helm v2. [Migrate from Helm v2 to Helm v3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
{{% /alert %}}
2. In case you need permissions the kubernetes dashboard (i.e. configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default", etc.) execute this command
## Troubleshooting
### Kubernetes dashboard permissions
Let's say you receive an error message similar to the following:
```
configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"
```
Execute this command:
```bash
kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
```
## Related links
- [Learn more about GKE clusters](https://cloud.google.com/kubernetes-engine/docs)
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})
- Learn how to [deploy Dapr on your cluster]({{< ref kubernetes-deploy.md >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})
- [Kubernetes production guidelines]({{< ref kubernetes-production.md >}})

View File

@@ -4,108 +4,117 @@ title: "Set up a KiND cluster"
linkTitle: "KiND"
weight: 1100
description: >
How to set up Dapr on a KiND cluster.
How to set up a KiND cluster
---
# Set up a KiND cluster
## Prerequisites
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
> Note: For Windows, enable Virtualization in BIOS and [install Hyper-V](https://docs.microsoft.com/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v)
- Install:
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- For Windows:
- Enable Virtualization in BIOS
- [Install Hyper-V](https://docs.microsoft.com/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v)
## Install and configure KiND
Make sure you follow one of the [Installation](https://kind.sigs.k8s.io/docs/user/quick-start) options for KiND.
[Refer to the KiND documentation to install.](https://kind.sigs.k8s.io/docs/user/quick-start)
In case you are using Docker Desktop, double-check that you have performed the recommended [settings](https://kind.sigs.k8s.io/docs/user/quick-start#settings-for-docker-desktop) (4 CPUs and 8 GiB of RAM available to Docker Engine).
If you are using Docker Desktop, verify that you have [the recommended settings](https://kind.sigs.k8s.io/docs/user/quick-start#settings-for-docker-desktop).
## Configure and create the KiND cluster
1. Create a file named `kind-cluster-config.yaml`, and paste the following:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 8081
protocol: TCP
- containerPort: 443
hostPort: 8443
protocol: TCP
- role: worker
- role: worker
```
This is going to request KiND to spin up a kubernetes cluster comprised of a control plane and two worker nodes. It also allows for future setup of ingresses and exposes container ports to the host machine.
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 8081
protocol: TCP
- containerPort: 443
hostPort: 8443
protocol: TCP
- role: worker
- role: worker
```
2. Run the `kind create cluster` providing the cluster configuration file:
This cluster configuration:
- Requests KiND to spin up a Kubernetes cluster comprising a control plane and two worker nodes.
- Allows for future setup of ingresses.
- Exposes container ports to the host machine.
```bash
kind create cluster --config kind-cluster-config.yaml
```
1. Run the `kind create cluster` command, providing the cluster configuration file:
Wait until the cluster is created, the output should look like this:
```bash
kind create cluster --config kind-cluster-config.yaml
```
```md
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
**Expected output**
kubectl cluster-info --context kind-kind
```md
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
```
Thanks for using kind! 😊
```
## Initialize and run Dapr
## Dapr
1. Initialize Dapr in Kubernetes.
1. Initialize Dapr:
```bash
dapr init --kubernetes
```
```bash
dapr init --kubernetes
```
Once Dapr finishes initializing its core components are ready to be used on the cluster.
Once Dapr finishes initializing, you can use its core components on the cluster.
To verify the status of these components run:
```bash
dapr status -k
```
the output should look like this:
1. Verify the status of the Dapr components:
```md
NAME NAMESPACE HEALTHY STATUS REPLICAS VERSION AGE CREATED
dapr-sentry dapr-system True Running 1 1.5.1 53s 2021-12-10 09:27.17
dapr-operator dapr-system True Running 1 1.5.1 53s 2021-12-10 09:27.17
dapr-sidecar-injector dapr-system True Running 1 1.5.1 53s 2021-12-10 09:27.17
dapr-dashboard dapr-system True Running 1 0.9.0 53s 2021-12-10 09:27.17
dapr-placement-server dapr-system True Running 1 1.5.1 52s 2021-12-10 09:27.18
```
```bash
dapr status -k
```
2. Forward a port to [Dapr dashboard](https://docs.dapr.io/reference/cli/dapr-dashboard/):
**Expected output**
```bash
dapr dashboard -k -p 9999
```
```md
NAME NAMESPACE HEALTHY STATUS REPLICAS VERSION AGE CREATED
dapr-sentry dapr-system True Running 1 1.5.1 53s 2021-12-10 09:27.17
dapr-operator dapr-system True Running 1 1.5.1 53s 2021-12-10 09:27.17
dapr-sidecar-injector dapr-system True Running 1 1.5.1 53s 2021-12-10 09:27.17
dapr-dashboard dapr-system True Running 1 0.9.0 53s 2021-12-10 09:27.17
dapr-placement-server dapr-system True Running 1 1.5.1 52s 2021-12-10 09:27.18
```
So that you can validate that the setup finished successfully by navigating to `http://localhost:9999`.
1. Forward a port to [Dapr dashboard](https://docs.dapr.io/reference/cli/dapr-dashboard/):
## Next steps
```bash
dapr dashboard -k -p 9999
```
1. Navigate to `http://localhost:9999` to validate a successful setup.
## Related links
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})
- Learn how to [deploy Dapr on your cluster]({{< ref kubernetes-deploy.md >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})
- [Kubernetes production guidelines]({{< ref kubernetes-production.md >}})

View File

@@ -1,60 +1,63 @@
---
type: docs
title: "Setup an Minikube cluster"
title: "Set up a Minikube cluster"
linkTitle: "Minikube"
weight: 1000
description: >
How to setup Dapr on a Minikube cluster.
How to set up a Minikube cluster
---
# Set up a Minikube cluster
## Prerequisites
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Minikube](https://minikube.sigs.k8s.io/docs/start/)
- Install:
- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Minikube](https://minikube.sigs.k8s.io/docs/start/)
- For Windows:
- Enable Virtualization in BIOS
- [Install Hyper-V](https://docs.microsoft.com/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v)
> Note: For Windows, enable Virtualization in BIOS and [install Hyper-V](https://docs.microsoft.com/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v)
{{% alert title="Note" color="primary" %}}
See [the official Minikube documentation on drivers](https://minikube.sigs.k8s.io/docs/reference/drivers/) for details on supported drivers and how to install plugins.
{{% /alert %}}
## Start the Minikube cluster
1. (optional) Set the default VM driver
1. If applicable for your project, set the default VM driver.
```bash
minikube config set vm-driver [driver_name]
```
```bash
minikube config set vm-driver [driver_name]
```
> Note: See [DRIVERS](https://minikube.sigs.k8s.io/docs/reference/drivers/) for details on supported drivers and how to install plugins.
1. Start the cluster. If necessary, specify version 1.13.x or newer of Kubernetes with `--kubernetes-version`
2. Start the cluster
Use 1.13.x or newer version of Kubernetes with `--kubernetes-version`
```bash
minikube start --cpus=4 --memory=4096
```
```bash
minikube start --cpus=4 --memory=4096
```
1. Enable the Minikube dashboard and ingress add-ons.
3. Enable dashboard and ingress addons
```bash
# Enable dashboard
minikube addons enable dashboard
# Enable ingress
minikube addons enable ingress
```
```bash
# Enable dashboard
minikube addons enable dashboard
## Install Helm v3 (optional)
# Enable ingress
minikube addons enable ingress
```
If you are using Helm, install the [Helm v3 client](https://helm.sh/docs/intro/install/).
## (optional) Install Helm v3
{{% alert title="Important" color="warning" %}}
The latest Dapr Helm chart no longer supports Helm v2. [Migrate from Helm v2 to Helm v3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
{{% /alert %}}
1. [Install Helm v3 client](https://helm.sh/docs/intro/install/)
## Troubleshooting
> **Note:** The latest Dapr helm chart no longer supports Helm v2. Please migrate from helm v2 to helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
The external IP address of the load balancer is not shown by `kubectl get svc`.
### Troubleshooting
1. The external IP address of load balancer is not shown from `kubectl get svc`
In Minikube, EXTERNAL-IP in `kubectl get svc` shows `<pending>` state for your service. In this case, you can run `minikube service [service_name]` to open your service without external IP address.
In Minikube, `EXTERNAL-IP` in `kubectl get svc` shows `<pending>` state for your service. In this case, you can run `minikube service [service_name]` to open your service without external IP address.
```bash
$ kubectl get svc
@@ -72,3 +75,9 @@ $ minikube service calculator-front-end
|-----------|----------------------|-------------|---------------------------|
🎉 Opening kubernetes service default/calculator-front-end in default browser...
```
## Related links
- [Try out a Dapr quickstart]({{< ref quickstarts.md >}})
- Learn how to [deploy Dapr on your cluster]({{< ref kubernetes-deploy.md >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})
- [Kubernetes production guidelines]({{< ref kubernetes-production.md >}})

View File

@@ -8,83 +8,93 @@ aliases:
- /getting-started/install-dapr-kubernetes/
---
When setting up Kubernetes you can use either the Dapr CLI or Helm.
For more information on what is deployed to your Kubernetes cluster read the [Kubernetes overview]({{< ref kubernetes-overview.md >}})
## Prerequisites
- Install [Dapr CLI]({{< ref install-dapr-cli.md >}})
- Install [kubectl](https://kubernetes.io/docs/tasks/tools/)
- Kubernetes cluster (see below if needed)
### Create cluster
You can install Dapr on any Kubernetes cluster. Here are some helpful links:
- [Setup KiNd Cluster]({{< ref setup-kind.md >}})
- [Setup Minikube Cluster]({{< ref setup-minikube.md >}})
- [Setup Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}})
- [Setup Google Cloud Kubernetes Engine](https://docs.dapr.io/operations/hosting/kubernetes/cluster/setup-gke/)
- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
When [setting up Dapr on Kubernetes]({{< ref kubernetes-overview.md >}}), you can use either the Dapr CLI or Helm.
{{% alert title="Hybrid clusters" color="primary" %}}
Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes if your application requires it. For more information see [Deploying to a hybrid Linux/Windows Kubernetes cluster]({{<ref kubernetes-hybrid-clusters>}}).
Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy Dapr to Windows nodes if your application requires it. For more information, see [Deploying to a hybrid Linux/Windows Kubernetes cluster]({{< ref kubernetes-hybrid-clusters >}}).
{{% /alert %}}
{{< tabs "Dapr CLI" "Helm" >}}
<!-- Dapr CLI -->
{{% codetab %}}
## Install with Dapr CLI
You can install Dapr to a Kubernetes cluster using the [Dapr CLI]({{< ref install-dapr-cli.md >}}).
You can install Dapr on a Kubernetes cluster using the [Dapr CLI]({{< ref install-dapr-cli.md >}}).
### Install Dapr (from an official Dapr Helm chart)
### Prerequisites
- Install:
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- Create a Kubernetes cluster with Dapr. Here are some helpful links:
- [Set up KiNd Cluster]({{< ref setup-kind.md >}})
- [Set up Minikube Cluster]({{< ref setup-minikube.md >}})
- [Set up Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}})
- [Set up GKE cluster]({{< ref setup-gke.md >}})
- [Set up Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
### Installation options
You can install Dapr from an official Helm chart or a private chart, in a custom namespace, and more.
#### Install Dapr from an official Dapr Helm chart
The `-k` flag initializes Dapr on the Kubernetes cluster in your current context.
{{% alert title="Ensure correct cluster is set" color="warning" %}}
Make sure the correct "target" cluster is set. Check `kubectl context (kubectl config get-contexts)` to verify. You can set a different context using `kubectl config use-context <CONTEXT>`.
{{% /alert %}}
1. Verify that the correct "target" cluster is set by checking the current context with `kubectl config get-contexts`.
- You can set a different context using `kubectl config use-context <CONTEXT>`.
Run the following command on your local machine to init Dapr on your cluster:
1. Initialize Dapr on your cluster with the following command:
```bash
dapr init -k
```
```bash
dapr init -k
```
```bash
⌛ Making the jump to hyperspace...
**Expected output**
```bash
⌛ Making the jump to hyperspace...
✅ Deploying the Dapr control plane to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
```
1. Run the dashboard:
✅ Deploying the Dapr control plane to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run "dapr status -k" in your terminal. To get started, go here: https://aka.ms/dapr-getting-started
```
```bash
dapr dashboard -k
```
To run the dashboard, run:
If you installed Dapr in a **non-default namespace**, run:
```bash
dapr dashboard -k -n <your-namespace>
```
```bash
dapr dashboard -k
```
#### Install Dapr from a private Dapr Helm chart
If you installed Dapr in a non-default namespace, run:
Installing Dapr from a private Helm chart can be helpful when you:
- Need more granular control of the Dapr Helm chart
- Have a custom Dapr deployment
- Pull Helm charts from trusted registries that are managed and maintained by your organization
```bash
dapr dashboard -k -n <your-namespace>
```
### Install Dapr (a private Dapr Helm chart)
There are some scenarios where it's necessary to install Dapr from a private Helm chart, such as:
- needing more granular control of the Dapr Helm chart
- having a custom Dapr deployment
- pulling Helm charts from trusted registries that are managed and maintained by your organization
Set the following parameters to allow `dapr init -k` to install Dapr images from the configured Helm repository.
```bash
export DAPR_HELM_REPO_URL="https://helm.custom-domain.com/dapr/dapr"
export DAPR_HELM_REPO_USERNAME="username_xxx"
export DAPR_HELM_REPO_PASSWORD="passwd_xxx"
```
#### Install in high availability mode
Setting the above parameters will allow `dapr init -k` to install Dapr images from the configured Helm repository.
You can run Dapr with three replicas of each control plane pod in the `dapr-system` namespace for [production scenarios]({{< ref kubernetes-production.md >}}).
### Install in custom namespace
```bash
dapr init -k --enable-ha=true
```
#### Install in custom namespace
The default namespace when initializing Dapr is `dapr-system`. You can override this with the `-n` flag.
@@ -92,15 +102,7 @@ The default namespace when initializing Dapr is `dapr-system`. You can override
dapr init -k -n mynamespace
```
### Install in highly available mode
You can run Dapr with 3 replicas of each control plane pod in the dapr-system namespace for [production scenarios]({{< ref kubernetes-production.md >}}).
```bash
dapr init -k --enable-ha=true
```
### Disable mTLS
#### Disable mTLS
Dapr is initialized by default with [mTLS]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}). You can disable it with:
@@ -108,11 +110,9 @@ Dapr is initialized by default with [mTLS]({{< ref "security-concept.md#sidecar-
dapr init -k --enable-mtls=false
```
### Wait for the installation to complete
#### Wait for the installation to complete
You can wait for the installation to complete its deployment with the `--wait` flag.
The default timeout is 300s (5 min), but can be customized with the `--timeout` flag.
You can wait for the installation to complete its deployment with the `--wait` flag. The default timeout is 300s (5 min), but can be customized with the `--timeout` flag.
```bash
dapr init -k --wait --timeout 600
@@ -126,18 +126,33 @@ Run the following command on your local machine to uninstall Dapr on your cluste
dapr uninstall -k
```
## Install with Helm (advanced)
{{% /codetab %}}
You can install Dapr on Kubernetes using a Helm 3 chart.
<!-- Helm -->
{{% codetab %}}
## Install with Helm
You can install Dapr on Kubernetes using a Helm v3 chart.
❗**Important:** The latest Dapr Helm chart no longer supports Helm v2. [Migrate from Helm v2 to Helm v3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
### Prerequisites
- Install:
- [Helm v3](https://helm.sh/docs/intro/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- Create a Kubernetes cluster with Dapr. Here are some helpful links:
- [Set up KiNd Cluster]({{< ref setup-kind.md >}})
- [Set up Minikube Cluster]({{< ref setup-minikube.md >}})
- [Set up Azure Kubernetes Service Cluster]({{< ref setup-aks.md >}})
- [Set up GKE cluster]({{< ref setup-gke.md >}})
- [Set up Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
{{% alert title="Ensure you are on Helm v3" color="primary" %}}
The latest Dapr Helm chart no longer supports Helm v2. Please migrate from Helm v2 to Helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
{{% /alert %}}
### Add and install Dapr Helm chart
1. Make sure [Helm 3](https://github.com/helm/helm/releases) is installed on your machine
1. Add Helm repo and update
1. Add the Helm repo and update:
```bash
# Add the official Dapr Helm chart.
@@ -160,7 +175,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
--wait
```
To install in high availability mode:
To install in **high availability** mode:
```bash
helm upgrade --install dapr dapr/dapr \
@@ -173,18 +188,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
See [Guidelines for production ready deployments on Kubernetes]({{< ref kubernetes-production.md >}}) for more information on installing and upgrading Dapr using Helm.
### Uninstall Dapr on Kubernetes
```bash
helm uninstall dapr --namespace dapr-system
```
### More information
- Read [this guide]({{< ref kubernetes-production.md >}}) for recommended Helm chart values for production setups
- See [this page](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr Helm charts.
## Installing the Dapr dashboard as part of the control plane
### (optional) Install the Dapr dashboard as part of the control plane
If you want to install the Dapr dashboard, use this Helm chart with the additional settings of your choice:
@@ -200,9 +204,9 @@ kubectl create namespace dapr-system
helm install dapr dapr/dapr-dashboard --namespace dapr-system
```
## Verify installation
### Verify installation
Once the installation is complete, verify that the dapr-operator, dapr-placement, dapr-sidecar-injector and dapr-sentry pods are running in the `dapr-system` namespace:
Once the installation is complete, verify that the `dapr-operator`, `dapr-placement`, `dapr-sidecar-injector`, and `dapr-sentry` pods are running in the `dapr-system` namespace:
```bash
kubectl get pods --namespace dapr-system
@@ -217,14 +221,44 @@ dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s
dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s
```
## Using Mariner-based images
### Uninstall Dapr on Kubernetes
When deploying Dapr, whether on Kubernetes or in Docker self-hosted, the default container images that are pulled are based on [*distroless*](https://github.com/GoogleContainerTools/distroless).
```bash
helm uninstall dapr --namespace dapr-system
```
### More information
- Read [the Kubernetes productions guidelines]({{< ref kubernetes-production.md >}}) for recommended Helm chart values for production setups
- [More details on Dapr Helm charts](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md)
{{% /codetab %}}
{{< /tabs >}}
### Use Mariner-based images
The default container images pulled on Kubernetes are based on [*distroless*](https://github.com/GoogleContainerTools/distroless).
Alternatively, you can use Dapr container images based on Mariner 2 (minimal distroless). [Mariner](https://github.com/microsoft/CBL-Mariner/), officially known as CBL-Mariner, is a free and open-source Linux distribution and container base image maintained by Microsoft. For some Dapr users, leveraging container images based on Mariner can help you meet compliance requirements.
To use Mariner-based images for Dapr, you need to add `-mariner` to your Docker tags. For example, while `ghcr.io/dapr/dapr:latest` is the Docker image based on *distroless*, `ghcr.io/dapr/dapr:latest-mariner` is based on Mariner. Tags pinned to a specific version are also available, such as `{{% dapr-latest-version short="true" %}}-mariner`.
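As a sketch of the tag convention, the Mariner variant of any Dapr image tag can be derived by appending the `-mariner` suffix (the pinned version below is illustrative):

```sh
# Derive the Mariner-based variant of a Dapr image tag by appending "-mariner".
base_image="ghcr.io/dapr/dapr:1.12.0"   # illustrative pinned version
mariner_image="${base_image}-mariner"
echo "${mariner_image}"   # prints ghcr.io/dapr/dapr:1.12.0-mariner
```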
{{< tabs "Dapr CLI" "Helm" >}}
<!-- Dapr CLI -->
{{% codetab %}}
In the Dapr CLI, you can switch to using Mariner-based images with the `--image-variant` flag.
```sh
dapr init --image-variant mariner
```
{{% /codetab %}}
<!-- Helm -->
{{% codetab %}}
With Kubernetes and Helm, you can use Mariner-based images by setting the `global.tag` option and adding `-mariner`. For example:
```sh
@ -236,6 +270,12 @@ helm upgrade --install dapr dapr/dapr \
--wait
```
## Next steps
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Deploy Dapr with Helm parameters and other details]({{< ref "kubernetes-production.md#deploy-dapr-with-helm" >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})
- [Kubernetes production guidelines]({{< ref kubernetes-production.md >}})
- [Configure state store & pubsub message broker]({{< ref "getting-started/tutorials/configure-state-pubsub.md" >}})


@ -6,24 +6,30 @@ weight: 60000
description: "How to run Dapr apps on Kubernetes clusters with Windows nodes"
---
Dapr supports running on Kubernetes clusters with Windows nodes. You can run your Dapr microservices exclusively on Windows, exclusively on Linux, or a combination of both. This is helpful to users who may be doing a piecemeal migration of a legacy application into a Dapr Kubernetes cluster.
Dapr supports running your microservices on Kubernetes clusters on:
- Windows
- Linux
- A combination of both
Kubernetes uses a concept called node affinity so that you can denote whether you want your application to be launched on a Linux node or a Windows node. When deploying to a cluster which has both Windows and Linux nodes, you must provide affinity rules for your applications, otherwise the Kubernetes scheduler might launch your application on the wrong type of node.
This is especially helpful during a piecemeal migration of a legacy application into a Dapr Kubernetes cluster.
## Pre-requisites
Kubernetes uses a concept called **node affinity** to denote whether you want your application to be launched on a Linux node or a Windows node. When deploying to a cluster which has both Windows and Linux nodes, you must provide affinity rules for your applications, otherwise the Kubernetes scheduler might launch your application on the wrong type of node.
You will need a Kubernetes cluster with Windows nodes. Many Kubernetes providers support the automatic provisioning of Windows enabled Kubernetes clusters.
## Prerequisites
1. Follow your preferred provider's instructions for setting up a cluster with Windows enabled
Before you begin, set up a Kubernetes cluster with Windows nodes. Many Kubernetes providers support the automatic provisioning of Windows-enabled Kubernetes clusters.
- [Setting up Windows on Azure AKS](https://docs.microsoft.com/azure/aks/windows-container-cli)
- [Setting up Windows on AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html)
- [Setting up Windows on Google Cloud GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows)
1. Follow your preferred provider's instructions for setting up a cluster with Windows enabled.
2. Once you have set up the cluster, you should see that it has both Windows and Linux nodes available
- [Setting up Windows on Azure AKS](https://docs.microsoft.com/azure/aks/windows-container-cli)
- [Setting up Windows on AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/windows-support.html)
- [Setting up Windows on Google Cloud GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows)
1. Once you have set up the cluster, verify that both Windows and Linux nodes are available.
```bash
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-nodepool1-11819434-vmss000000 Ready agent 6d v1.17.9 10.240.0.4 <none> Ubuntu 16.04.6 LTS 4.15.0-1092-azure docker://3.0.10+azure
aks-nodepool1-11819434-vmss000001 Ready agent 6d v1.17.9 10.240.0.35 <none> Ubuntu 16.04.6 LTS 4.15.0-1092-azure docker://3.0.10+azure
@ -31,29 +37,31 @@ You will need a Kubernetes cluster with Windows nodes. Many Kubernetes providers
akswin000000 Ready agent 6d v1.17.9 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1339 docker://19.3.5
akswin000001 Ready agent 6d v1.17.9 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1339 docker://19.3.5
```
## Installing the Dapr control plane
If you are installing using the Dapr CLI or via a helm chart, simply follow the normal deployment procedures:
[Installing Dapr on a Kubernetes cluster]({{< ref "install-dapr-selfhost.md#installing-Dapr-on-a-kubernetes-cluster" >}})
## Install the Dapr control plane
If you are installing using the Dapr CLI or via a Helm chart, simply follow the normal deployment procedures: [Installing Dapr on a Kubernetes cluster]({{< ref "install-dapr-selfhost.md#installing-Dapr-on-a-kubernetes-cluster" >}})
Affinity will be automatically set for `kubernetes.io/os=linux`. This will be sufficient for most users, as Kubernetes requires at least one Linux node pool.
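The default affinity is equivalent to the following standard Kubernetes node affinity term (a sketch of the `kubernetes.io/os` selector, not the chart's exact rendered output):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux
```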
> **Note:** Dapr control plane containers are built and tested for both Windows and Linux, however, we generally recommend using the Linux control plane containers. They tend to be smaller and have a much larger user base.
{{% alert title="Note" color="primary" %}}
Dapr control plane containers are built and tested for both Windows and Linux. However, it's recommended to use the Linux control plane containers, which tend to be smaller and have a much larger user base.
If you understand the above, but want to deploy the Dapr control plane to Windows, you can do so by setting:
```
```sh
helm install dapr dapr/dapr --set global.daprControlPlaneOs=windows
```
{{% /alert %}}
## Installing Dapr applications
## Install Dapr applications
### Windows applications
In order to launch a Dapr application on Windows, you'll first need to create a Docker container with your application installed. For a step by step guide see [Get started: Prep Windows for containers](https://docs.microsoft.com/virtualization/windowscontainers/quick-start/set-up-environment). Once you have a docker container with your application, create a deployment YAML file with node affinity set to kubernetes.io/os: windows.
1. Create a deployment YAML
1. [Follow the Microsoft documentation to create a Docker Windows container with your application installed](https://learn.microsoft.com/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce).
1. Once you've created a Docker container with your application, create a deployment YAML file with the node affinity set to `kubernetes.io/os: windows`. In the example `deploy_windows.yaml` deployment file below:
Here is a sample deployment with nodeAffinity set to "windows". Modify as needed for your application.
```yaml
apiVersion: apps/v1
kind: Deployment
@ -92,9 +100,8 @@ In order to launch a Dapr application on Windows, you'll first need to create a
values:
- windows
```
This deployment yaml will be the same as any other dapr application, with an additional spec.template.spec.affinity section as shown above.
2. Deploy to your Kubernetes cluster
1. Deploy the YAML file to your Kubernetes cluster.
```bash
kubectl apply -f deploy_windows.yaml
@ -102,11 +109,10 @@ In order to launch a Dapr application on Windows, you'll first need to create a
### Linux applications
If you already have a Dapr application that runs on Linux, you'll still need to add affinity rules as above, but choose Linux affinity instead.
If you already have a Dapr application that runs on Linux, you still need to add affinity rules.
1. Create a deployment YAML
1. Create a deployment YAML file with the node affinity set to `kubernetes.io/os: linux`. In the example `deploy_linux.yaml` deployment file below:
Here is a sample deployment with nodeAffinity set to "linux". Modify as needed for your application.
```yaml
apiVersion: apps/v1
kind: Deployment
@ -146,13 +152,17 @@ If you already have a Dapr application that runs on Linux, you'll still need to
- linux
```
2. Deploy to your Kubernetes cluster
1. Deploy the YAML to your Kubernetes cluster.
```bash
kubectl apply -f deploy_linux.yaml
```
## Cleanup
That's it!
## Clean up
To remove the deployments from this guide, run the following commands:
```bash
kubectl delete -f deploy_linux.yaml


@ -7,19 +7,19 @@ description: "Use Dapr API in a Kubernetes Job context"
type: docs
---
# Kubernetes Job
The Dapr sidecar is designed to be a long running process. In the context of a [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) this behavior can block your job completion.
The Dapr sidecar is designed to be a long running process, in the context of a [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) this behaviour can block your job completion.
To address this issue the Dapr sidecar has an endpoint to `Shutdown` the sidecar.
To address this issue, the Dapr sidecar has an endpoint to `Shutdown` the sidecar.
When running a basic [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) you will need to call the `/shutdown` endpoint for the sidecar to gracefully stop and the job will be considered `Completed`.
When running a basic [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/), you need to call the `/shutdown` endpoint for the sidecar to gracefully stop and the job to be considered `Completed`.
When a job is finished without calling `Shutdown`, your job will be in a `NotReady` state with only the `daprd` container running endlessly.
When a job is finished without calling `Shutdown`, your job is in a `NotReady` state with only the `daprd` container running endlessly.
Stopping the Dapr sidecar causes its readiness and liveness probes to fail in your container.
Stopping the dapr sidecar will cause its readiness and liveness probes to fail in your container because the dapr sidecar was shutdown.
To prevent Kubernetes from trying to restart your job, set your job's `restartPolicy` to `Never`.
Be sure to use the *POST* HTTP verb when calling the shutdown HTTP API.
Be sure to use the *POST* HTTP verb when calling the shutdown HTTP API. For example:
```yaml
apiVersion: batch/v1
@ -40,7 +40,7 @@ spec:
restartPolicy: Never
```
You can also call the `Shutdown` from any of the Dapr SDKs
You can also call the `Shutdown` from any of the Dapr SDKs. For example, for the Go SDK:
```go
package main
@ -63,3 +63,8 @@ func main() {
// Job
}
```
## Related links
- [Deploy Dapr on Kubernetes]({{< ref kubernetes-deploy.md >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})


@ -6,23 +6,30 @@ weight: 10000
description: "Overview of how to get Dapr running on your Kubernetes cluster"
---
## Dapr on Kubernetes
Dapr can be configured to run on any supported versions of Kubernetes. To achieve this, Dapr begins by deploying the following Kubernetes services, which provide first-class integration to make running applications with Dapr easy.
Dapr can be configured to run on any supported version of Kubernetes. To achieve this, Dapr begins by deploying the `dapr-sidecar-injector`, `dapr-operator`, `dapr-placement`, and `dapr-sentry` Kubernetes services. These provide first-class integration to make running applications with Dapr easy.
- **dapr-operator:** Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)
- **dapr-sidecar-injector:** Injects Dapr into [annotated](#adding-dapr-to-a-kubernetes-deployment) deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values.
- **dapr-placement:** Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods
- **dapr-sentry:** Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}).
| Kubernetes services | Description |
| ------------------- | ----------- |
| `dapr-operator` | Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) |
| `dapr-sidecar-injector` | Injects Dapr into [annotated](#adding-dapr-to-a-kubernetes-deployment) deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. |
| `dapr-placement` | Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods |
| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}) |
<img src="/images/overview-kubernetes.png" width=1000>
## Supported versions
Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy).
## Deploying Dapr to a Kubernetes cluster
Read [this guide]({{< ref kubernetes-deploy.md >}}) to learn how to deploy Dapr to your Kubernetes cluster.
Read [Deploy Dapr on a Kubernetes cluster]({{< ref kubernetes-deploy.md >}}) to learn how to deploy Dapr to your Kubernetes cluster.
## Adding Dapr to a Kubernetes deployment
Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the pods schema. To give your service an `id` and `port` known to Dapr, turn on tracing through configuration and launch the Dapr sidecar container, you annotate your Kubernetes pod like this. For more information check [dapr annotations]({{< ref arguments-annotations-overview.md >}})
Deploying and running a Dapr-enabled application into your Kubernetes cluster is as simple as adding a few annotations to the pod schema. In the following example, your Kubernetes pod is annotated to:
- Give your service an `id` and `port` known to Dapr
- Turn on tracing through configuration
- Launch the Dapr sidecar container
```yml
annotations:
@ -32,20 +39,21 @@ Deploying and running a Dapr enabled application into your Kubernetes cluster is
dapr.io/config: "tracing"
```
For more information, check [Dapr annotations]({{< ref arguments-annotations-overview.md >}}).
## Pulling container images from private registries
Dapr works seamlessly with any user application container image, regardless of its origin. Simply init Dapr and add the [Dapr annotations]({{< ref arguments-annotations-overview.md >}}) to your Kubernetes definition to add the Dapr sidecar.
Dapr works seamlessly with any user application container image, regardless of its origin. Simply [initialize Dapr]({{< ref install-dapr-selfhost.md >}}) and add the [Dapr annotations]({{< ref arguments-annotations-overview.md >}}) to your Kubernetes definition to add the Dapr sidecar.
The Dapr control-plane and sidecar images come from the [daprio Docker Hub](https://hub.docker.com/u/daprio) container registry, which is a public registry.
The Dapr control plane and sidecar images come from the [daprio Docker Hub](https://hub.docker.com/u/daprio) container registry, which is a public registry.
For information about pulling your application images from a private registry, reference the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). If you are using Azure Container Registry with Azure Kubernetes Service, reference the [AKS documentation](https://docs.microsoft.com/azure/aks/cluster-container-registry-integration).
For information about:
- Pulling your application images from a private registry, reference the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
- Using Azure Container Registry with Azure Kubernetes Service, reference the [AKS documentation](https://docs.microsoft.com/azure/aks/cluster-container-registry-integration).
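As a minimal sketch, a pod that pulls its application image from a private registry references an image pull secret in its spec (the registry, image, and secret names below are illustrative; the secret itself is created with `kubectl create secret docker-registry`, as described in the Kubernetes documentation):

```yaml
spec:
  containers:
    - name: myapp
      image: myregistry.example.com/myapp:latest   # illustrative private image
  imagePullSecrets:
    - name: my-registry-secret   # illustrative secret created beforehand
```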
## Quickstart
## Tutorials
You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) in the Kubernetes getting started quickstart.
## Supported versions
Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy).
[Work through the Hello Kubernetes tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) to learn more about getting started with Dapr on your Kubernetes cluster.
## Related links


@ -3,16 +3,14 @@ type: docs
title: "Production guidelines on Kubernetes"
linkTitle: "Production guidelines"
weight: 40000
description: "Recommendations and practices for deploying Dapr to a Kubernetes cluster in a production-ready configuration"
description: "Best practices for deploying Dapr to a Kubernetes cluster in a production-ready configuration"
---
## Cluster and capacity requirements
Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).
For a production-ready Kubernetes cluster deployment, we recommend running a cluster of at least three worker nodes to support a highly-available control plane installation.
Use the following resource settings as a starting point. Requirements will vary depending on cluster size, number of pods, and other factors, so you should perform individual testing to find the right values for your environment:
Use the following resource settings as a starting point. Requirements vary depending on cluster size, number of pods, and other factors. Perform individual testing to find the right values for your environment.
| Deployment | CPU | Memory
|-------------|-----|-------
@ -23,7 +21,7 @@ Use the following resource settings as a starting point. Requirements will vary
| **Dashboard** | Limit: 200m, Request: 50m | Limit: 200Mi, Request: 20Mi
{{% alert title="Note" color="primary" %}}
For more info, read the [concept article on CPU and Memory resource units and their meaning](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes).
For more information, refer to the Kubernetes documentation on [CPU and Memory resource units and their meaning](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes).
{{% /alert %}}
### Helm
@ -32,29 +30,26 @@ When installing Dapr using Helm, no default limit/request values are set. Each c
The [Helm chart readme](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) has detailed information and examples.
For local/dev installations, you might simply want to skip configuring the `resources` options.
For local/dev installations, you might want to skip configuring the `resources` options.
### Optional components
The following Dapr control plane deployments are optional:
- **Placement**: needed to use Dapr Actors
- **Sentry**: needed for mTLS for service to service invocation
- **Dashboard**: needed to get an operational view of the cluster
- **Placement**: For using Dapr Actors
- **Sentry**: For mTLS for service-to-service invocation
- **Dashboard**: For an operational view of the cluster
## Sidecar resource settings
To set the resource assignments for the Dapr sidecar, see the annotations [here]({{< ref "arguments-annotations-overview.md" >}}).
The specific annotations related to resource constraints are:
[Set the resource assignments for the Dapr sidecar using the supported annotations]({{< ref "arguments-annotations-overview.md" >}}). The specific annotations related to **resource constraints** are:
- `dapr.io/sidecar-cpu-limit`
- `dapr.io/sidecar-memory-limit`
- `dapr.io/sidecar-cpu-request`
- `dapr.io/sidecar-memory-request`
If not set, the Dapr sidecar will run without resource settings, which may lead to issues. For a production-ready setup it is strongly recommended to configure these settings.
For more details on configuring resource in Kubernetes see [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/) and [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/).
If not set, the Dapr sidecar runs without resource settings, which may lead to issues. For a production-ready setup, it's strongly recommended to configure these settings.
Example settings for the Dapr sidecar in a production-ready setup:
@ -62,20 +57,21 @@ Example settings for the Dapr sidecar in a production-ready setup:
|-----|--------|
| Limit: 300m, Request: 100m | Limit: 1000Mi, Request: 250Mi
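Applying the example settings above with the resource-constraint annotations might look like the following pod annotations (the app-id is illustrative):

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/sidecar-cpu-limit: "300m"
  dapr.io/sidecar-cpu-request: "100m"
  dapr.io/sidecar-memory-limit: "1000Mi"
  dapr.io/sidecar-memory-request: "250Mi"
```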
{{% alert title="Note" color="primary" %}}
Since Dapr is intended to do much of the I/O heavy lifting for your app, it's expected that the resources given to Dapr enable you to drastically reduce the resource allocations for the application.
{{% /alert %}}
The CPU and memory limits above account for Dapr supporting a high number of I/O bound operations. Use a [monitoring tool]({{< ref observability >}}) to get a baseline for the sidecar (and app) containers and tune these settings based on those baselines.
The CPU and memory limits above account for the fact that Dapr is intended to support a high number of I/O bound operations. It is strongly recommended that you use a monitoring tool to get a baseline for the sidecar (and app) containers and tune these settings based on those baselines.
For more details on configuring resource in Kubernetes, see the following Kubernetes guides:
- [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/)
- [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/)
{{% alert title="Note" color="primary" %}}
Since Dapr is intended to do much of the I/O heavy lifting for your app, the resources given to Dapr drastically reduce the resource allocations for the application.
{{% /alert %}}
### Setting soft memory limits on Dapr sidecar
It is recommended to set soft memory limits on the Dapr sidecar when you have set up memory limits.
This allows the sidecar garbage collector to free up memory when the memory usage is above the limit instead of
waiting to be double of the last amount of memory present in the heap when it was run, which is the default behavior
of the [garbage collector](https://tip.golang.org/doc/gc-guide#Memory_limit) used in Go, and can lead to OOM Kill events.
Set soft memory limits on the Dapr sidecar when you've set up memory limits. With soft memory limits, the sidecar garbage collector frees up memory once it exceeds the limit, instead of waiting until the heap doubles in size since the previous collection. Waiting is the default behavior of the [garbage collector](https://tip.golang.org/doc/gc-guide#Memory_limit) used in Go, and can lead to OOM Kill events.
For example, for an app with app-id `nodeapp`, if you have set your memory limit to be 1000Mi as mentioned above, you can use the following in your pod annotations:
For example, for an app with app-id `nodeapp` with memory limit set to 1000Mi, you can use the following in your pod annotations:
```yaml
annotations:
@ -86,29 +82,31 @@ For example, for an app with app-id `nodeapp`, if you have set your memory limit
dapr.io/env: "GOMEMLIMIT=900MiB" # 90% of your memory limit. Also notice the suffix "MiB" instead of "Mi"
```
In this example, the soft limit has been set to be 90% as recommended in [garbage collector tips](https://tip.golang.org/doc/gc-guide#Memory_limit) where it is recommend to leave 5-10% for other services.
In this example, the soft limit has been set to be 90% to leave 5-10% for other services, [as recommended](https://tip.golang.org/doc/gc-guide#Memory_limit).
The `GOMEMLIMIT` environment variable [allows](https://pkg.go.dev/runtime) certain suffixes for the memory size: `B, KiB, MiB, GiB, and TiB.`
The `GOMEMLIMIT` environment variable [allows](https://pkg.go.dev/runtime) certain suffixes for the memory size: `B`, `KiB`, `MiB`, `GiB`, and `TiB`.
## Highly-available mode
## High availability mode
When deploying Dapr in a production-ready configuration, it is recommend to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows the Dapr control plane to retain 3 running instances and survive individual node failures and other outages.
When deploying Dapr in a production-ready configuration, it's best to deploy with a high availability (HA) configuration of the control plane. This creates three replicas of each control plane pod in the `dapr-system` namespace, allowing the Dapr control plane to retain three running instances and survive individual node failures and other outages.
For a new Dapr deployment, the HA mode can be set with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
For a new Dapr deployment, HA mode can be set with both:
- The [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}), and
- [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}})
For an existing Dapr deployment, enabling the HA mode requires additional steps. Please refer to [this paragraph]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}).
## Deploying Dapr with Helm
## Deploy Dapr with Helm
[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
### Parameters file
Instead of specifying parameters on the command line, it's recommended to create a values file. This file should be checked into source control so that you can track its changes.
It's recommended to create a values file, instead of specifying parameters on the command. Check the values file into source control so that you can track its changes.
For a full list of all available options you can set in the values file (or by using the `--set` command-line option), see https://github.com/dapr/dapr/blob/master/charts/dapr/README.md.
[See a full list of available parameters and settings](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md).
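As a minimal sketch, a `values.yaml` checked into source control might pin settings that are otherwise passed with `--set` (keys shown are chart options used elsewhere in this guide; consult the chart README for the full list):

```yaml
global:
  ha:
    enabled: true          # run three replicas of each control plane service
  tag: "1.12.0-mariner"    # optional: pin a specific image version/variant (illustrative)
```

You can then pass the file to Helm with the `-f values.yaml` flag.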
Instead of using either `helm install` or `helm upgrade` as shown below, you can also run `helm upgrade --install` - this will dynamically determine whether to install or upgrade.
The following command runs three replicas of each control plane service in the `dapr-system` namespace.
```bash
# Add/update an official Dapr Helm repo.
@ -141,84 +139,85 @@ helm install dapr dapr/dapr \
kubectl get pods --namespace dapr-system
```
This command will run 3 replicas of each control plane service in the dapr-system namespace.
{{% alert title="Note" color="primary" %}}
The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref "kubernetes-hybrid-clusters.md" >}}).
The example above uses `helm install` and `helm upgrade`. You can also run `helm upgrade --install` to dynamically determine whether to install or upgrade.
{{% /alert %}}
## Upgrading Dapr with Helm
The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes. For more information, see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref "kubernetes-hybrid-clusters.md" >}}).
Dapr supports zero-downtime upgrades. The upgrade path includes the following steps:
## Upgrade Dapr with Helm
1. Upgrading a CLI version (optional but recommended)
2. Updating the Dapr control plane
3. Updating the data plane (Dapr sidecars)
Dapr supports zero-downtime upgrades, performed in the following steps.
### Upgrading the CLI
### Upgrade the CLI (recommended)
To upgrade the Dapr CLI, [download the latest version](https://github.com/dapr/cli/releases) of the CLI and ensure it's in your path.
Upgrading the CLI is optional, but recommended.
### Upgrading the control plane
1. [Download the latest version](https://github.com/dapr/cli/releases) of the CLI.
1. Verify the Dapr CLI is in your path.
See [steps to upgrade Dapr on a Kubernetes cluster]({{< ref "kubernetes-upgrade.md#helm" >}}).
### Upgrade the control plane
### Updating the data plane (sidecars)
[Upgrade Dapr on a Kubernetes cluster]({{< ref "kubernetes-upgrade.md#helm" >}}).
The last step is to update pods that are running Dapr to pick up the new version of the Dapr runtime.
To do that, simply issue a rollout restart command for any deployment that has the `dapr.io/enabled` annotation:
### Update the data plane (sidecars)
```bash
kubectl rollout restart deploy/<Application deployment name>
```
Update pods that are running Dapr to pick up the new version of the Dapr runtime.
To see a list of all your Dapr enabled deployments, you can either use the [Dapr Dashboard](https://github.com/dapr/dashboard) or run the following command using the Dapr CLI:
1. Issue a rollout restart command for any deployment that has the `dapr.io/enabled` annotation:
```bash
dapr list -k
```bash
kubectl rollout restart deploy/<Application deployment name>
```
APP ID APP PORT AGE CREATED
nodeapp 3000 16h 2020-07-29 17:16.22
```
1. View a list of all your Dapr enabled deployments via either:
- The [Dapr Dashboard](https://github.com/dapr/dashboard)
- Running the following command using the Dapr CLI:
### Enabling high-availability in an existing Dapr deployment
```bash
dapr list -k
APP ID APP PORT AGE CREATED
nodeapp 3000 16h 2020-07-29 17:16.22
```
### Enable high availability in an existing Dapr deployment
Enabling HA mode for an existing Dapr deployment requires two steps:
1. Delete the existing placement stateful set:
1. Delete the existing placement stateful set.
```bash
kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
```
1. Issue the upgrade command:
You delete the placement stateful set because, in HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows a limited set of fields in stateful sets to be patched, which causes the upgrade of the placement service to fail.

Deleting the existing placement stateful set is safe. The agents reconnect and re-register with the newly created placement service, which persists its table in Raft.
1. Issue the upgrade command.
```bash
helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
```
You delete the placement stateful set because, in the HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows for limited fields in stateful sets to be patched, subsequently failing upgrade of the placement service.
Deletion of the existing placement stateful set is safe. The agents will reconnect and re-register with the newly created placement service, which will persist its table in Raft.
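If you manage the Helm release with a values file instead of the `--set` flag, the equivalent HA settings can be sketched as follows (a sketch only; the `replicaCount` key and its default are assumptions to verify against your chart version):

```yaml
# values.yaml (sketch): enable HA mode for the Dapr control plane
global:
  ha:
    enabled: true
    # Number of replicas for each control plane service (assumed chart default: 3)
    replicaCount: 3
```

Then run the upgrade with `--values values.yaml` in place of the `--set` flag.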
## Recommended security configuration
When properly configured, Dapr ensures secure communication. It can also make your application more secure with a number of built-in features.
When properly configured, Dapr ensures secure communication and can make your application more secure with a number of built-in features.
It is recommended that a production-ready deployment includes the following settings:
Verify your production-ready deployment includes the following settings:
1. **Mutual Authentication (mTLS)** should be enabled. Note that Dapr has mTLS on by default. For details on how to bring your own certificates, see [here]({{< ref "mtls.md#bringing-your-own-certificates" >}})
1. **Mutual Authentication (mTLS)** is enabled. Dapr has mTLS on by default. [Learn more about how to bring your own certificates]({{< ref "mtls.md#bringing-your-own-certificates" >}}).
2. **App to Dapr API authentication** is enabled. This is the communication between your application and the Dapr sidecar. To secure the Dapr API from unauthorized application access, it is recommended to enable Dapr's token based auth. See [enable API token authentication in Dapr]({{< ref "api-token.md" >}}) for details
1. **App to Dapr API authentication** is enabled. This is the communication between your application and the Dapr sidecar. To secure the Dapr API from unauthorized application access, [enable Dapr's token-based authentication]({{< ref "api-token.md" >}}).
3. **Dapr to App API authentication** is enabled. This is the communication between Dapr and your application. This ensures that Dapr knows that it is communicating with an authorized application. See [Authenticate requests from Dapr using token authentication]({{< ref "app-api-token.md" >}}) for details
1. **Dapr to App API authentication** is enabled. This is the communication between Dapr and your application. [Let Dapr know that it is communicating with an authorized application using token authentication]({{< ref "app-api-token.md" >}}).
4. All component YAMLs should have **secret data configured in a secret store** and not hard-coded in the YAML file. See [here]({{< ref "component-secrets.md" >}}) on how to use secrets with Dapr components
1. **Component secret data is configured in a secret store** and not hard-coded in the component YAML file. [Learn how to use secrets with Dapr components]({{< ref "component-secrets.md" >}}).
5. The Dapr **control plane is installed on a dedicated namespace** such as `dapr-system`.
1. The Dapr **control plane is installed on a dedicated namespace**, such as `dapr-system`.
6. Dapr also supports **scoping components for certain applications**. This is not a required practice, and can be enabled according to your security needs. See [here]({{< ref "component-scopes.md" >}}) for more info.
1. Dapr supports and is enabled to **scope components for certain applications**. This is not a required practice. [Learn more about component scopes]({{< ref "component-scopes.md" >}}).
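As one concrete illustration, API token authentication (item 2 above) is typically wired up through a Kubernetes secret referenced by a sidecar annotation. A hedged sketch (names are illustrative; the linked API token guide is authoritative):

```yaml
# Sketch: a Kubernetes secret holding the Dapr API token
apiVersion: v1
kind: Secret
metadata:
  name: dapr-api-token
type: Opaque
data:
  token: <base64-encoded-token>  # replace with your own base64-encoded token
```

The sidecar then references it through the `dapr.io/api-token-secret: "dapr-api-token"` annotation on your app deployment.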
## Service account tokens
By default, Kubernetes mounts a volume containing a [Service Account token](http
When creating a new Pod (or a Deployment, StatefulSet, Job, etc), you can disable auto-mounting the Service Account token by setting `automountServiceAccountToken: false` in your pod's spec.
It's recommended that you consider deploying your apps with `automountServiceAccountToken: false` to improve the security posture of your pods, unless your apps depend on having a Service Account token. For example, you may need a Service Account token if:

- Your application needs to interact with the Kubernetes APIs.
- You are using Dapr components that interact with the Kubernetes APIs; for example, the [Kubernetes secret store]({{< ref "kubernetes-secret-store.md" >}}) or the [Kubernetes Events binding]({{< ref "kubernetes-binding.md" >}}).

Thus, Dapr does not set `automountServiceAccountToken: false` automatically for you. However, in all situations where the Service Account is not required by your solution, it's recommended that you set this option in the pods spec.

{{% alert title="Note" color="primary" %}}
Initializing Dapr components using [component secrets]({{< ref "component-secrets.md" >}}) stored as Kubernetes secrets does **not** require a Service Account token, so you can still set `automountServiceAccountToken: false` in this case. Only calling the Kubernetes secret store at runtime, using the [Secrets management]({{< ref "secrets-overview.md" >}}) building block, is impacted.
{{% /alert %}}
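A minimal sketch of what this looks like in a Deployment's pod spec (the app name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
    spec:
      # Don't mount the Service Account token unless the app or a
      # Dapr component needs to call the Kubernetes APIs at runtime
      automountServiceAccountToken: false
      containers:
        - name: myapp
          image: myapp:latest
```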
## Tracing and metrics configuration
Tracing and metrics are enabled in Dapr by default. It's recommended that you set up distributed tracing and metrics for your applications and the Dapr control plane in production.

If you already have your own observability setup, you can disable tracing and metrics for Dapr.

### Tracing

[Configure a tracing backend for Dapr]({{< ref "setup-tracing.md" >}}).

### Metrics

For metrics, Dapr exposes a Prometheus endpoint listening on port 9090, which can be scraped by Prometheus.

[Set up Prometheus, Grafana, and other monitoring tools with Dapr]({{< ref "observability" >}}).
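For orientation, both features are controlled through the Dapr Configuration resource. A sketch that samples all traces and leaves metrics enabled (the Zipkin address is a placeholder for your own backend):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"  # sample every trace; consider a lower rate in high-volume production
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
  metric:
    enabled: true  # metrics are on by default; set to false to disable
```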
## Injector watchdog
The Dapr Operator service includes an **injector watchdog**, which can be used to detect and remediate situations where your application's pods may be deployed without the Dapr sidecar (the `daprd` container). For example, it can assist with recovering the applications after a total cluster failure.

The injector watchdog is disabled by default when running Dapr in Kubernetes mode. However, you should consider enabling it with the appropriate values for your specific situation.

Refer to the [Dapr operator service documentation]({{< ref operator >}}) for more details on the injector watchdog and how to enable it.
## Configure `seccompProfile` for sidecar containers

By default, the Dapr sidecar injector injects a sidecar without any `seccompProfile`. However, for the Dapr sidecar container to run successfully in a namespace with the [Restricted](https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted) profile, the sidecar container needs `securityContext.seccompProfile.Type` to not be `nil`.

Refer to [the Arguments and Annotations overview]({{< ref "arguments-annotations-overview.md" >}}) to set the appropriate `seccompProfile` on the sidecar container.
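For example, the sidecar's profile type can be set per-app through an annotation. A sketch (the annotation name is taken from the Arguments and Annotations overview; verify it against your Dapr version):

```yaml
annotations:
  dapr.io/enabled: "true"
  # Sets securityContext.seccompProfile.type on the injected daprd container
  dapr.io/sidecar-seccomp-profile-type: "RuntimeDefault"
```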
## Best Practices
Watch this video for a deep dive into the best practices for running Dapr in production with Kubernetes.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="360" height="315" src="https://www.youtube-nocookie.com/embed/_U9wJqq-H1g" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Related links
- [Deploy Dapr on Kubernetes]({{< ref kubernetes-deploy.md >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})


weight: 30000
description: "Follow these steps to upgrade Dapr on Kubernetes and ensure a smooth upgrade."
---
You can upgrade the Dapr control plane on a Kubernetes cluster using either the Dapr CLI or Helm.

{{% alert title="Note" color="primary" %}}
Refer to the [Dapr version policy]({{< ref "support-release-policy.md#upgrade-paths" >}}) for guidance on Dapr's upgrade path.
{{% /alert %}}

{{< tabs "Dapr CLI" "Helm" >}}

<!-- Dapr CLI -->
{{% codetab %}}

## Upgrade using the Dapr CLI

You can upgrade Dapr using the [Dapr CLI]({{< ref install-dapr-cli.md >}}).

### Prerequisites

- [Install the Dapr CLI]({{< ref install-dapr-cli.md >}})
- An existing [Kubernetes cluster running with Dapr]({{< ref cluster >}})

### Upgrade existing cluster to {{% dapr-latest-version long="true" %}}

```bash
dapr upgrade -k --runtime-version={{% dapr-latest-version long="true" %}}
```

[You can provide all the available Helm chart configurations using the Dapr CLI.](https://github.com/dapr/cli#supplying-helm-values)

### Troubleshoot upgrading via the CLI

There is a known issue running upgrades on clusters that may have previously had a version prior to 1.0.0-rc.2 installed.

While this issue is uncommon, a few upgrade path edge cases may leave an incompatible `CustomResourceDefinition` installed on your cluster. If this is your scenario, you may see an error message like the following:
```
❌ Failed to upgrade Dapr: Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The CustomResourceDefinition "configurations.dapr.io" is invalid: spec.preserveU
```
#### Solution

1. Run the following command to upgrade the `CustomResourceDefinition` to a compatible version:

   ```sh
   kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/5a15b3e0f093d2d0938b12f144c7047474a290fe/charts/dapr/crds/configuration.yaml
   ```

1. Proceed with the `dapr upgrade --runtime-version {{% dapr-latest-version long="true" %}} -k` command.

{{% /codetab %}}
<!-- Helm -->
{{% codetab %}}

## Upgrade using Helm

You can upgrade Dapr using a Helm v3 chart.

❗**Important:** The latest Dapr Helm chart no longer supports Helm v2. [Migrate from Helm v2 to Helm v3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).

### Prerequisites

- [Install Helm v3](https://github.com/helm/helm/releases)
- An existing [Kubernetes cluster running with Dapr]({{< ref cluster >}})

### Upgrade existing cluster to {{% dapr-latest-version long="true" %}}

As of version 1.0.0 onwards, existing certificate values are automatically reused when upgrading Dapr using Helm.

> **Note:** Helm does not handle upgrading resources, so you need to perform that manually. Resources are backward-compatible and should only be installed forward.

1. Upgrade Dapr to version {{% dapr-latest-version long="true" %}}:
```bash
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/v{{% dapr-latest-version long="true" %}}/charts/dapr/crds/components.yaml
```
```bash
helm upgrade dapr dapr/dapr --version {{% dapr-latest-version long="true" %}} --namespace dapr-system --wait
```
   > If you're using a values file, remember to add the `--values` option when running the upgrade command.

1. Ensure all pods are running:
```bash
kubectl get pods -n dapr-system -w
dapr-sidecar-injector-68f868668f-6xnbt 1/1 Running 0 41s
```
1. Restart your application deployments to update the Dapr runtime:
```bash
kubectl rollout restart deploy/<DEPLOYMENT-NAME>
```
1. All done!
{{% /codetab %}}
{{< /tabs >}}

## Upgrade existing Dapr deployment to enable high availability mode

[Enable high availability mode in an existing Dapr deployment with a few additional steps.]({{< ref "kubernetes-production.md#enabling-high-availability-in-an-existing-dapr-deployment" >}})

## Related links
- [Dapr on Kubernetes]({{< ref kubernetes-overview.md >}})
- [More on upgrading Dapr with Helm]({{< ref "kubernetes-production.md#upgrade-dapr-with-helm" >}})
- [Dapr production guidelines]({{< ref kubernetes-production.md >}})


weight: 80000
description: "Configure the Dapr sidecar to mount Pod Volumes"
---
The Dapr sidecar can be configured to mount any Kubernetes Volume attached to the application Pod. These Volumes can be accessed by the `daprd` (sidecar) container in _read-only_ or _read-write_ modes. If a Volume is configured to be mounted but it does not exist in the Pod, Dapr logs a warning and ignores it.

For more information on different types of Volumes, check the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/).
You can set the following annotations in your deployment YAML:
| Annotation | Description |
| ---------- | ----------- |
| `dapr.io/volume-mounts` | For read-only volume mounts |
| `dapr.io/volume-mounts-rw` | For read-write volume mounts |

These annotations are comma separated pairs of `volume-name:path/in/container`. Verify the corresponding Volumes exist in the Pod spec.

Within the official container images, Dapr runs as a process with user ID (UID) `65532`. Make sure that folders and files inside the mounted Volume are writable or readable by user `65532` as appropriate.

Although you can mount a Volume in any folder within the Dapr sidecar container, prevent conflicts and ensure smooth operations going forward by placing all mountpoints within one of the following locations, or in a subfolder within them:

| Location | Description |
| -------- | ----------- |
| `/mnt` | Recommended for Volumes containing persistent data that the Dapr sidecar process can read and/or write. |
| `/tmp` | Recommended for Volumes containing temporary data, such as scratch disks. |
## Examples

### Basic deployment resource example

In the example Deployment resource below:

- `my-volume1` is available inside the sidecar container at `/mnt/sample1` in read-only mode
- `my-volume2` is available inside the sidecar container at `/mnt/sample2` in read-only mode
- `my-volume3` is available inside the sidecar container at `/tmp/sample3` in read-write mode
```yaml
apiVersion: apps/v1
...
```
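The abridged Deployment above can be sketched more fully as follows (the volume sources are illustrative assumptions; the annotations are the relevant part):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        # read-only mounts
        dapr.io/volume-mounts: "my-volume1:/mnt/sample1,my-volume2:/mnt/sample2"
        # read-write mount
        dapr.io/volume-mounts-rw: "my-volume3:/tmp/sample3"
    spec:
      volumes:
        - name: my-volume1
          hostPath:
            path: /sample
        - name: my-volume2
          persistentVolumeClaim:
            claimName: pv-sample
        - name: my-volume3
          emptyDir: {}
```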
### Custom secrets storage using local file secret store
Since any type of Kubernetes Volume can be attached to the sidecar, you can use the local file secret store to read secrets from a variety of places. For example, if you have a Network File Share (NFS) server running at `10.201.202.203`, with secrets stored at `/secrets/stage/secrets.json`, you can use that as a secrets storage.
1. Configure the application pod to mount the NFS and attach it to the Dapr sidecar.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
...
spec:
...
template:
...
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "myapp"
dapr.io/app-port: "8000"
dapr.io/volume-mounts: "nfs-secrets-vol:/mnt/secrets"
spec:
volumes:
- name: nfs-secrets-vol
nfs:
server: 10.201.202.203
path: /secrets/stage
...
```
1. Point the local file secret store component to the attached file.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: local-secret-store
spec:
type: secretstores.local.file
version: v1
metadata:
- name: secretsFile
value: /mnt/secrets/secrets.json
```
1. Use the secrets.
```
GET http://localhost:<daprPort>/v1.0/secrets/local-secret-store/my-secret
```
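Assuming `secrets.json` contains a key named `my-secret`, the response is the secret as a JSON object, sketched as:

```json
{
  "my-secret": "the-secret-value"
}
```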
## Related links
[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})


This article provides guidance on running Dapr with Docker on a Windows/Linux/macOS machine.
## Initialize Dapr environment
To initialize the Dapr control plane containers and create a default configuration file, run:
```bash
dapr init
```


This article provides guidance on running Dapr with Podman on a Windows/Linux/macOS machine.
## Initialize Dapr environment
To initialize the Dapr control plane containers and create a default configuration file, run:
```bash
dapr init --container-runtime podman
```


description: See and measure the message calls to components and between networked services
<iframe width="560" height="315" src="https://www.youtube.com/embed/0y7ne6teHT4?si=iURnLk57t2zN-7zP&amp;start=12653" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
{{% alert title="More about Dapr Observability" color="primary" %}}
Learn more about how to use Dapr Observability:
- Explore observability via any of the supporting [Dapr SDKs]({{< ref sdks >}}).
- Review the [Observability API reference documentation]({{< ref health_api.md >}}).
- Read the [general overview of the observability concept]({{< ref observability-concept >}}) in Dapr.
{{% /alert %}}


---
type: docs
title: "Logs"
linkTitle: "Overview"
weight: 1000
description: "Understand Dapr logging"
---


---
type: docs
title: "Configure metrics"
linkTitle: "Overview"
weight: 4000
description: "Enable or disable Dapr metrics"
---


---
type: docs
title: "Using OpenTelemetry Collector to collect traces to send to App Insights"
linkTitle: "Using OpenTelemetry for Azure App Insights"
weight: 1000
description: "How to push trace events to Azure Application Insights, using the OpenTelemetry Collector."
---
Dapr integrates with [OpenTelemetry (OTEL) Collector](https://github.com/open-telemetry/opentelemetry-collector) using the Zipkin API. This guide walks through an example using Dapr to push trace events to Azure Application Insights, using the OpenTelemetry Collector.
## Prerequisites

- [Install Dapr on Kubernetes]({{< ref kubernetes >}})
- [Set up an App Insights resource](https://docs.microsoft.com/azure/azure-monitor/app/create-new-resource) and make note of your App Insights instrumentation key.
## Set up OTEL Collector to push to your App Insights instance

To push events to your App Insights instance, install the OTEL Collector to your Kubernetes cluster.

1. Check out the [`open-telemetry-collector-appinsights.yaml`](/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml) file.

1. Replace the `<INSTRUMENTATION-KEY>` placeholder with your App Insights instrumentation key.

1. Apply the configuration with:

   ```sh
   kubectl apply -f open-telemetry-collector-appinsights.yaml
   ```
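For orientation, the part of that file you are editing is the Azure Monitor exporter wired into the traces pipeline. A rough sketch (exporter and field names vary by OTEL Collector version; treat this as illustrative only, not the authoritative file contents):

```yaml
exporters:
  azuremonitor:
    instrumentation_key: "<INSTRUMENTATION-KEY>"  # the placeholder you replace
service:
  pipelines:
    traces:
      receivers: [zipkin]          # Dapr sends traces via the Zipkin API
      exporters: [azuremonitor]
```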
## Set up Dapr to send trace to OTEL Collector

Set up a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.

1. Use this [`collector-config.yaml`](/docs/open-telemetry-collector/collector-config.yaml) file to create your own configuration.

1. Apply the configuration with:

   ```sh
   kubectl apply -f collector-config.yaml
   ```
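For reference, `collector-config.yaml` amounts to a Dapr Configuration that points tracing at the collector's Zipkin-compatible endpoint. A sketch (the service name and namespace are assumptions based on a default collector deployment):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v2/spans"
```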
## Deploy your app with tracing

Apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
@ -55,18 +60,24 @@ spec:
dapr.io/config: "appconfig"
```
{{% alert title="Note" color="primary" %}}
If you are using one of the Dapr tutorials, such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator), the `appconfig` configuration is already set up, so no additional settings are needed.
{{% /alert %}}
You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
## View traces

Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your App Insights resource. You can also use the **Application Map** to examine the topology of your services, as shown below:
![Application map](/images/open-telemetry-app-insights.png)
{{% alert title="Note" color="primary" %}}
Only operations going through the Dapr API exposed by the Dapr sidecar (for example, service invocation or event publishing) are displayed in the Application Map topology.
{{% /alert %}}
## Related links
- Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability/README.md)
- Learn how to set [tracing configuration options]({{< ref "configuration-overview.md#tracing" >}})


weight: 900
description: "How to use Dapr to push trace events through the OpenTelemetry Collector."
---
{{% alert title="Note" color="primary" %}}
Dapr directly writes traces using the OpenTelemetry (OTEL) protocol as the recommended method. For observability tools that support OTEL protocol, you do not need to use the OpenTelemetry Collector.
Dapr directly writes traces using the OpenTelemetry (OTEL) protocol as the **recommended** method. For observability tools that support OTEL protocol, it is recommended to use the OpenTelemetry Collector, as it allows your application to quickly offload data and includes features, such as retries, batching, and encryption. For more information, read the Open Telemetry [documentation](https://opentelemetry.io/docs/collector/#when-to-use-a-collector).
{{% /alert %}}
Dapr can also write traces using the Zipkin protocol. Prior to supporting the OTEL protocol, you used the Zipkin protocol with the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) to send traces to observability tools such as AWS X-Ray, Google Cloud Operations Suite, and Azure Monitor. Both protocol approaches are valid; however, OTEL is the recommended choice.
![Using the OpenTelemetry Collector to integrate with many backends](/images/open-telemetry-collector.png)
## Prerequisites
- [Install Dapr on Kubernetes]({{< ref kubernetes >}})
- Verify your trace backends are already set up to receive traces
- Review your OTEL Collector exporter's required parameters:
   - [`opentelemetry-collector-contrib/exporter`](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter)
   - [`opentelemetry-collector/exporter`](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter)
## Set up OTEL Collector to push to your trace backend
1. Check out the [`open-telemetry-collector-generic.yaml`](/docs/open-telemetry-collector/open-telemetry-collector-generic.yaml).
1. Replace the `<your-exporter-here>` section with the correct settings for your trace exporter.
   - Refer to the OTEL Collector links in the [prerequisites section]({{< ref "#prerequisites" >}}) to determine the correct settings.
1. Apply the configuration with:
```sh
kubectl apply -f open-telemetry-collector-generic.yaml
```
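For illustration, if your backend accepts OTLP, the `<your-exporter-here>` placeholder might be replaced along these lines. This is only a sketch under stated assumptions: the exporter name (`otlp`), endpoint address, and receiver are examples, and your exporter's actual parameters are listed in its documentation linked above.

```yaml
exporters:
  otlp:
    endpoint: "my-backend.example.com:4317"  # assumed backend address; replace with yours
    tls:
      insecure: false
service:
  pipelines:
    traces:
      receivers: [zipkin]        # the collector receives traces pushed by Dapr
      exporters: [otlp]          # and forwards them to your backend
```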
## Set up Dapr to send traces to OTEL Collector
Set up a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.
1. Use this [`collector-config.yaml`](/docs/open-telemetry-collector/collector-config.yaml) file to create your own configuration.
1. Apply the configuration with:
```sh
kubectl apply -f collector-config.yaml
```
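In outline, the referenced `collector-config.yaml` is a Dapr `Configuration` resource that points tracing at the collector's Kubernetes service. The sketch below assumes the collector runs as a service named `otel-collector` in the `default` namespace and listens on the Zipkin-compatible port; adjust the address to match your deployment.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"   # sample all traces; lower this in production
    zipkin:
      # assumed collector service address
      endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v1/spans"
```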
## Deploy your app with tracing
Apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
@ -60,15 +66,18 @@ spec:
dapr.io/config: "appconfig"
```
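For context, the annotation sits in the pod template metadata of your deployment. A minimal sketch follows; the app name, image, and port are placeholders, not values from this guide.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"      # inject the Dapr sidecar
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "3000"
        dapr.io/config: "appconfig"  # reference the tracing configuration
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest  # placeholder image
        ports:
        - containerPort: 3000
```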
{{% alert title="Note" color="primary" %}}
If you are using one of the Dapr tutorials, such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator), the `appconfig` configuration is already configured, so no additional settings are needed.
{{% /alert %}}
You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
## View traces
Deploy and run some applications. Wait for the traces to propagate to your tracing backend and view them there.
## Related links
- Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability/README.md)
- Learn how to set [tracing configuration options]({{< ref "configuration-overview.md#tracing" >}})

View File

@ -10,10 +10,11 @@ Dapr uses the Open Telemetry (OTEL) and Zipkin protocols for distributed traces.
Most observability tools support OTEL, including:
- [Google Cloud Operations](https://cloud.google.com/products/operations)
- [AWS X-ray](https://aws.amazon.com/xray/)
- [New Relic](https://newrelic.com)
- [Azure Monitor](https://azure.microsoft.com/services/monitor/)
- [Datadog](https://www.datadoghq.com)
- [Instana](https://www.instana.com/)
- [Zipkin](https://zipkin.io/)
- [Jaeger](https://www.jaegertracing.io/)
- [SignalFX](https://www.signalfx.com/)
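Pointing Dapr at one of these OTEL-capable backends is done by enabling tracing in a `Configuration` resource. A minimal sketch is shown below; the endpoint address assumes a collector or agent listening locally on the standard OTLP gRPC port, which is an illustration rather than a required value.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"
    otel:
      endpointAddress: "localhost:4317"  # assumed local OTEL endpoint
      isSecure: false
      protocol: grpc
```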

View File

@ -24,7 +24,7 @@ Different authorization servers provide different application registration exper
* [Slack](https://api.slack.com/docs/oauth)
* [Twitter](http://apps.twitter.com/)
<!-- END_IGNORE -->
To configure the Dapr OAuth middleware, you'll need to collect the following information:
* Client ID (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
* Client secret (see [here](https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/))
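Once collected, the client ID and secret are typically supplied to the OAuth middleware through a `Component` definition. The sketch below is illustrative: the authorization, token, and redirect URLs are placeholders for your provider's actual endpoints.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: oauth2
spec:
  type: middleware.http.oauth2
  version: v1
  metadata:
  - name: clientId
    value: "<your client ID>"
  - name: clientSecret
    value: "<your client secret>"
  - name: authURL
    value: "https://example.com/oauth2/authorize"  # placeholder authorization endpoint
  - name: tokenURL
    value: "https://example.com/oauth2/token"      # placeholder token endpoint
  - name: redirectURL
    value: "http://localhost:8080"                 # placeholder redirect URL
  - name: authHeaderName
    value: "authorization"
```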

View File

@ -15,7 +15,7 @@ Breaking changes are defined as a change to any of the following that cause comp
- Default configuration value
- Command line argument
- Published metric
- Kubernetes resource template
- Publicly accessible API
- Publicly visible SDK interface, method, class, or attribute

View File

@ -94,7 +94,7 @@ There are some known cases where this might not properly work:
- Make sure the kube api server can reach the following webhooks services:
- [Sidecar Mutating Webhook Injector Service](https://github.com/dapr/dapr/blob/44235fe8e8799589bb393a3124d2564db2dd6885/charts/dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_deployment.yaml#L157) at port __4000__ that is served from the sidecar injector.
- [Resource Conversion Webhook Service](https://github.com/dapr/dapr/blob/44235fe8e8799589bb393a3124d2564db2dd6885/charts/dapr/charts/dapr_operator/templates/dapr_operator_service.yaml#L28) at port __19443__ that is served from the operator.
Check with your cluster administrators to set up ingress allow rules for the above ports, __4000__ and __19443__, in the cluster from the kube api servers.

View File

@ -17,7 +17,7 @@ This table is meant to help users understand the equivalent options for running
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` |
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration resource to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |

View File

@ -6,7 +6,7 @@ weight: 1000
description: "The basic spec for a Dapr component"
---
Dapr defines and registers components using a [resource specification](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes.
## Format
@ -31,7 +31,7 @@ spec:
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| apiVersion | Y | The version of the Dapr (and Kubernetes if applicable) API you are calling | `dapr.io/v1alpha1`
| kind | Y | The type of resource. For components it must always be `Component` | `Component`
| **metadata** | - | **Information about the component registration** |
| metadata.name | Y | The name of the component | `prod-statestore`
| metadata.namespace | N | The namespace for the component for hosting environments with namespaces | `myapp-namespace`
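Putting the fields above together, a component registration might look like the following sketch. The names match the examples in the table, while the `spec.type` and Redis address are illustrative assumptions.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: prod-statestore
  namespace: myapp-namespace
spec:
  type: state.redis   # illustrative component type
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"  # assumed Redis address
```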

View File

@ -6,7 +6,7 @@ description: "The basic spec for a Dapr Configuration resource"
weight: 5000
---
The `Configuration` is a Dapr resource that is used to configure the Dapr sidecar, control plane, and others.
## Sidecar format
@ -76,7 +76,7 @@ spec:
| tracing | N | Turns on tracing for an application. | [Learn more about the `tracing` configuration.]({{< ref "configuration-overview.md#tracing" >}}) |
## Control plane format
The `daprsystem` configuration file installed with Dapr applies global settings and is only set up when Dapr is deployed to Kubernetes.
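For reference, a `daprsystem` configuration enabling global mTLS settings might be sketched as follows; the certificate TTL and clock-skew values shown are illustrative, not mandated defaults.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
  namespace: default
spec:
  mtls:
    enabled: true
    workloadCertTTL: "24h"     # illustrative certificate lifetime
    allowedClockSkew: "15m"    # illustrative skew tolerance
```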