mirror of https://github.com/dapr/docs.git
Merge pull request #4734 from marcduiker/upgrade-hugo-docsy
Upgrade hugo docsy for 1.16
This commit is contained in: commit c74101d7c4
@@ -1,14 +1,14 @@
-name: Azure Static Web App v1.15
+name: Azure Static Web App v1.16

on:
  workflow_dispatch:
  push:
    branches:
-     - v1.15
+     - v1.16
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
-     - v1.15
+     - v1.16

jobs:
  build_and_deploy_job:
@@ -47,7 +47,7 @@ jobs:
          HUGO_ENV: production
          HUGO_VERSION: "0.147.9"
        with:
-         azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_15 }}
+         azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_16 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          skip_deploy_on_missing_secrets: true
          action: "upload"
@@ -66,6 +66,6 @@ jobs:
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v1
        with:
-         azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_15 }}
+         azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_16 }}
          skip_deploy_on_missing_secrets: true
          action: "close"
@@ -112,6 +112,8 @@ The code examples below leverage Dapr SDKs to invoke the output bindings endpoint

Here's an example of using a console app with top-level statements in .NET 6+:

```csharp
using System.Text;
using System.Threading.Tasks;
@@ -121,6 +121,8 @@ Below are code examples that leverage Dapr SDKs to demonstrate an input binding.

The following example demonstrates how to configure an input binding using ASP.NET Core controllers.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
@@ -15,20 +15,19 @@ into the features and concepts included with Dapr Jobs and the various SDKs. Dapr

## Job identity

All jobs are registered with a case-sensitive job name. These names are intended to be unique across all services
interfacing with the Dapr runtime. The name is used as an identifier when creating and modifying the job as well as
to indicate which job a triggered invocation is associated with.

-Only one job can be associated with a name at any given time. Any attempt to create a new job using the same name
-as an existing job will result in an overwrite of this existing job.
+Only one job can be associated with a name at any given time. By default, any attempt to create a new job using the same name as an existing job results in an error. However, if the `overwrite` flag is set to `true`, the new job overwrites the existing job with the same name.
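For illustration, scheduling a job that deliberately replaces an existing one could look like the following sketch against the jobs HTTP API (the job name and schedule are placeholders; the alpha `v1.0-alpha1` route and the default Dapr HTTP port 3500 are assumptions to verify for your setup):

```bash
# Without "overwrite": true, this request fails if "prod-db-backup" already exists.
curl -X POST http://localhost:3500/v1.0-alpha1/jobs/prod-db-backup \
  -H "Content-Type: application/json" \
  -d '{
        "schedule": "@every 1m",
        "overwrite": true
      }'
```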

## Scheduling Jobs
A job can be scheduled using any of the following mechanisms:
- Intervals using Cron expressions, duration values, or period expressions
- Specific dates and times

For all time-based schedules, if a timestamp is provided with a time zone via the RFC3339 specification, that
time zone is used. When not provided, the time zone used by the server running Dapr is used.
In other words, do **not** assume that times run in UTC time zone, unless otherwise specified when scheduling
the job.
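For example (hypothetical timestamps), a due time of:

```json
{ "dueTime": "2025-06-01T09:00:00-04:00" }
```

triggers at 09:00 in the UTC-4 offset, while `"2025-06-01T09:00:00"` with no offset is interpreted in the time zone of the server running Dapr.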
@@ -48,7 +47,7 @@ fields spanning the values specified in the table below:

### Schedule using a duration value
You can schedule jobs using [a Go duration string](https://pkg.go.dev/time#ParseDuration), in which
a string consists of a (possibly) signed sequence of decimal numbers, each with an optional fraction and a unit suffix.
Valid time units are `"ns"`, `"us"`, `"ms"`, `"s"`, `"m"`, or `"h"`.

#### Example 1
@@ -70,7 +69,7 @@ The following period expressions are supported.
| @hourly | Run once an hour at the beginning of the hour | 0 0 * * * * |

### Schedule using a specific date/time
A job can also be scheduled to run at a particular point in time by providing a date using the
[RFC3339 specification](https://www.rfc-editor.org/rfc/rfc3339).

#### Example 1
@@ -107,7 +106,7 @@ In this setup, you have full control over how triggered jobs are received and processed
through this gRPC method.

### HTTP
If a gRPC server isn't registered with Dapr when the application starts up, Dapr instead triggers jobs by making a
POST request to the endpoint `/job/<job-name>`. The body includes the following information about the job:
- `Schedule`: When the job triggers occur
- `RepeatCount`: An optional value indicating how often the job should repeat
@@ -115,7 +114,9 @@ POST request to the endpoint `/job/<job-name>`. The body includes the following information about the job:
  or the not-before time from which the schedule should take effect
- `Ttl`: An optional value indicating when the job should expire
- `Payload`: A collection of bytes containing data originally stored when the job was scheduled
- `Overwrite`: A flag to allow the requested job to overwrite an existing job with the same name, if it already exists.
- `FailurePolicy`: An optional failure policy for the job.

The `DueTime` and `Ttl` fields contain an RFC3339 timestamp that reflects the time zone provided when the job was
originally scheduled. If no time zone was provided, these values indicate the time zone used by the server running
Dapr.
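Putting the fields above together, the body of a triggered-job POST might look roughly like this sketch (the values, the exact field casing, and the payload shown as base64-encoded bytes are assumptions for illustration):

```json
{
  "Schedule": "@every 1m",
  "RepeatCount": 5,
  "DueTime": "2025-06-01T09:00:00Z",
  "Ttl": "2025-06-02T09:00:00Z",
  "Payload": "eyJrZXkiOiJ2YWx1ZSJ9"
}
```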
@@ -53,11 +53,11 @@ Dapr's jobs API ensures the tasks represented in these scenarios are performed

## Features

-The jobs API provides several features to make it easy for you to schedule jobs.
+The main functionality of the Jobs API allows you to create, retrieve, and delete scheduled jobs. By default, when you create a job with a name that already exists, the operation fails unless you explicitly set the `overwrite` flag to `true`. This ensures that existing jobs are not accidentally modified or overwritten.

### Schedule jobs across multiple replicas

-When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
+When you create a job, it does not replace an existing job with the same name, unless you explicitly set the `overwrite` flag. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.

The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
@@ -624,6 +624,29 @@ await context.CallActivityAsync("PostResults", sum);

{{< /tabpane >}}

With the release of 1.16, it's even easier to process workflow activities in parallel while putting an upper cap on
concurrency by using the following extension methods on the `WorkflowContext`:

{{% tabpane %}}

{{% tab header=".NET" %}}
<!-- .NET -->
```csharp
// Revisiting the earlier example...
// Get a list of work items to process
var workBatch = await context.CallActivityAsync<object[]>("GetWorkBatch", null);

// Process deterministically in parallel with an upper cap of 5 activities at a time
var results = await context.ProcessInParallelAsync(workBatch, workItem => context.CallActivityAsync<int>("ProcessWorkItem", workItem), maxConcurrency: 5);

var sum = results.Sum(t => t);
await context.CallActivityAsync("PostResults", sum);
```

{{% /tab %}}

{{% /tabpane %}}

Limiting the degree of concurrency in this way can be useful for limiting contention against shared resources. For example, if the activities need to call into external resources that have their own concurrency limits, like databases or external APIs, it can be useful to ensure that no more than a specified number of activities call that resource concurrently.

## Async HTTP APIs
@@ -77,6 +77,14 @@ kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-
Persistent Volume Claims are not deleted automatically with an [uninstall]({{% ref dapr-uninstall.md %}}). This is a deliberate safety measure to prevent accidental data loss.
{{% /alert %}}

{{% alert title="Note" color="primary" %}}
For storage providers that do NOT support dynamic volume expansion: If Dapr has ever been installed on the cluster before, the Scheduler's Persistent Volume Claims must be manually uninstalled in order for new ones with increased storage size to be created.
```bash
kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2
```
Persistent Volume Claims are not deleted automatically with an [uninstall]({{< ref dapr-uninstall.md >}}). This is a deliberate safety measure to prevent accidental data loss.
{{% /alert %}}

#### Increase existing Scheduler Storage Size

{{% alert title="Warning" color="warning" %}}
@@ -37,6 +37,8 @@ Parameter | Description
`dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601.
`repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration.
`ttl` | An optional time to live or expiration of the job. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from job creation time), or non-repeating ISO8601.
`overwrite` | A boolean value to specify if the job can overwrite an existing one with the same name. Default value is `false`.
`failure_policy` | An optional failure policy for the job. Details of the format are below. If not set, the job is retried up to 3 times with a delay of 1 second between retries.

#### schedule
`schedule` accepts both systemd timer-style cron expressions, as well as human readable '@' prefixed period strings, as defined below.
@@ -62,6 +64,39 @@ Entry | Description | Equivalent
@daily (or @midnight) | Run once a day, midnight | 0 0 0 * * *
@hourly | Run once an hour, beginning of hour | 0 0 * * * *

#### failure_policy

`failure_policy` specifies how the job should handle failures.

It can be set to `constant` or `drop`.
- The `constant` policy retries the job constantly with the following configuration options.
  - `max_retries` configures how many times the job should be retried. Defaults to retrying indefinitely. `nil` denotes unlimited retries, while `0` means the request will not be retried.
  - `interval` configures the delay between retries. Defaults to retrying immediately. Valid values are of the form `200ms`, `15s`, `2m`, etc.
- The `drop` policy drops the job after the first failure, without retrying.

##### Example 1

```json
{
  //...
  "failure_policy": {
    "constant": {
      "max_retries": 3,
      "interval": "10s"
    }
  }
}
```

##### Example 2

```json
{
  //...
  "failure_policy": {
    "drop": {}
  }
}
```

### Request body
@@ -82,8 +82,13 @@ This component supports **output binding** with the following operations:

- `create` : [Create file](#create-file)
- `get` : [Get file](#get-file)
- `bulkGet` : [Bulk get objects](#bulk-get-objects)
- `delete` : [Delete file](#delete-file)
- `list`: [List file](#list-files)
- `copy`: [Copy file](#copy-files)
- `move`: [Move file](#move-files)
- `rename`: [Rename file](#rename-files)

### Create file
@@ -216,6 +221,72 @@ The metadata parameters are:

The response body contains the value stored in the object.

### Bulk get objects

To perform a bulk get operation that retrieves all bucket files at once, invoke the GCP bucket binding with a `POST` method and the following JSON body:

```json
{
  "operation": "bulkGet"
}
```

The metadata parameters are:

- `encodeBase64` - (optional) configuration to base64-encode the file content before returning it, for all files

#### Example

{{% tabpane text=true %}}

{{% tab header="Windows" %}}
```bash
curl -d '{ \"operation\": \"bulkGet\"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab header="Linux" %}}
```bash
curl -d '{ "operation": "bulkGet"}' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% /tabpane %}}

#### Response

The response body contains an array of objects, where each object represents a file in the bucket with the following structure:

```json
[
  {
    "name": "file1.txt",
    "data": "content of file1",
    "attrs": {
      "bucket": "mybucket",
      "name": "file1.txt",
      "size": 1234,
      ...
    }
  },
  {
    "name": "file2.txt",
    "data": "content of file2",
    "attrs": {
      "bucket": "mybucket",
      "name": "file2.txt",
      "size": 5678,
      ...
    }
  }
]
```

Each object in the array contains:
- `name`: The name of the file
- `data`: The content of the file
- `attrs`: Object attributes from GCP Storage including metadata like creation time, size, content type, etc.

### Delete object
@@ -262,7 +333,7 @@ An HTTP 204 (No Content) and empty body will be returned if successful.

### List objects

-To perform a list object operation, invoke the S3 binding with a `POST` method and the following JSON body:
+To perform a list object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:

```json
{
@@ -321,6 +392,58 @@ The list of objects will be returned as JSON array in the following form:
  }
]
```

### Copy objects

To perform a copy object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:

```json
{
  "operation": "copy",
  "metadata": {
    "destinationBucket": "destination-bucket-name"
  }
}
```

The metadata parameters are:

- `destinationBucket` - the name of the destination bucket (required)
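For example, following the same curl pattern as the other operations (port and binding name are placeholders):

```bash
curl -d '{ "operation": "copy", "metadata": { "destinationBucket": "destination-bucket-name" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```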

### Move objects

To perform a move object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:

```json
{
  "operation": "move",
  "metadata": {
    "destinationBucket": "destination-bucket-name"
  }
}
```

The metadata parameters are:

- `destinationBucket` - the name of the destination bucket (required)

### Rename objects

To perform a rename object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:

```json
{
  "operation": "rename",
  "metadata": {
    "newName": "object-new-name"
  }
}
```

The metadata parameters are:

- `newName` - the new name of the object (required)

## Related links

- [Basic schema for a Dapr component]({{% ref component-schema %}})
@@ -41,6 +41,24 @@ The following metadata options are **required** to authenticate using a PostgreSQL
|--------|:--------:|---------|---------|
| `connectionString` | Y | The connection string for the PostgreSQL database. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string. | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"` |

#### Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `host` | Y | The host name or IP address of the PostgreSQL server | `"localhost"` |
| `hostaddr` | N | The IP address of the PostgreSQL server (alternative to host) | `"127.0.0.1"` |
| `port` | Y | The port number of the PostgreSQL server | `"5432"` |
| `database` | Y | The name of the database to connect to | `"my_db"` |
| `user` | Y | The PostgreSQL user to connect as | `"postgres"` |
| `password` | Y | The password for the PostgreSQL user | `"example"` |
| `sslRootCert` | N | Path to the SSL root certificate file | `"/path/to/ca.crt"` |

{{% alert title="Note" color="primary" %}}
When using individual connection parameters, these will override the ones present in the `connectionString`.
{{% /alert %}}

### Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials ("service principal") and Managed Identity.
@@ -39,6 +39,8 @@ spec:
  # value: "http://127.0.0.1:10001"
  # - name: visibilityTimeout
  #   value: "30s"
  # - name: initialVisibilityDelay
  #   value: "30s"
  # - name: direction
  #   value: "input, output"
```
@@ -59,7 +61,8 @@ The above example uses secrets as plain strings.
| `decodeBase64` | N | Input | Configuration to decode base64 content received from the Storage Queue into a string. Defaults to `false` | `true`, `false` |
| `encodeBase64` | N | Output | If enabled base64 encodes the data payload before uploading to Azure storage queues. Default `false`. | `true`, `false` |
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10001"` or `"https://accountName.queue.example.com"` |
| `visibilityTimeout` | N | Input | Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds. | `"100s"` |
+| `initialVisibilityDelay` | N | Output | Sets a delay before a message becomes visible in the queue after being added. It can also be specified per message by setting the `initialVisibilityDelay` property in the invocation request's metadata. Defaults to 0 seconds. | `"30s"` |
| `direction` | N | Input/Output | Direction of the binding. | `"input"`, `"output"`, `"input, output"` |

### Microsoft Entra ID authentication
@@ -98,6 +101,30 @@ curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
}'
```

## Specifying an initial visibility delay per message

An initial visibility delay can be defined at the queue level or at the message level. The value defined at the message level overwrites any value set at the queue level.

To set an initial visibility delay value at the message level, use the `metadata` section in the request body during the binding invocation.

The field name is `initialVisibilityDelay`.

Example:

```shell
curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "initialVisibilityDelay": "30"
        },
        "operation": "create"
      }'
```

## Related links

- [Basic schema for a Dapr component]({{% ref component-schema %}})
@@ -27,6 +27,21 @@ spec:
  # Name of the table which holds configuration information
  - name: table
    value: "[your_configuration_table_name]"
  # Individual connection parameters - can be used instead to override connectionString parameters
  #- name: host
  #  value: "localhost"
  #- name: hostaddr
  #  value: "127.0.0.1"
  #- name: port
  #  value: "5432"
  #- name: database
  #  value: "my_db"
  #- name: user
  #  value: "postgres"
  #- name: password
  #  value: "example"
  #- name: sslRootCert
  #  value: "/path/to/ca.crt"
  # Timeout for database operations, in seconds (optional)
  #- name: timeoutInSeconds
  #  value: 20
@@ -67,6 +82,24 @@ The following metadata options are **required** to authenticate using a PostgreSQL
|--------|:--------:|---------|---------|
| `connectionString` | Y | The connection string for the PostgreSQL database. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string. | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"` |

#### Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `host` | Y | The host name or IP address of the PostgreSQL server | `"localhost"` |
| `hostaddr` | N | The IP address of the PostgreSQL server (alternative to host) | `"127.0.0.1"` |
| `port` | Y | The port number of the PostgreSQL server | `"5432"` |
| `database` | Y | The name of the database to connect to | `"my_db"` |
| `user` | Y | The PostgreSQL user to connect as | `"postgres"` |
| `password` | Y | The password for the PostgreSQL user | `"example"` |
| `sslRootCert` | N | Path to the SSL root certificate file | `"/path/to/ca.crt"` |

{{% alert title="Note" color="primary" %}}
When using individual connection parameters, these will override the ones present in the `connectionString`.
{{% /alert %}}

### Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials ("service principal") and Managed Identity.
@@ -0,0 +1,42 @@
---
type: docs
title: "GoogleAI"
linkTitle: "GoogleAI"
description: Detailed information on the GoogleAI conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: googleai
spec:
  type: conversation.googleai
  metadata:
  - name: key
    value: mykey
  - name: model
    value: gemini-1.5-flash
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for GoogleAI. | `mykey` |
| `model` | N | The GoogleAI LLM to use. Defaults to `gemini-1.5-flash`. | `gemini-2.0-flash` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@@ -0,0 +1,39 @@
---
type: docs
title: "Ollama"
linkTitle: "Ollama"
description: Detailed information on the Ollama conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ollama
spec:
  type: conversation.ollama
  metadata:
  - name: model
    value: llama3.2:latest
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `model` | N | The Ollama LLM to use. Defaults to `llama3.2:latest`. | `phi4:latest` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@@ -21,6 +21,8 @@ spec:
    value: mykey
  - name: model
    value: gpt-4-turbo
  - name: endpoint
    value: 'https://api.openai.com/v1'
  - name: cacheTTL
    value: 10m
```
@@ -35,6 +37,7 @@ The above example uses secrets as plain strings.
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for OpenAI. | `mykey` |
| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` |
| `endpoint` | N | Custom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. | `https://api.openai.com/v1` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links
@@ -59,6 +59,8 @@ spec:
    value: 2097152
  - name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
    value: 512
  - name: consumerGroupRebalanceStrategy # Optional. Advanced setting. The strategy to use for consumer group rebalancing.
    value: sticky
  - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
    value: http://localhost:8081
  - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
@@ -69,6 +71,8 @@ spec:
    value: true
  - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
    value: 5m
  - name: useAvroJson # Optional. Enables Avro JSON schema for serialization as opposed to Standard JSON default. Only applicable when the subscription uses valueSchemaType=Avro
    value: "true"
  - name: escapeHeaders # Optional.
    value: false
@@ -115,6 +119,7 @@ spec:
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` |
| schemaCachingEnabled | N | When using Schema Registry Avro serialization/deserialization. Enables caching for schemas. Default is `true` | `true` |
| schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | `5m` |
| useAvroJson | N | Enables Avro JSON schema for serialization as opposed to Standard JSON default. Only applicable when the subscription uses valueSchemaType=Avro. Default is `"false"` | `"true"` |
| clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to `9m`. | `"4m"` |
| clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | `"4m"` |
| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is `1`, as `0` causes the consumer to spin when no messages are available. Equivalent to the JVM's `fetch.min.bytes`. | `"2"` |
@@ -122,6 +127,7 @@ spec:
| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to `256`. | `"512"` |
| heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` |
| sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` |
| consumerGroupRebalanceStrategy | N | The strategy to use for consumer group rebalancing. Supported values: `range`, `sticky`, `roundrobin`. Default is `range` | `"sticky"` |
| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |

The `secretKeyRef` above is referencing a [kubernetes secrets store]({{% ref kubernetes-secret-store.md %}}) to access the tls information. Visit [here]({{% ref setup-secret-store.md %}}) to learn more about how to configure a secret store component.
@@ -583,7 +589,12 @@ You can configure pub/sub to publish or consume data encoded using Avro binary

{{% alert title="Important" color="warning" %}}
Currently, only message value serialization/deserialization is supported. Since cloud events are not supported, the `rawPayload=true` metadata must be passed when publishing Avro messages.

Please note that `rawPayload=true` should NOT be set for consumers, as the message value will be wrapped into a CloudEvent and base64-encoded. Leaving `rawPayload` as default (i.e. `false`) will send the Avro-decoded message to the application as a JSON payload.

When setting the `useAvroJson` component metadata to `true`, the inbound/outbound Avro binary is converted into/from Avro JSON encoding.
This can be preferable when accurate type mapping is desirable.
The default is standard JSON, which is typically easier to bind to a native type in an application.
{{% /alert %}}
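As a sketch of the first point, publishing an Avro message could look like the following (pubsub name, topic, and payload are placeholders; passing `valueSchemaType` as request metadata is an assumption to verify against your Dapr version):

```bash
# rawPayload=true is required because CloudEvents are not supported for Avro payloads
curl -X POST \
  "http://localhost:3500/v1.0/publish/my-kafka-pubsub/my-topic?metadata.rawPayload=true&metadata.valueSchemaType=Avro" \
  -H "Content-Type: application/json" \
  -d '{"order_number": "345", "created_date": 1704810325100}'
```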

When configuring the Kafka pub/sub component metadata, you must define:
@@ -671,7 +682,25 @@ app.include_router(router)

{{< /tabpane >}}

### Overriding default consumer group rebalancing
In Kafka, rebalancing strategies determine how partitions are assigned to consumers within a consumer group. The default strategy is "range", but "roundrobin" and "sticky" are also available.
- `Range`:
  Partitions are assigned to consumers based on their lexicographical order.
  If you have three partitions (0, 1, 2) and two consumers (A, B), consumer A might get partitions 0 and 1, while consumer B gets partition 2.
- `RoundRobin`:
  Partitions are assigned to consumers in a round-robin fashion.
  With the same example above, consumer A might get partitions 0 and 2, while consumer B gets partition 1.
- `Sticky`:
  This strategy aims to preserve previous assignments as much as possible while still maintaining a balanced distribution.
  If a consumer leaves or joins the group, only the affected partitions are reassigned, minimizing disruption.

#### Choosing a strategy
- `Range`:
  Simple to understand and implement, but can lead to uneven distribution if partition sizes vary significantly.
- `RoundRobin`:
  Provides a good balance in many cases, but might not be optimal if message keys are unevenly distributed.
- `Sticky`:
  Generally preferred for its ability to minimize disruption during rebalances, especially when dealing with a large number of partitions or frequent consumer group changes.
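To override the default, set the `consumerGroupRebalanceStrategy` field from the metadata table in the component spec, for example (a minimal sketch):

```yaml
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: consumerGroupRebalanceStrategy
    value: "sticky"
```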

## Create a Kafka instance
@@ -91,6 +91,8 @@ The above example uses secrets as plain strings.
| keys | N | A comma delimited string containing names of [Pulsar session keys](https://pulsar.apache.org/docs/3.0.x/security-encryption/#how-it-works-in-pulsar). Used in conjunction with `publicKey` for publisher encryption |
| processMode | N | Enable processing multiple messages at once. Default: `"async"` | `"async"`, `"sync"`|
| subscribeType | N | Pulsar supports four kinds of [subscription types](https://pulsar.apache.org/docs/3.0.x/concepts-messaging/#subscription-types). Default: `"shared"` | `"shared"`, `"exclusive"`, `"failover"`, `"key_shared"`|
| subscribeInitialPosition | N | The initial position to which the cursor is set when consuming starts. Default: `"latest"` | `"latest"`, `"earliest"` |
| subscribeMode | N | Subscription mode indicates the cursor persistence; a durable subscription retains messages and persists the current position. Default: `"durable"` | `"durable"`, `"non_durable"` |
| partitionKey | N | Sets the key of the message for routing policy. Default: `""` | |
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `100` | `10` |
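For instance, a component that consumes with a key-shared subscription and a lower handler cap could set (a minimal sketch using fields from the table above):

```yaml
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  - name: subscribeType
    value: "key_shared"
  - name: maxConcurrentHandlers
    value: "10"
```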
@@ -92,6 +92,10 @@ For example, if installing using the example above, the Cassandra DNS would be:

## Apache Ignite

[Apache Ignite](https://ignite.apache.org/)'s integration with Cassandra as a caching layer is not supported by this component.

## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- Read [this guide]({{% ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" %}}) for instructions on configuring state store components
@@ -0,0 +1,159 @@
---
type: docs
title: "Coherence"
linkTitle: "Coherence"
description: Detailed information on the Coherence state store component
aliases:
  - "/operations/components/setup-state-store/supported-state-stores/setup-coherence/"
---

## Component format

To set up a Coherence state store, create a component of type `state.coherence`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.coherence
  version: v1
  metadata:
  - name: serverAddress
    value: <REPLACE-WITH-GRPC-PROXY-HOST-AND-PORT> # Required. Example: "my-cluster-grpc:1408"
  - name: tlsEnabled
    value: <REPLACE-WITH-BOOLEAN> # Optional
  - name: tlsClientCertPath
    value: <REPLACE-WITH-PATH> # Optional
  - name: tlsClientKey
    value: <REPLACE-WITH-PATH> # Optional
  - name: tlsCertsPath
    value: <REPLACE-WITH-PATH> # Optional
  - name: ignoreInvalidCerts
    value: <REPLACE-WITH-BOOLEAN> # Optional
  - name: scopeName
    value: <REPLACE-WITH-SCOPE> # Optional
  - name: requestTimeout
    value: <REPLACE-WITH-REQUEST-TIMEOUT> # Optional
  - name: nearCacheTTL
    value: <REPLACE-WITH-NEAR-CACHE-TTL> # Optional
  - name: nearCacheUnits
    value: <REPLACE-WITH-NEAR-CACHE-UNITS> # Optional
  - name: nearCacheMemory
    value: <REPLACE-WITH-NEAR-CACHE-MEMORY> # Optional
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| serverAddress | Y | Comma delimited gRPC proxy endpoints | `"my-cluster-grpc:1408"` |
| tlsEnabled | N | Indicates if TLS should be enabled. Defaults to false | `"true"` |
| tlsClientCertPath | N | Client certificate path for Coherence. Defaults to "". Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). | `"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."` |
| tlsClientKey | N | Client key for Coherence. Defaults to "". Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). | `"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."` |
| tlsCertsPath | N | Additional certificates for Coherence. Defaults to "". Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). | `"-----BEGIN CERTIFICATE-----\nMIIC9TCCA..."` |
| ignoreInvalidCerts | N | Indicates whether to ignore self-signed certificates; for testing only, not to be used in production. Defaults to false | `"false"` |
| scopeName | N | A scope name to use for the internal cache. Defaults to "" | `"my-scope"` |
| requestTimeout | N | Timeout for calls to the cluster. Defaults to "30s" | `"15s"` |
| nearCacheTTL | N | If non-zero, a near cache is used and the TTL of the near cache is this value. Defaults to 0s | `"60s"` |
| nearCacheUnits | N | If non-zero, a near cache is used and the maximum size of the near cache is this value in units. Defaults to 0 | `"1000"` |
| nearCacheMemory | N | If non-zero, a near cache is used and the maximum size of the near cache is this value in bytes. Defaults to 0 | `"4096"` |

### About Using Near Cache TTL

The Coherence state store allows you to specify a near cache to cache frequently accessed data when using the Dapr client.
When you access data using `Get(ctx context.Context, req *GetRequest)`, returned entries are stored in the near cache and
subsequent data access for keys in the near cache is almost instant, whereas without a near cache each `Get()` operation results in a network call.

When using the near cache option, Coherence automatically adds a MapListener to the internal cache which listens on all cache events and updates or invalidates entries in the near cache that have been changed or removed on the server.

To manage the amount of memory used by the near cache, the following options are supported when creating one:

- nearCacheTTL – objects expire after this time in the near cache, for example 5 minutes
- nearCacheUnits – maximum number of cache entries in the near cache
- nearCacheMemory – maximum amount of memory used by cache entries

You can specify either High-Units or Memory, and in either case, optionally, a TTL.

The minimum expiry time for a near cache entry is 1/4 second. This is to ensure that expiry of elements is as
efficient as possible. You will receive an error if you try to set the TTL to a lower value.
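For example, a near cache bounded to 1000 entries whose entries also expire after 60 seconds could be configured with the following metadata entries (values are illustrative):

```yaml
  - name: nearCacheTTL
    value: "60s"   # entries expire from the near cache after 60 seconds
  - name: nearCacheUnits
    value: "1000"  # keep at most 1000 entries in the near cache
```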

## Setup Coherence

{{< tabs "Self-Hosted" "Kubernetes" >}}

{{% codetab %}}
Run Coherence locally using Docker:

```
docker run -d -p 1408:1408 -p 30000:30000 ghcr.io/oracle/coherence-ce:25.03.1
```

You can then interact with the server using `localhost:1408`.
{{% /codetab %}}

{{% codetab %}}
The easiest way to install Coherence on Kubernetes is by using the [Coherence Operator](https://docs.coherence.community/coherence-operator/docs/latest/docs/about/03_quickstart):

**Install the Operator:**

```
kubectl apply -f https://github.com/oracle/coherence-operator/releases/download/v3.5.2/coherence-operator.yaml
```

> Note: Change v3.5.2 to the latest release.

This installs the Coherence operator into the `coherence` namespace.

**Create a Coherence cluster yaml `my-cluster.yaml`:**

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: my-cluster
spec:
  coherence:
    management:
      enabled: true
  ports:
    - name: management
    - name: grpc
      port: 1408
```

**Apply the yaml:**

```bash
kubectl apply -f my-cluster.yaml
```

To interact with Coherence, find the service with `kubectl get svc` and look for the service named '*grpc'.

```bash
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                               AGE
kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP                                               9m
my-cluster-grpc         ClusterIP   10.96.225.43   <none>        1408/TCP                                              7m3s
my-cluster-management   ClusterIP   10.96.41.6     <none>        30000/TCP                                             7m3s
my-cluster-sts          ClusterIP   None           <none>        7/TCP,7575/TCP,7574/TCP,6676/TCP,30000/TCP,1408/TCP   7m3s
my-cluster-wka          ClusterIP   None           <none>        7/TCP,7575/TCP,7574/TCP,6676/TCP                      7m3s
```

For example, if installing using the example above, the Coherence host address would be:

`my-cluster-grpc`
{{% /codetab %}}

{{< /tabs >}}

## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})
- [Coherence CE on GitHub](https://github.com/oracle/coherence)
- [Coherence Community - All things Coherence](https://coherence.community/)
@@ -31,6 +31,21 @@ spec:
  # Connection string
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Individual connection parameters - can be used instead to override connectionString parameters
  #- name: host
  #  value: "localhost"
  #- name: hostaddr
  #  value: "127.0.0.1"
  #- name: port
  #  value: "5432"
  #- name: database
  #  value: "my_db"
  #- name: user
  #  value: "postgres"
  #- name: password
  #  value: "example"
  #- name: sslRootCert
  #  value: "/path/to/ca.crt"
  # Timeout for database operations, as a Go duration or number of seconds (optional)
  #- name: timeout
  #  value: 20
@@ -71,6 +86,24 @@ The following metadata options are **required** to authenticate using a PostgreSQL
|--------|:--------:|---------|---------|
| `connectionString` | Y | The connection string for the PostgreSQL database. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string. | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"` |

#### Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `host` | Y | The host name or IP address of the PostgreSQL server | `"localhost"` |
| `hostaddr` | N | The IP address of the PostgreSQL server (alternative to host) | `"127.0.0.1"` |
| `port` | Y | The port number of the PostgreSQL server | `"5432"` |
| `database` | Y | The name of the database to connect to | `"my_db"` |
| `user` | Y | The PostgreSQL user to connect as | `"postgres"` |
| `password` | Y | The password for the PostgreSQL user | `"example"` |
| `sslRootCert` | N | Path to the SSL root certificate file | `"/path/to/ca.crt"` |

{{% alert title="Note" color="primary" %}}
When using individual connection parameters, these will override the ones present in the `connectionString`.
{{% /alert %}}

### Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials ("service principal") and Managed Identity.
@@ -31,6 +31,21 @@ spec:
  # Connection string
  - name: connectionString
    value: "<CONNECTION STRING>"
  # Individual connection parameters - can be used instead to override connectionString parameters
  #- name: host
  #  value: "localhost"
  #- name: hostaddr
  #  value: "127.0.0.1"
  #- name: port
  #  value: "5432"
  #- name: database
  #  value: "my_db"
  #- name: user
  #  value: "postgres"
  #- name: password
  #  value: "example"
  #- name: sslRootCert
  #  value: "/path/to/ca.crt"
  # Timeout for database operations, as a Go duration or number of seconds (optional)
  #- name: timeout
  #  value: 20
@@ -71,6 +86,24 @@ The following metadata options are **required** to authenticate using a PostgreSQL
|--------|:--------:|---------|---------|
| `connectionString` | Y | The connection string for the PostgreSQL database. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string. | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"` |

#### Authenticate using individual connection parameters

In addition to using a connection string, you can optionally specify individual connection parameters. These parameters are equivalent to the standard PostgreSQL connection parameters.

| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `host` | Y | The host name or IP address of the PostgreSQL server | `"localhost"` |
| `hostaddr` | N | The IP address of the PostgreSQL server (alternative to host) | `"127.0.0.1"` |
| `port` | Y | The port number of the PostgreSQL server | `"5432"` |
| `database` | Y | The name of the database to connect to | `"my_db"` |
| `user` | Y | The PostgreSQL user to connect as | `"postgres"` |
| `password` | Y | The password for the PostgreSQL user | `"example"` |
| `sslRootCert` | N | Path to the SSL root certificate file | `"/path/to/ca.crt"` |

{{% alert title="Note" color="primary" %}}
When using individual connection parameters, these will override the ones present in the `connectionString`.
{{% /alert %}}

### Authenticate using Microsoft Entra ID

Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials ("service principal") and Managed Identity.
@@ -23,8 +23,13 @@
  state: Alpha
  version: v1
  since: "1.15"
- component: Local echo
  link: local-echo
  state: Stable
- component: Ollama
  link: ollama
  state: Alpha
  version: v1
-  since: "1.15"
+  since: "1.16"
- component: GoogleAI
  link: googleai
  state: Alpha
  version: v1
  since: "1.16"
@@ -9,6 +9,17 @@
    etag: true
    ttl: true
    query: false
- component: Coherence
  link: setup-coherence
  state: Alpha
  version: v1
  since: "1.16"
  features:
    crud: true
    transactions: false
    etag: false
    ttl: true
    query: false
- component: Object Storage
  link: setup-oci-objectstorage
  state: Alpha
@@ -0,0 +1 @@
{{- if .Get "short" }}1.15{{ else if .Get "long" }}1.15.5{{ else if .Get "cli" }}1.15.1{{ else }}1.15.1{{ end -}}

hugo.yaml
@@ -1,4 +1,4 @@
-baseURL: https://docs.dapr.io
+baseURL: https://v1-16.docs.dapr.io
title: Dapr Docs

# Output directory for generated site
@@ -120,7 +120,7 @@ params:

# Menu title if your navbar has a versions selector to access old versions of your site.
# This menu appears only if you have at least one [params.versions] set.
-version_menu: v1.15 (latest)
+version_menu: v1.16 (preview)

# Flag used in the "version-banner" partial to decide whether to display a
# banner on every page indicating that this is an archived version of the docs.
@@ -130,7 +130,7 @@ params:
# The version number for the version of the docs represented in this doc set.
# Used in the "version-banner" partial to display a version number for the
# current doc set.
-version: v1.15
+version: v1.16

# A link to latest version of the docs. Used in the "version-banner" partial to
# point people to the main doc site.
@@ -147,13 +147,13 @@ params:

# Uncomment this if your GitHub repo does not have "main" as the default branch,
# or specify a new value if you want to reference another branch in your GitHub links
-github_branch: v1.15
+github_branch: v1.16

versions:
- version: v1.16 (preview)
-  url: https://v1-16.docs.dapr.io
+  url: "#"
- version: v1.15 (latest)
-  url: #
+  url: "https://docs.dapr.io"
- version: v1.14
  url: https://v1-14.docs.dapr.io
- version: v1.13