Fix typos in docs (#178)

* Typos for API reference's state management

* Fix typos for docs
Quy Le 2019-10-21 11:36:14 +07:00 committed by Yaron Schneider
parent 1e174e20bc
commit 98d2bcf1db
22 changed files with 49 additions and 49 deletions


@@ -29,7 +29,7 @@ Note that `profile-port` is not required, and Dapr will pick an available port.
 ## Debug a profiling session
-After profiloing is enabled, we can start a profiling session to investigate what's going on with the Dapr runtime.
+After profiling is enabled, we can start a profiling session to investigate what's going on with the Dapr runtime.
 ### Kubernetes
@@ -101,7 +101,7 @@ APP ID DAPR PORT APP PORT COMMAND AGE CREATED PID
 node-subscriber 3500 3000 node app.js 12s 2019-09-09 15:11.24 896
 ```
-Grab the DAPR PORT, and if profiling has been enabled as desribed above, you can now start using `pprof` to profile Dapr.
+Grab the DAPR PORT, and if profiling has been enabled as described above, you can now start using `pprof` to profile Dapr.
 Look at the Kubernetes examples above for some useful commands to profile Dapr.
 More info on pprof can be found [here](https://github.com/google/pprof).


@@ -98,7 +98,7 @@ docker run -d -p 9411:9411 openzipkin/zipkin
 3. Launch Dapr with the `--config` param:
 ```
-dapr run --app-id mynode --app-port 3000 --config ./cofig.yaml node app.js
+dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js
 ```
 ## Tracing Configuration


@@ -14,11 +14,11 @@ This directory contains various Dapr concepts. The goal of these documents is to
 * **Components**
-Dapr uses a modular design, in which functionalities are grouped and delivered by a number of *components*, such as [pub/sub](./publish-subscribe-messaging/README.md) and [secrets](./components/secrets.md). Many of the components are pluggable so that you can swap out the default implemenation with your custom implementations.
+Dapr uses a modular design, in which functionalities are grouped and delivered by a number of *components*, such as [pub/sub](./publish-subscribe-messaging/README.md) and [secrets](./components/secrets.md). Many of the components are pluggable so that you can swap out the default implementation with your custom implementations.
 * [**Distributed Tracing**](./distributed-tracing/README.md)
-Distirbuted tracing collects and aggregates trace events by transactions. It allows you to trace the entire call chain across multiple services. Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for distributed tracing and metrics collection.
+Distributed tracing collects and aggregates trace events by transactions. It allows you to trace the entire call chain across multiple services. Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for distributed tracing and metrics collection.
 * [**Publish/Subscribe Messaging**](./publish-subscribe-messaging/README.md)


@@ -1,6 +1,6 @@
 # Building blocks
-Dapr consists of a set of building blocks that can be called from any programming language through Dapr HTTP or gRPC APIs. These building blocks address common challenges in building resilient, microservices applications. And they catpure and share best practices and patterns that empower distributed application developers.
+Dapr consists of a set of building blocks that can be called from any programming language through Dapr HTTP or gRPC APIs. These building blocks address common challenges in building resilient, microservices applications. And they capture and share best practices and patterns that empower distributed application developers.
 ![Dapr building blocks](../../images/overview.png)


@@ -37,7 +37,7 @@ Every binding has its own unique set of properties. Click the name link to see t
 ## Input Bindings
-Input bindings are used to trigger your application when an event from an external resource has occured.
+Input bindings are used to trigger your application when an event from an external resource has occurred.
 An optional payload and metadata might be sent with the request.
 In order to receive events from an input binding:


@@ -1,4 +1,4 @@
-# Publish/Subcribe Messaging
+# Publish/Subscribe Messaging
 Dapr enables developers to design their application with a pub/sub pattern using a message broker, where event consumers and producers are decoupled from one another, and communicate by sending and receiving messages that are associated with a namespace, usually in the form of topics.
@@ -7,14 +7,14 @@ This allows event producers to send messages to consumers that aren't running, a
 Dapr provides At-Least-Once messaging guarantees, and integrates with various message brokers implementations.
 These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub).
-## Publish/Subcribe API
+## Publish/Subscribe API
-The API for Publish/Subcribe can be found in the [spec repo](../../reference/api/pubsub.md).
+The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub.md).
 ## Behavior and Guarantees
 Dapr guarantees At-Least-Once semantics for message delivery.
-That is, when an application publishes a message to a topic using the Publish/Subcribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.
+That is, when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.
 The burden of dealing with concepts like consumer groups and multiple instances inside consumer groups is all catered for by Dapr.


@@ -12,7 +12,7 @@ However, in a multi-tenant environment, a secured communication channel among Da
 An alternative is to use service mesh technologies such as [Istio]( https://istio.io/) to provide secured communications among your application components. Dapr works well with popular service meshes.
-By default, Dapr supports Cross-Origin Resoruce Sharing (CORS) from all origins. You can configure Dapr runtime to allow only specific origins.
+By default, Dapr supports Cross-Origin Resource Sharing (CORS) from all origins. You can configure Dapr runtime to allow only specific origins.
 ## Network security
@@ -26,7 +26,7 @@ Authentication with a binding target is configured by the bindings configurat
 ## State store security
-Dapr doesnt transform the state data from applications. This means Dapr doesnt attempt to encrypt/decrypt state data. However, your application can adopt encryption/decryption methods of your choice, and the state data remains opaque to Dapr.
+Dapr doesn't transform the state data from applications. This means Dapr doesn't attempt to encrypt/decrypt state data. However, your application can adopt encryption/decryption methods of your choice, and the state data remains opaque to Dapr.
 Dapr uses the configured authentication method to authenticate with the underlying state store. And many state store implementations use official client libraries that generally use secured communication channels with the servers.


@@ -6,7 +6,7 @@ Dapr makes it simple for you to store key/value data in a store of your choice.
 ## State management API
-Dapr brings reliable state management to applications through a simple state API. Developers can use this API to retrive, save and delete states by keys.
+Dapr brings reliable state management to applications through a simple state API. Developers can use this API to retrieve, save and delete states by keys.
 Dapr data stores are pluggable. Dapr ships with [Redis](https://redis.io
 ) out-of-box. And it allows you to plug in other data stores such as [Azure CosmosDB](https://azure.microsoft.com/Databases/Cosmos_DB
@@ -14,18 +14,18 @@ Dapr data stores are pluggable. Dapr ships with [Redis](https://redis.io
 ), [GCP Cloud Spanner](https://cloud.google.com/spanner
 ) and [Cassandra](http://cassandra.apache.org/).
-See the Dapr API specification for details on [state manangement API](../../reference/api/state.md))
+See the Dapr API specification for details on [state management API](../../reference/api/state.md)
 > **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance/sidecar. This allows multiple Dapr instances to share the same state store.
 ## State store behaviors
-Dapr allows developers to attach to a state operation request additional metadata that describes how the request is expected to be handled. For example, you can attach concurrency requirement, consistency requirement, and retry poliy to any state operation requests.
+Dapr allows developers to attach to a state operation request additional metadata that describes how the request is expected to be handled. For example, you can attach concurrency requirement, consistency requirement, and retry policy to any state operation requests.
-By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the reuqests to the state store and expects the data store or fullfill the requests.
+By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfill the requests.
-Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilites.
+Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilities.
-The following table summarizes the capbilites of existing data store implemenatations.
+The following table summarizes the capabilities of existing data store implementations.
 Store | Strong consistent write | Strong consistent read | ETag|
 ----|----|----|----
@@ -34,7 +34,7 @@ Redis | Yes | Yes | Yes
 Redis (clustered)| Yes | No | Yes
 ## Concurrency
-Dapr supports optimsistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. And when the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the database.
+Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. And when the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the database.
 Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [Retry Policy](#Retry-Policies) to compensate for such conflicts when using ETags.
@@ -45,10 +45,10 @@ If your application omits ETags in writing requests, Dapr skips ETag checks whil
 ## Consistency
 Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
-When strong consistency is used, Dapr waits for all replicas (or desingated quorums) to acknowlege before it acknowledges a write request. When eventual consistency is used, Dapr returns as soons as the write request is accepted by the underlying data store, even if this is a single replica.
+When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
 ## Retry policies
-Dapr allows you to attach a retry policy to any write request. A policy is described by an **retryInteval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
+Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
 ## Bulk operations
@@ -72,16 +72,16 @@ If the data store supports SQL queries, you can query an actor's state using SQL
 SELECT * FROM StateTable WHERE Id='<dapr-id>-<actor-type>-<actor-id>-<key>'
 ```
-You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks. For example, to calculate the average temperature of all therometer actors, use:
+You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks. For example, to calculate the average temperature of all thermometer actors, use:
 ```sql
-SELECT AVG(value) FROM StateTable WHERE Id LIKE '<dapr-id>-<therometer>-*-temperature'
+SELECT AVG(value) FROM StateTable WHERE Id LIKE '<dapr-id>-<thermometer>-*-temperature'
 ```
 > **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the actor instances.
 ## References
-* [Spec: Dapr state managment specification](../../reference/api/state.md)
+* [Spec: Dapr state management specification](../../reference/api/state.md)
 * [Spec: Dapr actors specification](../../reference/api/actors.md)
 * [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md)
 * [How-to: Query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md)


@@ -71,7 +71,7 @@ kubectl describe deployment tiller-deploy --namespace kube-system
 2. The external IP address of load balancer is not shown from `kubectl get svc`
-In Minikube, EXTERNAL-IP in `kubectl get svc` shows `<pending>` state for your service. In this case, you can run `minikube serivce [service_name]` to open your service without external IP address.
+In Minikube, EXTERNAL-IP in `kubectl get svc` shows `<pending>` state for your service. In this case, you can run `minikube service [service_name]` to open your service without external IP address.
 ```
 $ kubectl get svc


@@ -4,7 +4,7 @@ Dapr can be run in either Standalone or Kubernetes modes. Running Dapr runtime i
 ## Contents
-- [Prerequsites](#prerequisites)
+- [Prerequisites](#prerequisites)
 - [Installing Dapr CLI](#installing-dapr-cli)
 - [Installing Dapr in standalone mode](#installing-dapr-in-standalone-mode)
 - [Installing Dapr on Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster)


@@ -1,7 +1,7 @@
 # Use Pub/Sub to consume messages from topics
 Pub/Sub is a very common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
-Using Pub/Sub, you can enable scnearios where event consumers are decoupled from event producers.
+Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.
 Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
 Dapr provides different implementation of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.
@@ -31,7 +31,7 @@ To deploy this into a Kubernetes cluster, fill in the `metadata` connection deta
 ## Subscribe to topics
-To subscribe to topics, start a web server in the programming langauge of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
+To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
 The Dapr instance will call into your app, and expect a JSON value of an array of topics.
 *Note: The following example is written in node, but can be in any programming language*
@@ -68,9 +68,9 @@ app.post('/topic1', (req, res) => {
 })
 ```
-### Acking a message
+### ACK-ing a message
-In order to tell Dapr that a message was processed succesfully, return a `200 OK` response:
+In order to tell Dapr that a message was processed successfully, return a `200 OK` response:
 ```
 res.status(200).send()


@@ -154,7 +154,7 @@ func (s *server) GetBindingsSubscriptions(ctx context.Context, in *empty.Empty)
 }, nil
 }
-// This method gets invoked every time a new event is fired from a registerd binding. The message carries the binding name, a payload and optional metadata
+// This method gets invoked every time a new event is fired from a registered binding. The message carries the binding name, a payload and optional metadata
 func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventEnvelope) (*pb.BindingResponseEnvelope, error) {
 fmt.Println("Invoked from binding")
 return &pb.BindingResponseEnvelope{}, nil
@@ -172,7 +172,7 @@ func (s *server) OnTopicEvent(ctx context.Context, in *pb.CloudEventEnvelope) (*
 ```go
 func main() {
-// create listiner
+// create listener
 lis, err := net.Listen("tcp", ":4000")
 if err != nil {
 log.Fatalf("failed to listen: %v", err)


@@ -9,7 +9,7 @@ In this guide we'll start of with the basics: Using the key/value state API to a
 ## 1. Setup a state store
 A state store component represents a resource that Dapr uses to communicate with a database.
-For the purpose of this howto, we'll use a Redis state store.
+For the purpose of this how-to, we'll use a Redis state store.
 See a list of supported state stores [here](../setup-state-store/supported-state-stores.md)


@@ -1,6 +1,6 @@
 # Set up distributed tracing
-Dapr integrates seamlessly with OpenTelemetry for telemetry and tracing. It is recommended to run Dapr with tracing enabled for any production scenario. Since Dapr uses OpenTelemetry, you can configure various exporters for tracing and telemtry data based on your environment, whether it is running in the cloud or on-premises.
+Dapr integrates seamlessly with OpenTelemetry for telemetry and tracing. It is recommended to run Dapr with tracing enabled for any production scenario. Since Dapr uses OpenTelemetry, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises.
 ## How to configure distributed tracing with Zipkin on Kubernetes
@@ -93,7 +93,7 @@ docker run -d -p 9411:9411 openzipkin/zipkin
 3. Launch Dapr with the `--config` param:
 ```
-dapr run --app-id mynode --app-port 3000 --config ./cofig.yaml node app.js
+dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js
 ```
 ## Tracing Configuration


@@ -88,7 +88,7 @@ To invoke a 'DELETE' endpoint:
 curl http://localhost:3500/v1.0/invoke/cart/add -X DELETE
 ```
-Dapr puts any payload return by ther called service in the HTTP response's body.
+Dapr puts any payload returned by the called service in the HTTP response's body.
 ## Overview


@@ -1,6 +1,6 @@
 # Query Azure Cosmos DB state store
-Dapr doesn't transform state values while saving and retriving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups.
+Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
 > **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).
@@ -8,7 +8,7 @@ Dapr doesn't transform state values while saving and retriving states. Dapr requ
 The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).
-> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The follow samples asume you've already connected to the right database and a collection named "states".
+> **NOTE:** The following samples use Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states".
 ## 2. List keys by Dapr id


@@ -1,6 +1,6 @@
 # Query Redis state store
-Dapr doesn't transform state values while saving and retriving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state.md). You can directly interact with the underlying store to manipulate the state data, such querying states, creating aggregated views and making backups.
+Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see [Dapr state management spec](../../reference/api/state.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
 >**NOTE:** The following examples uses Redis CLI against a Redis store using the default Dapr state store implementation.


@@ -1,7 +1,7 @@
 # Send events to external systems using Output Bindings
 Using bindings, its possible to invoke external resources without tying in to special SDK or libraries.
-For a compelete sample showing output bindings, visit this [link](<PLACEHOLDER>).
+For a complete sample showing output bindings, visit this [link](<PLACEHOLDER>).
 ## 1. Create a binding


@@ -76,7 +76,7 @@ az ad sp show --id [service_principal_app_id]
 az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get
 ```
-Now, your service principal has access to your keyvault, you are ready to configure the secret store component to use secrets stored in your keyvault to access other compoents securely.
+Now, your service principal has access to your keyvault, you are ready to configure the secret store component to use secrets stored in your keyvault to access other components securely.
 5. Download PFX cert from your Azure Keyvault
@@ -220,7 +220,7 @@ spec:
 - name: spnCertificate
 secretKeyRef:
 name: [your_k8s_spn_secret_name]
-key: [pfx_certficiate_file_local_name]
+key: [pfx_certificate_file_local_name]
 auth:
 secretStore: kubernetes
 ```


@@ -1,6 +1,6 @@
 # Setup a Dapr state store
-Dapr integrates with existing databases to provide apps with state management capabilities for CRUD oprations, transactions and more.
+Dapr integrates with existing databases to provide apps with state management capabilities for CRUD operations, transactions and more.
 Currently, Dapr supports the configuration of one state store per cluster.
 State stores are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib).


@@ -2,7 +2,7 @@
 Using bindings, your code can be triggered with incoming events from different resources which can be anything: a queue, messaging pipeline, cloud-service, filesystem etc.
-This is ideal for event-driven processing, data pipeplines or just generally reacting to events and doing further processing.
+This is ideal for event-driven processing, data pipelines or just generally reacting to events and doing further processing.
 Dapr bindings allow you to:
@@ -44,7 +44,7 @@ Inside the `metadata` section, configure the Kafka related properties such as th
 ## 2. Listen for incoming events
-Now configure your applicaiton to receive incoming events. If using HTTP, you need to listen on a `POST` endpoint with the name of the binding as specifiied in `metadata.name` in the file. In this example, this is `myEvent`.
+Now configure your application to receive incoming events. If using HTTP, you need to listen on a `POST` endpoint with the name of the binding as specified in `metadata.name` in the file. In this example, this is `myEvent`.
 *The following example shows how you would listen for the event in Node.js, but this is applicable to any programming language*
@@ -64,7 +64,7 @@ app.post('/myEvent', (req, res) => {
 app.listen(port, () => console.log(`Kafka consumer app listening on port ${port}!`))
 ```
-#### Acknowleding an event
+#### ACK-ing an event
 In order to tell Dapr that you successfully processed an event in your application, return a `200 OK` response from your HTTP handler.