Twitter Demo update (#48)

* Demo 2 is working with RC2

* Fixing run commands and adding better errors.

* Improved logging

* Demo 2 Updated to RC2

* Polish

* Fixed Demo3 yaml files.

* removing the vendor folder

* Helm chart is working!

* Setup script is working

* Adding helpful comments

* Make all versions consistent

* Demo2 polish
Moved components folders.

* Only thing not working is Zipkin

* The W3C context is now propagated.

* Zipkin now works in K8s as well.

* removing bin folder from .net project

* Changing bin folder to exec so I can have it in all projects.
bin is not a good name in .NET projects

* Fixed copy and paste error.

* Fixed copy and paste error.

* Clean-up

* Added setup.ps1 to demo2
This creates the cognitive services needed for the demo.

* Adding location for those that don't have defaults set.
Found this while testing on a virgin machine.

* Clean up

* Updated command name from a local alias

* Adding a setup.sh script to demo 2
Added .sh to bash scripts

* Added bash script for demo3

* Reducing to a single components folder.

* Reduced folders

* Making all .sh files executable.

* Moving to test on mac

* added u+x

* Tested on macOS

* Show different notes from helm on Windows.

* Polish on the setup scripts for demo 3.

* improved formatting

* Works on mac, time for docs

* Cleaned up everything.

* Fixed twitter demo readme.

* Final test on macOS

* Updated image tag to match runtime.
Added unique value to resource names.

* adding detail to readme

* Added Luke's feedback

* Fixed gitignore

* Polish scripts

* Let you set k8sversion

* Polished Scripts.

* Updated everything to RC4

* Updated readme.md to make the requirement of .net core 3.1 clear.

* Returning the Java and Python demos.
This commit is contained in:
Donovan Brown 2021-02-12 18:04:32 -06:00 committed by GitHub
parent bf1f30ad42
commit 1213ded4ab
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1170 changed files with 2289 additions and 292713 deletions

5
.gitignore vendored
View File

@ -1 +1,6 @@
.vscode/
node_modules/
package-lock.json
vendor/**
twitter-sentiment-processor/demos/demo2/viewer/handler
twitter-sentiment-processor/demos/demo3/demochart/templates/twitter.yaml

View File

@ -1,48 +1,37 @@
# Twitter Sentiment Processor
>Note: this demo uses Dapr v0.7.1 and may break if different Dapr versions are used
This demo shows just how easy it is to set up a local development environment with [Dapr](https://dapr.io) and transition your work to the cloud.
To simulate the transition from local development to the cloud, this demo has three stages, demos 1-3.
* **Demo 1** - local development showing the speed with which developers can start and use the Dapr components (Twitter and state)
* **Demo 2** - expands on Demo 1 adding monitoring and service invocation using both direct invocation and consumption of events across applications using PubSub
* **Demo 3** - deploys the application into Kubernetes and showcases the pluggability of components by switching the state to Azure Table Storage and PubSub to Azure Service Bus without making any code changes
## Sample info
| Attribute | Details |
|--------|--------|
| Dapr runtime version | v0.7.1 |
| Language | Go, C# (.NET Core), Node.js |
| Dapr runtime version | v1.0.0-rc.4 |
| Language | Go, C# (.NET Core 3.1), Node.js |
| Environment | Local or Kubernetes |
## Recordings
View the [recorded session](https://mybuild.microsoft.com/sessions/3f296b9a-7fe8-479b-b098-a1bfc7783476?source=sessions) and the [demo recordings](https://www.youtube.com/playlist?list=PLcip_LgkYwzu2ABITS_3cSV_6AeLsX-d0)
>Note: this demo uses Dapr v1.0.0-rc.4 and may break if different Dapr versions are used
## Overview
This demo illustrates the simplicity of [Dapr](https://github.com/dapr/dapr) on Day 1 and its flexibility to adapt to complex use cases on Day 2 by walking through 3 demos:
![Architecture Overview](images/overview.png)
* **Demo 1** - local development showing the speed with which developers can start and the use of Dapr components (Twitter and state)
* **Demo 2** - expands on Demo 1 and adds service invocation using both direct invocation and consumption of events across applications using PubSub
* **Demo 3** - takes Demo 2 and illustrates how platform agnostic Dapr really is by containerizing these applications without any changes and deploying them onto Kubernetes. This demo also showcases the pluggability of components (state backed by Azure Table Storage, pubsub by Azure Service Bus)
* **Java Demo** - Implements [a similar scenario in Java](javademo/README.md), using Dapr's Java SDK.
![](images/overview.png)
## Demo 1
C# ASP.NET app (`provider`) using dapr Twitter input component to subscribe to Twitter search results. This app uses the default `statestore` to persist each tweet into the Redis backed `dapr` state store.
### Objectives
* Show idiomatic experience in Dapr allowing developers to be effective Day 1 (no Dapr libraries or attributes in user code)
* Introduce the concept of components as a way to leverage existing capabilities
### Requirements
* Docker
* Node.js or dotnet core > 3.1 (instructions below are for Node.js though demo can be run using dotnet)
* [Twitter API credentials](https://developer.twitter.com/en/docs/basics/getting-started)
Twitter credentials will have to be added to `components/twitter.yaml`:
> All the demos rely on the Dapr Twitter input binding. For that binding to work you must add your [Twitter API credentials](https://developer.twitter.com/en/docs/basics/getting-started) to `components/twitter.yaml`:
```yaml
spec:
type: bindings.twitter
# PLACE TWITTER CREDS HERE
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweets
spec:
type: bindings.twitter
version: v1
metadata:
# PLACE TWITTER CREDENTIALS HERE
metadata:
- name: consumerKey
value: "" # twitter api consumer key, required
@ -53,264 +42,266 @@ Twitter credentials will have to be added to `components/twitter.yaml`:
- name: accessSecret
value: "" # twitter api access secret, required
- name: query
value: "dapr" # your search query, required
value: "microsoft" # your search query, required
```
### Run demo 1
## Demo 1
Starting from the root of demo 1 (`demos/demo1`)
This demo contains two versions of the same application: one in C# .NET Core (`provider-net`) and one in Node.js (`provider`), each using the Dapr Twitter input binding component to subscribe to Twitter search results. The application uses the default Redis `statestore` to persist each tweet.
### Demo 1 Objectives (Node.js)
The goal of this demo is to show how quickly you can get an application running with Dapr.
* Show idiomatic experience in Dapr allowing developers to be effective Day 1 (no Dapr libraries or attributes in user code)
* Introduce the concept of components as a way to leverage existing capabilities (Twitter Binding and Redis state store)
### Demo 1 Requirements (Node.js)
* Docker
* Node.js
* Dapr CLI v1.0.0-rc.4
* [Twitter API credentials](https://developer.twitter.com/en/docs/basics/getting-started)
### Run Demo 1 (Node.js)
Starting from the provider folder of demo 1 (`demos/demo1/provider`)
* Install [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr/#install-the-dapr-cli)
* Initialize Dapr
```bash
dapr init --runtime-version '1.0.0-rc.4'
```
* Launch app locally using Dapr by running `run.sh` for Bash or `run.ps1` for PowerShell
```bash
./run.sh
```
This will launch Dapr and your application locally. Wait for tweets with the word `microsoft` to begin to arrive. In the terminal you will see logs from both Dapr and your application. As each tweet arrives you will see additional logging information appear.
You can use a Visual Studio Code [extension](https://marketplace.visualstudio.com/items?itemName=cweijan.vscode-mysql-client2) to view the data in Redis.
![Data in Redis](images/redis.png)
### Demo 1 Objectives (dotnet)
This is the same application written in C# and leveraging the Dapr SDK. This highlights the flexibility to use Dapr with any language.
* Show idiomatic experience in Dapr allowing developers to be effective Day 1 (uses optional SDK)
* Introduce the concept of components as a way to leverage existing capabilities (Twitter Binding and Redis state store)
### Demo 1 Requirements (dotnet)
* Docker
* [.NET Core 3.1](http://bit.ly/DownloadDotNetCore)
* Dapr CLI v1.0.0-rc.4
* [Twitter API credentials](https://developer.twitter.com/en/docs/basics/getting-started)
> Note: .NET Core 3.1 is required. You can install it alongside .NET 5.0.
### Run Demo 1 (dotnet)
Starting from the provider-net folder of demo 1 (`demos/demo1/provider-net`)
* Install [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr/#install-the-dapr-cli)
* Run
```sh
dapr init
```
* Launch app locally using Dapr by running (example, running via dotnet)
```sh
dapr run --app-id provider --app-port 5000 --port 3500 node app.js
```powershell
dapr init --runtime-version '1.0.0-rc.4'
```
* Post a tweet with the word `dapr` (e.g. "watching a cool dapr demo #build2020")
* Show dapr log to see the debug info
* View Redis for persisted data
* Launch app locally using Dapr by running `run.sh` for Bash or `run.ps1` for PowerShell
```powershell
./run.ps1
```
This will launch Dapr and your application locally. Wait for tweets with the word `microsoft` to begin to arrive. In the terminal you will see logs from both Dapr and your application. As each tweet arrives you will see additional logging information appear.
You can use a Visual Studio Code [extension](https://marketplace.visualstudio.com/items?itemName=cweijan.vscode-mysql-client2) to view the data in Redis.
![Data in Redis](images/redis.png)
## Demo 2
Demo 2 builds on demo 1. It illustrates interaction between multiple microservices in Dapr (`processor` being invoked by `provider`) and adds the concept of pubsub, where each scored tweet, instead of being saved in the state store, is published onto a topic. This demo also includes a Go viewer app (`viewer`) which subscribes to the pubsub `processed-tweets` topic and streams scored tweets over WebSockets to a JavaScript SPA for display.
Demo 2 builds on demo 1 adding service-to-service invocation between multiple microservices in Dapr (`processor` being invoked by `provider`). The use of the state store is replaced with PubSub, where each scored tweet is published onto a topic where it will be read by the viewer app. This demo also includes a Go viewer app (`viewer`) which subscribes to a PubSub `tweet-pubsub` topic and streams scored tweets over WebSockets to a SPA in JavaScript for display.
### Objectives
### Demo 2 Objectives
* Builds on Demo 1, illustrate interaction between multiple microservices in Dapr
* Introduces service to service discovery/invocation
* Introduces eventing using Dapr pubsub component
All the applications at this stage run locally.
### Requirements
* Go
* [Azure Account](https://azure.microsoft.com/en-us/free/)
* [Cognitive Services account](https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account)
* Builds on Demo 1, illustrate interaction between multiple microservices in Dapr
* Introduces service to service discovery/invocation
* Introduces eventing using Dapr PubSub component
* Introduces monitoring using Zipkin
### Demo 2 Requirements
* Go v1.4
* Node.js
* [Azure CLI v2.18.0](https://docs.microsoft.com/cli/azure/install-azure-cli)
* Dapr CLI v1.0.0-rc.4
* [Azure Account](https://azure.microsoft.com/free/)
* [Cognitive Services account](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account)
* [Twitter API credentials](https://developer.twitter.com/en/docs/basics/getting-started)
### Run demo 2
Starting from the root of demo 2 (`demos/demo2`)
Starting from the root of demo 2 (`demos/demo2`) make sure you are logged into Azure.
[Start processor](https://github.com/azure-octo/build2020-dapr-demo/tree/master/demos/demo2/processor) so it's ready when `provider` starts
```bash
az login
```
> Make sure you have defined the `CS_TOKEN` environment variable holding your Azure Cognitive Services token ([docs](https://docs.microsoft.com/en-us/azure/cognitive-services/authentication))
Set the desired subscription.
```shell
cd processor
dapr run node app.js --app-id processor --app-port 3002 --protocol http --port 3500
```bash
az account set --subscription <id or name>
```
Now we can deploy the required infrastructure by running `setup.sh` for Bash or `setup.ps1` for PowerShell. When calling `setup.sh` you must use the `source` command so the environment variables are properly set (this is not required for the PowerShell version). These scripts will run an Azure Resource Manager template deployment and set the required CS_TOKEN and CS_ENDPOINT environment variables for the `processor` application. The scripts take two arguments.
1. resource group name: This will be the resource group created in Azure. If you do not provide a value `twitterDemo2` will be used.
1. location: This is the location to deploy all your resources. If you do not provide a value `eastus` will be used.
Bash
```bash
source ./setup.sh
```
PowerShell
```powershell
./setup.ps1
```
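For reference, the defaulting and `source` behavior can be sketched like this. This is an illustrative fragment only, not the actual `setup.sh`; in the real script the exported values come from the ARM deployment output rather than placeholders:

```shell
#!/bin/bash
# Illustrative sketch only, NOT the real setup.sh: positional arguments fall
# back to the documented defaults, and the Cognitive Services values are
# exported so they remain set in the calling shell when the script is sourced.
rgName=${1:-twitterDemo2}
location=${2:-eastus}
export CS_TOKEN="token-from-the-ARM-deployment"       # placeholder value
export CS_ENDPOINT="endpoint-from-the-ARM-deployment" # placeholder value
echo "Deploying to resource group '$rgName' in '$location'"
```

Because `CS_TOKEN` and `CS_ENDPOINT` are exported rather than printed, running the Bash script with `source` is what makes them visible to the `processor` run later in the same shell.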
[Start viewer](https://github.com/azure-octo/build2020-dapr-demo/tree/master/demos/demo1/viewer) so it's ready when `provider` starts
The results should look similar to this:
```shell
```bash
You can now run the processor from this terminal.
```
Start `processor` so it's ready when `provider` starts
```bash
cd processor
./run.sh
```
Start `viewer` so it's ready when `provider` starts
```bash
cd viewer
dapr run go run handler.go main.go --app-id viewer --app-port 8083 --protocol http
./run.sh
```
Navigate to the viewer UI in browser (make sure WebSockets connection is opened)
http://localhost:8083
[http://localhost:8083](http://localhost:8083)
Start `provider`
[Start provider](https://github.com/azure-octo/build2020-dapr-demo/tree/master/demos/demo2/provider)
> For demo purposes use a frequently tweeted about topic, like microsoft (the default value). You may change the search term in the [demos/components/twitter.yaml](demos/components/twitter.yaml) file under `query` metadata element **BEFORE** you start provider
> For demo purposes use a frequently tweeted about topic, like microsoft. You need to change that search term in the [demos/demo2/provider/components/twitter.yaml](demos/demo2/provider/components/twitter.yaml) file under `query` metadata element BEFORE you start provider
```shell
```bash
cd provider
dapr run node app.js --app-id provider --app-port 3001 --protocol http
./run.sh
```
Switch back to the UI to see the scored tweets
Switch back to the UI to see the scored tweets
http://localhost:8083
[http://localhost:8083](http://localhost:8083)
The UI should look something like this
The UI should look something like this
![](images/ui.png)
![Tweets in web page](images/ui.png)
This demo also shows monitoring via Zipkin. You can view the trace data of a tweet being processed by visiting [http://localhost:9411](http://localhost:9411). Click `Run Query` to see the captured traces.
![Zipkin traces](images/zipkin.png)
## Demo 3
Demo 3 takes the local development work and illustrates how platform agnostic the developer experience really is in Dapr by deploying all the previously developed code onto Kubernetes.
Demo 3 takes the local development work and moves it to Kubernetes in Azure and all the components are configured to point to Azure based resources. The required Docker images have already been uploaded to Docker Hub for you.
> Note, this demo requires Dapr v0.7
### Demo 3 Objectives
### Objectives
* Show deployment of locally developed artifacts onto Kubernetes
* Illustrate the run-time portability and component pluggability
* Show deployment of locally developed artifacts onto Kubernetes
* Illustrate the run-time portability and component pluggability
### Demo 3 Requirements
* Go v1.4
* Node.js
* [Helm v3](https://helm.sh/)
* [Azure CLI v2.18.0](https://docs.microsoft.com/cli/azure/install-azure-cli)
* Dapr CLI v1.0.0-rc.4
* [Azure Account](https://azure.microsoft.com/free/)
* [Cognitive Services account](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account)
* [Twitter API credentials](https://developer.twitter.com/en/docs/basics/getting-started)
### Run demo 3
> Assumes the use of pre-built images for [provider](https://hub.docker.com/repository/docker/mchmarny/provider), [processor](https://hub.docker.com/repository/docker/mchmarny/processor), and [viewer](https://hub.docker.com/repository/docker/mchmarny/viewer)
> Assumes the use of pre-built images for [provider](https://hub.docker.com/repository/docker/darquewarrior/provider), [processor](https://hub.docker.com/repository/docker/darquewarrior/processor), and [viewer](https://hub.docker.com/repository/docker/darquewarrior/viewer)
Starting from the root of demo 3 (`demos/demo3`) make sure you are logged into Azure.
These instructions assume the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) has been configured and that the following CLI defaults have been set:
```bash
az login
```
```shell
az account set --subscription <id or name>
az configure --defaults location=<preferred location> group=<preferred resource group>
Set the desired subscription.
```bash
az account set --subscription <id or name>
```
Now we can deploy the required infrastructure by running `setup.sh` for Bash or `setup.ps1` for PowerShell. Unlike with demo 2 you **do not** have to use the `source` command to run the Bash script as no environment variables are set. These scripts will run an Azure Resource Manager template deployment and a Helm install to deploy the entire demo. The scripts take three arguments.
1. resource group name: This will be the resource group created in Azure. If you do not provide a value `twitterDemo3` will be used.
1. location: This is the location to deploy all your resources. If you do not provide a value `eastus` will be used.
1. runtime version: This is the runtime version of Dapr to deploy to the cluster. If you do not provide a value `1.0.0-rc.4` will be used.
Bash
```bash
./setup.sh
```
PowerShell
```powershell
./setup.ps1
```
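The three-argument handling can be sketched like this. Again an illustrative fragment only, not the actual script; in the real script the version is passed to the Dapr installation on the cluster:

```shell
#!/bin/bash
# Illustrative sketch only, NOT the real setup.sh: all three arguments are
# optional, and the third selects which Dapr runtime version is installed
# on the Kubernetes cluster.
rgName=${1:-twitterDemo3}
location=${2:-eastus}
daprVersion=${3:-1.0.0-rc.4}
echo "Would install Dapr $daprVersion for group '$rgName' in '$location'"
```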
> Note, this demo installs into the `default` namespace in your cluster. When installing into a different namespace, make sure to append the `-n <your namespace name>` to all commands below (secret, component, and deployment)
The results should look similar to this:
#### State Store
```bash
Getting IP addresses. Please wait...
To configure the state component to use Azure Table Storage, follow [these instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal). Once finished, you will need to configure a Kubernetes secret to hold the Azure Table Storage token:
```shell
kubectl create secret generic demo-state-secret \
--from-literal=account-name="" \
--from-literal=account-key=""
Your app is accessible from http://52.167.250.162
Zipkin is accessible from http://52.247.23.115
```
Once the secret is configured, deploy the `dapr` state component from the `demos/demo3` directory:
#### Observability
```shell
kubectl apply -f component/statestore.yaml
```
You can view the scored tweets in Azure table storage
#### PubSub Topic
![Azure Portal showing state in Azure table storage](images/state.png)
To configure the pubsub component to use Azure Service Bus, follow [these instructions](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal). Once finished, you will need to configure a Kubernetes secret to hold the Azure Service Bus connection string.
Similarly, you can monitor the PubSub topic throughput in Azure Service Bus
![Azure Portal showing PubSub in Azure Service Bus](images/pubsub.png)
```shell
kubectl create secret generic demo-bus-secret \
--from-literal=connection-string=""
```
Once the secret is configured, deploy the `dapr` pubsub topic components from the `demos/demo3` directory:
```shell
kubectl apply -f component/pubsub.yaml
```
#### Twitter Input Binding
Finally, to use the Dapr Twitter input binding we need to configure the Twitter API secrets. You can get these by registering a Twitter application [here](https://developer.twitter.com/en/apps/create).
```shell
kubectl create secret generic demo-twitter-secrets \
--from-literal=access-secret="" \
--from-literal=access-token="" \
--from-literal=consumer-key="" \
--from-literal=consumer-secret=""
```
Once the secret is configured you can deploy the Twitter binding:
```shell
kubectl apply -f component/twitter.yaml
```
#### Deploy Demo
Once the necessary components are created, you just need to create one more secret for the Cognitive Service token that is used in the `processor` service:
```shell
kubectl create secret generic demo-processor-secret \
--from-literal=token=""
```
And now you can deploy the entire pipeline (`provider`, `processor`, `viewer`) with a single command:
```shell
kubectl apply -f provider.yaml \
-f processor.yaml \
-f viewer.yaml
```
You can check on the status of your deployment like this:
```shell
kubectl get pods -l demo=build2020
```
The results should look similar to this (make sure each pod has READY status 2/2)
```shell
NAME READY STATUS RESTARTS AGE
processor-89666d54b-hkd5t 2/2 Running 0 18s
provider-85cfbf5456-lc85g 2/2 Running 0 18s
viewer-76448d65fb-bm2dc 2/2 Running 0 18s
```
#### Exposing viewer UI
To expose the viewer application externally, create a Kubernetes `service` using [route.yaml](./viewer-route.yaml)
```shell
kubectl apply -f service/viewer.yaml
```
> Note, provisioning of the external IP may take a little time.
To view the viewer application:
```shell
export VIEWER_IP=$(kubectl get svc viewer --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')
open "http://${VIEWER_IP}/" # 'open' is macOS; use xdg-open on Linux
```
> To change the Twitter topic query simply edit the [demos/demo3/component/twitter.yaml](demos/demo3/component/twitter.yaml), apply it, and `kubectl rollout restart deployment provider` to ensure the new configuration is applied.
#### Observability
You can view the scored tweets in Azure table storage
![](images/state.png)
Similarly, you can monitor the pubsub topic throughput in Azure Service Bus
![](images/pubsub.png)
In addition to the state and pubsub, you can also observe Dapr metrics and logs for this demo.
The Dapr sidecar Grafana dashboard
![](images/metric.png)
And the Elastic backed Kibana dashboard for logs
![](images/log.png)
For tracing, first apply the tracing config
```shell
kubectl apply -f tracing/tracing.yaml
```
And then, if you do not already have it, install Zipkin
```shell
kubectl create deployment zipkin --image openzipkin/zipkin
kubectl expose deployment zipkin --type ClusterIP --port 9411
```
And configure the Zipkin exporter
```shell
kubectl apply -f tracing/zipkin.yaml
```
You may have to restart the deployments
```shell
kubectl rollout restart deployment processor provider viewer
```
At this point you should be able to access the Zipkin UI at http://localhost:9411/zipkin/ (since the Zipkin service above is ClusterIP, you may first need to run `kubectl port-forward svc/zipkin 9411:9411`)
In addition to the state and PubSub, you can also observe application traces in Zipkin.
![Zipkin traces](images/zipkin.png)
## Recordings
View the [recorded session](https://mybuild.microsoft.com/sessions/3f296b9a-7fe8-479b-b098-a1bfc7783476?source=sessions) and the [demo recordings](https://www.youtube.com/playlist?list=PLcip_LgkYwzu2ABITS_3cSV_6AeLsX-d0)

View File

@ -1,7 +1,7 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
name: tweet-pubsub
spec:
type: pubsub.redis
version: v1

View File

@ -1,7 +1,7 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
name: tweet-store
spec:
type: state.redis
version: v1

View File

@ -0,0 +1,19 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweets
spec:
type: bindings.twitter
version: v1
metadata:
# PLACE TWITTER CREDENTIALS HERE
- name: consumerKey
value: "wF6e4o8BcvxS7cnGL5gXK3GSF"
- name: consumerSecret
value: "HaSVyLVw3UwbYi7KcUyJTaStIvgYrGqPOR9Gwlhsj39pR6hGan"
- name: accessToken
value: "51239568-FZFdReYIdblCNrc6Mn2k3lMVxwinPk4fllaFQTlsO"
- name: accessSecret
value: "XV9VQYII3XQfFu76i4WquYJceJMUdcnQ7OOWIEotcyO7J"
- name: query
value: "microsoft"

View File

@ -1,19 +0,0 @@
# Demo1
Start the service in Dapr with explicit port so we can invoke it later:
```shell
dapr run --app-id producer --app-port 5000 --port 3501 dotnet run
```
## node.js version
Inside of the `provider` directory
```shell
dapr run node app.js \
--app-id provider \
--app-port 3001 \
--protocol http \
--port 3500
```

View File

@ -1 +1,2 @@
obj/
bin/

View File

@ -1,13 +1,13 @@
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<RootNamespace>Provider</RootNamespace>
</PropertyGroup>
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<RootNamespace>Provider</RootNamespace>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="dapr.AspnetCore" Version="0.7.0-preview01" />
<PackageReference Include="Dapr.Client" Version="0.7.0-preview01" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="dapr.AspnetCore" Version="1.0.0-rc02" />
<PackageReference Include="Dapr.Client" Version="1.0.0-rc02" />
</ItemGroup>
</Project>
</Project>

View File

@ -1,26 +1,20 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
namespace Provider
{
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
}
}

View File

@ -7,62 +7,64 @@ using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Dapr.Client;
namespace Provider
{
public class Startup
{
public const string stateStore = "statestore";
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public class Startup
{
// Name of the Dapr state store component
public const string stateStore = "tweet-store";
public IConfiguration Configuration { get; }
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddDaprClient();
services.AddSingleton(new JsonSerializerOptions()
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
PropertyNameCaseInsensitive = true,
});
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddDaprClient();
services.AddSingleton(new JsonSerializerOptions()
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
PropertyNameCaseInsensitive = true,
});
app.UseRouting();
}
app.UseEndpoints(endpoints =>
{
endpoints.MapPost("tweet", Tweet);
});
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
async Task Tweet (HttpContext context)
{
var client = context.RequestServices.GetRequiredService<DaprClient>();
var requestBodyStream = context.Request.Body;
app.UseRouting();
var tweet = await JsonSerializer.DeserializeAsync<TwitterTweet>(requestBodyStream);
Console.WriteLine("Tweet received: {0}: {1}", tweet.ID, tweet.Text);
await client.SaveStateAsync<TwitterTweet>(stateStore, tweet.ID, tweet);
Console.WriteLine("Tweet saved: {0}: {1}", tweet.ID, tweet);
return;
}
}
}
app.UseEndpoints(endpoints =>
{
// The first value must match the name of the Dapr twitter binding
// component.
endpoints.MapPost("tweets", Tweet);
});
async Task Tweet(HttpContext context)
{
var client = context.RequestServices.GetRequiredService<DaprClient>();
var requestBodyStream = context.Request.Body;
var tweet = await JsonSerializer.DeserializeAsync<TwitterTweet>(requestBodyStream);
Console.WriteLine("Tweet received: {0}: {1}", tweet.ID, tweet.Text);
await client.SaveStateAsync<TwitterTweet>(stateStore, tweet.ID, tweet);
Console.WriteLine("Tweet saved: {0}: {1}", tweet.ID, tweet);
return;
}
}
}
}

View File

@ -1,63 +1,69 @@
using System;
using System.Text.Json;
using System.Text.Json.Serialization;
namespace Provider
{
// SearchResult is the metadata from executed search
public class TwitterTweet {
// ID is tweet's id_str
[JsonPropertyName("id_str")]
public string ID {get; set; }
// Author is tweet's user
[JsonPropertyName("user")]
public TwitterUser Author {get; set; }
// Text is tweet's full_text
[JsonPropertyName("full_text")]
public string FullText {get; set; }
// Text is tweet's text (used only if full_text not set)
[JsonPropertyName("text")]
public string Text {get; set; }
// SearchResult is the metadata from executed search
public class TwitterTweet
{
// ID is tweet's id_str
[JsonPropertyName("id_str")]
public string ID { get; set; }
//Published is tweet's created_at time
// [JsonPropertyName("created_at")]
// public DateTimeOffset Published {get; set; }
}
// Author is tweet's user
[JsonPropertyName("user")]
public TwitterUser Author { get; set; }
public class TwitterUser {
// Name is tweet author's name
[JsonPropertyName("name")]
public string Name {get; set; }
// Pic is tweet author's profile pic URL
[JsonPropertyName("profile_image_url_https")]
public string Pic {get; set; }
}
// Text is tweet's full_text
[JsonPropertyName("full_text")]
public string FullText { get; set; }
// Text is tweet's text (used only if full_text not set)
[JsonPropertyName("text")]
public string Text { get; set; }
}
public class TwitterUser
{
// Name is tweet author's name
[JsonPropertyName("name")]
public string Name { get; set; }
// Pic is tweet author's profile pic URL
[JsonPropertyName("profile_image_url_https")]
public string Pic { get; set; }
}
//SimpleTweet represents the Twiter query result item
public class SimpleTweet {
// ID is the string representation of the tweet ID
[JsonPropertyName("id")]
public string ID {get; set; }
// Query is the text of the original query
[JsonPropertyName("query")]
public string Query {get; set; }
// Author is the name of the tweet user
[JsonPropertyName("author")]
public string Author {get; set; }
// AuthorPic is the url to author profile pic
[JsonPropertyName("author_pic")]
public string AuthorPic {get; set; }
// Content is the full text body of the tweet
[JsonPropertyName("content")]
public string Content {get; set; }
// Published is the parsed tweet create timestamp
[JsonPropertyName("published")]
public DateTime Published {get; set; }
//Score is Content's sentiment score
[JsonPropertyName("sentiment")]
public float Score {get; set; }
}
// SimpleTweet represents the Twitter query result item
public class SimpleTweet
{
// ID is the string representation of the tweet ID
[JsonPropertyName("id")]
public string ID { get; set; }
// Query is the text of the original query
[JsonPropertyName("query")]
public string Query { get; set; }
// Author is the name of the tweet user
[JsonPropertyName("author")]
public string Author { get; set; }
// AuthorPic is the url to author profile pic
[JsonPropertyName("author_pic")]
public string AuthorPic { get; set; }
// Content is the full text body of the tweet
[JsonPropertyName("content")]
public string Content { get; set; }
// Published is the parsed tweet create timestamp
[JsonPropertyName("published")]
public DateTime Published { get; set; }
//Score is Content's sentiment score
[JsonPropertyName("sentiment")]
public float Score { get; set; }
}
}


@ -1,48 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweet
spec:
type: bindings.twitter
version: v1
# PLACE TWITTER CREDS HERE
metadata:
- name: consumerKey
value: "" # twitter api consumer key, required
- name: consumerSecret
value: "" # twitter api consumer secret, required
- name: accessToken
value: "" # twitter api access token, required
- name: accessSecret
value: "" # twitter api access secret, required
- name: query
value: "microsoft" # your search query, required
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: publishTweet
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: producerStateStore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""


@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""


@ -0,0 +1,2 @@
# PowerShell
dapr run --app-id provider --app-port 5000 --components-path ../../components -- dotnet run


@ -0,0 +1,6 @@
#!/bin/bash
set -o errexit
set -o pipefail
dapr run --app-id provider --app-port 5000 --components-path ../../components/ -- dotnet run


@ -1,7 +0,0 @@
# provider (node version)
Assuming you have Dapr initialized locally and the `processor` service already started:
```shell
bin/run
```


@ -1,71 +1,78 @@
const express = require('express');
const bodyParser = require('body-parser');
require('isomorphic-fetch');
// This app is called by Dapr each time a tweet is received. In later
// demos this calls a service to score the tweet. For now it just stores the
// tweet in a state store.
require("isomorphic-fetch");
const logger = require("./logger");
const express = require("express");
const bodyParser = require("body-parser");
// express
// express
const port = 3001;
const app = express();
app.use(bodyParser.json());
const port = 3001;
// dapr
// Dapr
const daprPort = process.env.DAPR_HTTP_PORT || "3500";
// The Dapr endpoint for the state store component to store the tweets.
const stateEndpoint = `http://localhost:${daprPort}/v1.0/state/tweet-store`;
// store state
var saveContent = function(obj) {
return new Promise(
function(resolve, reject) {
if (!obj || !obj.id) {
reject({message: "invalid content"});
return;
}
const state = [{ key: obj.id, value: obj }];
fetch(stateEndpoint, {
method: "POST",
body: JSON.stringify(state),
headers: { "Content-Type": "application/json" }
}).then((_res) => {
if (!_res.ok) {
console.log(_res.statusText);
reject({message: "error saving content"});
}else{
resolve(obj)
}
}).catch((error) => {
reject({message: error});
});
// store state
var saveContent = function (obj) {
return new Promise(function (resolve, reject) {
if (!obj || !obj.id) {
reject({ message: "invalid content" });
return;
}
const state = [{ key: obj.id, value: obj }];
fetch(stateEndpoint, {
method: "POST",
body: JSON.stringify(state),
headers: { "Content-Type": "application/json" },
})
.then((_res) => {
if (!_res.ok) {
logger.debug(_res.statusText);
reject({ message: "error saving content" });
} else {
resolve(obj);
}
);
})
.catch((error) => {
logger.error(error);
reject({ message: error });
});
});
};
// tweets handler
app.post('/tweets', (req, res) => {
const tweet = req.body;
if (!tweet) {
res.status(400).send({error: "invalid content"});
return;
}
// tweets handler
app.post("/tweets", (req, res) => {
logger.debug("/tweets invoked...");
const tweet = req.body;
if (!tweet) {
res.status(400).send({ error: "invalid content" });
return;
}
let obj = {
id: tweet.id_str,
author: tweet.user.screen_name,
author_pic: tweet.user.profile_image_url_https,
content: tweet.full_text || tweet.text, // if extended then use it
lang: tweet.lang,
published: tweet.created_at,
sentiment: 0.5 // default to neutral sentiment
};
let obj = {
id: tweet.id_str,
author: tweet.user.screen_name,
author_pic: tweet.user.profile_image_url_https,
content: tweet.full_text || tweet.text, // if extended then use it
lang: tweet.lang,
published: tweet.created_at,
sentiment: 0.5, // default to neutral sentiment
};
saveContent(obj)
.then(function(fulfilled) {
console.log(fulfilled);
res.status(200).send({});
})
.catch(function (error) {
console.log(error.message);
res.status(500).send(error);
});
saveContent(obj)
.then(function (rez) {
logger.debug("rez: " + JSON.stringify(rez));
res.status(200).send({});
})
.catch(function (error) {
logger.error(error.message);
res.status(500).send(error);
});
});
app.listen(port, () => console.log(`Port: ${port}!`));
app.listen(port, () => console.log(`Port: ${port}!`));
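The Dapr state API shown above expects an array of key/value records. A minimal sketch of the payload the provider POSTs to `stateEndpoint` (the `buildStatePayload` name is ours, not part of the demo):

```javascript
// Build the state payload the provider sends to Dapr's state API.
// Dapr expects an array of { key, value } records; here the tweet id
// is the key and the whole tweet object is the value.
function buildStatePayload(obj) {
  if (!obj || !obj.id) {
    throw new Error("invalid content");
  }
  return [{ key: obj.id, value: obj }];
}

const payload = buildStatePayload({ id: "123", content: "hello", sentiment: 0.5 });
console.log(JSON.stringify(payload));
```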


@ -1,8 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
curl -d '{"lang":"en", "text":"I am so happy this worked"}' \
-H "Content-type: application/json" \
"http://localhost:3500/v1.0/invoke/processor/method/sentiment-score"


@ -1,10 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
dapr run node app.js \
--app-id provider \
--app-port 3001 \
--protocol http \
--port 3500


@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweet-store
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: actorStateStore
value: "true"


@ -1,19 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweets
spec:
type: bindings.twitter
version: v1
metadata:
# PLACE TWITTER CREDS HERE
- name: consumerKey
value: ""
- name: consumerSecret
value: ""
- name: accessToken
value: ""
- name: accessSecret
value: ""
- name: query
value: "microsoft"


@ -0,0 +1,26 @@
// A structured logger
// This provides a better logging experience than console.log
const { createLogger, format, transports, config } = require("winston");
const { combine, timestamp, label, printf } = format;
const daprFormat = printf(({ level, message, label, timestamp }) => {
return `${label} == time="${timestamp}" level=${level} msg="${message}"`;
});
const options = {
console: {
level: "debug",
handleExceptions: true,
json: false,
colorize: true,
},
};
const logger = createLogger({
format: combine(label({ label: "PROVIDER" }), timestamp(), daprFormat),
levels: config.npm.levels,
transports: [new transports.Console(options.console)],
exitOnError: false,
});
module.exports = logger;
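The winston format above emits Dapr-style log lines. As a rough sketch (without winston, and with the timestamp passed in explicitly for clarity), the same template can be expressed as a plain function:

```javascript
// Sketch of the daprFormat template above, without winston.
// label, level, and message mirror the fields the logger config uses;
// the function name daprFormatLine is ours, not part of the demo.
function daprFormatLine(label, level, message, timestamp) {
  return `${label} == time="${timestamp}" level=${level} msg="${message}"`;
}

const line = daprFormatLine("PROVIDER", "debug", "/tweets invoked...", "2021-02-12T18:04:32.000Z");
console.log(line);
```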


@ -1,15 +1,17 @@
{
"name": "provider",
"version": "1.0.0",
"description": "Dapr demo",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "mark@chmarny.com",
"license": "MIT",
"dependencies": {
"express": "^4.17.1",
"isomorphic-fetch": "^2.2.1"
}
"name": "provider",
"version": "1.0.0",
"description": "Dapr demo",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "mark@chmarny.com",
"license": "MIT",
"dependencies": {
"es6-promise": "^4.2.8",
"express": "^4.17.1",
"isomorphic-fetch": "^3.0.0",
"winston": "^3.3.3"
}
}


@ -0,0 +1,5 @@
# PowerShell
npm install
dapr run --app-id provider --app-port 3001 --components-path ../../components -- node app.js


@ -0,0 +1,8 @@
#!/bin/bash
set -o errexit
set -o pipefail
npm install
dapr run --app-id provider --app-port 3001 --components-path ../../components/ -- node app.js


@ -1,27 +0,0 @@
# Demo2
## provider
## processor
Start the service in Dapr with explicit port so we can invoke it later:
```shell
dapr run node app.js \
--log-level debug \
--app-id processor \
--app-port 3000 \
--protocol http \
--port 3500
```
Invoking it from curl or another service looks like this:
```shell
curl -d '{"lang":"en", "text":"I am so happy this worked"}' \
-H "Content-type: application/json" \
"http://localhost:3500/v1.0/invoke/processor/method/sentiment-score"
```


@ -0,0 +1,10 @@
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: daprConfig
namespace: default
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: "http://localhost:9411/api/v2/spans"


@ -0,0 +1,17 @@
targetScope = 'subscription'
param rgName string = 'twitterDemo2'
param location string = 'eastus'
resource rg 'Microsoft.Resources/resourceGroups@2020-06-01' = {
name: rgName
location: location
}
module twitterDemo './twitterDemo2.bicep' = {
name: 'twitterDemo2'
scope: resourceGroup(rg.name)
}
output cognitiveServiceKey string = twitterDemo.outputs.cognitiveServiceKey
output cognitiveServiceEndpoint string = twitterDemo.outputs.cognitiveServiceEndpoint


@ -0,0 +1,89 @@
{
"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"rgName": {
"type": "string",
"defaultValue": "twitterDemo2"
},
"location": {
"type": "string",
"defaultValue": "eastus"
}
},
"functions": [],
"resources": [
{
"type": "Microsoft.Resources/resourceGroups",
"apiVersion": "2020-06-01",
"name": "[parameters('rgName')]",
"location": "[parameters('location')]"
},
{
"type": "Microsoft.Resources/deployments",
"apiVersion": "2019-10-01",
"name": "twitterDemo2",
"resourceGroup": "[parameters('rgName')]",
"properties": {
"expressionEvaluationOptions": {
"scope": "inner"
},
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": "eastus2"
}
},
"functions": [],
"variables": {
"csApiVersion": "2017-04-18",
"csName": "[concat('cs', uniqueString(resourceGroup().id))]",
"cognitiveServicesId": "[resourceId('Microsoft.CognitiveServices/accounts/', variables('csName'))]"
},
"resources": [
{
"type": "Microsoft.CognitiveServices/accounts",
"apiVersion": "2017-04-18",
"name": "[variables('csName')]",
"kind": "CognitiveServices",
"location": "[parameters('location')]",
"sku": {
"name": "S0"
},
"properties": {
"customSubDomainName": "[variables('csName')]"
}
}
],
"outputs": {
"cognitiveServiceKey": {
"type": "string",
"value": "[listkeys(variables('cognitiveServicesId'), variables('csApiVersion')).key1]"
},
"cognitiveServiceEndpoint": {
"type": "string",
"value": "[reference(variables('cognitiveServicesId'), variables('csApiVersion')).endpoint]"
}
}
}
},
"dependsOn": [
"[subscriptionResourceId('Microsoft.Resources/resourceGroups', parameters('rgName'))]"
]
}
],
"outputs": {
"cognitiveServiceKey": {
"type": "string",
"value": "[reference(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, parameters('rgName')), 'Microsoft.Resources/deployments', 'twitterDemo2'), '2019-10-01').outputs.cognitiveServiceKey.value]"
},
"cognitiveServiceEndpoint": {
"type": "string",
"value": "[reference(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, parameters('rgName')), 'Microsoft.Resources/deployments', 'twitterDemo2'), '2019-10-01').outputs.cognitiveServiceEndpoint.value]"
}
}
}


@ -0,0 +1,20 @@
param location string = 'eastus2'
var csApiVersion = '2017-04-18'
var csName = concat('cs', uniqueString(resourceGroup().id))
var cognitiveServicesId = resourceId('Microsoft.CognitiveServices/accounts/', csName)
resource cs 'Microsoft.CognitiveServices/accounts@2017-04-18' = {
name: csName
kind: 'CognitiveServices'
location: location
sku: {
name: 'S0'
}
properties: {
customSubDomainName: csName
}
}
output cognitiveServiceKey string = listkeys(cognitiveServicesId, csApiVersion).key1
output cognitiveServiceEndpoint string = reference(cognitiveServicesId, csApiVersion).endpoint


@ -1,67 +1,100 @@
const express = require('express');
const bodyParser = require('body-parser');
require('isomorphic-fetch');
// This app is called by the provider to score the tweets using cognitive
// services. When this app starts Dapr registers its name so other services
// can use Dapr to call this service.
require("isomorphic-fetch");
const express = require("express");
const logger = require("./logger");
const bodyParser = require("body-parser");
// express
// express
const port = 3002;
const app = express();
app.use(bodyParser.json());
// cognitive API
// Cognitive Services API
// The KEY 1 value from Azure Portal, Keys and Endpoint section
const apiToken = process.env.CS_TOKEN || "";
const region = process.env.AZ_REGION || "westus2";
const endpoint = `${region}.api.cognitive.microsoft.com`;
const apiURL = `https://${endpoint}/text/analytics/v2.1/sentiment`;
const port = 3002;
// The Endpoint value from Azure Portal, Keys and Endpoint section
const endpoint = process.env.CS_ENDPOINT || "";
// The full URL to the sentiment service
const apiURL = `${endpoint}text/analytics/v2.1/sentiment`;
// Root GET that just returns the configured values.
app.get("/", (req, res) => {
res.status(200).send({message: "hi, nothing to see here, try => POST /sentiment-score"});
logger.debug("sentiment endpoint: " + endpoint);
logger.debug("sentiment apiURL: " + apiURL);
res.status(200).json({
message: "hi, nothing to see here, try => POST /sentiment-score",
endpoint: endpoint,
apiURL: apiURL,
});
});
// service
// This service exposes the scoring method
app.post("/sentiment-score", (req, res) => {
let body = req.body;
console.log("sentiment req: " + JSON.stringify(body));
let lang = body.lang;
let text = body.text;
if (!text || !text.trim()) {
res.status(400).send({error: "text required"});
let body = req.body;
let lang = body.lang;
let text = body.text;
logger.debug("sentiment req: " + JSON.stringify(body));
if (!text || !text.trim()) {
res.status(400).send({ error: "text required" });
return;
}
if (!lang || !lang.trim()) {
lang = "en";
}
const reqBody = {
documents: [
{
id: "1",
language: lang,
text: text,
},
],
};
// Call cognitive service to score the tweet
fetch(apiURL, {
method: "POST",
body: JSON.stringify(reqBody),
headers: {
"Content-Type": "application/json",
"Ocp-Apim-Subscription-Key": apiToken,
},
})
.then((_res) => {
if (!_res.ok) {
res.status(400).send({ error: "error invoking cognitive service" });
return;
}
if (!lang || !lang.trim()) {
lang = "en";
}
const reqBody = {
"documents": [{
"id": "1",
"language": lang,
"text": text
}]
};
fetch(apiURL, {
method: "POST",
body: JSON.stringify(reqBody),
headers: {
"Content-Type": "application/json",
"Ocp-Apim-Subscription-Key": apiToken
}
}).then((_res) => {
if (!_res.ok) {
res.status(400).send({error: "error invoking cognitive service"});
return;
}
return _res.json();
}).then((_resp) => {
const result = _resp.documents[0];
console.log(JSON.stringify(result));
res.status(200).send(result);
}).catch((error) => {
console.log(error);
res.status(500).send({message: error});
}
return _res.json();
})
.then((_resp) => {
// Send the response back to the other service.
const result = _resp.documents[0];
logger.debug(JSON.stringify(result));
res.status(200).send(result);
})
.catch((error) => {
logger.error(error);
res.status(500).send({ message: error });
});
});
app.listen(port, () => console.log(`Node App listening on port ${port}!`));
// Make sure we have all the required information
if (apiToken == "" || endpoint == "") {
logger.error("you must set CS_TOKEN and CS_ENDPOINT environment variables");
throw new Error(
"you must set CS_TOKEN and CS_ENDPOINT environment variables"
);
} else {
logger.debug("CS_TOKEN: " + apiToken);
logger.debug("CS_ENDPOINT: " + endpoint);
}
app.listen(port, () => logger.info(`Node App listening on port ${port}!`));
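The Text Analytics v2.1 sentiment endpoint used above expects a `documents` array. A sketch of how the processor's request body is built, with the language defaulting to "en" as in the code (the `buildSentimentRequest` helper name is ours):

```javascript
// Build the request body for the Text Analytics v2.1 sentiment endpoint.
// Each document needs an id, a language code, and the text to score;
// the processor defaults the language to "en" when none is supplied.
function buildSentimentRequest(text, lang) {
  if (!text || !text.trim()) {
    throw new Error("text required");
  }
  return {
    documents: [
      {
        id: "1",
        language: lang && lang.trim() ? lang : "en",
        text: text,
      },
    ],
  };
}

console.log(JSON.stringify(buildSentimentRequest("I am so happy this worked")));
```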


@ -1,8 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
curl -d '{"lang":"en", "text":"I am so happy this worked"}' \
-H "Content-type: application/json" \
"http://localhost:3000/sentiment-score"


@ -1,14 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
RELEASE_VERSION=v0.3.3
docker build -t mchmarny/processor:$RELEASE_VERSION .
docker push mchmarny/processor:$RELEASE_VERSION
# docker run -it -p 3002:3002 -d mchmarny/processor:v0.3.3


@ -1,6 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
dapr run node app.js --app-id processor --app-port 3002 --protocol http --port 3500


@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""


@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: actorStateStore
value: "true"


@ -0,0 +1,26 @@
// A structured logger
// This provides a better logging experience than console.log
const { createLogger, format, transports, config } = require("winston");
const { combine, timestamp, label, printf } = format;
const daprFormat = printf(({ level, message, label, timestamp }) => {
return `${label} == time="${timestamp}" level=${level} msg="${message}"`;
});
const options = {
console: {
level: "debug",
handleExceptions: true,
json: false,
colorize: true,
},
};
const logger = createLogger({
format: combine(label({ label: "PROCESSOR" }), timestamp(), daprFormat),
levels: config.npm.levels,
transports: [new transports.Console(options.console)],
exitOnError: false,
});
module.exports = logger;


@ -10,6 +10,7 @@
"license": "MIT",
"dependencies": {
"express": "^4.17.1",
"isomorphic-fetch": "^2.2.1"
"isomorphic-fetch": "^3.0.0",
"winston": "^3.3.3"
}
}


@ -0,0 +1,22 @@
# Be sure to log in to your Docker Hub account
[CmdletBinding()]
param (
[Parameter(
Position = 0,
HelpMessage = "The name of the Docker Hub user to push images to."
)]
[string]
$dockerHubUser = 'darquewarrior',
[Parameter(
Position = 1,
HelpMessage = "The dapr runtime version to use as the image tag."
)]
[string]
$daprVersion = "1.0.0-rc.4"
)
docker build -t $dockerHubUser/processor:$daprVersion .
docker push $dockerHubUser/processor:$daprVersion


@ -0,0 +1,20 @@
#!/bin/bash
# Use this script to publish new images to Docker Hub.
# This is not required to complete the demo as the images
# have already been pushed to Docker Hub.
set -o errexit
set -o pipefail
# The name of the Docker Hub user to push images to.
dockerHubUser=$1
dockerHubUser=${dockerHubUser:-darquewarrior}
# The dapr runtime version to use as the image tag.
daprVersion=$2
daprVersion=${daprVersion:-1.0.0-rc.4}
docker build -t $dockerHubUser/processor:$daprVersion .
docker push $dockerHubUser/processor:$daprVersion


@ -0,0 +1,3 @@
npm install
dapr run --app-id processor --app-port 3002 --components-path ../../components --config ../config.yaml --log-level debug -- node ./app.js


@ -0,0 +1,8 @@
#!/bin/bash
set -o errexit
set -o pipefail
npm install
dapr run --app-id processor --app-port 3002 --components-path ../../components -- node app.js


@ -1,7 +0,0 @@
# provider (node version)
Assuming you have Dapr initialized locally and the `processor` service already started:
```shell
bin/run
```


@ -1,141 +1,163 @@
const express = require('express');
const bodyParser = require('body-parser');
require('es6-promise').polyfill();
require('isomorphic-fetch');
// This app is called by Dapr each time a tweet is received. In demo1 this
// service just saved the tweets to the state store. Now we are going to call
// another service via Dapr to score the tweet using direct invocation. Once
// the tweet is processed it is posted to a pub/sub service to be read by
// another service.
require("isomorphic-fetch");
require("es6-promise").polyfill();
const logger = require("./logger");
const express = require("express");
const bodyParser = require("body-parser");
// express
// express
const port = 3001;
const app = express();
app.use(bodyParser.json());
const port = 3001;
// dapr
// Dapr
const daprPort = process.env.DAPR_HTTP_PORT || "3500";
const serviceEndpoint = `http://localhost:${daprPort}/v1.0/invoke/processor/method/sentiment-score`;
// The Dapr endpoint for the state store component to store the tweets.
const stateEndpoint = `http://localhost:${daprPort}/v1.0/state/tweet-store`;
const pubEndpoint = `http://localhost:${daprPort}/v1.0/publish/processed`;
// The Dapr endpoint for the pub/sub component used to communicate with other
// services in a loosely coupled way. tweet-pubsub is the name of the component
// and scored is the name of the topic to which other services will subscribe.
const pubEndpoint = `http://localhost:${daprPort}/v1.0/publish/tweet-pubsub/scored`;
// publish scored tweets
var publishContent = function(obj) {
return new Promise(
function(resolve, reject) {
if (!obj || !obj.id) {
reject({message: "invalid content"});
return;
}
fetch(pubEndpoint, {
method: "POST",
body: JSON.stringify(obj),
headers: {
"Content-Type": "application/json"
}
}).then((_res) => {
if (!_res.ok) {
console.log(_res.statusText);
reject({message: "error publishing content"});
}else{
resolve(obj)
}
}).catch((error) => {
console.log(error);
reject({message: error});
});
}
);
};
// The Dapr endpoint used to invoke the sentiment-score method on the processor service.
// We can invoke the service using its app id, processor.
const serviceEndpoint = `http://localhost:${daprPort}/v1.0/invoke/processor/method/sentiment-score`;
// store state
var saveContent = function(obj) {
return new Promise(
function(resolve, reject) {
if (!obj || !obj.id) {
reject({message: "invalid content"});
return;
}
const state = [{ key: obj.id, value: obj }];
fetch(stateEndpoint, {
method: "POST",
body: JSON.stringify(state),
headers: {
"Content-Type": "application/json"
}
}).then((_res) => {
if (!_res.ok) {
console.log(_res.statusText);
reject({message: "error saving content"});
}else{
resolve(obj)
}
}).catch((error) => {
console.log(error);
reject({message: error});
});
}
);
};
// score sentiment
var scoreSentiment = function(obj) {
return new Promise(
function(resolve, reject) {
fetch(serviceEndpoint, {
method: "POST",
body: JSON.stringify({lang: obj.lang, text: obj.content}),
headers: {
"Content-Type": "application/json"
}
}).then((_res) => {
if (!_res.ok) {
console.log(_res.statusText);
reject({message: "error invoking service"});
}else{
return _res.json();
}
}).then((_res) => {
console.log(_res);
obj.sentiment = _res.score;
resolve(obj)
}).catch((error) => {
console.log(error);
reject({message: error});
});
}
);
};
// tweets handler
app.post("/tweets", (req, res) => {
console.log("/tweets invoked...");
const tweet = req.body;
if (!tweet) {
res.status(400).send({error: "invalid content"});
return;
// store state
var saveContent = function (obj) {
return new Promise(function (resolve, reject) {
if (!obj || !obj.id) {
reject({ message: "invalid content" });
return;
}
const state = [{ key: obj.id, value: obj }];
fetch(stateEndpoint, {
method: "POST",
body: JSON.stringify(state),
headers: {
"Content-Type": "application/json",
traceparent: obj.trace_parent,
tracestate: obj.trace_state,
},
})
.then((_res) => {
if (!_res.ok) {
logger.debug(_res.statusText);
reject({ message: "error saving content" });
} else {
resolve(obj);
}
})
.catch((error) => {
logger.error(error);
reject({ message: error });
});
});
};
// tweets handler
app.post("/tweets", (req, res) => {
logger.debug("/tweets invoked...");
const tweet = req.body;
if (!tweet) {
res.status(400).send({ error: "invalid content" });
return;
}
let obj = {
id: tweet.id_str,
author: tweet.user.screen_name,
author_pic: tweet.user.profile_image_url_https,
content: tweet.full_text || tweet.text, // if extended then use it
lang: tweet.lang,
published: tweet.created_at,
sentiment: 0.5 // default to neutral sentiment
};
let obj = {
id: tweet.id_str,
author: tweet.user.screen_name,
author_pic: tweet.user.profile_image_url_https,
content: tweet.full_text || tweet.text, // if extended then use it
lang: tweet.lang,
published: tweet.created_at,
trace_state: req.get("tracestate"),
trace_parent: req.get("traceparent"),
sentiment: 0.5, // default to neutral sentiment
};
scoreSentiment(obj)
.then(saveContent)
.then(publishContent)
.then(function(rez) {
console.log(rez);
res.status(200).send({});
})
.catch(function (error) {
console.log(error.message);
res.status(500).send(error);
});
logger.debug("obj: " + JSON.stringify(obj));
scoreSentiment(obj)
.then(saveContent)
.then(publishContent)
.then(function (rez) {
logger.debug("rez: " + JSON.stringify(rez));
res.status(200).send({});
})
.catch(function (error) {
logger.error(error.message);
res.status(500).send(error);
});
});
// score sentiment
var scoreSentiment = function (obj) {
return new Promise(function (resolve, reject) {
fetch(serviceEndpoint, {
method: "POST",
body: JSON.stringify({ lang: obj.lang, text: obj.content }),
headers: {
"Content-Type": "application/json",
traceparent: obj.trace_parent,
tracestate: obj.trace_state,
},
})
.then((_res) => {
if (!_res.ok) {
logger.debug(_res.statusText);
reject({ message: "error invoking service" });
} else {
return _res.json();
}
})
.then((_res) => {
logger.debug("_res: " + JSON.stringify(_res));
obj.sentiment = _res.score;
resolve(obj);
})
.catch((error) => {
logger.debug(error);
reject({ message: error });
});
});
};
app.listen(port, () => console.log(`Port: ${port}!`));
// publish scored tweets
var publishContent = function (obj) {
return new Promise(function (resolve, reject) {
if (!obj || !obj.id) {
reject({ message: "invalid content" });
return;
}
fetch(pubEndpoint, {
method: "POST",
body: JSON.stringify(obj),
headers: {
"Content-Type": "application/json",
traceparent: obj.trace_parent,
tracestate: obj.trace_state,
},
})
.then((_res) => {
if (!_res.ok) {
logger.debug(_res.statusText);
reject({ message: "error publishing content" });
} else {
resolve(obj);
}
})
.catch((error) => {
logger.error(error);
reject({ message: error });
});
});
};
app.listen(port, () => logger.info(`Port: ${port}!`));
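The trace-propagation pattern above can be sketched on its own: copy the W3C `traceparent` and `tracestate` headers from the inbound request into every outbound call so Dapr can stitch the spans into one distributed trace. A simplified sketch of that copying step (the `withTraceContext` helper name is ours, not part of the demo):

```javascript
// Copy the W3C trace context headers from an inbound request onto
// outbound fetch headers so the spans join one distributed trace.
// Missing headers are simply omitted.
function withTraceContext(inboundHeaders, outboundHeaders) {
  const out = Object.assign({}, outboundHeaders);
  if (inboundHeaders.traceparent) out.traceparent = inboundHeaders.traceparent;
  if (inboundHeaders.tracestate) out.tracestate = inboundHeaders.tracestate;
  return out;
}

const headers = withTraceContext(
  { traceparent: "00-abc-def-01", tracestate: "congo=t61" },
  { "Content-Type": "application/json" }
);
console.log(JSON.stringify(headers));
```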


@ -1,8 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
curl -d '{"lang":"en", "text":"I am so happy this worked"}' \
-H "Content-type: application/json" \
"http://localhost:3500/v1.0/invoke/processor/method/sentiment-score"


@ -1,16 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
RELEASE_VERSION=v0.3.3
docker build -t mchmarny/provider:$RELEASE_VERSION .
docker push mchmarny/provider:$RELEASE_VERSION
# docker run -it -p 3002:3002 -d mchmarny/provider:v0.3.3


@ -1,6 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
dapr run node app.js --app-id provider --app-port 3001 --protocol http


@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: processed
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""


@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweet-store
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: actorStateStore
value: "true"


@ -1,18 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tweets
spec:
type: bindings.twitter
version: v1
metadata:
- name: consumerKey
value: ""
- name: consumerSecret
value: ""
- name: accessToken
value: ""
- name: accessSecret
value: ""
- name: query
value: "dapr" # need more tweets during dev ;)


@ -0,0 +1,24 @@
const { createLogger, format, transports, config } = require("winston");
const { combine, timestamp, label, printf } = format;
const daprFormat = printf(({ level, message, label, timestamp }) => {
return `${label} == time="${timestamp}" level=${level} msg="${message}"`;
});
const options = {
console: {
level: "debug",
handleExceptions: true,
json: false,
colorize: true,
},
};
const logger = createLogger({
format: combine(label({ label: "PROVIDER" }), timestamp(), daprFormat),
levels: config.npm.levels,
transports: [new transports.Console(options.console)],
exitOnError: false,
});
module.exports = logger;


@ -1,16 +1,17 @@
{
"name": "provider",
"version": "1.0.0",
"description": "Dapr demo",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Mark Chmarny <mark@chmarny.com>",
"license": "MIT",
"dependencies": {
"es6-promise": "^4.2.8",
"express": "^4.17.1",
"isomorphic-fetch": "^2.2.1"
}
"name": "provider",
"version": "1.0.0",
"description": "Dapr demo",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Mark Chmarny <mark@chmarny.com>",
"license": "MIT",
"dependencies": {
"es6-promise": "^4.2.8",
"express": "^4.17.1",
"isomorphic-fetch": "^3.0.0",
"winston": "^3.3.3"
}
}


@ -0,0 +1,21 @@
# Be sure to log in to your Docker Hub account
[CmdletBinding()]
param (
[Parameter(
Position = 0,
HelpMessage = "The name of the Docker Hub user to push images to."
)]
[string]
$dockerHubUser = 'darquewarrior',
[Parameter(
Position = 1,
HelpMessage = "The dapr runtime version to use as the image tag."
)]
[string]
$daprVersion = "1.0.0-rc.4"
)
docker build -t $dockerHubUser/provider:$daprVersion .
docker push $dockerHubUser/provider:$daprVersion


@ -0,0 +1,16 @@
#!/bin/bash
set -o errexit
set -o pipefail
# The name of the Docker Hub user to push images to.
dockerHubUser=$1
dockerHubUser=${dockerHubUser:-darquewarrior}
# The dapr runtime version to use as the image tag.
daprVersion=$2
daprVersion=${daprVersion:-1.0.0-rc.4}
docker build -t $dockerHubUser/provider:$daprVersion .
docker push $dockerHubUser/provider:$daprVersion


@ -0,0 +1,3 @@
npm install
dapr run --app-id provider --app-port 3001 --components-path ../../components --config ../config.yaml --log-level debug -- node app.js


@ -0,0 +1,8 @@
#!/bin/bash
set -o errexit
set -o pipefail
npm install
dapr run --app-id provider --app-port 3001 --components-path ../../components -- node app.js


@ -0,0 +1,38 @@
# This script runs an ARM template deployment to create all the
# required resources in Azure. All the keys, tokens, and endpoints
# are automatically retrieved and set as the required environment
# variables.
# Requirements:
# PowerShell Core 7 (runs on macOS, Linux and Windows)
# Azure CLI (log in, runs on macOS, Linux and Windows)
[CmdletBinding()]
param (
[Parameter(
Position = 0,
HelpMessage = "The name of the resource group to be created. All resources will be placed in the resource group and start with this name."
)]
[string]
$rgName = "twitterDemo2",
[Parameter(
Position = 1,
HelpMessage = "The location to store the metadata for the deployment."
)]
[string]
$location = "eastus"
)
# Deploy the infrastructure
$deployment = $(az deployment sub create --location $location --template-file ./iac/main.json --parameters rgName=$rgName --output json) | ConvertFrom-Json
# Get all the outputs
$cognitiveServiceKey = $deployment.properties.outputs.cognitiveServiceKey.value
$cognitiveServiceEndpoint = $deployment.properties.outputs.cognitiveServiceEndpoint.value
Write-Verbose "cognitiveServiceKey = $cognitiveServiceKey"
Write-Verbose "cognitiveServiceEndpoint = $cognitiveServiceEndpoint"
$env:CS_TOKEN=$cognitiveServiceKey
$env:CS_ENDPOINT=$cognitiveServiceEndpoint
Write-Output "You can now run the processor from this terminal.`n"

View File

@ -0,0 +1,39 @@
#!/bin/bash
# This script runs an ARM template deployment to create all the
# required resources in Azure. All the keys, tokens and endpoints
# will be automatically retrieved and set as the required environment
# variables.
# Requirements:
# Azure CLI (logged in)
# This script sets environment variables needed by the processor.
# Therefore, you must run this script using the source command:
# source setup.sh
# If you just run the command using ./setup.sh the resources in Azure will be
# created but the environment variables will not be set.
function getOutput {
echo $(az deployment sub show --name $rgName --query "properties.outputs.$1.value" --output tsv)
}
# The name of the resource group to be created. All resources will be placed in
# the resource group and start with this name.
rgName=$1
rgName=${rgName:-twitterDemo2}
# The location to store the metadata for the deployment.
location=$2
location=${location:-eastus}
# Deploy the infrastructure
az deployment sub create --name $rgName --location $location --template-file ./iac/main.json --parameters rgName=$rgName --output none
# Get all the outputs
cognitiveServiceKey=$(getOutput 'cognitiveServiceKey')
cognitiveServiceEndpoint=$(getOutput 'cognitiveServiceEndpoint')
export CS_TOKEN=$cognitiveServiceKey
export CS_ENDPOINT=$cognitiveServiceEndpoint
printf "You can now run the processor from this terminal.\n"
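The comment block at the top of this script explains that it must be sourced rather than executed; the reason is that a script run with `./setup.sh` executes in a child shell, and exports made in a child process never reach the parent. A small self-contained sketch of that behavior (using a throwaway script and a placeholder value, not the real Azure outputs):

```shell
#!/bin/bash
# Sketch: why setup.sh must be run with `source`.
# Exports made in a child shell disappear when it exits;
# `source` runs the same lines in the current shell instead.
unset CS_TOKEN
script=$(mktemp)
echo 'export CS_TOKEN=demo-token' > "$script"

bash "$script"                      # runs in a child shell
echo "after bash:   '${CS_TOKEN}'"  # prints '' -- the export was lost

source "$script"                    # runs in the current shell
echo "after source: '${CS_TOKEN}'"  # prints 'demo-token' -- the export stuck
rm -f "$script"
```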

View File

@ -0,0 +1,2 @@
.git
components

View File

@ -11,7 +11,7 @@ ENV GO111MODULE=on
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -a -tags netgo -ldflags \
"-w -extldflags '-static' -X main.AppVersion=${APP_VERSION}" \
-mod vendor -o ./service .
-o ./service .
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /src/service .

View File

@ -1,7 +0,0 @@
# viewer
Assuming you have Dapr initialized locally and the `provider` and `processor` service already started:
```shell
bin/run
```

View File

@ -1,9 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
go mod tidy
go mod vendor
go run -v handler.go main.go

View File

@ -1,17 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
go mod tidy
go mod vendor
RELEASE_VERSION=v0.3.3
docker build \
--build-arg APP_VERSION=$RELEASE_VERSION \
-t mchmarny/viewer:$RELEASE_VERSION \
.
docker push mchmarny/viewer:$RELEASE_VERSION

View File

@ -1,11 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
go mod tidy
go mod vendor
dapr run go run handler.go main.go --app-id viewer --app-port 8083 --protocol http

View File

@ -1,10 +0,0 @@
#!/bin/bash
set -o errexit
set -o pipefail
go mod tidy
go mod vendor
go test -v -count=1 -race ./...
# go test -v -count=1 -run TestScoring ./...

View File

@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: processed
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: actorStateStore
value: "true"

View File

@ -1,4 +1,4 @@
module github.com/mchmarny/dapr-pipeline/src/viewer
module github.com/dapr/samples/twitter-sentiment-processor/demos/demo2/viewer
go 1.14
@ -8,7 +8,6 @@ require (
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
github.com/golang/protobuf v1.4.0 // indirect
github.com/gorilla/websocket v1.4.2 // indirect
github.com/mchmarny/gcputil v0.3.3
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.1 // indirect
github.com/pkg/errors v0.9.1 // indirect

View File

@ -2,18 +2,8 @@ cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMT
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.40.0/go.mod h1:Tk58MuI9rbLMKlAjeO/bDnteAx7tX2gJIXw4T5Jwlro=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.49.0/go.mod h1:hGvAdzcWNbyuxS3nWhD7H2cIJxjRRTRLQVB0bdputVY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
contrib.go.opencensus.io/exporter/ocagent v0.4.12/go.mod h1:450APlNTSR6FrvC3CTRqYosuDstRB9un7SOx2k/9ckA=
contrib.go.opencensus.io/exporter/prometheus v0.1.0/go.mod h1:cGFniUXGZlKRjzOyuZJ6mgB+PgBcCIa79kEKR8YCW+A=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v30.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-autorest/autorest v0.2.0/go.mod h1:AKyIcETwSUFxIcs/Wnq/C+kwCtlEYGUVd7FPNb2slmg=
github.com/Azure/go-autorest/autorest/adal v0.1.0/go.mod h1:MeS4XhScH55IST095THyTxElntu7WqB7pNbZo8Q5G3E=
@ -25,7 +15,6 @@ github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6L
github.com/Azure/go-autorest/tracing v0.1.0/go.mod h1:ROEEAFwXycQw7Sn3DXNtEedEvdeRAgDr0izn4z5Ij88=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
@ -51,7 +40,6 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
@ -62,7 +50,6 @@ github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.6.2 h1:88crIK23zO6TqlQBt+f9FrPJNKm9ZEr7qjp9vl/d5TM=
github.com/gin-gonic/gin v1.6.2/go.mod h1:75u5sXoLsGZoRN5Sgbi1eraJ4GU3++wFwWzhwvtwp4M=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-playground/assert/v2 v2.0.1 h1:MsBgLAaY856+nPRTKrp3/OZK38U/wa0CcBYNjji3q3A=
@ -81,13 +68,10 @@ github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zV
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9 h1:uHTyIjqVhYRhLbJ8nIiOJHkEZZ+5YoOsAbD3sk82NiE=
github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@ -101,7 +85,6 @@ github.com/golang/protobuf v1.4.0 h1:oOuy+ugB+P/kBdUnG5QaMXSIyJ1q38wWSojYCb3z5VQ
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@ -110,7 +93,6 @@ github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@ -158,8 +140,6 @@ github.com/lightstep/tracecontext.go v0.0.0-20181129014701-1757c391b1ac/go.mod h
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mchmarny/gcputil v0.3.3 h1:7xy8l8UEotam8uJPQe2uxAHhklJU0SttQ6VMRY0eQ/g=
github.com/mchmarny/gcputil v0.3.3/go.mod h1:h8quvmF3tFoEILEmgkMPtZoXW+P16Wn5GV1aKHkUhnQ=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 h1:ZqeYNhU3OHLH3mGKHDcjJRFFRrJa6eAM5H+CtDdOsPc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
@ -230,8 +210,6 @@ go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2 h1:75k/FF0Q2YM8QYo07VPddOLBslDt1MZOdEslOHvmzAs=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3 h1:8sGtKOrtQqkN1bp2AtX+misvLIlOmsEsNd+9NIcPEm8=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.4.0 h1:cxzIVoETapQEqDhQu3QfnvXAV4AlzcvUCxkVUFw3+EU=
@ -251,29 +229,18 @@ go.uber.org/zap v1.15.0/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200204104054-c9f3fb736b72/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200206161412-a0c6ece9d31a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@ -287,16 +254,13 @@ golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191126235420-ef20fe5d7933/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e h1:3G+cUijn7XD+S4eJFddp53Pv7+slrESplyjG25HgL+k=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191122200657-5d9234df094c/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -310,15 +274,11 @@ golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190523142557-0e01d883c5c5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191128015809-6d18c012aee9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884=
@ -330,7 +290,6 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@ -338,55 +297,35 @@ golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2 h1:EtTFh6h4SAKemS+CURDMTDIANuduG5zKEXShyy18bGA=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115221424-83cc0476cb11/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=

View File

@ -10,27 +10,31 @@ import (
const (
// SupportedCloudEventVersion indicates the version of CloudEvents supported by this handler
SupportedCloudEventVersion = "0.3"
SupportedCloudEventVersion = "1.0"
// SupportedCloudEventContentTye indicates the content type supported by this handler
SupportedCloudEventContentTye = "application/json"
)
type subscription struct {
Topic string `json:"Topic"`
Route string `json:"Route"`
PubSubName string `json:"pubsubname"`
Topic string `json:"topic"`
Route string `json:"route"`
}
// Called by dapr to see which topics this application wants to subscribe to.
// Return a subscription object with the PubSubName (dapr component name), topic
// to subscribe to, and the route to send the items to.
func subscribeHandler(c *gin.Context) {
topics := []subscription{
subscription{
Topic: sourceTopic,
Route: "/" + sourceTopic,
{
PubSubName: "tweet-pubsub",
Topic: "scored",
Route: "/" + topicRoute,
},
}
logger.Printf("subscription tipics: %v", topics)
logger.Printf("subscription topics: %v", topics)
c.JSON(http.StatusOK, topics)
}
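The struct tags above matter because the Dapr sidecar matches the subscription JSON keys case-sensitively; the old mixed-case `"Topic"`/`"Route"` tags are replaced with lowercase ones and a `pubsubname` field. A sketch of the payload `subscribeHandler` should now return to the sidecar's `GET /dapr/subscribe` call (the topic, component name, and route values are taken from the handler and `main.go` above):

```shell
#!/bin/bash
# Sketch of the subscription JSON the viewer's subscribeHandler returns.
# Lowercase keys come from the Go struct tags; tweet-pubsub is the dapr
# component name and showOnWebSite is the topicRoute from main.go.
payload='[{"pubsubname":"tweet-pubsub","topic":"scored","route":"/showOnWebSite"}]'
echo "$payload"
```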
@ -49,6 +53,7 @@ func rootHandler(c *gin.Context) {
}
// This is called each time a tweet is posted to this app.
func eventHandler(c *gin.Context) {
e := ce.NewEvent()
@ -61,7 +66,7 @@ func eventHandler(c *gin.Context) {
return
}
// logger.Printf("received event: %v", e.Context)
logger.Printf("received event: %v", e.Context)
eventVersion := e.Context.GetSpecVersion()
if eventVersion != SupportedCloudEventVersion {

View File

@ -8,7 +8,6 @@ import (
"os"
"github.com/gin-gonic/gin"
"github.com/mchmarny/gcputil/env"
"gopkg.in/olahol/melody.v1"
)
@ -18,10 +17,8 @@ var (
// AppVersion will be overwritten during build
AppVersion = "v0.0.1-default"
// service
servicePort = env.MustGetEnvVar("PORT", "8083")
sourceTopic = env.MustGetEnvVar("VIEWER_SOURCE_TOPIC_NAME", "processed")
// The route name to process the incoming tweets on
topicRoute = "showOnWebSite"
broadcaster *melody.Melody
)
@ -53,15 +50,14 @@ func main() {
})
// topic route
viewerRoute := fmt.Sprintf("/%s", sourceTopic)
viewerRoute := fmt.Sprintf("/%s", topicRoute)
logger.Printf("viewer route: %s", viewerRoute)
r.POST(viewerRoute, eventHandler)
// server
hostPort := net.JoinHostPort("0.0.0.0", servicePort)
hostPort := net.JoinHostPort("0.0.0.0", "8083")
logger.Printf("Server (%s) starting: %s \n", AppVersion, hostPort)
if err := r.Run(hostPort); err != nil {
logger.Fatal(err)
}
}

View File

@ -0,0 +1,21 @@
# Be sure to log into your Docker Hub account
[CmdletBinding()]
param (
[Parameter(
Position = 0,
HelpMessage = "The name of the Docker Hub user to push images to."
)]
[string]
$dockerHubUser = 'darquewarrior',
[Parameter(
Position = 1,
HelpMessage = "The version of the Dapr runtime to use as the image tag."
)]
[string]
$daprVersion = "1.0.0-rc.4"
)
docker build --build-arg APP_VERSION=$daprVersion -t $dockerHubUser/viewer:$daprVersion .
docker push $dockerHubUser/viewer:$daprVersion

View File

@ -0,0 +1,18 @@
#!/bin/bash
set -o errexit
set -o pipefail
# The name of the Docker Hub user to push images to.
dockerHubUser=$1
dockerHubUser=${dockerHubUser:-darquewarrior}
# The version of the Dapr runtime to use as the image tag.
daprVersion=$2
daprVersion=${daprVersion:-1.0.0-rc.4}
go mod tidy
docker build --build-arg APP_VERSION=$daprVersion -t $dockerHubUser/viewer:$daprVersion .
docker push $dockerHubUser/viewer:$daprVersion

View File

@ -16,7 +16,7 @@ window.onload = function () {
log.scrollTop = log.scrollHeight - log.clientHeight;
}
}
}
if (log) {
@ -49,14 +49,15 @@ window.onload = function () {
}else {
scoreStr = "neutral"
}
var item = document.createElement("div");
item.className = "item";
// TODO: template this
var tmsg = "<img src='" + t.author_pic + "' class='profile-pic' />" +
"<div class='item-text'><b><img src='static/img/" + scoreStr +
".svg' alt='sentiment' class='sentiment' />" + t.author +
".svg' alt='sentiment' class='sentiment' title='" + scoreStr +
"' />" + t.author +
"<a href='https://twitter.com/" + t.author + "/status/" + t.id +
"' target='_blank'><img src='static/img/tw.svg' class='tweet-link' /></a></b>" +
"<br /><i>" + t.content + "</i><br /><i class='small'>Query: " +

View File

@ -2,8 +2,8 @@
<!-- Footer -->
<div id="page-footer">
<p>
<a href="https://github.com/mchmarny/dapr-pipeline">
dapr-pipeline
<a href="https://github.com/dapr/samples/tree/master/twitter-sentiment-processor">
twitter-sentiment-processor
</a>
</p>
<p>

View File

@ -0,0 +1,3 @@
go build handler.go main.go
dapr run --app-id viewer --app-port 8083 --components-path ../../components --config ../config.yaml --log-level debug -- go run handler.go main.go

View File

@ -0,0 +1,9 @@
#!/bin/bash
set -o errexit
set -o pipefail
go mod tidy
go build handler.go main.go
dapr run --app-id viewer --app-port 8083 --components-path ../../components --config ../config.yaml --log-level debug -- go run handler.go main.go

View File

@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -1,161 +0,0 @@
package v2
// Package cloudevents aliases common functions and types to improve discoverability and reduce
// the number of imports for simple HTTP clients.
import (
"github.com/cloudevents/sdk-go/v2/binding"
"github.com/cloudevents/sdk-go/v2/client"
"github.com/cloudevents/sdk-go/v2/context"
"github.com/cloudevents/sdk-go/v2/event"
"github.com/cloudevents/sdk-go/v2/observability"
"github.com/cloudevents/sdk-go/v2/protocol"
"github.com/cloudevents/sdk-go/v2/protocol/http"
"github.com/cloudevents/sdk-go/v2/types"
)
// Client
type ClientOption client.Option
type Client = client.Client
// Event
type Event = event.Event
type Result = protocol.Result
// Context
type EventContext = event.EventContext
type EventContextV1 = event.EventContextV1
type EventContextV03 = event.EventContextV03
// Custom Types
type Timestamp = types.Timestamp
type URIRef = types.URIRef
// HTTP Protocol
type HTTPOption http.Option
type HTTPProtocol = http.Protocol
// Encoding
type Encoding = binding.Encoding
// Message
type Message = binding.Message
const (
// ReadEncoding
ApplicationXML = event.ApplicationXML
ApplicationJSON = event.ApplicationJSON
TextPlain = event.TextPlain
ApplicationCloudEventsJSON = event.ApplicationCloudEventsJSON
ApplicationCloudEventsBatchJSON = event.ApplicationCloudEventsBatchJSON
Base64 = event.Base64
// Event Versions
VersionV1 = event.CloudEventsVersionV1
VersionV03 = event.CloudEventsVersionV03
// Encoding
EncodingBinary = binding.EncodingBinary
EncodingStructured = binding.EncodingStructured
)
var (
// ContentType Helpers
StringOfApplicationJSON = event.StringOfApplicationJSON
StringOfApplicationXML = event.StringOfApplicationXML
StringOfTextPlain = event.StringOfTextPlain
StringOfApplicationCloudEventsJSON = event.StringOfApplicationCloudEventsJSON
StringOfApplicationCloudEventsBatchJSON = event.StringOfApplicationCloudEventsBatchJSON
StringOfBase64 = event.StringOfBase64
// Client Creation
NewClient = client.New
NewClientObserved = client.NewObserved
NewDefaultClient = client.NewDefault
NewHTTPReceiveHandler = client.NewHTTPReceiveHandler
// Client Options
WithEventDefaulter = client.WithEventDefaulter
WithUUIDs = client.WithUUIDs
WithTimeNow = client.WithTimeNow
WithTracePropagation = client.WithTracePropagation()
// Results
ResultIs = protocol.ResultIs
ResultAs = protocol.ResultAs
// Receipt helpers
NewReceipt = protocol.NewReceipt
ResultACK = protocol.ResultACK
ResultNACK = protocol.ResultNACK
IsACK = protocol.IsACK
IsNACK = protocol.IsNACK
// Event Creation
NewEvent = event.New
NewResult = protocol.NewResult
NewHTTPResult = http.NewResult
// Message Creation
ToMessage = binding.ToMessage
// HTTP Messages
WriteHTTPRequest = http.WriteRequest
// Tracing
EnableTracing = observability.EnableTracing
// Context
ContextWithTarget = context.WithTarget
TargetFromContext = context.TargetFrom
WithEncodingBinary = binding.WithForceBinary
WithEncodingStructured = binding.WithForceStructured
// Custom Types
ParseTimestamp = types.ParseTimestamp
ParseURIRef = types.ParseURIRef
ParseURI = types.ParseURI
// HTTP Protocol
NewHTTP = http.New
// HTTP Protocol Options
WithTarget = http.WithTarget
WithHeader = http.WithHeader
WithShutdownTimeout = http.WithShutdownTimeout
//WithEncoding = http.WithEncoding
//WithStructuredEncoding = http.WithStructuredEncoding // TODO: expose new way
WithPort = http.WithPort
WithPath = http.WithPath
WithMiddleware = http.WithMiddleware
WithListener = http.WithListener
WithRoundTripper = http.WithRoundTripper
)

View File

@ -1,39 +0,0 @@
package binding
import (
"context"
"io"
"github.com/cloudevents/sdk-go/v2/binding/spec"
)
// BinaryWriter is used to visit a binary Message and generate a new representation.
//
// Protocols that support binary encoding should implement this interface to enable direct
// binary-to-binary encoding and event-to-binary encoding.
//
// Start() and End() are invoked every time this BinaryWriter implementation is used to visit a Message.
type BinaryWriter interface {
// Method invoked at the beginning of the visit. Useful to perform initial memory allocations
Start(ctx context.Context) error
// Set a standard attribute.
//
// The value can either be the correct golang type for the attribute, or a canonical
// string encoding. See package types to perform the needed conversions
SetAttribute(attribute spec.Attribute, value interface{}) error
// Set an extension attribute.
//
// The value can either be the correct golang type for the attribute, or a canonical
// string encoding. See package types to perform the needed conversions
SetExtension(name string, value interface{}) error
// SetData receives an io.Reader for the data attribute.
// SetData is not invoked when the data attribute is empty.
SetData(data io.Reader) error
// End method is invoked only after the whole encoding process ends successfully.
// If it fails, it's never invoked. It can be used to finalize the message.
End(ctx context.Context) error
}

View File

@ -1,64 +0,0 @@
package binding
/*
Package binding defines interfaces for protocol bindings.
NOTE: Most applications that emit or consume events should use the ../client
package, which provides a simpler API to the underlying binding.
The interfaces in this package provide extra encoding and protocol information
to allow efficient forwarding and end-to-end reliable delivery between a
Receiver and a Sender belonging to different bindings. This is useful for
intermediary applications that route or forward events, but not necessary for
most "endpoint" applications that emit or consume events.
Protocol Bindings
A protocol binding usually implements a Message, a Sender and Receiver, a StructuredWriter and a BinaryWriter (depending on the encodings supported by the protocol), and a Write[ProtocolMessage] method.
Read and write events
The core of this package is the binding.Message interface.
Through binding.MessageReader it defines how to read a protocol-specific message for an
encoded event in structured mode or binary mode.
The entity that receives a protocol-specific data structure representing a message
(e.g. an HttpRequest) encapsulates it in a binding.Message implementation using a NewMessage method (e.g. http.NewMessage).
The entity that wants to send the binding.Message back on the wire then
translates it back to the protocol-specific data structure (e.g. a Kafka ConsumerMessage), using
the BinaryWriter and StructuredWriter writers specific to that protocol.
Binding implementations expose their writers
through a specific Write[ProtocolMessage] function (e.g. kafka.EncodeProducerMessage),
in order to simplify the encoding process.
The encoding process can be customized in order to mutate the final result with binding.TransformerFactory.
A number of these are provided directly by the binding/transformer module.
Usually binding.Message implementations can be encoded only once, because the encoding process drains the message itself.
In order to consume a message several times, the binding/buffering module provides several APIs to buffer the Message.
A message can be converted to an event.Event using the binding.ToEvent() method.
An event.Event can be used as a Message by casting it to binding.EventMessage.
In order to simplify the encoding process for each protocol, this package provides several utility methods such as binding.Write and binding.DirectWrite.
The binding.Write method tries to preserve the structured/binary encoding, in order to be as efficient as possible.
Messages can be wrapped to change their behavior and bind their lifecycle, like binding.FinishMessage.
Every Message wrapper implements the MessageWrapper interface.
Sender and Receiver
A Receiver receives protocol-specific messages and wraps them into binding.Message implementations.
A Sender converts arbitrary Message implementations to a protocol-specific form using the protocol-specific Write method
and sends them.
Message and ExactlyOnceMessage provide methods to allow acknowledgments to
propagate when a reliable message is forwarded from a Receiver to a Sender.
QoS 0 (unreliable), 1 (at-least-once) and 2 (exactly-once) are supported.
Transport
A binding implementation providing Sender and Receiver implementations can be used as a Transport through the BindingTransport adapter.
*/

View File

@ -1,26 +0,0 @@
package binding
import "errors"
// Encoding enum specifies the type of encodings supported by binding interfaces
type Encoding int
const (
// Binary encoding as specified in https://github.com/cloudevents/spec/blob/master/spec.md#message
EncodingBinary Encoding = iota
// Structured encoding as specified in https://github.com/cloudevents/spec/blob/master/spec.md#message
EncodingStructured
// Message is an instance of EventMessage or contains a nested EventMessage (through MessageWrapper)
EncodingEvent
// When the encoding is unknown (which means that the message is a non-event)
EncodingUnknown
)
// ErrUnknownEncoding is returned when the Message is not an event or is encoded with an unknown encoding.
var ErrUnknownEncoding = errors.New("unknown Message encoding")
// ErrNotStructured returned by Message.Structured for non-structured messages.
var ErrNotStructured = errors.New("message is not in structured mode")
// ErrNotBinary returned by Message.Binary for non-binary messages.
var ErrNotBinary = errors.New("message is not in binary mode")

View File

@ -1,90 +0,0 @@
package binding
import (
"bytes"
"context"
"github.com/cloudevents/sdk-go/v2/binding/format"
"github.com/cloudevents/sdk-go/v2/binding/spec"
"github.com/cloudevents/sdk-go/v2/event"
)
const (
FORMAT_EVENT_STRUCTURED = "FORMAT_EVENT_STRUCTURED"
)
// EventMessage type-converts an event.Event object to implement Message.
// This allows local event.Event objects to be sent directly via Sender.Send():
// s.Send(ctx, binding.EventMessage(e))
// When an event is wrapped into an EventMessage, the original event could
// potentially be mutated. If you need to use the Event again after wrapping it into
// an EventMessage, you should copy it first.
type EventMessage event.Event
func ToMessage(e *event.Event) Message {
return (*EventMessage)(e)
}
func (m *EventMessage) ReadEncoding() Encoding {
return EncodingEvent
}
func (m *EventMessage) ReadStructured(ctx context.Context, builder StructuredWriter) error {
f := GetOrDefaultFromCtx(ctx, FORMAT_EVENT_STRUCTURED, format.JSON).(format.Format)
b, err := f.Marshal((*event.Event)(m))
if err != nil {
return err
}
return builder.SetStructuredEvent(ctx, f, bytes.NewReader(b))
}
func (m *EventMessage) ReadBinary(ctx context.Context, b BinaryWriter) (err error) {
err = b.Start(ctx)
if err != nil {
return err
}
err = eventContextToBinaryWriter(m.Context, b)
if err != nil {
return err
}
// Pass the body
body := (*event.Event)(m).Data()
if len(body) > 0 {
err = b.SetData(bytes.NewReader(body))
if err != nil {
return err
}
}
return b.End(ctx)
}
func eventContextToBinaryWriter(c event.EventContext, b BinaryWriter) (err error) {
// Pass all attributes
sv := spec.VS.Version(c.GetSpecVersion())
for _, a := range sv.Attributes() {
value := a.Get(c)
if value != nil {
err = b.SetAttribute(a, value)
}
if err != nil {
return err
}
}
// Pass all extensions
for k, v := range c.GetExtensions() {
err = b.SetExtension(k, v)
if err != nil {
return err
}
}
return nil
}
func (*EventMessage) Finish(error) error { return nil }
var _ Message = (*EventMessage)(nil) // Test it conforms to the interface
// Configure which format to use when marshalling the event to structured mode
func UseFormatForEvent(ctx context.Context, f format.Format) context.Context {
return context.WithValue(ctx, FORMAT_EVENT_STRUCTURED, f)
}

View File

@ -1,27 +0,0 @@
package binding
type finishMessage struct {
Message
finish func(error)
}
func (m *finishMessage) GetWrappedMessage() Message {
return m.Message
}
func (m *finishMessage) Finish(err error) error {
err2 := m.Message.Finish(err) // Finish original message first
if m.finish != nil {
m.finish(err) // Notify callback
}
return err2
}
var _ MessageWrapper = (*finishMessage)(nil)
// WithFinish returns a wrapper for m that calls finish() and
// m.Finish() in its Finish().
// Allows code to be notified when a message is Finished.
func WithFinish(m Message, finish func(error)) Message {
return &finishMessage{Message: m, finish: finish}
}
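The WithFinish decorator above can be exercised with a stdlib-only stand-in; here Message is pared down to the one method the wrapper cares about, so this is a sketch of the pattern rather than the SDK's actual interface:

```go
package main

import "fmt"

// Message is pared down to the one method the decorator touches.
type Message interface {
	Finish(error) error
}

// baseMessage is a trivial Message whose Finish always succeeds.
type baseMessage struct{}

func (baseMessage) Finish(err error) error { return nil }

// finishMessage mirrors the wrapper above: it finishes the inner message
// first, then notifies the callback with the same error.
type finishMessage struct {
	Message
	finish func(error)
}

func (m *finishMessage) Finish(err error) error {
	err2 := m.Message.Finish(err) // finish original message first
	if m.finish != nil {
		m.finish(err) // notify callback
	}
	return err2
}

// WithFinish wraps m so that finish() is called whenever m is Finished.
func WithFinish(m Message, finish func(error)) Message {
	return &finishMessage{Message: m, finish: finish}
}

func main() {
	notified := false
	m := WithFinish(baseMessage{}, func(err error) { notified = true })
	_ = m.Finish(nil)
	fmt.Println("callback notified:", notified) // callback notified: true
}
```

Because the wrapper embeds the inner Message, any other methods on the real interface would be forwarded automatically; only Finish is overridden.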

View File

@ -1,8 +0,0 @@
package format
/*
Package format formats structured events.
The "application/cloudevents+json" format is built-in and always
available. Other formats may be added.
*/

View File

@ -1,71 +0,0 @@
package format
import (
"encoding/json"
"fmt"
"strings"
"github.com/cloudevents/sdk-go/v2/event"
)
// Format marshals and unmarshals structured events to bytes.
type Format interface {
// MediaType identifies the format
MediaType() string
// Marshal event to bytes
Marshal(*event.Event) ([]byte, error)
// Unmarshal bytes to event
Unmarshal([]byte, *event.Event) error
}
// Prefix for event-format media types.
const Prefix = "application/cloudevents"
// IsFormat returns true if mediaType begins with "application/cloudevents"
func IsFormat(mediaType string) bool { return strings.HasPrefix(mediaType, Prefix) }
// JSON is the built-in "application/cloudevents+json" format.
var JSON = jsonFmt{}
type jsonFmt struct{}
func (jsonFmt) MediaType() string { return event.ApplicationCloudEventsJSON }
func (jsonFmt) Marshal(e *event.Event) ([]byte, error) { return json.Marshal(e) }
func (jsonFmt) Unmarshal(b []byte, e *event.Event) error {
return json.Unmarshal(b, e)
}
// built-in formats
var formats map[string]Format
func init() {
formats = map[string]Format{}
Add(JSON)
}
// Lookup returns the format for mediaType, or nil if not found.
func Lookup(mediaType string) Format { return formats[mediaType] }
func unknown(mediaType string) error {
return fmt.Errorf("unknown event format media-type %#v", mediaType)
}
// Add a new Format. It can be retrieved by Lookup(f.MediaType())
func Add(f Format) { formats[f.MediaType()] = f }
// Marshal an event to bytes using the mediaType event format.
func Marshal(mediaType string, e *event.Event) ([]byte, error) {
if f := formats[mediaType]; f != nil {
return f.Marshal(e)
}
return nil, unknown(mediaType)
}
// Unmarshal bytes to an event using the mediaType event format.
func Unmarshal(mediaType string, b []byte, e *event.Event) error {
if f := formats[mediaType]; f != nil {
return f.Unmarshal(b, e)
}
return unknown(mediaType)
}
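The format package above is a small media-type registry: formats register themselves by MediaType and are found via Lookup. The stdlib-only sketch below mirrors that pattern; Event here is a hypothetical stand-in struct, not the SDK's event.Event:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Event is a stand-in for the SDK's event.Event, just enough to round-trip.
type Event struct {
	ID   string `json:"id"`
	Type string `json:"type"`
}

// Format mirrors the interface above: a marshaller keyed by media type.
type Format interface {
	MediaType() string
	Marshal(*Event) ([]byte, error)
	Unmarshal([]byte, *Event) error
}

// jsonFmt is the built-in JSON format.
type jsonFmt struct{}

func (jsonFmt) MediaType() string                  { return "application/cloudevents+json" }
func (jsonFmt) Marshal(e *Event) ([]byte, error)   { return json.Marshal(e) }
func (jsonFmt) Unmarshal(b []byte, e *Event) error { return json.Unmarshal(b, e) }

// The registry: formats register by media type and are found by Lookup.
var formats = map[string]Format{}

func Add(f Format)                   { formats[f.MediaType()] = f }
func Lookup(mediaType string) Format { return formats[mediaType] }

// IsFormat applies the same prefix check as the package above.
func IsFormat(mediaType string) bool {
	return strings.HasPrefix(mediaType, "application/cloudevents")
}

func main() {
	Add(jsonFmt{})
	f := Lookup("application/cloudevents+json")
	b, _ := f.Marshal(&Event{ID: "1", Type: "demo"})
	var out Event
	_ = f.Unmarshal(b, &out)
	fmt.Println(IsFormat(f.MediaType()), out.ID, out.Type) // true 1 demo
}
```

Registering in an init function, as the real package does, guarantees the JSON format is always available before any Lookup call.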

View File

@ -1,99 +0,0 @@
package binding
import "context"
// The ReadStructured and ReadBinary methods allow performing an optimized encoding of a Message to a specific data structure.
// A Sender should try each method of interest and fall back to binding.ToEvent() if none are supported.
// An out-of-the-box algorithm is provided for writing a message: binding.Write().
type MessageReader interface {
// ReadEncoding returns the Encoding of the message.
// The encoding should preferably be computed when the message is constructed.
ReadEncoding() Encoding
// ReadStructured transfers a structured-mode event to a StructuredWriter.
// It must return ErrNotStructured if the message is not in structured mode.
//
// Returns a different err if something went wrong while trying to read the structured event.
// In this case, the caller must Finish the message with the appropriate error.
//
// This allows Senders to avoid re-encoding messages that are
// already in suitable structured form.
ReadStructured(context.Context, StructuredWriter) error
// ReadBinary transfers a binary-mode event to a BinaryWriter.
// It must return ErrNotBinary if the message is not in binary mode.
//
// Returns a different err if something went wrong while trying to read the binary event.
// In this case, the caller must Finish the message with the appropriate error.
//
// This allows Senders to avoid re-encoding messages that are
// already in suitable binary form.
ReadBinary(context.Context, BinaryWriter) error
}
// Message is the interface to a binding-specific message containing an event.
//
// Reliable Delivery
//
// There are 3 reliable qualities of service for messages:
//
// 0/at-most-once/unreliable: messages can be dropped silently.
//
// 1/at-least-once: messages are not dropped without signaling an error
// to the sender, but they may be duplicated in the event of a re-send.
//
// 2/exactly-once: messages are never dropped (without error) or
// duplicated, as long as both sending and receiving ends maintain
// some binding-specific delivery state. Whether this is persisted
// depends on the configuration of the binding implementations.
//
// The Message interface supports QoS 0 and 1, the ExactlyOnceMessage interface
// supports QoS 2
//
// Message includes the MessageReader interface to read messages. Every binding.Message implementation *must* specify if the message can be accessed one or more times.
//
// When a Message can be forgotten by the entity who produced the message, Message.Finish() *must* be invoked.
type Message interface {
MessageReader
// Finish *must* be called when message from a Receiver can be forgotten by
// the receiver. A QoS 1 sender should not call Finish() until it gets an acknowledgment of
// receipt on the underlying transport. For QoS 2 see ExactlyOnceMessage.
//
// Note that, depending on the Message implementation, forgetting to Finish the message
// could produce memory/resources leaks!
//
// Passing a non-nil err indicates sending or processing failed.
// A non-nil return indicates that the message was not accepted
// by the receiver's peer.
Finish(error) error
}
// ExactlyOnceMessage is implemented by received Messages
// that support QoS 2. Only transports that support QoS 2 need to
// implement or use this interface.
type ExactlyOnceMessage interface {
Message
// Received is called by a forwarding QoS2 Sender when it gets
// acknowledgment of receipt (e.g. AMQP 'accept' or MQTT PUBREC)
//
// The receiver must call settle(nil) when it gets the ack-of-ack
// (e.g. AMQP 'settle' or MQTT PUBCOMP) or settle(err) if the
// transfer fails.
//
// Finally the Sender calls Finish() to indicate the message can be
// discarded.
//
// If sending fails, or if the sender does not support QoS 2, then
// Finish() may be called without any call to Received()
Received(settle func(error))
}
// MessageWrapper is used to walk through a decorated Message and unwrap it.
type MessageWrapper interface {
Message
// Method to get the wrapped message
GetWrappedMessage() Message
}

View File

@ -1,136 +0,0 @@
package spec
import (
"fmt"
"time"
"github.com/cloudevents/sdk-go/v2/event"
"github.com/cloudevents/sdk-go/v2/types"
)
// Kind is a version-independent identifier for a CloudEvent context attribute.
type Kind uint8
const (
// Required cloudevents attributes
ID Kind = iota
Source
SpecVersion
Type
// Optional cloudevents attributes
DataContentType
DataSchema
Subject
Time
)
const nAttrs = int(Time) + 1
var kindNames = [nAttrs]string{
"id",
"source",
"specversion",
"type",
"datacontenttype",
"dataschema",
"subject",
"time",
}
// String returns a human-readable name; for a valid attribute name use Attribute.Name
func (k Kind) String() string { return kindNames[k] }
// IsRequired returns true for attributes defined as "required" by the CE spec.
func (k Kind) IsRequired() bool { return k < DataContentType }
// Attribute is a named attribute accessor.
// The attribute name is specific to a Version.
type Attribute interface {
Kind() Kind
// Name of the attribute with respect to the current spec Version() with prefix
PrefixedName() string
// Name of the attribute with respect to the current spec Version()
Name() string
// Version of the spec that this attribute belongs to
Version() Version
// Get the value of this attribute from an event context
Get(event.EventContextReader) interface{}
// Set the value of this attribute on an event context
Set(event.EventContextWriter, interface{}) error
// Delete this attribute from an event context, when possible
Delete(event.EventContextWriter) error
}
// accessor provides Kind, Get, Set.
type accessor interface {
Kind() Kind
Get(event.EventContextReader) interface{}
Set(event.EventContextWriter, interface{}) error
Delete(event.EventContextWriter) error
}
var acc = [nAttrs]accessor{
&aStr{aKind(ID), event.EventContextReader.GetID, event.EventContextWriter.SetID},
&aStr{aKind(Source), event.EventContextReader.GetSource, event.EventContextWriter.SetSource},
&aStr{aKind(SpecVersion), event.EventContextReader.GetSpecVersion, func(writer event.EventContextWriter, s string) error { return nil }},
&aStr{aKind(Type), event.EventContextReader.GetType, event.EventContextWriter.SetType},
&aStr{aKind(DataContentType), event.EventContextReader.GetDataContentType, event.EventContextWriter.SetDataContentType},
&aStr{aKind(DataSchema), event.EventContextReader.GetDataSchema, event.EventContextWriter.SetDataSchema},
&aStr{aKind(Subject), event.EventContextReader.GetSubject, event.EventContextWriter.SetSubject},
&aTime{aKind(Time), event.EventContextReader.GetTime, event.EventContextWriter.SetTime},
}
// aKind implements Kind()
type aKind Kind
func (kind aKind) Kind() Kind { return Kind(kind) }
type aStr struct {
aKind
get func(event.EventContextReader) string
set func(event.EventContextWriter, string) error
}
func (a *aStr) Get(c event.EventContextReader) interface{} {
if s := a.get(c); s != "" {
return s
}
return nil // Treat blank as missing
}
func (a *aStr) Set(c event.EventContextWriter, v interface{}) error {
s, err := types.ToString(v)
if err != nil {
return fmt.Errorf("invalid value for %s: %#v", a.Kind(), v)
}
return a.set(c, s)
}
func (a *aStr) Delete(c event.EventContextWriter) error {
return a.set(c, "")
}
type aTime struct {
aKind
get func(event.EventContextReader) time.Time
set func(event.EventContextWriter, time.Time) error
}
func (a *aTime) Get(c event.EventContextReader) interface{} {
if v := a.get(c); !v.IsZero() {
return v
}
return nil // Treat zero time as missing.
}
func (a *aTime) Set(c event.EventContextWriter, v interface{}) error {
t, err := types.ToTime(v)
if err != nil {
return fmt.Errorf("invalid value for %s: %#v", a.Kind(), v)
}
return a.set(c, t)
}
func (a *aTime) Delete(c event.EventContextWriter) error {
return a.set(c, time.Time{})
}
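The aStr/aTime accessors above share one convention: Get treats the type's zero value ("" or the zero time.Time) as "attribute missing" and returns nil, and Delete is just Set with the zero value. A stdlib-only sketch of that convention, with ctx as a hypothetical stand-in for event.EventContext:

```go
package main

import "fmt"

// ctx is a stand-in for event.EventContext: a bag of string attributes.
type ctx struct{ subject string }

// aStr mirrors the accessor above: Get treats the zero value ("") as
// missing and returns nil, so callers can distinguish unset attributes.
type aStr struct {
	get func(*ctx) string
	set func(*ctx, string)
}

func (a *aStr) Get(c *ctx) interface{} {
	if s := a.get(c); s != "" {
		return s
	}
	return nil // treat blank as missing
}

func (a *aStr) Set(c *ctx, v string) { a.set(c, v) }

// Delete writes the zero value, which Get then reports as missing.
func (a *aStr) Delete(c *ctx) { a.set(c, "") }

func main() {
	subject := &aStr{
		get: func(c *ctx) string { return c.subject },
		set: func(c *ctx, s string) { c.subject = s },
	}
	c := &ctx{}
	fmt.Println(subject.Get(c)) // <nil> while unset
	subject.Set(c, "demo")
	fmt.Println(subject.Get(c)) // demo
	subject.Delete(c)
	fmt.Println(subject.Get(c)) // <nil>
}
```

The payoff is uniform iteration: code like eventContextToBinaryWriter can loop over all attributes and skip the nil ones without knowing each attribute's type.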

View File

@ -1,9 +0,0 @@
package spec
/*
Package spec provides spec-version metadata.
For use by code that maps events using (prefixed) attribute name strings.
Supports handling multiple spec versions uniformly.
*/

View File

@ -1,184 +0,0 @@
package spec
import (
"strings"
"github.com/cloudevents/sdk-go/v2/event"
)
// Version provides meta-data for a single spec-version.
type Version interface {
// String name of the version, e.g. "1.0"
String() string
// Prefix for attribute names.
Prefix() string
// Attribute looks up a prefixed attribute name (case insensitive).
// Returns nil if not found.
Attribute(prefixedName string) Attribute
// AttributeFromKind looks up the attribute from kind.
// Returns nil if not found.
AttributeFromKind(kind Kind) Attribute
// Attributes returns all the context attributes for this version.
Attributes() []Attribute
// Convert translates a context to this version.
Convert(event.EventContextConverter) event.EventContext
// NewContext returns a new context for this version.
NewContext() event.EventContext
// SetAttribute sets named attribute to value.
//
// Name is case insensitive.
// Does nothing if name does not start with prefix.
SetAttribute(context event.EventContextWriter, name string, value interface{}) error
}
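As a sketch of how the lookup behind Attribute works, the case-insensitive, prefixed name resolution can be reproduced with a plain map keyed on lower-cased names. The `lookup` type and attribute names below are illustrative stand-ins, not the SDK's internals:

```go
package main

import (
	"fmt"
	"strings"
)

// lookup mirrors the case-insensitive map a Version uses: keys are
// stored lower-cased, so "CE-Type", "ce-type" and "CE-TYPE" all
// resolve to the same attribute.
type lookup struct {
	attrs map[string]string // lower-cased prefixed name -> canonical name
}

func newLookup(prefix string, names ...string) *lookup {
	l := &lookup{attrs: map[string]string{}}
	for _, n := range names {
		l.attrs[strings.ToLower(prefix+n)] = n
	}
	return l
}

// Attribute looks up a prefixed attribute name, case-insensitively.
func (l *lookup) Attribute(prefixedName string) (string, bool) {
	n, ok := l.attrs[strings.ToLower(prefixedName)]
	return n, ok
}

func main() {
	l := newLookup("ce-", "id", "source", "type")
	n, ok := l.Attribute("CE-Type")
	fmt.Println(n, ok) // type true
}
```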
// Versions contains all known versions with the same attribute prefix.
type Versions struct {
prefix string
all []Version
m map[string]Version
}
// Versions returns the list of all known versions, most recent first.
func (vs *Versions) Versions() []Version { return vs.all }
// Version returns the named version.
func (vs *Versions) Version(name string) Version {
return vs.m[name]
}
// Latest returns the latest Version
func (vs *Versions) Latest() Version { return vs.all[0] }
// PrefixedSpecVersionName returns the specversion attribute PrefixedName
func (vs *Versions) PrefixedSpecVersionName() string { return vs.prefix + "specversion" }
// Prefix is the lowercase attribute name prefix.
func (vs *Versions) Prefix() string { return vs.prefix }
type attribute struct {
accessor
name string
version Version
}
func (a *attribute) PrefixedName() string { return a.version.Prefix() + a.name }
func (a *attribute) Name() string { return a.name }
func (a *attribute) Version() Version { return a.version }
type version struct {
prefix string
context event.EventContext
convert func(event.EventContextConverter) event.EventContext
attrMap map[string]Attribute
attrs []Attribute
}
func (v *version) Attribute(name string) Attribute { return v.attrMap[strings.ToLower(name)] }
func (v *version) Attributes() []Attribute { return v.attrs }
func (v *version) String() string { return v.context.GetSpecVersion() }
func (v *version) Prefix() string { return v.prefix }
func (v *version) NewContext() event.EventContext { return v.context.Clone() }
// HasPrefix is a case-insensitive prefix check.
func (v *version) HasPrefix(name string) bool {
return strings.HasPrefix(strings.ToLower(name), v.prefix)
}
func (v *version) Convert(c event.EventContextConverter) event.EventContext { return v.convert(c) }
func (v *version) SetAttribute(c event.EventContextWriter, name string, value interface{}) error {
if a := v.Attribute(name); a != nil { // Standard attribute
return a.Set(c, value)
}
name = strings.ToLower(name)
var err error
if v.HasPrefix(name) { // Extension attribute
return c.SetExtension(strings.TrimPrefix(name, v.prefix), value)
}
return err
}
func (v *version) AttributeFromKind(kind Kind) Attribute {
for _, a := range v.Attributes() {
if a.Kind() == kind {
return a
}
}
return nil
}
func newVersion(
prefix string,
context event.EventContext,
convert func(event.EventContextConverter) event.EventContext,
attrs ...*attribute,
) *version {
v := &version{
prefix: strings.ToLower(prefix),
context: context,
convert: convert,
attrMap: map[string]Attribute{},
attrs: make([]Attribute, len(attrs)),
}
for i, a := range attrs {
a.version = v
v.attrs[i] = a
v.attrMap[strings.ToLower(a.PrefixedName())] = a
}
return v
}
// WithPrefix returns a set of versions with prefix added to all attribute names.
func WithPrefix(prefix string) *Versions {
attr := func(name string, kind Kind) *attribute {
return &attribute{accessor: acc[kind], name: name}
}
vs := &Versions{
m: map[string]Version{},
prefix: prefix,
all: []Version{
newVersion(prefix, event.EventContextV1{}.AsV1(),
func(c event.EventContextConverter) event.EventContext { return c.AsV1() },
attr("id", ID),
attr("source", Source),
attr("specversion", SpecVersion),
attr("type", Type),
attr("datacontenttype", DataContentType),
attr("dataschema", DataSchema),
attr("subject", Subject),
attr("time", Time),
),
newVersion(prefix, event.EventContextV03{}.AsV03(),
func(c event.EventContextConverter) event.EventContext { return c.AsV03() },
attr("specversion", SpecVersion),
attr("type", Type),
attr("source", Source),
attr("schemaurl", DataSchema),
attr("subject", Subject),
attr("id", ID),
attr("time", Time),
attr("datacontenttype", DataContentType),
),
},
}
for _, v := range vs.all {
vs.m[v.String()] = v
}
return vs
}
// New returns a set of versions
func New() *Versions { return WithPrefix("") }
// Built-in un-prefixed versions.
var (
VS *Versions
V03 Version
V1 Version
)
func init() {
VS = New()
V03 = VS.Version("0.3")
V1 = VS.Version("1.0")
}


@ -1,17 +0,0 @@
package binding
import (
"context"
"io"
"github.com/cloudevents/sdk-go/v2/binding/format"
)
// StructuredWriter is used to visit a structured Message and generate a new representation.
//
// Protocols that support structured encoding should implement this interface to enable direct
// structured-to-structured encoding and event-to-structured encoding.
type StructuredWriter interface {
// Event receives an io.Reader for the whole event.
SetStructuredEvent(ctx context.Context, format format.Format, event io.Reader) error
}


@ -1,131 +0,0 @@
package binding
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"github.com/cloudevents/sdk-go/v2/binding/format"
"github.com/cloudevents/sdk-go/v2/binding/spec"
"github.com/cloudevents/sdk-go/v2/event"
"github.com/cloudevents/sdk-go/v2/types"
)
// ErrCannotConvertToEvent is a generic error returned when a conversion of a Message to an Event fails.
var ErrCannotConvertToEvent = errors.New("cannot convert message to event")
// ToEvent translates a Message with a valid Structured or Binary representation to an Event.
// It returns the Event generated from the Message, or an error describing why the conversion failed.
// transformers can be nil; this function guarantees they are invoked only once during the encoding process.
func ToEvent(ctx context.Context, message MessageReader, transformers ...TransformerFactory) (*event.Event, error) {
if message == nil {
return nil, nil
}
messageEncoding := message.ReadEncoding()
if messageEncoding == EncodingEvent {
m := message
for m != nil {
if em, ok := m.(*EventMessage); ok {
e := (*event.Event)(em)
var tf TransformerFactories
tf = transformers
if err := tf.EventTransformer()(e); err != nil {
return nil, err
}
return e, nil
}
if mw, ok := m.(MessageWrapper); ok {
m = mw.GetWrappedMessage()
} else {
break
}
}
return nil, ErrCannotConvertToEvent
}
e := event.New()
encoder := &messageToEventBuilder{event: &e}
if _, err := DirectWrite(
context.Background(),
message,
encoder,
encoder,
); err != nil {
return nil, err
}
var tf TransformerFactories
tf = transformers
if err := tf.EventTransformer()(&e); err != nil {
return nil, err
}
return &e, nil
}
type messageToEventBuilder struct {
event *event.Event
}
var _ StructuredWriter = (*messageToEventBuilder)(nil)
var _ BinaryWriter = (*messageToEventBuilder)(nil)
func (b *messageToEventBuilder) SetStructuredEvent(ctx context.Context, format format.Format, event io.Reader) error {
var buf bytes.Buffer
_, err := io.Copy(&buf, event)
if err != nil {
return err
}
return format.Unmarshal(buf.Bytes(), b.event)
}
func (b *messageToEventBuilder) Start(ctx context.Context) error {
return nil
}
func (b *messageToEventBuilder) End(ctx context.Context) error {
return nil
}
func (b *messageToEventBuilder) SetData(data io.Reader) error {
var buf bytes.Buffer
w, err := io.Copy(&buf, data)
if err != nil {
return err
}
if w != 0 {
b.event.DataEncoded = buf.Bytes()
}
return nil
}
func (b *messageToEventBuilder) SetAttribute(attribute spec.Attribute, value interface{}) error {
// If the attribute is the spec version, switch to the corresponding context struct
if attribute.Kind() == spec.SpecVersion {
str, err := types.ToString(value)
if err != nil {
return err
}
switch str {
case event.CloudEventsVersionV03:
b.event.Context = b.event.Context.AsV03()
case event.CloudEventsVersionV1:
b.event.Context = b.event.Context.AsV1()
default:
return fmt.Errorf("unrecognized event version %s", str)
}
return nil
}
return attribute.Set(b.event.Context, value)
}
func (b *messageToEventBuilder) SetExtension(name string, value interface{}) error {
value, err := types.Validate(value)
if err != nil {
return err
}
b.event.SetExtension(name, value)
return nil
}


@ -1,73 +0,0 @@
package binding
import (
"github.com/cloudevents/sdk-go/v2/event"
)
// TransformerFactory implements a transformation step applied while transferring the event from the
// Message implementation to the provided encoder.
//
// A transformer may opt out of binary and/or structured encoding by returning nil from the
// respective factory method.
type TransformerFactory interface {
// Can return nil if the transformation doesn't support structured encoding directly
StructuredTransformer(writer StructuredWriter) StructuredWriter
// Can return nil if the transformation doesn't support binary encoding directly
BinaryTransformer(writer BinaryWriter) BinaryWriter
// Can return nil if the transformation doesn't support events
EventTransformer() EventTransformer
}
// TransformerFactories is a utility type to manage multiple TransformerFactory instances.
type TransformerFactories []TransformerFactory
func (t TransformerFactories) StructuredTransformer(writer StructuredWriter) StructuredWriter {
if writer == nil {
return nil
}
res := writer
for _, b := range t {
if r := b.StructuredTransformer(res); r != nil {
res = r
} else {
return nil // Structured not supported!
}
}
return res
}
func (t TransformerFactories) BinaryTransformer(writer BinaryWriter) BinaryWriter {
if writer == nil {
return nil
}
res := writer
for i := range t {
b := t[len(t)-i-1]
if r := b.BinaryTransformer(res); r != nil {
res = r
} else {
return nil // Binary not supported!
}
}
return res
}
func (t TransformerFactories) EventTransformer() EventTransformer {
return func(e *event.Event) error {
for _, b := range t {
f := b.EventTransformer()
if f != nil {
err := f(e)
if err != nil {
return err
}
}
}
return nil
}
}
// EventTransformer mutates the provided Event
type EventTransformer func(*event.Event) error
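The reverse iteration in BinaryTransformer is what keeps transformers running in declaration order: each factory wraps the previous writer, so the last wrap applied becomes the outermost layer. A minimal stand-alone sketch of that wrapping (`writer`, `tag`, and `chain` are illustrative stand-ins, not SDK types):

```go
package main

import "fmt"

// writer is a stand-in for BinaryWriter: each middleware wraps the
// next writer, so the LAST wrap applied becomes the OUTERMOST layer.
// Iterating the slice in reverse (as BinaryTransformer does) makes
// the transformers take effect in declaration order on each write.
type writer func(string)

// tag returns a writer that appends its label before delegating.
func tag(label string, next writer) writer {
	return func(s string) { next(s + label) }
}

// chain wraps w with each label in reverse, mirroring BinaryTransformer.
func chain(w writer, labels ...string) writer {
	for i := range labels {
		w = tag(labels[len(labels)-i-1], w)
	}
	return w
}

func main() {
	var got string
	sink := writer(func(s string) { got = s })
	chain(sink, "a", "b")("x")
	fmt.Println(got) // xab  ("a" applied first, then "b")
}
```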


@ -1,148 +0,0 @@
package binding
import (
"context"
"github.com/cloudevents/sdk-go/v2/event"
)
const (
SKIP_DIRECT_STRUCTURED_ENCODING = "SKIP_DIRECT_STRUCTURED_ENCODING"
SKIP_DIRECT_BINARY_ENCODING = "SKIP_DIRECT_BINARY_ENCODING"
PREFERRED_EVENT_ENCODING = "PREFERRED_EVENT_ENCODING"
)
// DirectWrite invokes the encoders. structuredWriter and binaryWriter may be nil if the protocol doesn't support that encoding.
// transformers can be nil; this function guarantees they are invoked only once during the encoding process.
//
// Returns:
// * EncodingStructured, nil if message is correctly encoded in structured encoding
// * EncodingBinary, nil if message is correctly encoded in binary encoding
// * EncodingStructured, err if message was structured but error happened during the encoding
// * EncodingBinary, err if message was binary but error happened during the encoding
// * EncodingUnknown, ErrUnknownEncoding if message is not a structured or a binary Message
func DirectWrite(
ctx context.Context,
message MessageReader,
structuredWriter StructuredWriter,
binaryWriter BinaryWriter,
transformers ...TransformerFactory,
) (Encoding, error) {
if structuredWriter != nil && !GetOrDefaultFromCtx(ctx, SKIP_DIRECT_STRUCTURED_ENCODING, false).(bool) {
// Wrap the transformers in the structured builder
structuredWriter = TransformerFactories(transformers).StructuredTransformer(structuredWriter)
// StructuredTransformer could return nil if one of transcoders doesn't support
// direct structured transcoding
if structuredWriter != nil {
if err := message.ReadStructured(ctx, structuredWriter); err == nil {
return EncodingStructured, nil
} else if err != ErrNotStructured {
return EncodingStructured, err
}
}
}
if binaryWriter != nil && !GetOrDefaultFromCtx(ctx, SKIP_DIRECT_BINARY_ENCODING, false).(bool) {
binaryWriter = TransformerFactories(transformers).BinaryTransformer(binaryWriter)
if binaryWriter != nil {
if err := message.ReadBinary(ctx, binaryWriter); err == nil {
return EncodingBinary, nil
} else if err != ErrNotBinary {
return EncodingBinary, err
}
}
}
return EncodingUnknown, ErrUnknownEncoding
}
// Write runs the full algorithm to encode a Message using transformers:
// 1. It first tries direct encoding using DirectWrite
// 2. If no direct encoding is possible, it uses ToEvent to generate an Event representation
// 3. From the Event, the message is encoded back to the provided structured or binary encoders
// You can tweak the encoding process using the context decorators WithForceStructured, WithForceBinary, etc.
// transformers can be nil and this function guarantees that they are invoked only once during the encoding process.
// Returns:
// * EncodingStructured, nil if message is correctly encoded in structured encoding
// * EncodingBinary, nil if message is correctly encoded in binary encoding
// * EncodingUnknown, ErrUnknownEncoding if message.ReadEncoding() == EncodingUnknown
// * _, err if error happened during the encoding
func Write(
ctx context.Context,
message MessageReader,
structuredWriter StructuredWriter,
binaryWriter BinaryWriter,
transformers ...TransformerFactory,
) (Encoding, error) {
enc := message.ReadEncoding()
var err error
// Skip direct encoding if the event is an event message
if enc != EncodingEvent {
enc, err = DirectWrite(ctx, message, structuredWriter, binaryWriter, transformers...)
if enc != EncodingUnknown {
// Message directly encoded, nothing else to do here
return enc, err
}
}
var e *event.Event
e, err = ToEvent(ctx, message, transformers...)
if err != nil {
return enc, err
}
message = (*EventMessage)(e)
if GetOrDefaultFromCtx(ctx, PREFERRED_EVENT_ENCODING, EncodingBinary).(Encoding) == EncodingStructured {
if structuredWriter != nil {
return EncodingStructured, message.ReadStructured(ctx, structuredWriter)
}
if binaryWriter != nil {
return EncodingBinary, message.ReadBinary(ctx, binaryWriter)
}
} else {
if binaryWriter != nil {
return EncodingBinary, message.ReadBinary(ctx, binaryWriter)
}
if structuredWriter != nil {
return EncodingStructured, message.ReadStructured(ctx, structuredWriter)
}
}
return EncodingUnknown, ErrUnknownEncoding
}
// WithSkipDirectStructuredEncoding skips direct structured-to-structured encoding during the encoding process.
func WithSkipDirectStructuredEncoding(ctx context.Context, skip bool) context.Context {
return context.WithValue(ctx, SKIP_DIRECT_STRUCTURED_ENCODING, skip)
}
// WithSkipDirectBinaryEncoding skips direct binary-to-binary encoding during the encoding process.
func WithSkipDirectBinaryEncoding(ctx context.Context, skip bool) context.Context {
return context.WithValue(ctx, SKIP_DIRECT_BINARY_ENCODING, skip)
}
// WithPreferredEventEncoding defines the preferred encoding from event to message during the encoding process.
func WithPreferredEventEncoding(ctx context.Context, enc Encoding) context.Context {
return context.WithValue(ctx, PREFERRED_EVENT_ENCODING, enc)
}
// WithForceStructured forces structured encoding during the encoding process.
func WithForceStructured(ctx context.Context) context.Context {
return context.WithValue(context.WithValue(ctx, PREFERRED_EVENT_ENCODING, EncodingStructured), SKIP_DIRECT_BINARY_ENCODING, true)
}
// WithForceBinary forces binary encoding during the encoding process.
func WithForceBinary(ctx context.Context) context.Context {
return context.WithValue(context.WithValue(ctx, PREFERRED_EVENT_ENCODING, EncodingBinary), SKIP_DIRECT_STRUCTURED_ENCODING, true)
}
// GetOrDefaultFromCtx gets a configuration value from the provided context, or the given default if unset.
func GetOrDefaultFromCtx(ctx context.Context, key string, def interface{}) interface{} {
if val := ctx.Value(key); val != nil {
return val
} else {
return def
}
}


@ -1,222 +0,0 @@
package client
import (
"context"
"errors"
"fmt"
"io"
"sync"
"go.uber.org/zap"
"github.com/cloudevents/sdk-go/v2/binding"
cecontext "github.com/cloudevents/sdk-go/v2/context"
"github.com/cloudevents/sdk-go/v2/event"
"github.com/cloudevents/sdk-go/v2/protocol"
)
// Client interface defines the runtime contract the CloudEvents client supports.
type Client interface {
// Send will transmit the given event over the client's configured transport.
Send(ctx context.Context, event event.Event) protocol.Result
// Request will transmit the given event over the client's configured
// transport and return any response event.
Request(ctx context.Context, event event.Event) (*event.Event, protocol.Result)
// StartReceiver will register the provided function for callback on receipt
// of a cloudevent. It will also start the underlying protocol as it has
// been configured.
// This call is blocking.
// Valid fn signatures are:
// * func()
// * func() error
// * func(context.Context)
// * func(context.Context) protocol.Result
// * func(event.Event)
// * func(event.Event) protocol.Result
// * func(context.Context, event.Event)
// * func(context.Context, event.Event) protocol.Result
// * func(event.Event) *event.Event
// * func(event.Event) (*event.Event, protocol.Result)
// * func(context.Context, event.Event) *event.Event
// * func(context.Context, event.Event) (*event.Event, protocol.Result)
StartReceiver(ctx context.Context, fn interface{}) error
}
// New produces a new client with the provided transport object and applied
// client options.
func New(obj interface{}, opts ...Option) (Client, error) {
c := &ceClient{}
if p, ok := obj.(protocol.Sender); ok {
c.sender = p
}
if p, ok := obj.(protocol.Requester); ok {
c.requester = p
}
if p, ok := obj.(protocol.Responder); ok {
c.responder = p
}
if p, ok := obj.(protocol.Receiver); ok {
c.receiver = p
}
if p, ok := obj.(protocol.Opener); ok {
c.opener = p
}
if err := c.applyOptions(opts...); err != nil {
return nil, err
}
return c, nil
}
type ceClient struct {
sender protocol.Sender
requester protocol.Requester
receiver protocol.Receiver
responder protocol.Responder
// Optional.
opener protocol.Opener
outboundContextDecorators []func(context.Context) context.Context
invoker Invoker
receiverMu sync.Mutex
eventDefaulterFns []EventDefaulter
}
func (c *ceClient) applyOptions(opts ...Option) error {
for _, fn := range opts {
if err := fn(c); err != nil {
return err
}
}
return nil
}
func (c *ceClient) Send(ctx context.Context, e event.Event) protocol.Result {
if c.sender == nil {
return errors.New("sender not set")
}
for _, f := range c.outboundContextDecorators {
ctx = f(ctx)
}
if len(c.eventDefaulterFns) > 0 {
for _, fn := range c.eventDefaulterFns {
e = fn(ctx, e)
}
}
if err := e.Validate(); err != nil {
return err
}
return c.sender.Send(ctx, (*binding.EventMessage)(&e))
}
func (c *ceClient) Request(ctx context.Context, e event.Event) (*event.Event, protocol.Result) {
if c.requester == nil {
return nil, errors.New("requester not set")
}
for _, f := range c.outboundContextDecorators {
ctx = f(ctx)
}
if len(c.eventDefaulterFns) > 0 {
for _, fn := range c.eventDefaulterFns {
e = fn(ctx, e)
}
}
if err := e.Validate(); err != nil {
return nil, err
}
// If provided a requester, use it to do request/response.
var resp *event.Event
msg, err := c.requester.Request(ctx, (*binding.EventMessage)(&e))
if msg != nil {
defer func() {
if err := msg.Finish(err); err != nil {
cecontext.LoggerFrom(ctx).Warnw("failed calling message.Finish", zap.Error(err))
}
}()
}
// Try to turn msg into an event; it might not work, and that is OK.
if rs, rserr := binding.ToEvent(ctx, msg); rserr != nil {
cecontext.LoggerFrom(ctx).Debugw("response: failed calling ToEvent", zap.Error(rserr), zap.Any("resp", msg))
if err != nil {
err = fmt.Errorf("%w; failed to convert response into event: %s", err, rserr)
} else {
// If the protocol returns no error, it is an ACK on the request, but we had
// issues turning the response into an event, so make an ACK Result and pass
// down the ToEvent error as well.
err = fmt.Errorf("%w; failed to convert response into event: %s", protocol.ResultACK, rserr)
}
} else {
resp = rs
}
return resp, err
}
// StartReceiver sets up the given fn to handle Receive.
// See Client.StartReceiver for details. This is a blocking call.
func (c *ceClient) StartReceiver(ctx context.Context, fn interface{}) error {
c.receiverMu.Lock()
defer c.receiverMu.Unlock()
if c.invoker != nil {
return fmt.Errorf("client already has a receiver")
}
invoker, err := newReceiveInvoker(fn, c.eventDefaulterFns...) // TODO: this will have to pick between a observed invoker or not.
if err != nil {
return err
}
if invoker.IsReceiver() && c.receiver == nil {
return fmt.Errorf("mismatched receiver callback without protocol.Receiver supported by protocol")
}
if invoker.IsResponder() && c.responder == nil {
return fmt.Errorf("mismatched receiver callback without protocol.Responder supported by protocol")
}
c.invoker = invoker
defer func() {
c.invoker = nil
}()
// Start the opener, if set.
if c.opener != nil {
go func() {
// TODO: handle error correctly here.
if err := c.opener.OpenInbound(ctx); err != nil {
panic(err)
}
}()
}
var msg binding.Message
var respFn protocol.ResponseFn
// Start Polling.
for {
if c.responder != nil {
msg, respFn, err = c.responder.Respond(ctx)
} else if c.receiver != nil {
msg, err = c.receiver.Receive(ctx)
} else {
return errors.New("neither responder nor receiver set")
}
if err == io.EOF { // Normal close
return nil
}
if err := c.invoker.Invoke(ctx, msg, respFn); err != nil {
return err
}
}
}


@ -1,26 +0,0 @@
package client
import (
"github.com/cloudevents/sdk-go/v2/protocol/http"
)
// NewDefault provides good defaults for the common case of an HTTP
// Protocol client. The HTTP transport has the WithBinaryEncoding
// transport option applied to it. The client always sends Binary
// encoding but inspects the outbound event context and matches the
// version. The WithTimeNow and WithUUIDs client options are also
// applied to the client, so all outbound events have a time and id
// set if not already present.
func NewDefault() (Client, error) {
p, err := http.New()
if err != nil {
return nil, err
}
c, err := NewObserved(p, WithTimeNow(), WithUUIDs())
if err != nil {
return nil, err
}
return c, nil
}


@ -1,101 +0,0 @@
package client
import (
"context"
"github.com/cloudevents/sdk-go/v2/event"
"github.com/cloudevents/sdk-go/v2/extensions"
"github.com/cloudevents/sdk-go/v2/observability"
"github.com/cloudevents/sdk-go/v2/protocol"
"go.opencensus.io/trace"
)
// NewObserved produces a new client, like New, with the provided transport
// object and applied client options, wrapped with observability support.
func NewObserved(protocol interface{}, opts ...Option) (Client, error) {
client, err := New(protocol, opts...)
if err != nil {
return nil, err
}
c := &obsClient{client: client}
if err := c.applyOptions(opts...); err != nil {
return nil, err
}
return c, nil
}
type obsClient struct {
client Client
addTracing bool
}
func (c *obsClient) applyOptions(opts ...Option) error {
for _, fn := range opts {
if err := fn(c); err != nil {
return err
}
}
return nil
}
// Send transmits the provided event on a preconfigured Protocol. Send returns
// an error if there was an issue validating the outbound event or if the
// transport returns an error.
func (c *obsClient) Send(ctx context.Context, e event.Event) protocol.Result {
ctx, r := observability.NewReporter(ctx, reportSend)
ctx, span := trace.StartSpan(ctx, observability.ClientSpanName, trace.WithSpanKind(trace.SpanKindClient))
defer span.End()
if span.IsRecordingEvents() {
span.AddAttributes(EventTraceAttributes(&e)...)
}
if c.addTracing {
e.Context = e.Context.Clone()
extensions.FromSpanContext(span.SpanContext()).AddTracingAttributes(&e)
}
result := c.client.Send(ctx, e)
if protocol.IsACK(result) {
r.OK()
} else {
r.Error()
}
return result
}
func (c *obsClient) Request(ctx context.Context, e event.Event) (*event.Event, protocol.Result) {
ctx, r := observability.NewReporter(ctx, reportRequest)
ctx, span := trace.StartSpan(ctx, observability.ClientSpanName, trace.WithSpanKind(trace.SpanKindClient))
defer span.End()
if span.IsRecordingEvents() {
span.AddAttributes(EventTraceAttributes(&e)...)
}
resp, result := c.client.Request(ctx, e)
if protocol.IsACK(result) {
r.OK()
} else {
r.Error()
}
return resp, result
}
// StartReceiver sets up the given fn to handle Receive.
// See Client.StartReceiver for details. This is a blocking call.
func (c *obsClient) StartReceiver(ctx context.Context, fn interface{}) error {
ctx, r := observability.NewReporter(ctx, reportStartReceiver)
err := c.client.StartReceiver(ctx, fn)
if err != nil {
r.Error()
} else {
r.OK()
}
return err
}


@ -1,52 +0,0 @@
package client
import (
"context"
"time"
"github.com/cloudevents/sdk-go/v2/event"
"github.com/google/uuid"
)
// EventDefaulter is the function signature for extensions that are able
// to perform event defaulting.
type EventDefaulter func(ctx context.Context, event event.Event) event.Event
// DefaultIDToUUIDIfNotSet will inspect the provided event and assign a UUID to
// context.ID if it is found to be empty.
func DefaultIDToUUIDIfNotSet(ctx context.Context, event event.Event) event.Event {
if event.Context != nil {
if event.ID() == "" {
event.Context = event.Context.Clone()
event.SetID(uuid.New().String())
}
}
return event
}
// DefaultTimeToNowIfNotSet will inspect the provided event and assign a new
// Timestamp to context.Time if it is found to be nil or zero.
func DefaultTimeToNowIfNotSet(ctx context.Context, event event.Event) event.Event {
if event.Context != nil {
if event.Time().IsZero() {
event.Context = event.Context.Clone()
event.SetTime(time.Now())
}
}
return event
}
// NewDefaultDataContentTypeIfNotSet returns a defaulter that will inspect the
// provided event and set the provided content type if content type is found
// to be empty.
func NewDefaultDataContentTypeIfNotSet(contentType string) EventDefaulter {
return func(ctx context.Context, event event.Event) event.Event {
if event.Context != nil {
if event.DataContentType() == "" {
event.SetDataContentType(contentType)
}
}
return event
}
}
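The defaulter-chain pattern these functions rely on is easy to isolate: each defaulter receives the event by value and returns a possibly amended copy. `toyEvent` and the helpers below are illustrative stand-ins for event.Event and EventDefaulter, not SDK types:

```go
package main

import "fmt"

// toyEvent stands in for event.Event.
type toyEvent struct{ ID, ContentType string }

// defaulter stands in for EventDefaulter: value in, amended value out.
type defaulter func(toyEvent) toyEvent

// applyDefaults runs each defaulter in order, threading the event
// through, just as the client applies its eventDefaulterFns.
func applyDefaults(e toyEvent, fns ...defaulter) toyEvent {
	for _, fn := range fns {
		e = fn(e)
	}
	return e
}

func main() {
	idIfEmpty := func(e toyEvent) toyEvent {
		if e.ID == "" {
			e.ID = "generated-id"
		}
		return e
	}
	jsonIfEmpty := func(e toyEvent) toyEvent {
		if e.ContentType == "" {
			e.ContentType = "application/json"
		}
		return e
	}
	e := applyDefaults(toyEvent{ID: "fixed"}, idIfEmpty, jsonIfEmpty)
	fmt.Println(e.ID, e.ContentType) // fixed application/json
}
```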


@ -1,6 +0,0 @@
/*
Package client holds the recommended entry points for interacting with the CloudEvents Golang SDK. The client wraps
a selected transport. The client adds validation and defaulting for sending events, and flexible receiver method
registration. For full details, read the `client.Client` documentation.
*/
package client


@ -1,39 +0,0 @@
package client
import (
"context"
"net/http"
thttp "github.com/cloudevents/sdk-go/v2/protocol/http"
)
func NewHTTPReceiveHandler(ctx context.Context, p *thttp.Protocol, fn interface{}) (*EventReceiver, error) {
invoker, err := newReceiveInvoker(fn)
if err != nil {
return nil, err
}
return &EventReceiver{
p: p,
invoker: invoker,
}, nil
}
type EventReceiver struct {
p *thttp.Protocol
invoker Invoker
}
func (r *EventReceiver) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
go func() {
r.p.ServeHTTP(rw, req)
}()
ctx := context.Background()
msg, respFn, err := r.p.Respond(ctx)
if err != nil {
// TODO
} else if err := r.invoker.Invoke(ctx, msg, respFn); err != nil {
// TODO
}
}

Some files were not shown because too many files have changed in this diff.