Compare commits

main...v0.1

No commits in common. "main" and "v0.1" have entirely different histories.

260 changed files with 2299 additions and 19104 deletions

@@ -1,8 +0,0 @@
# CLOMonitor metadata file
# This file must be located at the root of the repository
# Checks exemptions
exemptions:
  - check: analytics
    reason: "Can't add analytics to markdown"

.gitattributes (2 changed lines)

@@ -1,2 +0,0 @@
# Force shell scripts to be LF on checkout to support executing on Windows WSL.
*.sh text eol=lf

@@ -1 +0,0 @@
blank_issues_enabled: false

@@ -1,34 +0,0 @@
---
name: New issue
about: 'Create a new issue'
title: ''
labels: ''
assignees: ''
---
(Please remove the text below after reading it.)

When filing an issue in this repository, please consider the following:

- Is this:
  - A feature request for a new form of integration etc.?
  - A report of a technical issue with the existing spec?
  - A suggestion for improving the clarity of the existing spec
    (even if it's not "wrong" as such)?
  - Something else?
- Is there context behind your request that would be useful for readers to
  understand? (There's no need to go into huge amounts of detail, but a few
  sentences about the motivation can be really helpful.)
- Do you know *roughly* what you'd expect a change to address this issue would
  look like? If so, it's useful to create a PR at the same time, linking to
  the issue. This doesn't need to be polished and ready to merge - it's just to
  help clarify roughly what you're considering.

If the issue requires discussion, it's really useful if you're able to
attend the weekly working group meeting as described
[here](https://github.com/cloudevents/spec/?tab=readme-ov-file#meeting-time).
Often a discussion which would take multiple back-and-forth comments on an
issue can be resolved with a few minutes of conversation. We understand the
timing may not be convenient for everyone - please let us know if that's the
case, and we can potentially arrange something with the most relevant group
members at a more convenient time.

@@ -1,20 +0,0 @@
Fixes #
<!-- Please include the 'why' behind your changes if no issue exists -->
## Proposed Changes
-
-
-
**Release Note**
<!--
If this change has user-visible impact, write a release note in the block
below. If this change has no user-visible impact, no release note is needed.
-->
```release-note
```

@@ -1,91 +0,0 @@
# Copyright 2021 The CloudEvents Authors.
# Copyright 2020 The Knative Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: 'Release Notes'

on:
  workflow_dispatch:
    inputs:
      branch:
        description: 'Branch? (master)'
        required: true
        default: 'master'
      start-sha:
        description: 'Starting SHA? (last tag on branch)'
      end-sha:
        description: 'Ending SHA? (latest HEAD)'

jobs:
  release-notes:
    name: Release Notes
    runs-on: 'ubuntu-latest'
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Set up Go
        uses: actions/setup-go@v2
      - name: Install Dependencies
        run: GO111MODULE=on go get k8s.io/release/cmd/release-notes
      - name: Check out code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      # Note: Defaults needs to run after we check out the repo.
      - name: Defaults
        run: |
          echo ORG=$(echo '${{ github.repository }}' | awk -F '/' '{print $1}') >> $GITHUB_ENV
          echo REPO=$(echo '${{ github.repository }}' | awk -F '/' '{print $2}') >> $GITHUB_ENV
          echo "BRANCH=${{ github.event.inputs.branch }}" >> $GITHUB_ENV
          if [[ "${{ github.event.inputs.start-sha }}" != "" ]]; then
            echo "START_SHA=${{ github.event.inputs.start-sha }}" >> $GITHUB_ENV
          else
            # Default Starting SHA (thanks @dprotaso)
            export semver=$(git describe --match "v[0-9]*" --abbrev=0)
            echo "Using ${semver} tag for starting sha."
            echo START_SHA=$(git rev-list -n 1 "${semver}") >> $GITHUB_ENV
          fi
          if [[ "${{ github.event.inputs.end-sha }}" != "" ]]; then
            echo "END_SHA=${{ github.event.inputs.end-sha }}" >> $GITHUB_ENV
          else
            # Default Ending SHA (thanks @dprotaso)
            echo "END_SHA=$(git rev-list -n 1 HEAD)" >> $GITHUB_ENV
          fi
      - name: Generate Notes
        run: |
          # See https://github.com/kubernetes/release/tree/master/cmd/release-notes for options.
          # Note: we are setting env vars in the Defaults step.
          release-notes \
            --required-author "" \
            --repo-path "$(pwd)" \
            --output release-notes.md
      - name: Display Notes
        run: |
          cat release-notes.md
      - name: Archive Release Notes
        uses: actions/upload-artifact@v2
        with:
          name: release-notes.md
          path: release-notes.md
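Since this workflow is only triggered by `workflow_dispatch`, it has to be
started manually from the Actions tab or, as a hedged sketch (assuming the
GitHub CLI is available and the workflow is addressed by the `name` above),
from the command line:

```
# Trigger the workflow manually; -f sets the inputs declared above.
gh workflow run 'Release Notes' -f branch=master
```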

@@ -1,20 +0,0 @@
name: verify
on: [push, pull_request]
jobs:
  verify:
    name: verify
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
          cache: "pip"
          cache-dependency-path: "tools/requirements.txt"
      - name: Install tools dependencies
        run: python -m pip install -r tools/requirements.txt
      - name: Test Tools
        run: make -f tools/Makefile test_tools
      - name: Verify Specification
        run: make -f tools/Makefile verify

.gitignore (6 changed lines)

@@ -1,6 +0,0 @@
.idea/
gen/
.vscode/
gen/
.DS_Store
*.pyc

.travis.yml (new file, 3 lines)

@@ -0,0 +1,3 @@
language: generic
script:
- make

@@ -1,46 +1,37 @@
# Contributing to CloudEvents
<!-- no verify-specs -->
This page contains information about reporting issues, how to suggest changes as
well as the guidelines we follow for how our documents are formatted.
This page contains information about reporting issues, how to suggest changes
as well as the guidelines we follow for how our documents are formatted.
## Table of Contents
- [Contributing to CloudEvents](#contributing-to-cloudevents)
  - [Table of Contents](#table-of-contents)
  - [Reporting an Issue](#reporting-an-issue)
  - [Suggesting a Change](#suggesting-a-change)
    - [Assigning and Owning work](#assigning-and-owning-work)
    - [Sign your work](#sign-your-work)
  - [Spec Formatting Conventions](#spec-formatting-conventions)

* [Reporting an Issue](#reporting-an-issue)
* [Suggesting a Change](#suggesting-a-change)
* [Spec Formatting Conventions](#spec-formatting-conventions)
## Reporting an Issue
To report an issue, or to suggest an idea for a change that you haven't had time
to write-up yet, open an [issue](https://github.com/cloudevents/spec/issues). It
is best to check our existing
[issues](https://github.com/cloudevents/spec/issues) first to see if a similar
one has already been opened and discussed.
To report an issue, or to suggest an idea for a change that you haven't
had time to write-up yet, open an
[issue](https://github.com/cloudevents/spec/issues). It is best to check
our existing [issues](https://github.com/cloudevents/spec/issues) first
to see if a similar one has already been opened and discussed.
## Suggesting a Change
To suggest a change to this repository, submit a
[pull request](https://github.com/cloudevents/spec/pulls) (PR) with the complete
To suggest a change to this repository, submit a [pull
request](https://github.com/cloudevents/spec/pulls) (PR) with the complete
set of changes you'd like to see. See the
[Spec Formatting Conventions](#spec-formatting-conventions) section for the
guidelines we follow for how documents are formatted.
Please use [conventional commits](https://conventionalcommits.org) when writing
commit messages.
[Spec Formatting Conventions](#spec-formatting-conventions) section for
the guidelines we follow for how documents are formatted.
Each PR must be signed per the following section.
### Assigning and Owning work
If you want to own and work on an issue, add a comment or “#dibs” it asking
about ownership. A maintainer will then add the Assigned label and modify the
first comment in the issue to include `Assigned to: @person`
about ownership. A maintainer will then add the Assigned label and modify
the first comment in the issue to include `Assigned to: @person`
### Sign your work
@@ -97,8 +88,8 @@ Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
Note: If your git config information is set properly then viewing the `git log`
information for your commit will look something like this:
Note: If your git config information is set properly then viewing the
`git log` information for your commit will look something like this:
```
Author: Joe Smith <joe.smith@email.com>
@@ -109,13 +100,13 @@ Date: Thu Feb 2 11:41:15 2018 -0800
Signed-off-by: Joe Smith <joe.smith@email.com>
```
Notice the `Author` and `Signed-off-by` lines match. If they don't your PR will
be rejected by the automated DCO check.
Notice the `Author` and `Signed-off-by` lines match. If they don't
your PR will be rejected by the automated DCO check.
## Spec Formatting Conventions
Documents in this repository will adhere to the following rules:
* Lines are wrapped at 80 columns (when possible)
* Specifications will use [RFC2119](https://tools.ietf.org/html/rfc2119)
  keywords to indicate normative requirements

- Lines are wrapped at 80 columns (when possible)
- Specifications will use [RFC2119](https://tools.ietf.org/html/rfc2119)
  keywords to indicate normative requirements

GOVERNANCE.md (new file, 75 lines)

@@ -0,0 +1,75 @@
# Governance
This document describes the governance process under which the Serverless
Working Group (WG) will manage this repository.
## Working Group Meetings
In order to provide equitable rights to all Working Group members,
the following process will be followed:
* Official WG meetings will be announced at least a week in advance.
* Proposed changes to any document will be done via a Pull Request (PR).
* PRs will be reviewed during official WG meetings.
* During meetings, priority will be given to PRs that appear to be ready for
  a vote over those that appear to require discussion.
* PRs should not be merged if they have had substantial changes made within
  two days of the meeting. Rebases, typo fixes, etc. do not count as
  substantial. Note: administrivia PRs that do not materially modify WG
  output documents may be processed by WG admins as needed.
* Resolving PRs ("merging" or "closing with no action") will be done as a
  result of a motion made during a WG meeting, and approved.
* Reopening a PR can be done if new information is made available, and a
  motion to do so is approved.
* Any motion that does not have "unanimous consent" will result in a formal
  vote. See [Voting](#voting).
## PRs
Typically, PRs are expected to meet the following criteria prior to being
merged:
* The author of the PR indicates that it is ready for review by asking for it
  to be discussed in an upcoming meeting - e.g. by adding it to the agenda
  document.
* All comments have been addressed.
* PRs that have objections/concerns will be discussed off-line by interested
  parties. A resolution (e.g. an updated PR) will be expected from those
  talks.
## Voting
If a vote is taken during a WG meeting, the following rules will be followed:

* There is only 1 vote per participating company, or nonaffiliated individual.
* Each participating company can assign a primary and secondary representative.
* A participating company, or nonaffiliated individual, attains voting rights
  by having any of their assigned representative(s) attend 3 out of the last
  4 meetings. They obtain voting rights after the 3rd meeting, not during.
* Only WG members with voting rights will be allowed to vote.
* A vote passes if more than 50% of the votes cast approve the motion.
* Only "yes" or "no" votes count; "abstain" votes do not count towards the
  total.
* Meeting attendance will be formally tracked
  [here](https://docs.google.com/spreadsheets/d/1bw5s9sC2ggYyAiGJHEk7xm-q2KG6jyrfBy69ifkdmt0/edit#gid=0).
  Members must acknowledge their presence verbally; adding yourself to the
  "Attendees" section of the Agenda document is not sufficient.
## Release Process

To create a new release:

* Create a PR that modifies the [README](README.md), and all specifications
  (ie. *.md files) that include a version string, to use the new release
  version string.
* Merge the PR.
* Create a [new release](https://github.com/cloudevents/spec/releases/new):
  * Choose a "Tag version" of the form `vX.Y`, e.g. `v0.1`
  * Target should be `master`, the default value
  * Release title should be the same as the Tag - `vX.Y`
  * Add some descriptive text, or the list of PRs that have been merged
    since the previous release. The git query to get the list of commits
    since the last release is `git log --pretty=format:%s master...v0.1`;
    just replace "v0.1" with the name of the previous release (see the
    sketch below).
  * Press the `Publish release` button
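As a usage sketch of that query (assuming `v0.1` is the previous release tag):

```
# One-line summaries of everything merged since the previous release,
# ready to paste into the release description.
git log --pretty=format:%s master...v0.1
```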

Makefile (new file, 10 lines)

@@ -0,0 +1,10 @@
all: verify

verify:
	@echo Running href checker:
	@# Use "-x" if you want to skip external links
	@tools/verify-links.sh -v .
	@echo Running the spec phrase checker:
	@tools/verify-specs.sh -v spec.md extensions.md json-format.md http-transport-binding.md
	@echo Running the doc phrase checker:
	@tools/verify-docs.sh -v .
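A usage sketch for this Makefile (the `-x` flag comes from the comment in the
Makefile itself):

```
# Run all checks; the default target is "all", which runs "verify".
make
# Or invoke the href checker directly, skipping external links:
tools/verify-links.sh -x -v .
```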

OWNERS (12 changed lines)

@@ -1,12 +0,0 @@
# See the docs/GOVERNANCE.md document for the definition of the roles and
# responsibilities
admins:
- deissnerk
- duglin
- kenowens12
- markpeek
approvers:
# Approvers are "Voting Members" as defined in the GOVERNANCE.md document.
# See the "Voting Rights?" column in:
# https://docs.google.com/spreadsheets/d/1bw5s9sC2ggYyAiGJHEk7xm-q2KG6jyrfBy69ifkdmt0/edit?pli=1#gid=0
# for the current list of Voting members

README.md (217 changed lines)

@@ -1,178 +1,97 @@
# CloudEvents
<!-- no verify-specs -->
![CloudEvents logo](https://github.com/cncf/artwork/blob/master/other/cloudevents/horizontal/color/cloudevents-horizontal-color.png)
![CloudEvents logo](https://github.com/cncf/artwork/blob/main/projects/cloudevents/horizontal/color/cloudevents-horizontal-color.png?raw=true)
[![CLOMonitor](https://img.shields.io/endpoint?url=https://clomonitor.io/api/projects/cncf/cloudevents/badge)](https://clomonitor.io/projects/cncf/cloudevents)
[![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/6770/badge)](https://bestpractices.coreinfrastructure.org/projects/6770)
Events are everywhere. However, event producers tend to describe events
Events are everywhere. However, event publishers tend to describe events
differently.
The lack of a common way of describing events means developers must constantly
re-learn how to consume events. This also limits the potential for libraries,
re-learn how to receive events. This also limits the potential for libraries,
tooling and infrastructure to aid the delivery of event data across
environments, like SDKs, event routers or tracing systems. The portability and
environments, like SDKs, event routers or tracing systems. The portability and
productivity we can achieve from event data is hindered overall.
CloudEvents is a specification for describing event data in common formats to
provide interoperability across services, platforms and systems.
Enter CloudEvents, a specification for describing event data in a common way.
CloudEvents seeks to ease event declaration and delivery across services,
platforms and beyond.
CloudEvents has received a large amount of industry interest, ranging from major
cloud providers to popular SaaS companies. CloudEvents is hosted by the
[Cloud Native Computing Foundation](https://cncf.io) (CNCF) and was approved as
a Cloud Native sandbox level project on
[May 15, 2018](https://docs.google.com/presentation/d/1KNSv70fyTfSqUerCnccV7eEC_ynhLsm9A_kjnlmU_t0/edit#slide=id.g37acf52904_1_41), an
incubator project on [Oct 24, 2019](https://github.com/cncf/toc/pull/297)
and a graduated project on [Jan 25, 2024](https://github.com/cncf/toc/pull/996)
([announcement](https://www.cncf.io/announcements/2024/01/25/cloud-native-computing-foundation-announces-the-graduation-of-cloudevents/)).
CloudEvents is a new effort and it's still under active development. However,
its working group has received a surprising amount of industry interest,
ranging from major cloud providers to popular SaaS companies. Our end goal is
to offer this specification to the
[Cloud Native Computing Foundation](https://www.cncf.io/).
## CloudEvents Documents
| | Latest Release | Working Draft |
| :---------------------------- | :-----------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------: |
| **Core Specification:** |
| CloudEvents | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) | [WIP](cloudevents/spec.md) |
| |
| **Optional Specifications:** |
| AMQP Protocol Binding | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/amqp-protocol-binding.md) | [WIP](cloudevents/bindings/amqp-protocol-binding.md) |
| AVRO Event Format | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/avro-format.md) | [WIP](cloudevents/formats/avro-format.md) |
| AVRO Compact Event Format | | [WIP](cloudevents/working-drafts/avro-compact-format.md) |
| HTTP Protocol Binding | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/http-protocol-binding.md) | [WIP](cloudevents/bindings/http-protocol-binding.md) |
| JSON Event Format | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) | [WIP](cloudevents/formats/json-format.md) |
| Kafka Protocol Binding | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/kafka-protocol-binding.md) | [WIP](cloudevents/bindings/kafka-protocol-binding.md) |
| MQTT Protocol Binding | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/mqtt-protocol-binding.md) | [WIP](cloudevents/bindings/mqtt-protocol-binding.md) |
| NATS Protocol Binding | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/nats-protocol-binding.md) | [WIP](cloudevents/bindings/nats-protocol-binding.md) |
| WebSockets Protocol Binding | - | [WIP](cloudevents/bindings/websockets-protocol-binding.md) |
| Protobuf Event Format | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/protobuf-format.md) | [WIP](cloudevents/formats/protobuf-format.md) |
| XML Event Format | - | [WIP](cloudevents/working-drafts/xml-format.md) |
| Web hook | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/http-webhook.md) | [WIP](cloudevents/http-webhook.md) |
| |
| **Additional Documentation:** |
| CloudEvents Primer | [v1.0.2](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/primer.md) | [WIP](cloudevents/primer.md) |
| [CloudEvents Adapters](cloudevents/adapters/README.md) | - | [Not versioned](cloudevents/adapters/README.md) |
| [CloudEvents SDK Requirements](cloudevents/SDK.md) | - | [Not versioned](cloudevents/SDK.md) |
| [Documented Extensions](cloudevents/extensions/README.md) | - | [Not versioned](cloudevents/extensions/README.md) |
| [Proprietary Specifications](cloudevents/proprietary-specs.md) | - | [Not versioned](cloudevents/proprietary-specs.md) |
The following specifications are available:
## Other Specifications
| | Latest Release | Working Draft |
| :-------------- | :-------------------------------------------------------------: | :---------------------------: |
| CE SQL | [v1.0.0](https://github.com/cloudevents/spec/tree/cesql/v1.0.0/cesql) | [WIP](cesql/spec.md) |
| Subscriptions | - | [WIP](subscriptions/spec.md) |
| | Latest Release | Working Draft |
| :--- | :---: | :---: |
| **CloudEvents** | [v0.1](https://github.com/cloudevents/spec/blob/v0.1/spec.md) | [master](https://github.com/cloudevents/spec/blob/master/spec.md) |
| **HTTP Transport Binding** | [v0.1](https://github.com/cloudevents/spec/blob/v0.1/http-transport-binding.md) | [master](https://github.com/cloudevents/spec/blob/master/http-transport-binding.md) |
| **JSON Event Format** | [v0.1](https://github.com/cloudevents/spec/blob/v0.1/json-format.md) | [master](https://github.com/cloudevents/spec/blob/master/json-format.md) |
The Registry and Pagination specifications can now be found in the
[xRegistry/spec](https://github.com/xregistry/spec) repo.
There is also the [CloudEvents Extension Attributes](https://github.com/cloudevents/spec/blob/master/extensions.md)
document.
Additional release related information:
[Historical releases and changelogs](docs/RELEASES.md)
## Working Group process
If you are new to CloudEvents, it is recommended that you start by reading the
[Primer](cloudevents/primer.md) for an overview of the specification's goals
and design decisions, and then move on to the
[core specification](cloudevents/spec.md).
The CNCF Serverless WG is working to formalize the [specification](spec.md)
based on [design goals](spec.md#design-goals) which focus on interoperability
between systems which generate and respond to events.
Since not all event producers generate CloudEvents by default, there is
documentation describing the recommended process for adapting some popular
events into CloudEvents, see
[CloudEvents Adapters](cloudevents/adapters/README.md).
In order to achieve these goals, the Serverless WG must describe:
- Common attributes of an *event* that facilitate interoperability
- One or more common architectures that are in active use today or planned to be
built by WG members
- How events are transported from producer to consumer via at least one protocol
- Identify and resolve whatever else is needed for interoperability
## SDKs
## Communications
In addition to the documentation mentioned above, there are also a set of
language specific SDKs being developed:
- [C#/.NET](https://github.com/cloudevents/sdk-csharp)
- [Go](https://github.com/cloudevents/sdk-go)
- [Java](https://github.com/cloudevents/sdk-java)
- [Javascript](https://github.com/cloudevents/sdk-javascript)
- [PHP](https://github.com/cloudevents/sdk-php)
- [PowerShell](https://github.com/cloudevents/sdk-powershell)
- [Python](https://github.com/cloudevents/sdk-python)
- [Ruby](https://github.com/cloudevents/sdk-ruby)
- [Rust](https://github.com/cloudevents/sdk-rust)
The [SDK requirements](cloudevents/SDK.md) document provides information
on how the SDKs are managed and what is expected of each one.
The SDK [feature support table](cloudevents/SDK.md#feature-support) is a
good resource to see which features, event formats and bindings are supported
by each SDK.
For more information about how the SDKs operate, please see the following
documents:
- [SDK Governance](docs/SDK-GOVERNANCE.md)
- [SDK Maintainer Guidelines](docs/SDK-maintainer-guidelines.md)
- [SDK PR Guidelines](docs/SDK-PR-guidelines.md)
## Community and Docs
Learn more about the people and organizations who are creating a dynamic cloud
native ecosystem by making our systems interoperable with CloudEvents.
- Our [Governance](docs/GOVERNANCE.md) documentation.
- [Contributing](docs/CONTRIBUTING.md) guidance.
- [Roadmap](docs/ROADMAP.md)
- [Adopters](https://cloudevents.io/) - See "Integrations".
- [Contributors](docs/contributors.md): people and organizations who helped
us get started or are actively working on the CloudEvents specification.
- [Presentations, notes and other misc shared
docs](https://drive.google.com/drive/folders/1eKH-tVNV25jwkuBEoi3ESqvVjNRlJqYX?usp=sharing)
- [Demos & open source](docs/README.md) -- if you have something to share
about your use of CloudEvents, please submit a PR!
- [Potential CloudEvents v2 work items](cloudevents/v2.md)
- [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)
### Security Concerns
If there is a security concern with one of the CloudEvents specifications, or
with one of the project's SDKs, please send an email to
[cncf-cloudevents-security@lists.cncf.io](mailto:cncf-cloudevents-security@lists.cncf.io).
A security assessment was performed by
[Trail of Bits](https://www.trailofbits.com/) in October 2022. The report
can be found [here](docs/CE-SecurityAudit-2022-10.pdf) or on the Trail of Bits
[website](https://github.com/trailofbits/publications/blob/master/reviews/CloudEvents.pdf).
### Communications
The main mailing list for e-mail communications:
- Send emails to: [cncf-cloudevents](mailto:cncf-cloudevents@lists.cncf.io)
- To subscribe see: https://lists.cncf.io/g/cncf-cloudevents
- Archives are at: https://lists.cncf.io/g/cncf-cloudevents/topics
We have google group for e-mail communications:
[cncf-wg-serverless](https://groups.google.com/forum/#!forum/cncf-wg-serverless)
And a #cloudevents Slack channel under
[CNCF's Slack workspace](http://slack.cncf.io/).
[CNCF's Slack workspace](https://slack.cncf.io/).
For SDK related comments and questions:
- Email to: [cncf-cloudevents-sdk](mailto:cncf-cloudevents-sdk@lists.cncf.io)
- To subscribe see: https://lists.cncf.io/g/cncf-cloudevents-sdk
- Archives are at: https://lists.cncf.io/g/cncf-cloudevents-sdk/topics
- Slack: #cloudeventssdk on [CNCF's Slack workspace](http://slack.cncf.io/)
For SDK specific communications, please see the main README in each
SDK's github repo - see the [list of SDKs](#sdks).
### Meeting Time
## Meeting Time
See the [CNCF public events calendar](https://www.cncf.io/community/calendar/).
This specification is being developed by the
[CNCF Serverless Working Group](https://github.com/cncf/wg-serverless). This
working group meets every Thursday at 9AM PT (USA Pacific)
([World Time Zone Converter](http://www.thetimezoneconverter.com/?t=9:00%20am&tz=San%20Francisco&)):
[CNCF Serverless Working Group](https://github.com/cncf/wg-serverless).
This working group meets every Thursday at 9AM PT (USA Pacific):
Please see the
[meeting minutes doc](https://docs.google.com/document/d/1OVF68rpuPK5shIHILK9JOqlZBbfe91RNzQ7u_P7YCDE/edit#)
for the latest information on how to join the calls.
Join from PC, Mac, Linux, iOS or Android: https://zoom.us/my/cncfserverlesswg
Recordings from our calls are available
[here](https://www.youtube.com/playlist?list=PLO-qzjSpLN1BEyKjOVX_nMg7ziHXUYwec), and
older ones are
Or iPhone one-tap:
US: +16465588656,,3361029682# or +16699006833,,3361029682#
Or Telephone:
Dial:
US: +1 646 558 8656 (US Toll) or +1 669 900 6833 (US Toll)
or +1 855 880 1246 (Toll Free) or +1 877 369 0926 (Toll Free)
Meeting ID: 336 102 9682
International numbers available:
https://zoom.us/zoomconference?m=QpOqQYfTzY_Gbj9_8jPtsplp1pnVUKDr
NOTE: Please use \*6 to mute/un-mute your phone during the call.
World Time Zone Converter:
http://www.thetimezoneconverter.com/?t=9:00%20am&tz=San%20Francisco&
## In Person Meetings
None planned at this time.
## Meeting Minutes
The minutes from our calls are available
[here](https://docs.google.com/document/d/1OVF68rpuPK5shIHILK9JOqlZBbfe91RNzQ7u_P7YCDE/edit#).
Recordings from our calls are available
[here](https://www.youtube.com/playlist?list=PLj6h78yzYM2Ph7YoBIgsZNW_RGJvNlFOt).
Periodically, the group may have in-person meetings that coincide with a major
conference. Please see the
[meeting minutes doc](https://docs.google.com/document/d/1OVF68rpuPK5shIHILK9JOqlZBbfe91RNzQ7u_P7YCDE/edit#)
for any future plans.

about/contributors.md (new file, 66 lines)

@@ -0,0 +1,66 @@
## CloudEvents contributors
We welcome you to join us! This list acknowledges those who contribute, whether
via GitHub pull request or in person in the working group, as well as those who
contributed before this became a CNCF Serverless WG project. If you are
participating in some way, please add your information via pull request.
This list is intended to build community, helping the working group to connect
github handles to real world identities and get to know each other, and for new
folks to see who has been involved already.
Contributions do not constitute an official endorsement.
* **Amazon**
  * [AWS Lambda events](https://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html)
  * Arun Gupta, Ajay Nair, Rob Leidle, Orr Weinstein
* **Google**
  * [Google Cloud Functions](https://cloud.google.com/functions/) and [Cloud Functions for Firebase](https://firebase.google.com/docs/functions/)
  * Sarah Allen - [@ultrasaurus](https://github.com/ultrasaurus)
  * Rachel Myers - [@rachelmyers](https://github.com/rachelmyers)
  * Thomas Bouldin - [@inlined](https://github.com/inlined)
  * Mike McDonald, Morgan Hallmon, Robert-Jan Huijsman
* **Huawei**
  * [Huawei Function Stage](http://www.huaweicloud.com/en-us/product/functionstage.html)
  * [Huawei Function Graph](https://www.huaweicloud.com/en-us/product/functiongraph.html)
  * Cathy Hong Zhang - [@cathyhongzhang](https://github.com/cathyhongzhang)
  * Louis Fourie - [@lfourie](https://github.com/lfourie)
* **IBM**
  * [IBM Cloud Functions](https://console.bluemix.net/openwhisk/)
  * Doug Davis - [@duglin](https://github.com/duglin)
  * Daniel Krook - [@krook](https://github.com/krook)
  * Matt Rutkowski - [@mrutkows](https://github.com/mrutkows)
  * Michael M Behrendt - [@mbehrendt](https://github.com/mbehrendt)
* **Iguazio**
  * Yaron Haviv - [@iguazio](https://github.com/iguazio)
  * Orit Nissan-Messing
* **Intel**
  * David Lyle - [@dklyle](https://github.com/dklyle)
* **Microsoft**
  * [Microsoft Event Grid](https://azure.microsoft.com/en-us/services/event-grid/)
  * Clemens Vasters - [@clemensv](https://github.com/clemensv)
  * Bahram Banisadr - [@banisadr](https://github.com/banisadr)
  * Dan Rosanova - [@djrosanova](https://github.com/djrosanova)
  * Cesar Ruiz-Meraz, Raja Ravipati
* **Oracle**
  * [Fn Project](https://fnproject.io/)
  * Chad Arimura - [@carimura](https://github.com/carimura)
  * Stanley Halka - [@shalka](https://github.com/shalka)
  * Travis Reeder - [@treeder](https://github.com/treeder)
* **Red Hat**
  * Jim Curtis - [@jimcurtis64](https://github.com/jimcurtis2)
  * William Markito Oliveira - [@william_markito](https://github.com/markito)
* **SAP**
  * Nathan Oyler - [@notque](https://github.com/notque)
* **Serverless Inc**
  * [Serverless Framework and Event Gateway](https://serverless.com/)
  * Austen Collins - [@ac360](https://github.com/ac360)
  * Rupak Ganguly - [@rupakg](https://github.com/rupakg)
  * Brian Neisler - [@brianneisler](https://github.com/brianneisler)
  * Jeremy Coffield, Ganesh Radhakirshnan
* **SolarWinds**
  * Lee Calcote - [@leecalcote](https://github.com/leecalcote)
* **VMWare**
  * [Dispatch Functions Framework](https://vmware.github.io/dispatch/)
  * Mark Peek - [@markpeek](https://github.com/markpeek)

about/references.md (new file, 165 lines)

@@ -0,0 +1,165 @@
# References
Examples of current event formats that exist today.
### Microsoft - Event Grid
```
{
"topic":"/subscriptions/{subscription-id}",
"subject":"/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventGrid/eventSubscriptions/LogicAppdd584bdf-8347-49c9-b9a9-d1f980783501",
"eventType":"Microsoft.Resources.ResourceWriteSuccess",
"eventTime":"2017-08-16T03:54:38.2696833Z",
"id":"25b3b0d0-d79b-44d5-9963-440d4e6a9bba",
"data": {
"authorization":"{azure_resource_manager_authorizations}",
"claims":"{azure_resource_manager_claims}",
"correlationId":"54ef1e39-6a82-44b3-abc1-bdeb6ce4d3c6",
"httpRequest":"",
"resourceProvider":"Microsoft.EventGrid",
"resourceUri":"/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventGrid/eventSubscriptions/LogicAppdd584bdf-8347-49c9-b9a9-d1f980783501",
"operationName":"Microsoft.EventGrid/eventSubscriptions/write",
"status":"Succeeded",
"subscriptionId":"{subscription-id}",
"tenantId":"72f988bf-86f1-41af-91ab-2d7cd011db47"
}
}
```
[Documentation](https://docs.microsoft.com/en-us/azure/event-grid/event-schema)
### Google - Cloud Functions (potential future)
```
{
"data": {
"@type": "types.googleapis.com/google.pubsub.v1.PubsubMessage",
"attributes": {
"foo": "bar",
},
"messageId": "12345",
"publishTime": "2017-06-05T12:00:00.000Z",
"data": "somebase64encodedmessage"
},
"context": {
"eventId": "12345",
"timestamp": "2017-06-05T12:00:00.000Z",
"eventTypeId": "google.pubsub.topic.publish",
"resource": {
"name": "projects/myProject/topics/myTopic",
"service": "pubsub.googleapis.com"
}
}
}
```
### AWS - SNS
```
{
"Records": [
{
"EventVersion": "1.0",
"EventSubscriptionArn": eventsubscriptionarn,
"EventSource": "aws:sns",
"Sns": {
"SignatureVersion": "1",
"Timestamp": "1970-01-01T00:00:00.000Z",
"Signature": "EXAMPLE",
"SigningCertUrl": "EXAMPLE",
"MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
"Message": "Hello from SNS!",
"MessageAttributes": {
"Test": {
"Type": "String",
"Value": "TestString"
},
"TestBinary": {
"Type": "Binary",
"Value": "TestBinary"
}
},
"Type": "Notification",
"UnsubscribeUrl": "EXAMPLE",
"TopicArn": topicarn,
"Subject": "TestInvoke"
}
}
]
}
```
[Documentation](http://docs.aws.amazon.com/lambda/latest/dg/eventsources.html)
### AWS - Kinesis
```
{
"Records": [
{
"eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961",
"eventVersion": "1.0",
"kinesis": {
"partitionKey": "partitionKey-3",
"data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=",
"kinesisSchemaVersion": "1.0",
"sequenceNumber": "49545115243490985018280067714973144582180062593244200961"
},
"invokeIdentityArn": identityarn,
"eventName": "aws:kinesis:record",
"eventSourceARN": eventsourcearn,
"eventSource": "aws:kinesis",
"awsRegion": "us-east-1"
}
]
}
```
### IBM - OpenWhisk - Web Action Event
```
{
"__ow_method": "post",
"__ow_headers": {
"accept": "*/*",
"connection": "close",
"content-length": "4",
"content-type": "text/plain",
"host": "172.17.0.1",
"user-agent": "curl/7.43.0"
},
"__ow_path": "",
"__ow_body": "Jane"
}
```
### OpenStack - Audit Middleware - Event
```
{
"typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
"id": "d8304637-3f63-5092-9ab3-18c9781871a2",
"eventTime": "2018-01-30T10:46:16.740253+00:00",
"action": "delete",
"eventType": "activity",
"outcome": "success",
"reason": {
"reasonType": "HTTP",
"reasonCode": "204"
},
"initiator": {
"typeURI": "service/security/account/user",
"name": "user1",
"domain": "domain1",
"id": "52d28347f0b4cf9cc1717c00adf41c74cc764fe440b47aacb8404670a7cd5d22",
"host": {
"address": "127.0.0.1",
"agent": "python-novaclient"
},
"project_id": "ae63ddf2076d4342a56eb049e37a7621"
},
"target": {
"typeURI": "compute/server",
"id": "b1b475fc-ef0a-4899-87f3-674ac0d56855"
},
"observer": {
"typeURI": "service/compute",
"name": "nova",
"id": "1b5dbef1-c2e8-5614-888d-bb56bcf65749"
},
"requestPath": "/v2/ae63ddf2076d4342a56eb049e37a7621/servers/b1b475fc-ef0a-4899-87f3-674ac0d56855"
}
```
[Documentation](https://github.com/openstack/pycadf/blob/master/doc/source/event_concept.rst)

about/use-cases.md (new file, 120 lines)

@@ -0,0 +1,120 @@
# CloudEvents - Use Cases
[WIP] Use-case examples to help end users understand the value of CloudEvents.
### Normalizing Events Across Services & Platforms
Major event publishers (e.g. AWS, Microsoft, Google, etc.) all publish events
in different formats on their respective platforms. There are even a few cases
where services on the same provider publish events in different formats (e.g.
AWS). This forces event consumers to implement custom logic to read or munge
event data across platforms and occasionally across services on a single
platform.
CloudEvents can offer a single experience for authoring consumers that handle
events across all platforms and services.
### Facilitating Integrations Across Services & Platforms
Event data being transported across environments is increasingly common.
However, without a common way of describing events, delivery of events across
environments is hindered. There is no single way of determining where an event
came from and where it might be going. This prevents tooling to facilitate
successful event delivery and consumers from knowing what to do with event
data.
CloudEvents offers useful metadata which middleware and consumers can rely upon
to facilitate event routing, logging, delivery and receipt.
### Increasing Portability of Functions-as-a-Service
Functions-as-a-Service (also known as serverless computing) is one of the
fastest growing trends in IT and it is largely event-driven. However, a
primary concern of FaaS is vendor lock-in. This lock-in is partially caused
by differences in function APIs and signatures across providers, but the
lock-in is also caused by differences in the format of event data received
within functions.
CloudEvents' common way of describing event data increases the portability of
Functions-as-a-Service.
### Improving Development & Testing of Event-Driven/Serverless Architectures
The lack of a common event format complicates development and testing of
event-driven and serverless architectures. There is no easy way to mock events
accurately for development and testing purposes, or to emulate event-driven
workflows in a development environment.
CloudEvents can enable better developer tools for building, testing and
handling the end-to-end lifecycle of event-driven and serverless architectures.
### Event Data Evolution
Most platforms and services version the data model of their events differently
(if they do this at all). This creates an inconsistent experience for
publishing and consuming the data model of events as those data models evolve.
CloudEvents can offer a common way to version and evolve event data. This will
help event publishers safely version their data models based on best practices,
and help event consumers safely work with event data as it evolves.
### Normalizing Webhooks
Webhooks is a style of event publishing which does not use a common format.
Consumers of webhooks don't have a consistent way to develop, test, identify,
validate, and process event data delivered via webhooks.
CloudEvents can offer consistency in webhook publishing and consumption.
### Policy Enforcement
The transiting of events between systems may need to be filtered, transformed,
or blocked due to security and policy concerns. Examples include preventing
ingress or egress of events, such as event data containing sensitive
information, or disallowing information flow between a particular sender and
receiver.
A common event format would allow easier reasoning about the data being
transited and allow for better introspection of the data.
### Event Tracing
An event sent from a source may result in a sequence of additional events
sent from various middleware devices such as event brokers and gateways.
CloudEvents includes metadata in events to associate these events as being
part of an event sequence for the purpose of event tracing and
troubleshooting.
### Cloudbursting
### IoT
IoT devices send and receive events related to their functionality.
For example, a connected thermostat will send telemetry on the current
temperature and could receive events to change temperatures.
These devices typically have a constrained operating environment
(CPU, memory), requiring a well-defined event message format.
In a lot of cases these messages are binary encoded instead of textual.
Whether directly from the device or transformed via a gateway, CloudEvents
would allow for a better description of the origin of the message and the
format of the data contained within the message.
### Event Correlation
A serverless application/workflow could be associated with multiple events from
different event sources/producers. For example, a burglary detection
application/workflow could involve both a motion event and a door/window open
event. A serverless platform could receive many instances of each type of
event, e.g. it could receive motion events and window open events from
different houses. The serverless platform needs to correlate one type of event
instance correctly with other types of event instances and map a received event
instance to the correct application/workflow instance. CloudEvents will provide
a standard way for any event consumer (e.g. the serverless platform) to locate
the event correlation information/token in the event data and map a received
event instance to the correct application/workflow instance.

@@ -1,79 +0,0 @@
lexer grammar CESQLLexer;
// NOTE:
// This grammar is case-sensitive, although CESQL keywords are case-insensitive.
// In order to implement case-insensitivity, check out
// https://github.com/antlr/antlr4/blob/master/doc/case-insensitive-lexing.md#custom-character-streams-approach
// Skip tab, carriage return and newlines
SPACE: [ \t\r\n]+ -> skip;
// Fragments for Literal primitives
fragment ID_LITERAL: [a-zA-Z0-9]+;
fragment DQUOTA_STRING: '"' ( '\\'. | '""' | ~('"'| '\\') )* '"';
fragment SQUOTA_STRING: '\'' ('\\'. | '\'\'' | ~('\'' | '\\'))* '\'';
fragment INT_DIGIT: [0-9];
fragment FN_LITERAL: [A-Z] [A-Z_]*;
// Constructors symbols
LR_BRACKET: '(';
RR_BRACKET: ')';
COMMA: ',';
SINGLE_QUOTE_SYMB: '\'';
DOUBLE_QUOTE_SYMB: '"';
fragment QUOTE_SYMB
: SINGLE_QUOTE_SYMB | DOUBLE_QUOTE_SYMB
;
// Operators
// - Logic
AND: 'AND';
OR: 'OR';
XOR: 'XOR';
NOT: 'NOT';
// - Arithmetics
STAR: '*';
DIVIDE: '/';
MODULE: '%';
PLUS: '+';
MINUS: '-';
// - Comparison
EQUAL: '=';
NOT_EQUAL: '!=';
GREATER: '>';
GREATER_OR_EQUAL: '>=';
LESS: '<';
LESS_GREATER: '<>';
LESS_OR_EQUAL: '<=';
// Like, exists, in
LIKE: 'LIKE';
EXISTS: 'EXISTS';
IN: 'IN';
// Booleans
TRUE: 'TRUE';
FALSE: 'FALSE';
// Literals
DQUOTED_STRING_LITERAL: DQUOTA_STRING;
SQUOTED_STRING_LITERAL: SQUOTA_STRING;
INTEGER_LITERAL: ( '+' | '-' )? INT_DIGIT+;
// Identifiers
IDENTIFIER: [a-zA-Z]+;
IDENTIFIER_WITH_NUMBER: [a-zA-Z0-9]+;
FUNCTION_IDENTIFIER_WITH_UNDERSCORE: [A-Z] [A-Z_]*;

@@ -1,62 +0,0 @@
grammar CESQLParser;
import CESQLLexer;
// Entrypoint
cesql: expression EOF;
// Structure of operations, function invocations and expression
expression
: functionIdentifier functionParameterList #functionInvocationExpression
// unary operators are the highest priority
| NOT expression #unaryLogicExpression
| MINUS expression #unaryNumericExpression
// LIKE, EXISTS and IN takes precedence over all the other binary operators
| expression NOT? LIKE stringLiteral #likeExpression
| EXISTS identifier #existsExpression
| expression NOT? IN setExpression #inExpression
// Numeric operations
| expression (STAR | DIVIDE | MODULE) expression #binaryMultiplicativeExpression
| expression (PLUS | MINUS) expression #binaryAdditiveExpression
// Comparison operations
| expression (EQUAL | NOT_EQUAL | LESS_GREATER | GREATER_OR_EQUAL | LESS_OR_EQUAL | LESS | GREATER) expression #binaryComparisonExpression
// Logic operations
|<assoc=right> expression (AND | OR | XOR) expression #binaryLogicExpression
// Subexpressions and atoms
| LR_BRACKET expression RR_BRACKET #subExpression
| atom #atomExpression
;
atom
: booleanLiteral #booleanAtom
| integerLiteral #integerAtom
| stringLiteral #stringAtom
| identifier #identifierAtom
;
// Identifiers
identifier
: (IDENTIFIER | IDENTIFIER_WITH_NUMBER)
;
functionIdentifier
: (IDENTIFIER | FUNCTION_IDENTIFIER_WITH_UNDERSCORE)
;
// Literals
booleanLiteral: (TRUE | FALSE);
stringLiteral: (DQUOTED_STRING_LITERAL | SQUOTED_STRING_LITERAL);
integerLiteral: INTEGER_LITERAL;
// Functions
functionParameterList
: LR_BRACKET ( expression ( COMMA expression )* )? RR_BRACKET
;
// Sets
setExpression
: LR_BRACKET expression ( COMMA expression )* RR_BRACKET // Empty sets are not allowed
;

@@ -1,7 +0,0 @@
# CloudEvents SQL Expression Language - Version 1.0.0
CloudEvents SQL expressions (also known as CESQL) allow computing values and
matching of CloudEvent attributes against complex expressions that lean on the
syntax of Structured Query Language (SQL) `WHERE` clauses.
For more information, see the [CESQL specification](spec.md).
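For a flavor of the language, here is a hypothetical expression built only from
constructs defined in this repository's grammar and TCK (EXISTS, LIKE, AND, and
the ABS function; `myint` is an extension attribute name borrowed from the
tests):

```
EXISTS subject AND type LIKE 'com.example.%' AND ABS(myint) < 100
```

Evaluated against a single CloudEvent, an expression yields a boolean, an
integer, or a string.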

@@ -1,25 +0,0 @@
# CESQL Release Notes
<!-- no verify-specs -->
## v1.0.0 - 2024/06/13
This is the v1 release of the specification! CESQL v1 provides users with the
ability to write and execute queries against CloudEvents. This allows for
computing values and matching of CloudEvent attributes against complex
expressions that lean on the syntax of Structured Query Language (SQL).
Notable changes between the WIP draft and the v1 specification are:
- Specify error types
- Clarify return values of expressions that encounter errors
- Clarify that missing attributes result in an error and the expression
  returning its default value
- Add support for boolean to integer and integer to boolean type casting
- Clarify the order of operations
- Clarify how user defined functions work
- Define the default "zero" values for the built in types
- Clarify that string comparisons are case sensitive
- Specify which characters are treated as whitespace for the TRIM function
- Specify that functions must still return values along with errors, as well as
  the behaviour when user defined functions do not do this correctly
- For the fail fast error handling mode, expressions now return the zero value
  for their return type when they encounter an error, rather than undefined

@@ -1,29 +0,0 @@
# CloudEvents Expression Language TCK
Each file of this TCK contains a set of test cases, testing one or more specific features of the language.
The root file structure is composed of:

* `name`: Name of the test suite contained in the file
* `tests`: List of tests

Each test definition includes:

* `name`: Name of the test case
* `expression`: Expression to test.
* `result`: Expected result (OPTIONAL). Can be a boolean, an integer or a string.
* `error`: Expected error (OPTIONAL). If absent, no error is expected.
* `event`: Input event (OPTIONAL). If present, this is a valid event serialized
  in JSON format. If absent, when testing the expression, any valid event can
  be passed.
* `eventOverrides`: Overrides to the input event (OPTIONAL). This might be used
  when `event` is missing, in order to define only some specific values, while
  the other (REQUIRED) attributes can be any value.
The `error` values could be any of the following:
* `parse`: Error while parsing the expression
* `math`: Math error while evaluating a math operator
* `cast`: Casting error
* `missingFunction`: Addressed a missing function
* `functionEvaluation`: Error while evaluating a function
* `missingAttribute`: Error due to a missing attribute
* `generic`: A generic error
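To make this structure concrete, a minimal test file following the layout above
might look like the following sketch (the suite and test names are
illustrative; the expressions mirror ones used in the actual files below):

```
name: Example suite
tests:
  - name: required attribute is accessible
    expression: id
    eventOverrides:
      id: myId
    result: myId
  - name: missing attribute produces an error
    expression: missing = 2
    result: false
    error: missingAttribute
```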

@@ -1,106 +0,0 @@
name: Binary comparison operations
tests:
  - name: True is equal to false
    expression: TRUE = FALSE
    result: false
  - name: False is equal to false
    expression: FALSE = FALSE
    result: true
  - name: 1 is equal to 2
    expression: 1 = 2
    result: false
  - name: 2 is equal to 2
    expression: 2 = 2
    result: true
  - name: abc is equal to 123
    expression: "'abc' = '123'"
    result: false
  - name: abc is equal to abc
    expression: "'abc' = 'abc'"
    result: true
  - name: Equals operator returns false when encountering a missing attribute
    expression: missing = 2
    result: false
    error: missingAttribute
  - name: True is not equal to false
    expression: TRUE != FALSE
    result: true
  - name: False is not equal to false
    expression: FALSE != FALSE
    result: false
  - name: 1 is not equal to 2
    expression: 1 != 2
    result: true
  - name: 2 is not equal to 2
    expression: 2 != 2
    result: false
  - name: abc is not equal to 123
    expression: "'abc' != '123'"
    result: true
  - name: abc is not equal to abc
    expression: "'abc' != 'abc'"
    result: false
  - name: Not equal operator returns false when encountering a missing attribute
    expression: missing != 2
    result: false
    error: missingAttribute
  - name: True is not equal to false (diamond operator)
    expression: TRUE <> FALSE
    result: true
  - name: False is not equal to false (diamond operator)
    expression: FALSE <> FALSE
    result: false
  - name: 1 is not equal to 2 (diamond operator)
    expression: 1 <> 2
    result: true
  - name: 2 is not equal to 2 (diamond operator)
    expression: 2 <> 2
    result: false
  - name: abc is not equal to 123 (diamond operator)
    expression: "'abc' <> '123'"
    result: true
  - name: abc is not equal to abc (diamond operator)
    expression: "'abc' <> 'abc'"
    result: false
  - name: Diamond operator returns false when encountering a missing attribute
    expression: missing <> 2
    result: false
    error: missingAttribute
  - name: 2 is less or equal than 2
    expression: 2 <= 2
    result: true
  - name: 3 is less or equal than 2
    expression: 3 <= 2
    result: false
  - name: 1 is less than 2
    expression: 1 < 2
    result: true
  - name: 2 is less than 2
    expression: 2 < 2
    result: false
  - name: 2 is greater or equal than 2
    expression: 2 >= 2
    result: true
  - name: 2 is greater or equal than 3
    expression: 2 >= 3
    result: false
  - name: 2 is greater than 1
    expression: 2 > 1
    result: true
  - name: 2 is greater than 2
    expression: 2 > 2
    result: false
  - name: Less than or equal operator returns false when encountering a missing attribute
    expression: missing <= 2
    result: false
    error: missingAttribute
  - name: implicit casting with string as right type
    expression: "true = 'TRUE'"
    result: false
  - name: implicit casting with boolean as right type
    expression: "'TRUE' = true"
    result: true

@@ -1,54 +0,0 @@
name: Binary logical operations
tests:
  - name: False and false
    expression: FALSE AND FALSE
    result: false
  - name: False and true
    expression: FALSE AND TRUE
    result: false
  - name: True and false
    expression: TRUE AND FALSE
    result: false
  - name: True and true
    expression: TRUE AND TRUE
    result: true
  - name: AND operator is short circuit evaluated
    expression: "false and (1 != 1 / 0)"
    result: false
  - name: AND operator is NOT short circuit evaluated when the first operand evaluates to true
    expression: "true and (1 != 1 / 0)"
    error: math
    result: false
  - name: False or false
    expression: FALSE OR FALSE
    result: false
  - name: False or true
    expression: FALSE OR TRUE
    result: true
  - name: True or false
    expression: TRUE OR FALSE
    result: true
  - name: True or true
    expression: TRUE OR TRUE
    result: true
  - name: OR operator is short circuit evaluated
    expression: "true or (1 != 1 / 0)"
    result: true
  - name: OR operator is NOT short circuit evaluated when the first operand evaluates to false
    expression: "false or (1 != 1 / 0)"
    error: math
    result: false
  - name: False xor false
    expression: FALSE XOR FALSE
    result: false
  - name: False xor true
    expression: FALSE XOR TRUE
    result: true
  - name: True xor false
    expression: TRUE XOR FALSE
    result: true
  - name: True xor true
    expression: TRUE XOR TRUE
    result: false

@@ -1,63 +0,0 @@
name: Binary math operations
tests:
  - name: Operator precedence without parenthesis
    expression: 4 * 2 + 4 / 2
    result: 10
  - name: Operator precedence with parenthesis
    expression: 4 * (2 + 4) / 2
    result: 12
  - name: Truncated division
    expression: 5 / 3
    result: 1
  - name: Division by zero returns 0 and fails
    expression: 5 / 0
    result: 0
    error: math
  - name: Modulo
    expression: 5 % 2
    result: 1
  - name: Modulo by zero returns 0 and fails
    expression: 5 % 0
    result: 0
    error: math
  - name: Missing attribute in division results in missing attribute error, not divide by 0 error
    expression: missing / 0
    result: 0
    error: missingAttribute
  - name: Missing attribute in modulo results in missing attribute error, not divide by 0 error
    expression: missing % 0
    result: 0
    error: missingAttribute
  - name: Positive plus positive number
    expression: 4 + 1
    result: 5
  - name: Negative plus positive number
    expression: -4 + 1
    result: -3
  - name: Negative plus negative number
    expression: -4 + -1
    result: -5
  - name: Positive plus negative number
    expression: 4 + -1
    result: 3
  - name: Positive minus positive number
    expression: 4 - 1
    result: 3
  - name: Negative minus positive number
    expression: -4 - 1
    result: -5
  - name: Implicit casting, with left value string
    expression: "'5' + 3"
    result: 8
  - name: Implicit casting, with right value string
    expression: "5 + '3'"
    result: 8
  - name: Implicit casting, with both values string
    expression: "'5' + '3'"
    result: 8
  - name: Implicit casting, with boolean value
    expression: "5 + TRUE"
    result: 6

@@ -1,25 +0,0 @@
name: Case sensitivity
tests:
  - name: TRUE
    expression: TRUE
    result: true
  - name: true
    expression: true
    result: true
  - name: tRuE
    expression: tRuE
    result: true
  - name: FALSE
    expression: FALSE
    result: false
  - name: false
    expression: false
    result: false
  - name: FaLsE
    expression: FaLsE
    result: false
  - name: String literals casing preserved
    expression: "'aBcD'"
    result: aBcD

@@ -1,69 +0,0 @@
name: Casting functions
tests:
  - name: Cast '1' to integer
    expression: INT('1')
    result: 1
  - name: Cast '-1' to integer
    expression: INT('-1')
    result: -1
  - name: Cast identity 1
    expression: INT(1)
    result: 1
  - name: Cast identity -1
    expression: INT(-1)
    result: -1
  - name: Cast from TRUE to int
    expression: INT(TRUE)
    result: 1
  - name: Cast from FALSE to int
    expression: INT(FALSE)
    result: 0
  - name: Invalid cast from string to int
    expression: INT('ABC')
    result: 0
    error: cast
  - name: Cast 'TRUE' to boolean
    expression: BOOL('TRUE')
    result: true
  - name: Cast "false" to boolean
    expression: BOOL("false")
    result: false
  - name: Cast identity TRUE
    expression: BOOL(TRUE)
    result: true
  - name: Cast identity FALSE
    expression: BOOL(FALSE)
    result: false
  - name: Invalid cast from string to boolean
    expression: BOOL('ABC')
    result: false
    error: cast
  - name: Cast from 1 to boolean
    expression: BOOL(1)
    result: true
  - name: Cast from 0 to boolean
    expression: BOOL(0)
    result: false
  - name: Cast from 100 to boolean
    expression: BOOL(100)
    result: true
  - name: Cast from -50 to boolean
    expression: BOOL(-50)
    result: true
  - name: Cast TRUE to string
    expression: STRING(TRUE)
    result: 'true'
  - name: Cast FALSE to string
    expression: STRING(FALSE)
    result: 'false'
  - name: Cast 1 to string
    expression: STRING(1)
    result: '1'
  - name: Cast -1 to string
    expression: STRING(-1)
    result: '-1'
  - name: Cast identity "abc"
    expression: STRING("abc")
    result: "abc"

@@ -1,53 +0,0 @@
name: Context attributes test
tests:
  - name: Access to required attribute
    expression: id
    eventOverrides:
      id: myId
    result: myId
  - name: Access to optional attribute
    expression: subject
    eventOverrides:
      subject: mySubject
    result: mySubject
  - name: Absent optional attribute
    expression: subject
    event:
      specversion: "1.0"
      id: myId
      source: localhost.localdomain
      type: myType
    result: false
    error: missingAttribute
  - name: Access to optional boolean extension
    expression: mybool
    eventOverrides:
      mybool: true
    result: true
  - name: Access to optional integer extension
    expression: myint
    eventOverrides:
      myint: 10
    result: 10
  - name: Access to optional string extension
    expression: myext
    eventOverrides:
      myext: "my extension"
    result: "my extension"
  - name: URL type coercion to string
    expression: source
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
    result: "http://localhost/source"
  - name: Timestamp type coercion to string
    expression: time
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
      time: 2018-04-26T14:48:09+02:00
    result: 2018-04-26T14:48:09+02:00

@@ -1,57 +0,0 @@
name: Exists expression
tests:
  - name: required attributes always exist
    expression: EXISTS specversion AND EXISTS id AND EXISTS type AND EXISTS SOURCE
    result: true
  - name: optional attribute available
    expression: EXISTS time
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
      time: 2018-04-26T14:48:09+02:00
    result: true
  - name: optional attribute absent
    expression: EXISTS time
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
    result: false
  - name: optional attribute absent (negated)
    expression: NOT EXISTS time
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
    result: true
  - name: optional extension available
    expression: EXISTS myext
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
      myext: my value
    result: true
  - name: optional extension absent
    expression: EXISTS myext
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
    result: false
  - name: optional extension absent (negated)
    expression: NOT EXISTS myext
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
    result: true

@@ -1,78 +0,0 @@
name: In expression
tests:
  - name: int in int set
    expression: 123 IN (1, 2, 3, 12, 13, 23, 123)
    result: true
  - name: int not in int set
    expression: 123 NOT IN (1, 2, 3, 12, 13, 23, 123)
    result: false
  - name: string in string set
    expression: "'abc' IN ('abc', \"bcd\")"
    result: true
  - name: string not in string set
    expression: "'aaa' IN ('abc', \"bcd\")"
    result: false
  - name: bool in bool set
    expression: TRUE IN (TRUE, FALSE)
    result: true
  - name: bool not in bool set
    expression: TRUE IN (FALSE)
    result: false
  - name: mix literals and identifiers (1)
    expression: source IN (myext, 'abc')
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
      myext: "http://localhost/source"
    result: true
  - name: mix literals and identifiers (2)
    expression: source IN (source)
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
      myext: "http://localhost/source"
    result: true
  - name: mix literals and identifiers (3)
    expression: "source IN (id, \"http://localhost/source\")"
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
      myext: "http://localhost/source"
    result: true
  - name: mix literals and identifiers (4)
    expression: source IN (id, 'xyz')
    event:
      specversion: "1.0"
      id: myId
      source: "http://localhost/source"
      type: myType
    result: false
  - name: type coercion with booleans (1)
    expression: "'true' IN (TRUE, 'false')"
    result: true
  - name: type coercion with booleans (2)
    expression: "'true' IN ('TRUE', 'false')"
    result: false
  - name: type coercion with booleans (3)
    expression: TRUE IN ('true', 'false')
    result: true
  - name: type coercion with booleans (4)
    expression: "'TRUE' IN (TRUE, 'false')"
    result: false
  - name: type coercion with int (1)
    expression: "1 IN ('1', '2')"
    result: true
  - name: type coercion with int (2)
    expression: "'1' IN (1, 2)"
    result: true

View File

@ -1,16 +0,0 @@
name: Integer builtin functions
tests:
- name: ABS (1)
expression: ABS(10)
result: 10
- name: ABS (2)
expression: ABS(-10)
result: 10
- name: ABS (3)
expression: ABS(0)
result: 0
- name: ABS overflow
expression: ABS(-2147483648)
result: 2147483647
error: math

View File

@ -1,129 +0,0 @@
name: Like expression
tests:
- name: Exact match (1)
expression: "'abc' LIKE 'abc'"
result: true
- name: Exact match (2)
expression: "'ab\\c' LIKE 'ab\\c'"
result: true
- name: Exact match (negate)
expression: "'abc' NOT LIKE 'abc'"
result: false
- name: Percentage operator (1)
expression: "'abc' LIKE 'a%b%c'"
result: true
- name: Percentage operator (2)
expression: "'azbc' LIKE 'a%b%c'"
result: true
- name: Percentage operator (3)
expression: "'azzzbzzzc' LIKE 'a%b%c'"
result: true
- name: Percentage operator (4)
expression: "'a%b%c' LIKE 'a%b%c'"
result: true
- name: Percentage operator (5)
expression: "'ac' LIKE 'abc'"
result: false
- name: Percentage operator (6)
expression: "'' LIKE 'abc'"
result: false
- name: Percentage operator (7)
expression: "'.ab.cde.' LIKE '.%.%.'"
result: true
- name: Percentage operator (8)
expression: "'ab.cde' LIKE '.%.%.'"
result: false
- name: Underscore operator (1)
expression: "'abc' LIKE 'a_b_c'"
result: false
- name: Underscore operator (2)
expression: "'a_b_c' LIKE 'a_b_c'"
result: true
- name: Underscore operator (3)
expression: "'abzc' LIKE 'a_b_c'"
result: false
- name: Underscore operator (4)
expression: "'azbc' LIKE 'a_b_c'"
result: false
- name: Underscore operator (5)
expression: "'azbzc' LIKE 'a_b_c'"
result: true
- name: Underscore operator (6)
expression: "'.a.b.' LIKE '._._.'"
result: true
- name: Underscore operator (7)
expression: "'abcd.' LIKE '._._.'"
result: false
- name: Escaped underscore wildcards (1)
expression: "'a_b_c' LIKE 'a\\_b\\_c'"
result: true
- name: Escaped underscore wildcards (2)
expression: "'a_b_c' NOT LIKE 'a\\_b\\_c'"
result: false
- name: Escaped underscore wildcards (3)
expression: "'azbzc' LIKE 'a\\_b\\_c'"
result: false
- name: Escaped underscore wildcards (4)
expression: "'abc' LIKE 'a\\_b\\_c'"
result: false
- name: Escaped percentage wildcards (1)
expression: "'abc' LIKE 'a\\%b\\%c'"
result: false
- name: Escaped percentage wildcards (2)
expression: "'a%b%c' LIKE 'a\\%b\\%c'"
result: true
- name: Escaped percentage wildcards (3)
expression: "'azbzc' LIKE 'a\\%b\\%c'"
result: false
- name: Escaped percentage wildcards (4)
expression: "'abc' LIKE 'a\\%b\\%c'"
result: false
- name: With access to event attributes
expression: "myext LIKE 'abc%123\\%456\\_d_f'"
eventOverrides:
myext: "abc123123%456_dzf"
result: true
- name: With access to event attributes (negated)
expression: "myext NOT LIKE 'abc%123\\%456\\_d_f'"
eventOverrides:
myext: "abc123123%456_dzf"
result: false
- name: With type coercion from int (1)
expression: "234 LIKE '23_'"
result: true
- name: With type coercion from int (2)
expression: "2344 LIKE '23%'"
result: true
- name: With type coercion from int (3)
expression: "2344 LIKE '23_'"
result: false
- name: With type coercion from bool (1)
expression: "TRUE LIKE 'tr%'"
result: true
- name: With type coercion from bool (2)
expression: "TRUE LIKE '%ue'"
result: true
- name: With type coercion from bool (3)
expression: "FALSE LIKE 'tr%'"
result: false
- name: With type coercion from bool (4)
expression: "FALSE LIKE 'fal%'"
result: true
- name: Invalid string literal in comparison causes parse error
expression: "x LIKE 123"
result: false
error: parse
eventOverrides:
x: "123"
- name: Missing attribute returns empty string
expression: "missing LIKE 'missing'"
result: false
error: missingAttribute

View File

@ -1,36 +0,0 @@
name: Literals
tests:
- name: TRUE literal
expression: TRUE
result: true
- name: FALSE literal
expression: FALSE
result: false
- name: 0 literal
expression: 0
result: 0
- name: 1 literal
expression: 1
result: 1
- name: String literal single quoted
expression: "'abc'"
result: abc
- name: String literal double quoted
expression: "\"abc\""
result: abc
- name: String literal single quoted with case
expression: "'aBc'"
result: aBc
- name: String literal double quoted with case
expression: "\"AbC\""
result: AbC
- name: Escaped string literal (1)
expression: "'a\"b\\'c'"
result: a"b'c
- name: Escaped string literal (2)
expression: "\"a'b\\\"c\""
result: a'b"c

View File

@ -1,24 +0,0 @@
name: Negate operator
tests:
- name: Minus 10
expression: -10
result: -10
- name: Minus minus 10
expression: --10
result: 10
- name: Minus 10 with casting
expression: -'10'
result: -10
- name: Minus minus 10 with casting
expression: --'10'
result: 10
- name: Minus with boolean cast
expression: -TRUE
result: -1
- name: Minus with missing attribute
expression: -missing
result: 0
error: missingAttribute

View File

@ -1,25 +0,0 @@
name: Not operator
tests:
- name: Not true
expression: NOT TRUE
result: false
- name: Not false
expression: NOT FALSE
result: true
- name: Not true with casting
expression: NOT 'TRUE'
result: false
- name: Not false 10 with casting
expression: NOT 'FALSE'
result: true
- name: Invalid int cast
expression: NOT 10
result: true
error: cast
- name: Not missing attribute
expression: NOT missing
result: false
error: missingAttribute

View File

@ -1,5 +0,0 @@
name: Parsing errors
tests:
- name: No closed parenthesis
expression: ABC(
error: parse

View File

@ -1,77 +0,0 @@
name: Specification examples
tests:
- name: Case insensitive hops (1)
expression: int(hop) < int(ttl) and int(hop) < 1000
eventOverrides:
hop: '5'
ttl: '10'
result: true
- name: Case insensitive hops (2)
expression: INT(hop) < INT(ttl) AND INT(hop) < 1000
eventOverrides:
hop: '5'
ttl: '10'
result: true
- name: Case insensitive hops (3)
expression: hop < ttl
eventOverrides:
hop: '5'
ttl: '10'
result: true
- name: Equals with casting (1)
expression: sequence = 5
eventOverrides:
sequence: '5'
result: true
- name: Equals with casting (2)
expression: sequence = 5
eventOverrides:
sequence: '6'
result: false
- name: Logic expression (1)
expression: firstname = 'Francesco' OR subject = 'Francesco'
eventOverrides:
subject: Francesco
firstname: Doug
result: true
- name: Logic expression (2)
expression: firstname = 'Francesco' OR subject = 'Francesco'
eventOverrides:
firstname: Francesco
subject: Doug
result: true
- name: Logic expression (3)
expression: (firstname = 'Francesco' AND lastname = 'Guardiani') OR subject = 'Francesco Guardiani'
eventOverrides:
subject: Doug
firstname: Francesco
lastname: Guardiani
result: true
- name: Logic expression (4)
expression: (firstname = 'Francesco' AND lastname = 'Guardiani') OR subject = 'Francesco Guardiani'
eventOverrides:
subject: Francesco Guardiani
firstname: Doug
lastname: Davis
result: true
- name: Subject exists
expression: EXISTS subject
eventOverrides:
subject: Francesco Guardiani
result: true
- name: Missing attribute (1)
expression: true AND (missing = "")
result: false
error: missingAttribute
- name: Missing attribute (2)
expression: missing * 5
result: 0
error: missingAttribute
- name: Missing attribute (3)
expression: 1 / missing
result: 0
error: missingAttribute

View File

@ -1,143 +0,0 @@
name: String builtin functions
tests:
- name: LENGTH (1)
expression: "LENGTH('abc')"
result: 3
- name: LENGTH (2)
expression: "LENGTH('')"
result: 0
- name: LENGTH (3)
expression: "LENGTH('2')"
result: 1
- name: LENGTH (4)
expression: "LENGTH(TRUE)"
result: 4
- name: CONCAT (1)
expression: "CONCAT('a', 'b', 'c')"
result: abc
- name: CONCAT (2)
expression: "CONCAT()"
result: ""
- name: CONCAT (3)
expression: "CONCAT('a')"
result: "a"
- name: CONCAT_WS (1)
expression: "CONCAT_WS(',', 'a', 'b', 'c')"
result: a,b,c
- name: CONCAT_WS (2)
expression: "CONCAT_WS(',')"
result: ""
- name: CONCAT_WS (3)
expression: "CONCAT_WS(',', 'a')"
result: "a"
- name: CONCAT_WS without arguments doesn't exist
expression: CONCAT_WS()
error: missingFunction
result: false
- name: LOWER (1)
expression: "LOWER('ABC')"
result: abc
- name: LOWER (2)
expression: "LOWER('AbC')"
result: abc
- name: LOWER (3)
expression: "LOWER('abc')"
result: abc
- name: UPPER (1)
expression: "UPPER('ABC')"
result: ABC
- name: UPPER (2)
expression: "UPPER('AbC')"
result: ABC
- name: UPPER (3)
expression: "UPPER('abc')"
result: ABC
- name: TRIM (1)
expression: "TRIM(' a b c ')"
result: "a b c"
- name: TRIM (2)
expression: "TRIM(' a b c')"
result: "a b c"
- name: TRIM (3)
expression: "TRIM('a b c ')"
result: "a b c"
- name: TRIM (4)
expression: "TRIM('a b c')"
result: "a b c"
- name: LEFT (1)
expression: LEFT('abc', 2)
result: ab
- name: LEFT (2)
expression: LEFT('abc', 10)
result: abc
- name: LEFT (3)
expression: LEFT('', 0)
result: ""
- name: LEFT (4)
expression: LEFT('abc', -2)
result: "abc"
error: functionEvaluation
- name: RIGHT (1)
expression: RIGHT('abc', 2)
result: bc
- name: RIGHT (2)
expression: RIGHT('abc', 10)
result: abc
- name: RIGHT (3)
expression: RIGHT('', 0)
result: ""
- name: RIGHT (4)
expression: RIGHT('abc', -2)
result: "abc"
error: functionEvaluation
- name: SUBSTRING (1)
expression: "SUBSTRING('abcdef', 1)"
result: "abcdef"
- name: SUBSTRING (2)
expression: "SUBSTRING('abcdef', 2)"
result: "bcdef"
- name: SUBSTRING (3)
expression: "SUBSTRING('Quadratically', 5)"
result: "ratically"
- name: SUBSTRING (4)
expression: "SUBSTRING('Sakila', -3)"
result: "ila"
- name: SUBSTRING (5)
expression: "SUBSTRING('abcdef', 1, 6)"
result: "abcdef"
- name: SUBSTRING (6)
expression: "SUBSTRING('abcdef', 2, 4)"
result: "bcde"
- name: SUBSTRING (7)
expression: "SUBSTRING('Sakila', -5, 3)"
result: "aki"
- name: SUBSTRING (8)
expression: "SUBSTRING('Quadratically', 0)"
result: ""
- name: SUBSTRING (9)
expression: "SUBSTRING('Quadratically', 0, 1)"
result: ""
- name: SUBSTRING (10)
expression: "SUBSTRING('abcdef', 10)"
result: ""
error: functionEvaluation
- name: SUBSTRING (11)
expression: "SUBSTRING('abcdef', -10)"
result: ""
error: functionEvaluation
- name: SUBSTRING (12)
expression: "SUBSTRING('abcdef', 10, 10)"
result: ""
error: functionEvaluation
- name: SUBSTRING (13)
expression: "SUBSTRING('abcdef', -10, 10)"
result: ""
error: functionEvaluation

View File

@ -1,12 +0,0 @@
name: Sub expressions
tests:
- name: Sub expression with literal
expression: "(TRUE)"
result: true
- name: Math (1)
expression: "4 * (2 + 3)"
result: 20
- name: Math (2)
expression: "(2 + 3) * 4"
result: 20

View File

@ -1,173 +0,0 @@
name: SubscriptionsAPI Recreations
tests:
- name: Prefix filter (1)
expression: "source LIKE 'https://%'"
result: true
eventOverrides:
source: "https://example.com"
- name: Prefix filter (2)
expression: "source LIKE 'https://%'"
result: false
eventOverrides:
source: "http://example.com"
- name: Prefix filter on string extension
expression: "myext LIKE 'custom%'"
result: true
eventOverrides:
myext: "customext"
- name: Prefix filter on missing string extension
expression: "myext LIKE 'custom%'"
result: false
error: missingAttribute
- name: Suffix filter (1)
expression: "type like '%.error'"
result: true
eventOverrides:
type: "com.github.error"
- name: Suffix filter (2)
expression: "type like '%.error'"
result: false
eventOverrides:
type: "com.github.success"
- name: Suffix filter on string extension
expression: "myext LIKE '%ext'"
result: true
eventOverrides:
myext: "customext"
- name: Suffix filter on missing string extension
expression: "myext LIKE '%ext'"
result: false
error: missingAttribute
- name: Exact filter (1)
expression: "id = 'myId'"
result: true
eventOverrides:
id: "myId"
- name: Exact filter (2)
expression: "id = 'myId'"
result: false
eventOverrides:
id: "notmyId"
- name: Exact filter on string extension
expression: "myext = 'customext'"
result: true
eventOverrides:
myext: "customext"
- name: Exact filter on missing string extension
expression: "myext = 'customext'"
result: false
error: missingAttribute
- name: Prefix filter AND Suffix filter (1)
expression: "id LIKE 'my%' AND source LIKE '%.ca'"
result: true
eventOverrides:
id: "myId"
source: "http://www.some-website.ca"
- name: Prefix filter AND Suffix filter (2)
expression: "id LIKE 'my%' AND source LIKE '%.ca'"
result: false
eventOverrides:
id: "myId"
source: "http://www.some-website.com"
- name: Prefix filter AND Suffix filter (3)
expression: "myext LIKE 'custom%' AND type LIKE '%.error'"
result: true
eventOverrides:
myext: "customext"
type: "com.github.error"
- name: Prefix filter AND Suffix filter (4)
expression: "type LIKE 'example.%' AND myext LIKE 'custom%'"
result: false
eventOverrides:
type: "example.event.type"
error: missingAttribute
- name: Prefix OR Suffix filter (1)
expression: "id LIKE 'my%' OR source LIKE '%.ca'"
result: true
eventOverrides:
id: "myId"
source: "http://www.some-website.ca"
- name: Prefix OR Suffix filter (2)
expression: "id LIKE 'my%' OR source LIKE '%.ca'"
result: true
eventOverrides:
id: "myId"
source: "http://www.some-website.com"
- name: Prefix OR Suffix filter (3)
expression: "id LIKE 'my%' OR source LIKE '%.ca'"
result: true
eventOverrides:
id: "notmyId"
source: "http://www.some-website.ca"
- name: Prefix OR Suffix filter (4)
expression: "id LIKE 'my%' OR source LIKE '%.ca'"
result: false
eventOverrides:
id: "notmyId"
source: "http://www.some-website.com"
- name: Disjunctive Normal Form (1)
expression: "(id = 'myId' AND type LIKE '%.success') OR (id = 'notmyId' AND source LIKE 'http://%' AND type LIKE '%.warning')"
result: true
eventOverrides:
id: "myId"
type: "example.event.success"
- name: Disjunctive Normal Form (2)
expression: "(id = 'myId' AND type LIKE '%.success') OR (id = 'notmyId' AND source LIKE 'http://%' AND type LIKE '%.warning')"
result: true
eventOverrides:
id: "notmyId"
type: "example.event.warning"
source: "http://localhost.localdomain"
- name: Disjunctive Normal Form (3)
expression: "(id = 'myId' AND type LIKE '%.success') OR (id = 'notmyId' AND source LIKE 'http://%' AND type LIKE '%.warning')"
result: false
eventOverrides:
id: "notmyId"
type: "example.event.warning"
source: "https://localhost.localdomain"
- name: Conjunctive Normal Form (1)
expression: "(id = 'myId' OR type LIKE '%.success') AND (id = 'notmyId' OR source LIKE 'https://%' OR type LIKE '%.warning')"
result: true
eventOverrides:
id: "myId"
type: "example.event.warning"
source: "http://localhost.localdomain"
- name: Conjunctive Normal Form (2)
expression: "(id = 'myId' OR type LIKE '%.success') AND (id = 'notmyId' OR source LIKE 'https://%' OR type LIKE '%.warning')"
result: true
eventOverrides:
id: "notmyId"
type: "example.event.success"
source: "http://localhost.localdomain"
- name: Conjunctive Normal Form (3)
expression: "(id = 'myId' OR type LIKE '%.success') AND (id = 'notmyId' OR source LIKE 'https://%' OR type LIKE '%.warning')"
result: false
eventOverrides:
id: "notmyId"
type: "example.event.warning"
source: "http://localhost.localdomain"
- name: Conjunctive Normal Form (4)
expression: "(id = 'myId' OR type LIKE '%.success') AND (id = 'notmyId' OR source LIKE 'https://%' OR type LIKE '%.warning')"
result: false
eventOverrides:
id: "myId"
type: "example.event.success"
source: "http://localhost.localdomain"
- name: Conjunctive Normal Form (5)
expression: "(id = 'myId' OR type LIKE '%.success') AND (id = 'notmyId' OR source LIKE 'https://%' OR type LIKE '%.warning') AND (myext = 'customext')"
result: false
eventOverrides:
id: "myId"
type: "example.event.warning"
source: "http://localhost.localdomain"
error: missingAttribute

View File

@ -1,3 +0,0 @@
# CESQL v1.0.0 (work in progress)
See the [CESQL specification](spec.md).

View File

@ -1,6 +0,0 @@
# CESQL Specification Release Notes
<!-- no verify-specs -->
## vX.Y.Z - YYYY/MM/DD
- Not yet released (#000)

View File

@ -1,6 +0,0 @@
# CloudEvents Expression Language TCK
This document has not yet been translated; please read the English [original document](../../../cesql_tck/README.md) first.
If you urgently need a Chinese translation of this document, please [submit an issue](https://github.com/cloudevents/spec/issues)
and we will arrange for it to be translated as soon as possible.

View File

@ -1,6 +0,0 @@
# CloudEvents SQL Expression Language - Version 1.0.0
This document has not yet been translated; please read the English [original document](../../spec.md) first.
If you urgently need a Chinese translation of this document, please [submit an issue](https://github.com/cloudevents/spec/issues)
and we will arrange for it to be translated as soon as possible.

View File

@ -1,8 +0,0 @@
# Translation list of the CESQL spec
| Documents | Status | Last edited | Version |
| :--------- | :---------: | :--------- | :---------: |
| /cesql/README.md | PR reviewing | 2022-03-26T13:55:02.773Z | - |
| /cesql/RELEASE_NOTES.md | PR reviewing | 2022-03-26T14:00:58.009Z | - |
| /cesql/spec.md | Ready to start | | |
| /cesql/cesql_tck/README.md | Ready to start | | |

View File

@ -1,658 +0,0 @@
# CloudEvents SQL Expression Language - Version 1.0.0
## Abstract
The goal of this specification is to define a SQL-like expression language
which can be used to express predicates on CloudEvent instances.
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to Subscriptions API](#12-relation-to-the-subscriptions-api)
2. [Language syntax](#2-language-syntax)
- 2.1. [Expression](#21-expression)
- 2.2. [Value identifiers and literals](#22-value-identifiers-and-literals)
- 2.3. [Operators](#23-operators)
- 2.4. [Function invocations](#24-function-invocations)
3. [Language semantics](#3-language-semantics)
- 3.1. [Type system](#31-type-system)
- 3.2. [CloudEvent context identifiers and types](#32-cloudevent-context-identifiers-and-types)
- 3.3. [Errors](#33-errors)
- 3.4. [Operators](#34-operators)
- 3.5. [Functions](#35-functions)
- 3.6. [Evaluation of the expression](#36-evaluation-of-the-expression)
- 3.7. [Type casting](#37-type-casting)
4. [Implementation suggestions](#4-implementation-suggestions)
- 4.1. [Error handling](#41-error-handling)
5. [Examples](#5-examples)
6. [References](#6-references)
## 1. Introduction
CloudEvents SQL expressions (also known as CESQL) allow computing values and
matching of CloudEvent attributes against complex expressions that lean on the
syntax of Structured Query Language (SQL) `WHERE` clauses. Using SQL-derived
expressions for message filtering has widespread implementation usage because
the Java Message Service (JMS) message selector syntax also leans on SQL. Note
that neither the [SQL standard (ISO 9075)][iso-9075] nor the JMS standard nor
any other SQL dialect is used as a normative foundation or to constrain the
expression syntax defined in this specification, but the syntax is informed by
them.
CESQL is a _[total pure functional programming
language][total-programming-language-wiki]_ in order to guarantee the
termination of the evaluation of the expression. It features a type system
correlated to the [CloudEvents type
system][ce-type-system], and it features boolean and arithmetic operations,
as well as built-in functions for string manipulation.
The language is not constrained to a particular execution environment, which
means it might run in a source, in a producer, or in an intermediary, and it
can be implemented using any technology stack.
The CloudEvents Expression Language assumes the input always includes, but is
not limited to, a single valid and type-checked CloudEvent instance. An
expression MUST NOT mutate the value of the input CloudEvent instance, nor any
of the other input values. The evaluation of an expression observes the concept
of [referential transparency][referential-transparency-wiki]. This means that
any part of an expression can be replaced with its output value and the overall
result of the expression will be unchanged. The primary output of a CESQL
expression evaluation is always a _boolean_, an _integer_ or a _string_. The
secondary output of a CESQL expression evaluation is a set of errors which
occurred during evaluation. This set MAY be empty, indicating that no error
occurred during execution of the expression. The values used by CESQL engines
to represent a set of errors (empty or not) are out of the scope of this
specification.
The CloudEvents Expression Language doesn't support the handling of the data
field of the CloudEvent instances, due to its polymorphic nature and
complexity. Users that need this functionality ought to use other more
appropriate tools.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to the Subscriptions API
CESQL can be used as a [filter dialect][subscriptions-filter-dialect] to
filter on the input values.
When used as a filter predicate, the expression output value MUST be a
_Boolean_. If the output value is not a _Boolean_, or any errors are returned,
the event MUST NOT pass the filter. Due to the requirement that events MUST NOT
pass the filter if any errors occur, when used in a filtering context the CESQL
engine SHOULD follow the "fail fast mode" error handling described in section
4.1.
## 2. Language syntax
The grammar of the language is defined using the EBNF Notation from [W3C XML
specification][ebnf-xml-spec].
Although keywords are defined using uppercase characters in the EBNF, they are
case-insensitive. For example:
```
int(hop) < int(ttl) and int(hop) < 1000
```
This has the same syntactical meaning as:
```
INT(hop) < INT(ttl) AND INT(hop) < 1000
```
### 2.1 Expression
The root of the expression is the `expression` rule:
```ebnf
expression ::= value-identifier | literal | unary-operation | binary-operation | function-invocation | like-operation | exists-operation | in-operation | ( "(" expression ")" )
```
Nested expressions MUST be correctly parenthesized.
### 2.2. Value identifiers and literals
Value identifiers in CESQL MUST follow the same restrictions as the [Attribute
Naming Convention][ce-attribute-naming-convention] from the CloudEvents spec. A
value identifier SHOULD NOT be greater than 20 characters in length.
```ebnf
lowercase-char ::= [a-z]
value-identifier ::= ( lowercase-char | digit )+
```
CESQL defines 3 different literal kinds: integer numbers, `true` or `false`
booleans, and `''` or `""` delimited strings. Integer literals MUST be valid 32
bit signed integer values.
```ebnf
digit ::= [0-9]
integer-literal ::= ( '+' | '-' )? digit+
boolean-literal ::= "true" | "false" (* Case insensitive *)
string-literal ::= ( "'" ( [^'] | "\'" )* "'" ) | ( '"' ( [^"] | '\"' )* '"')
literal ::= integer-literal | boolean-literal | string-literal
```
Because string literals can be delimited by either `''` or `""`, in the former
case the `'` character has to be escaped, and in the latter the `"` character
has to be escaped, if it is to be used within the string literal.
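For illustration (examples sketched here for this section, not drawn from the
TCK), each of the following pairs of literals denotes the same string:
```
'don\'t'            "don't"
"she said \"hi\""   'she said "hi"'
```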
### 2.3. Operators
CESQL defines boolean unary and binary operators, arithmetic unary and binary
operators, and the `LIKE`, `IN`, `EXISTS` operators.
```ebnf
not-operator ::= "NOT"
unary-logic-operator ::= not-operator
binary-logic-operator ::= "AND" | "OR" | "XOR"
unary-numeric-operator ::= "-"
binary-comparison-operator ::= "=" | "!=" | "<>" | ">=" | "<=" | "<" | ">"
binary-numeric-arithmetic-operator ::= "+" | "-" | "*" | "/" | "%"
like-operator ::= "LIKE"
exists-operator ::= "EXISTS"
in-operator ::= "IN"
unary-operation ::= (unary-numeric-operator | unary-logic-operator) expression
binary-operation ::= expression (binary-comparison-operator | binary-numeric-arithmetic-operator | binary-logic-operator) expression
like-operation ::= expression not-operator? like-operator string-literal
exists-operation ::= exists-operator value-identifier
set-expression ::= "(" expression ("," expression)* ")"
in-operation ::= expression not-operator? in-operator set-expression
```
### 2.4. Function invocations
CESQL supports n-ary function invocation:
```ebnf
char ::= [A-Z] | [a-z]
argument ::= expression
function-identifier ::= char ( "_" | char )*
argument-list ::= argument ("," argument)*
function-invocation ::= function-identifier "(" argument-list? ")"
```
## 3. Language semantics
### 3.1. Type system
The type system contains 3 _primitive_ types:
- _String_: Sequence of Unicode characters.
- _Integer_: A whole number in the range -2,147,483,648 to +2,147,483,647
inclusive. This is the range of a signed, 32-bit, twos-complement encoding.
- _Boolean_: A boolean value of "true" or "false".
For each of the 3 _primitive_ types there is an associated zero value, which
can be thought of as the "default" value for that type:
| Type | Zero Value |
| --------- | ---------- |
| _String_ | `""` |
| _Integer_ | `0` |
| _Boolean_ | `false` |
The types _URI_, _URI Reference_, and _Timestamp_ ([defined in the CloudEvents
specification][ce-type-system]) are represented as _String_.
The type system also includes _Set_, which is an unordered collection of
_Strings_ of arbitrary length. This type can be used in the `IN` operator.
### 3.2. CloudEvent context identifiers and types
Each CloudEvent context attribute and extension MUST be addressable from an
expression using its identifier, as defined by the spec. For example, using
`id` in an expression will address the CloudEvent [id
attribute][ce-id-attribute].
If the value of the attribute or extension is not one of the primitive CESQL
types, it MUST be represented by the _String_ type.
When addressing an attribute not included in the input event, the subexpression
referencing the missing attribute MUST evaluate to the zero value for the
return type of the subexpression, along with a _MissingAttributeError_. For
example, `true AND (missingattribute = "")` would evaluate to
`false, (missingAttributeError)` as the subexpression `missingattribute = ""`
would be false, given that the return type for the `=` operator is _Boolean_.
However, the expression `missingattribute * 5` would evaluate to
`0, (missingAttributeError)` because the return type for the `*` operator is
_Integer_. Note that this does not mean that the _value_ of the missing
attribute is set to the zero value for the type of the missing attribute.
Rather, the subexpression referencing the missing attribute returns the zero
value of the return type of the subexpression. As an example,
`1 / missingattribute` does not raise a _MathError_ due to division by zero;
instead it returns `0, (missingAttributeError)`, as that is the zero value for
the return type of the subexpression.
In cases where the return type of the subexpression cannot be determined by the
CESQL engine, the CESQL engine MUST assume a return type of _Boolean_. In such
cases, the return value would therefore be `false, (missingAttributeError)`.
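To make the zero-value rule concrete, the following is a minimal sketch, in Go,
of how an engine might resolve identifiers. All names in it (`Type`,
`zeroValue`, `evalIdentifier`) are hypothetical and not part of any SDK:
```go
package cesql

import "fmt"

// Type enumerates the three primitive CESQL types from section 3.1.
type Type int

const (
	TypeBoolean Type = iota
	TypeInteger
	TypeString
)

// zeroValue returns the zero value for a CESQL type, per section 3.1.
// Boolean doubles as the fallback when the return type cannot be determined.
func zeroValue(t Type) interface{} {
	switch t {
	case TypeInteger:
		return int32(0)
	case TypeString:
		return ""
	default:
		return false
	}
}

// evalIdentifier resolves an attribute by name. On a miss it returns the zero
// value of the enclosing subexpression's return type, plus the error.
func evalIdentifier(attrs map[string]interface{}, name string, want Type) (interface{}, []error) {
	if v, ok := attrs[name]; ok {
		return v, nil
	}
	return zeroValue(want), []error{fmt.Errorf("MissingAttributeError: %s", name)}
}
```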
### 3.3. Errors
Because every operator and function is total, an expression evaluation flow is
defined statically and cannot be modified by expected or unexpected errors.
Nevertheless, CESQL includes the concept of errors: when an expression is
evaluated and an error arises, the evaluator collects a list of errors,
referred to in this spec as the _error list_, which is then returned together
with the evaluated value of the CESQL expression.
Whenever possible, some error checks SHOULD be done at compile time by the
expression evaluator, in order to prevent runtime errors.
Every CESQL engine MUST support the following error types:
- _ParseError_: An error that occurs during parsing
- _MathError_: An error that occurs during the evaluation of a mathematical
operation
- _CastError_: An error that occurs during an implicit or explicit type cast
- _MissingAttributeError_: An error that occurs when addressing an attribute
which is not present on the input event
- _MissingFunctionError_: An error that occurs due to a call to a function
that has not been registered with the CESQL engine
- _FunctionEvaluationError_: An error that occurs during the evaluation of a
function
- _GenericError_: Any error not specified above
Whenever an operator or function encounters an error, it MUST result in a
"return value" as well as one or more "error values". In cases where there is
not an obvious "return value" for the expression, the operator or function
SHOULD return the zero value for the return type of the operator or function.
### 3.4. Operators
The following tables show the operators that MUST be supported by a CESQL
evaluator. When evaluating an operator, a CESQL engine MUST attempt to cast the
operands to the specified types.
#### 3.4.1. Unary operators
Corresponds to the syntactic rule `unary-operation`:
| Definition | Semantics |
| --------------------------- | ------------------------------- |
| `NOT x: Boolean -> Boolean` | Returns the negated value of `x`        |
| `-x: Integer -> Integer`    | Returns the arithmetic negation of `x`  |
#### 3.4.2. Binary operators
Corresponds to the syntactic rule `binary-operation`:
| Definition | Semantics |
| --------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `x = y: Boolean x Boolean -> Boolean` | Returns `true` if the values of `x` and `y` are equal |
| `x != y: Boolean x Boolean -> Boolean` | Same as `NOT (x = y)` |
| `x <> y: Boolean x Boolean -> Boolean` | Same as `NOT (x = y)` |
| `x AND y: Boolean x Boolean -> Boolean` | Returns the logical and of `x` and `y` |
| `x OR y: Boolean x Boolean -> Boolean` | Returns the logical or of `x` and `y` |
| `x XOR y: Boolean x Boolean -> Boolean` | Returns the logical xor of `x` and `y` |
| `x = y: Integer x Integer -> Boolean` | Returns `true` if the values of `x` and `y` are equal |
| `x != y: Integer x Integer -> Boolean` | Same as `NOT (x = y)` |
| `x <> y: Integer x Integer -> Boolean` | Same as `NOT (x = y)` |
| `x < y: Integer x Integer -> Boolean` | Returns `true` if `x` is less than `y` |
| `x <= y: Integer x Integer -> Boolean` | Returns `true` if `x` is less than or equal to `y` |
| `x > y: Integer x Integer -> Boolean` | Returns `true` if `x` is greater than `y` |
| `x >= y: Integer x Integer -> Boolean` | Returns `true` if `x` is greater than or equal to `y` |
| `x * y: Integer x Integer -> Integer` | Returns the product of `x` and `y` |
| `x / y: Integer x Integer -> Integer` | Returns the result of dividing `x` by `y`, rounded towards `0` to obtain an integer. Returns `0` and a _MathError_ if `y = 0` |
| `x % y: Integer x Integer -> Integer` | Returns the remainder of `x` divided by `y`, where the result has the same sign as `x`. Returns `0` and a _MathError_ if `y = 0` |
| `x + y: Integer x Integer -> Integer` | Returns the sum of `x` and `y` |
| `x - y: Integer x Integer -> Integer` | Returns the value of `y` subtracted from `x` |
| `x = y: String x String -> Boolean` | Returns `true` if the values of `x` and `y` are equal (case sensitive) |
| `x != y: String x String -> Boolean` | Same as `NOT (x = y)` (case sensitive) |
| `x <> y: String x String -> Boolean` | Same as `NOT (x = y)` (case sensitive) |
The AND and OR operators MUST be short-circuit evaluated. This means that
whenever the left operand of the AND operation evaluates to `false`, the right
operand MUST NOT be evaluated. Similarly, whenever the left operand of the OR
operation evaluates to `true`, the right operand MUST NOT be evaluated.
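As an illustrative sketch (not normative), the division and remainder rows
above map almost directly onto Go, whose integer `/` truncates towards zero and
whose `%` takes the sign of the dividend; only the `y = 0` guard needs to be
added:
```go
package cesql

import "errors"

var errMath = errors.New("MathError: division by zero")

// divide implements `x / y`: truncated towards zero (Go's native behaviour);
// on y = 0 it returns the zero value together with a MathError.
func divide(x, y int32) (int32, error) {
	if y == 0 {
		return 0, errMath
	}
	return x / y, nil
}

// remainder implements `x % y`: the result takes the sign of `x`, which again
// matches Go's native `%` operator.
func remainder(x, y int32) (int32, error) {
	if y == 0 {
		return 0, errMath
	}
	return x % y, nil
}
```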
#### 3.4.3. Like operator
| Definition | Semantics |
| ------------------------------------------------ | --------------------------------------------------- |
| `x LIKE pattern: String x String -> Boolean` | Returns `true` if the value x matches the `pattern` |
| `x NOT LIKE pattern: String x String -> Boolean` | Same as `NOT (x LIKE PATTERN)` |
The pattern of the `LIKE` operator MUST be a string literal, and can contain:
- `%` represents zero, one, or multiple characters
- `_` represents a single character
- Any other character, representing exactly that character (case sensitive)
For example, the pattern `_b%` will accept values `ab`, `abc`, `abcd1` but
won't accept values `b` or `acd` or `aBc`.
Both `%` and `_` can be escaped with `\`, in order to be matched literally.
For example, the pattern `abc\%` will match `abc%` but won't match `abcd`.
In cases where the left operand is not a `String`, it MUST be cast to a
`String` before the comparison is made. The pattern of the `LIKE` operator
(that is, the right operand of the operator) MUST be a valid string literal
without casting, otherwise the parser MUST return a parse error.
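One possible implementation strategy (a sketch, not mandated by this
specification) is to translate the pattern into an anchored regular expression,
as the hypothetical Go function below does:
```go
package cesql

import (
	"regexp"
	"strings"
)

// likeToRegexp translates a LIKE pattern into an anchored Go regular
// expression: `%` becomes `.*`, `_` becomes `.`, `\%` and `\_` match the
// literal characters, and everything else is quoted so that it matches
// exactly and case-sensitively.
func likeToRegexp(pattern string) (*regexp.Regexp, error) {
	var sb strings.Builder
	sb.WriteString(`\A`)
	runes := []rune(pattern)
	for i := 0; i < len(runes); i++ {
		switch {
		case runes[i] == '\\' && i+1 < len(runes) &&
			(runes[i+1] == '%' || runes[i+1] == '_'):
			sb.WriteString(regexp.QuoteMeta(string(runes[i+1]))) // escaped wildcard
			i++
		case runes[i] == '%':
			sb.WriteString(`.*`) // zero, one, or multiple characters
		case runes[i] == '_':
			sb.WriteString(`.`) // exactly one character
		default:
			sb.WriteString(regexp.QuoteMeta(string(runes[i])))
		}
	}
	sb.WriteString(`\z`)
	return regexp.Compile(sb.String())
}
```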
#### 3.4.4. Exists operator
| Definition | Semantics |
| ----------------------------------- | --------------------------------------------------------------------------- |
| `EXISTS identifier: Any -> Boolean` | Returns `true` if the attribute `identifier` exists in the input CloudEvent |
Note: `EXISTS` MUST always return `true` for the REQUIRED context attributes
because the input CloudEvent is always assumed valid, e.g. `EXISTS id` MUST
always return `true`.
#### 3.4.5. In operator
| Definition | Semantics |
| ------------------------------------------------------- | -------------------------------------------------------------------------- |
| `x IN (y1, y2, ...): Any x Any^n -> Boolean`, n > 0 | Returns `true` if `x` is equal to an element in the _Set_ of `yN` elements |
| `x NOT IN (y1, y2, ...): Any x Any^n -> Boolean`, n > 0 | Same as `NOT (x IN set)` |
The matching is done using the same semantics as the equal (`=`) operator, but
using the type of `x` as the target type for the implicit type casting.
### 3.5. Functions
CESQL provides the concept of functions, and defines some built-in functions
that every engine MUST implement. An engine SHOULD also allow users to define
their own custom functions; however, the mechanism by which this is done is out
of scope of this specification.
A function is identified by its name, its parameters and the return value. A
function can be variadic, that is, the arity is not fixed.
CESQL allows overloading, that is, the engine MUST be able to distinguish
between two functions defined with the same name but different arity. Because
of implicit casting, no functions with the same name and same arity but
different types are allowed.
A function name MAY have at most one variadic overload definition and only if
the number of initial fixed arguments is greater than the maximum arity of all
other function definitions for that function name.
For example, the following set of definitions are valid and will all be allowed
by the rules:
- `ABC(x): String -> Integer`: Unary function (arity 1).
- `ABC(x, y): String x String -> Integer`: Binary function (arity 2).
- `ABC(x, y, z, ...): String x String x String x String^n -> Integer`: n-ary
function (variable arity), but the initial fixed arguments are at least 3.
But the following set is invalid, so the engine MUST reject them:
- `ABC(x...): String^n -> Integer`: n-ary function (variable arity), but there
are no initial fixed arguments.
- `ABC(x, y, z): String x String x String -> Integer`: Ternary function
(arity 3).
These two are incompatible because the n-ary function `ABC(x...)` cannot be
distinguished in any way from the ternary function `ABC(x, y, z)` if the n-ary
function were called with three arguments. In order for these definitions to be
valid, the n-ary function would need to have at least 4 fixed arguments.
When a function invocation cannot be dispatched, the return value is `false`,
and a _MissingFunctionError_ is also returned.
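A sketch, in Go with hypothetical type names, of a dispatch table that honours
these rules is shown below; when `Dispatch` fails, the engine returns `false`
together with the error, as described above:
```go
package cesql

import (
	"fmt"
	"strings"
)

// Value stands in for a CESQL runtime value (Boolean, Integer, or String).
type Value interface{}

// Function describes one overload: a fixed arity plus an optional variadic tail.
type Function struct {
	Arity    int // number of fixed arguments
	Variadic bool
	Call     func(args []Value) (Value, []error)
}

// Registry maps an upper-cased function name to its overloads.
type Registry map[string][]Function

// Dispatch resolves an invocation by name and argument count. Per the rules
// above at most one overload can match: either an exact fixed arity, or the
// single variadic overload whose fixed arity exceeds all fixed arities.
func (r Registry) Dispatch(name string, nargs int) (Function, error) {
	for _, f := range r[strings.ToUpper(name)] {
		if (!f.Variadic && f.Arity == nargs) || (f.Variadic && nargs >= f.Arity) {
			return f, nil
		}
	}
	return Function{}, fmt.Errorf("MissingFunctionError: %s/%d", name, nargs)
}
```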
The following tables show the built-in functions that MUST be supported by a
CESQL evaluator.
#### 3.5.1. Built-in String manipulation
| Definition | Semantics |
| ------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `LENGTH(x): String -> Integer` | Returns the character length of the String `x`. |
| `CONCAT(x1, x2, ...): String^n -> String`, n >= 0 | Returns the concatenation of `x1` up to `xN`. |
| `CONCAT_WS(delimiter, x1, x2, ...): String x String^n -> String`, n >= 0 | Returns the concatenation of `x1` up to `xN`, using the `delimiter` between each string, but not before `x1` or after `xN`. |
| `LOWER(x): String -> String` | Returns `x` in lowercase. |
| `UPPER(x): String -> String` | Returns `x` in uppercase. |
| `TRIM(x): String -> String` | Returns `x` with leading and trailing whitespaces (as defined by unicode) trimmed. This does not remove any characters which are not unicode whitespace characters, such as control characters. |
| `LEFT(x, y): String x Integer -> String` | Returns a new string with the first `y` characters of `x`, or returns `x` if `LENGTH(x) <= y`. Returns `x` if `y < 0` and a _FunctionEvaluationError_. |
| `RIGHT(x, y): String x Integer -> String` | Returns a new string with the last `y` characters of `x` or returns `x` if `LENGTH(x) <= y`. Returns `x` if `y < 0` and a _FunctionEvaluationError_. |
| `SUBSTRING(x, pos): String x Integer -> String` | Returns the substring of `x` starting from index `pos` (included) up to the end of `x`. Characters' index starts from `1`. If `pos` is negative, the beginning of the substring is `pos` characters from the end of the string. If `pos` is 0, then returns the empty string. Returns the empty string and a _FunctionEvaluationError_ if `pos > LENGTH(x) OR pos < -LENGTH(x)`. |
| `SUBSTRING(x, pos, len): String x Integer x Integer -> String` | Returns the substring of `x` starting from index `pos` (included) of length `len`. Characters' index starts from `1`. If `pos` is negative, the beginning of the substring is `pos` characters from the end of the string. If `pos` is 0, then returns the empty string. If `len` is greater than the maximum substring starting at `pos`, then return the maximum substring. Returns the empty string and a _FunctionEvaluationError_ if `pos > LENGTH(x) OR pos < -LENGTH(x)` or if `len` is negative. |
#### 3.5.2. Built-in Math functions
| Definition | Semantics |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ABS(x): Integer -> Integer` | Returns the absolute value of `x`. If the value of `x` is `-2147483648` (the most negative 32 bit integer value possible), then this returns `2147483647` as well as a _MathError_. |
#### 3.5.3 Function Errors
As specified in 3.3, in the event of an error a function MUST still return a
valid return value for its defined return type. A CESQL engine MUST guarantee
that all built-in functions comply with this. For user defined functions, if
they return one or more errors and fail to provide a valid return value for
their return type the CESQL engine MUST return the zero value for the return
type of the function, along with a _FunctionEvaluationError_.
### 3.6. Evaluation of the expression
Operators and functions MUST be evaluated in order of precedence, and MUST be
evaluated left to right when the precedence is equal. The order of precedence
is as follows (a worked example follows the list):
1. Function invocations
1. Unary operators
1. NOT unary operator
1. `-` unary operator
1. LIKE operator
1. EXISTS operator
1. IN operator
1. Binary operators
1. `*`, `/`, `%` binary operators
1. `+`, `-` binary operators
1. `=`, `!=`, `<>`, `>=`, `<=`, `>`, `<` binary operators
1. AND, OR, XOR binary operators
1. Subexpressions
1. Attributes and literal values
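As an illustration of these rules (a sketch, not drawn from the TCK), the
expression
```
2 + 3 * 4 = 14
```
evaluates to `true`: `*` binds tighter than `+`, which binds tighter than `=`,
so the expression parses as `(2 + (3 * 4)) = 14`.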
AND and OR operations MUST be short-circuit evaluated. When the left operand of
the AND operation evaluates to `false`, the right operand MUST NOT be evaluated.
Similarly, when the left operand of the OR operation evaluates to `true`, the
right operand MUST NOT be evaluated.
### 3.7. Type casting
The following table indicates which type casts a CESQL engine MUST or MUST NOT
support:
| Type | Integer | String | Boolean |
| ------- | ------- | ------ | ------- |
| Integer | N/A | MUST | MUST |
| String | MUST | N/A | MUST |
| Boolean | MUST | MUST | N/A |
For all of the type casts which a CESQL engine MUST support, the semantics
which the engine MUST use are defined as follows:
| Definition | Semantics |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `Integer -> String` | Returns the string representation of the integer value in base 10, without leading `0`s. If the value is less than 0, the '-' character is prepended to the result. |
| `Integer -> Boolean` | Returns `false` if the integer is `0`, and `true` otherwise. |
| `String -> Integer` | Returns the result of interpreting the string as a 32 bit base 10 integer. The string MAY begin with a leading sign '+' or '-'. If the result will overflow or the string is not a valid integer an error is returned along with a value of `0`. |
| `String -> Boolean`  | Returns `true` or `false` if the lower case representation of the string is exactly "true" or "false", respectively. Otherwise returns an error along with a value of `false`. |
| `Boolean -> Integer` | Returns `1` if the boolean is `true`, and `0` if the boolean is `false`. |
| `Boolean -> String` | Returns `"true"` if the boolean is `true`, and `"false"` if the boolean is `false`. |
An example of how _Boolean_ values cast to _String_ combines with the case
insensitivity of CESQL keywords is that:
```
TRUE = "true" AND FALSE = "false"
```
will evaluate to `true`, while
```
TRUE = "TRUE" OR FALSE = "FALSE"
```
will evaluate to `false`.
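For illustration, the MUST-support casts above map closely onto Go's standard
library. The sketch below uses hypothetical function names, with Go errors
standing in for _CastError_:
```go
package cesql

import (
	"errors"
	"strconv"
	"strings"
)

// stringToInteger implements String -> Integer: base 10, 32 bit, optional
// leading sign; overflow or malformed input yields 0 plus a CastError.
func stringToInteger(s string) (int32, error) {
	n, err := strconv.ParseInt(s, 10, 32)
	if err != nil {
		return 0, errors.New("CastError: not a valid 32 bit integer")
	}
	return int32(n), nil
}

// stringToBoolean implements String -> Boolean: only the case-insensitive
// strings "true" and "false" are accepted.
func stringToBoolean(s string) (bool, error) {
	switch strings.ToLower(s) {
	case "true":
		return true, nil
	case "false":
		return false, nil
	default:
		return false, errors.New("CastError: not a boolean")
	}
}

// integerToBoolean implements Integer -> Boolean: false only for 0.
func integerToBoolean(n int32) bool { return n != 0 }

// booleanToString implements Boolean -> String.
func booleanToString(b bool) string {
	if b {
		return "true"
	}
	return "false"
}
```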
When the argument types of an operator/function invocation don't match the
signature of the operator/function being invoked, the CESQL engine MUST try to
perform an implicit cast.
This section defines an **ambiguous** operator as an operator that is
overloaded with another operator definition with the same symbol/name and arity
but different parameter types. Note: a function cannot be ambiguous, as two
functions are not allowed to have the same name and arity.
A CESQL engine MUST apply the following implicit casting rules in order:
1. If the operator/function is unary (argument `x`):
1. If it's not ambiguous, cast `x` to the parameter type.
1. If it's ambiguous, raise a _CastError_ and the cast result is `false`.
1. If the operator is binary (left operand `x` and right operand `y`):
1. If it's not ambiguous, cast `x` and `y` to the corresponding parameter
types.
1. If it's ambiguous, use the type of `y` to search, within the set of
ambiguous operator definitions, for a definition of the operator whose
right parameter type is the type of `y`:
1. If such a definition exists and is unique, cast `x` to the type
of the left parameter
1. Otherwise, raise a _CastError_ and the result is `false`
1. If the function is n-ary with `n > 1`:
1. Cast all the arguments to the corresponding parameter types.
1. If the operator is n-ary with `n > 2`:
1. If it's not ambiguous, cast all the operands to the target type.
1. If it's ambiguous, raise a _CastError_ and the cast result is `false`.
For the `IN` operator, a special rule is defined: the left argument MUST be
used as the target type to eventually cast the set elements.
For example, assuming `MY_STRING_PREDICATE` is a unary predicate accepting a
_String_ parameter and returning a _Boolean_, this expression:
```
MY_STRING_PREDICATE(sequence + 10)
```
MUST be evaluated as follows:
1. `sequence` is cast to _Integer_ using the same semantics as `INT`
1. `sequence + 10` is evaluated
1. the result of `sequence + 10` is cast to _String_ using the same semantics
as `STRING`
1. `MY_STRING_PREDICATE` is invoked with the result of the previous point as
input.
Another example, in this expression `sequence` is cast to _Integer_:
```
sequence = 10
```
`=` is an arity-2 ambiguous operator, because it's defined for
`String x String`, `Boolean x Boolean` and `Integer x Integer`. Because the
right operand of the operator is an _Integer_ and there is only one `=`
definition which uses the type _Integer_ as the right parameter, `sequence`
is cast to _Integer_.
## 4. Implementation suggestions
This section is meant to provide some suggestions for implementing and
adopting the CloudEvents Expression Language. It's non-normative, and hence
none of the text below is mandatory.
### 4.1. Error handling
Because CESQL expressions are total, they always define a return value,
included in the [type system](#31-type-system), even after an error occurs.
When evaluating an expression, the evaluator can operate in two _modes_, in
relation to error handling:
- Fail fast mode: When an error is triggered, the evaluation is interrupted and
returns the error, with the zero value for the return type of the expression.
- Complete evaluation mode: When an error is triggered, the evaluation is
continued, and the evaluation of the expression returns both the result and
the error(s).
Choosing which evaluation mode to adopt and implement depends on the use case.
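A minimal sketch, in Go with hypothetical names, of how the two modes can share
a single code path:
```go
package cesql

// errorCollector supports both modes: in fail fast mode the first error halts
// evaluation; in complete evaluation mode errors accumulate and are returned
// alongside the final result.
type errorCollector struct {
	failFast bool
	errs     []error
}

// report records an error and tells the caller whether to stop evaluating and
// return the zero value for the expression's return type.
func (c *errorCollector) report(err error) (stop bool) {
	c.errs = append(c.errs, err)
	return c.failFast
}
```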
## 5. Examples
_CloudEvent including a subject_
```
EXISTS subject
```
_CloudEvent including the extension 'firstname' with value 'Francesco'_
```
firstname = 'Francesco'
```
_CloudEvent including the extension 'firstname' with value 'Francesco' or the
subject with value 'Francesco'_
```
firstname = 'Francesco' OR subject = 'Francesco'
```
_CloudEvent including the extension 'firstname' with value 'Francesco' and
extension 'lastname' with value 'Guardiani', or the subject with value
'Francesco Guardiani'_
```
(firstname = 'Francesco' AND lastname = 'Guardiani') OR subject = 'Francesco Guardiani'
```
_CloudEvent including the extension 'sequence' with numeric value 10_
```
sequence = 10
```
_CloudEvent including the extension 'hop' and 'ttl', where 'hop' is smaller
than 'ttl'_
```
hop < ttl
```
## 6. References
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
[rfc2119]: https://tools.ietf.org/html/rfc2119
[total-programming-language-wiki]: https://en.wikipedia.org/wiki/Total_functional_programming
[referential-transparency-wiki]: https://en.wikipedia.org/wiki/Referential_transparency
[ce-attribute-naming-convention]: ../cloudevents/spec.md#naming-conventions
[ce-type-system]: ../cloudevents/spec.md#type-system
[ce-id-attribute]: ../cloudevents/spec.md#id
[subscriptions-filter-dialect]: ../subscriptions/spec.md#3241-filter-dialects
[ebnf-xml-spec]: https://www.w3.org/TR/REC-xml/#sec-notation
[modulo-operation-wiki]: https://en.wikipedia.org/wiki/Modulo_operation
[iso-9075]: https://en.wikipedia.org/wiki/ISO/IEC_9075

View File

@ -1,3 +0,0 @@
# CloudEvents - Version 1.0.3-wip
See the [CloudEvents specification](spec.md).

View File

@ -1,160 +0,0 @@
# CloudEvents Release Notes
<!-- no verify-specs -->
## v1.0.2 - 2022/02/05
- Add C# namespace option to proto (#937)
- Tweak SDK requirements wording (#915)
- Re-organized repo directory structure (#904/#905)
- Translate CE specs into Chinese (#899/#898)
- Explicitly state application/json defaulting when serializing (#881)
- Add PowerShell SDK to the list of SDKs (#875)
- WebHook "Origin" header concept clashes with RFC6454 (#870)
- Clarify data encoding in JSON format with a JSON datacontenttype (#861)
- Webhook-Allowed-Origin instead of Webhook-Request-Origin (#836)
- Clean-up of Sampled Rate Extension (#832)
- Remove the conflicting sentence in Kafka Binding (#823/#813)
- Fix the sentences conflict in Kafka Binding (#814)
- Clarify HTTP header value encoding and decoding requirements (#793)
- Expand versioning suggestions in Primer (#799)
- Add support for protobuf batch format (#801)
- Clarify HTTP header value encoding and decoding requirements (#816)
- Primer guidance for dealing with errors (#763)
- Information Classification Extension (#785)
- Clarify the role of partitioning extension in Kafka (#727)
## v1.0.1 - 2020/12/12
- Add protobuf format as a sub-protocol (#721)
- Allow JSON values to be null, meaning unset (#713)
- [Primer] Adding a Non-Goal w.r.t Security (#712)
- WebSockets protocol binding (#697)
- Clarify difference between message mode and HTTP content mode (#672)
- add missing sdks to readme (#666)
- New sdk maintainers rules (#665)
- move sdk governance and cleanup (#663)
- Bring 'datadef' definition into line with specification (#658)
- Add CoC and move some governance docs into 'community' (#656)
- Add blog post around understanding Cloud Events interactions (#651)
- SDK governance draft (#649)
- docs: add common processes for SDK maintainers and contributors (#648)
- Adding Demo for Cloud Events Orchestration (#646)
- Clarified MUST requirement for JSON format (#644)
- Re-Introducing Protocol Buffer Representation (#626)
- Closes #615 (#616)
- Reworked Distributed Tracing Extension (#607)
- Minor updates to Cloud Events Primer (#600)
- Kafka clarifications (#599)
- Proprietary binding spec inclusion guide (#595)
- Adding link to Pub/Sub binding (#588)
- Add some clarity around SDK milestones (#584)
- How to determine binary CE vs random non-CE message (#577)
- Adding Visual Studio Code extension to community open-source doc (#573)
- Specify encoding of kafka header keys and values and message key (#572)
- Fix distributed tracing example (#569)
- Paragraph about nested events to the primer (#567)
- add rules for changing Admins- #564
- Updating JSON Schema (#563)
- Say it's ok to ignore non-MUST recommendations - at your own risk (#562)
- Update Distributed Tracing extension spec links (#550)
- Add Ruby SDK to SDK lists (#548)
## v1.0.0 - 2019/10/24
- Use "producer" and "consumer" instead of "sender" and "receiver"
- Clarification that intermediaries should forward optional attributes
- Remove constraint that attribute names must start with a letter
- Remove suggestion that attributes names should be descriptive and terse
- Clarify that a single occurrence may result in more than one event
- Add an Event Data section (replacing `data`), making event data a top level
concept rather than an attribute
- Introduce an Event Format section
- Define structured-mode and binary-mode messages
- Define protocol binding
- Add extension attributes into "context attributes" description
- Move mention of attribute serialization mechanism from "context attributes"
description into "type system"
- Change "transport" to "protocol" universally
- Introduce the Boolean, URI and URI-reference types for attributes
- Remove the Any and Map types for attributes
- Clarify which Unicode characters are permitted in String attributes
- Require all context attribute values to be one of the listed types,
and state that they may be presented as native types or strings.
- Require `source` to be non-empty; recommend an absolute URI
- Update version number from 0.3 to 1.0
- Clarify that `type` is related to "the originating occurrence"
- Remove `datacontentencoding` section
- Clarify handling of missing `datacontenttype` attribute
- Rename `schemaurl` to `dataschema`, and change type from URI-reference to URI
- Constrain `dataschema` to be non-empty (when present)
- Add details of how `time` can be generated when it can't be determined,
specifically around consistency within a source
- Add details of extension context attribute handling
- Add recommendation that CloudEvent receivers pass on non-CloudEvent metadata
- Sample CloudEvent no longer has a JSON object as a sample extension value
## v0.3 - 2019/06/13
- Update to title format. (#447)
- Remove blank section
- Misc. typo fixes
- Add some guidance on how to construct CloudEvents (#404)
- Size Constraints (#405)
- Type system changes // canonical string representation (#432)
- Add some additional design points for ID (#403)
- Add "Subject" context attribute definition
- Terminology additions for Source, Consumer, Producer and Intermediary (#420)
- Added partitioning extension (#218)
- Issue #331: Clarify scope of source and id uniqueness
- Added link to dataref.md
- Order the attributes section
- Fix bad hrefs for our images (#424)
- Master represents the future version of the spec, use the future version in the text. (#415)
- Adjust examples to include AWS CloudWatch events, our de facto central Event format, and remove valid-but-not-terribly-relevant SNS & Kinesis examples.
- Add dataref attribute and describe Claim Check Pattern (#377)
- Do some clean-up items for PR 406 that were missed
- Introducing "subject" (#406)
- Added datacontentencoding (#387)
- Add Apache RocketMQ proprietary binding with cloudevents
- Ran https://prettier.io/ command on all markdown. (#411)
- Privacy & Security (#399)
- clarify what OPTIONAL means
- Extensions follow attribute naming scheme introduced in #321
- add to README
- fix typo
- The linter can't abide placeholder links 🙄
- Collect proprietary specs in dedicated file
- Move "extension attributes" section down to end of context attributes
- Fixed a broken link in the primer
- Fix broken link
- Consistency: schemaurl uses URI-reference, protobuf uses URI-reference
- HTTP Transport Binding for batching JSON (#370)
- minLength for non-empty attributes, add schemaurl (#372)
- Fix type reference in it's description
- Format data consistently with the paragraph above
- remove duplicate paragraph
- Add Integer as allowed for Any in description of variant type
- s/contenttype/datatype/g
- Add an announcement to our release process
- Transports are responsible for batching messages (#360)
- Specify range of Integer type exactly (#361)
- Fix TOC in http transport
- add KubeCon demo info
## v0.2 - 2018/12/06
- Added HTTP WebHook Specification (#155)
- Added AMQP 1.0 transport and AMQP type system mapping (#157)
- Added MQTT 3.1.1 and 5.0 transport binding (#158)
- Added NATS transport binding (#215)
- Added Distributed Tracing extension (#227)
- Added a Primer (#238)
- Added Sampling extension (#243)
- Defined minimum bar for new protocols/encodings (#254)
- Removed eventTypeVersion (#256)
- Moved serialization of extensions to be top-level JSON attributes (#277)
- Added Sequence extension (#291)
- Added Protobuf transport (#295)
- Defined minimum bar for new extensions (#308)
- Require all attributes to be lowercase and restrict the character set (#321)
- Simplified/shortened the attribute names (#339)
- Added initial draft of an SDK design doc (#356)
## v0.1 - 2018/04/20
- First draft release of the spec!

View File

@ -1,229 +0,0 @@
# CloudEvents SDK Requirements
<!-- no verify-specs -->
The intent of this document is to describe a minimum set of requirements for
new Software Development Kits (SDKs) for CloudEvents. These SDKs are designed
and implemented to enhance and speed up CloudEvents integration. As part of
community efforts, the CloudEvents team is committed to supporting and
maintaining the following SDKs:
- [C#/.NET SDK](https://github.com/cloudevents/sdk-csharp)
- [Go SDK](https://github.com/cloudevents/sdk-go)
- [Java SDK](https://github.com/cloudevents/sdk-java)
- [JavaScript SDK](https://github.com/cloudevents/sdk-javascript)
- [PHP SDK](https://github.com/cloudevents/sdk-php)
- [PowerShell SDK](https://github.com/cloudevents/sdk-powershell)
- [Python SDK](https://github.com/cloudevents/sdk-python)
- [Ruby SDK](https://github.com/cloudevents/sdk-ruby)
- [Rust SDK](https://github.com/cloudevents/sdk-rust)
This document provides guidance and requirements for SDK authors, and is
intended to be kept up to date with the CloudEvents spec.
The SDKs are community-driven activities and are (somewhat) distinct from the
CloudEvents specification itself. In other words, while ideally the SDKs are
expected to keep up with changes to the specification, it is not a hard
requirement that they do so. It will be contingent on the specific SDK's
maintainers finding the time.
## Contribution Acceptance
Being an open source community, the CloudEvents team is open to new members
as well as to their contributions. In order to ensure that an SDK is going to
be supported and maintained, the CloudEvents community would like to ensure
that:
- Each SDK has active points of contact.
- Each SDK supports the latest (N) and N-1 major releases of the
  [CloudEvent spec](spec.md)\*.
- Within the scope of a major release, only support for the latest minor
version is needed.
Support for release candidates is not required, but strongly encouraged.
\* Note: v1.0 is a special case and it is recommended that as long as v1.0
is the latest version, SDKs should also support v0.3.
## Technical Requirements
Each SDK MUST meet these requirements:
- Supports CloudEvents at spec milestones and the ongoing development version.
- Encode a canonical Event into a transport specific encoded message.
- Decode transport specific encoded messages into a Canonical Event.
- Idiomatic usage of the programming language.
- Using current language version(s).
- Supports HTTP transport renderings in both `structured` and `binary`
content mode.
### Object Model Structure Guidelines
Each SDK will provide a generic CloudEvents class/object/structure that
represents the canonical form of an Event.
The SDK should enable users to bypass implementing transport specific encoding
and decoding of the CloudEvents `Event` object. The general flow for Objects
should be:
```
Event (-> Message) -> Transport
```
and
```
Transport (-> Message) -> Event
```
An SDK is not required to implement a wrapper around the transport; the focus
should be on allowing programming models to work with the high level `Event`
object, and on providing tools to take the `Event` and turn it into something
that can be used with the selected transport implementation.
At a high level, the SDK needs to be able to help with the following tasks:
1. Compose an Event.
1. Encode an Event given a transport and encoding (into a Transport Message if
appropriate).
1. Decode an Event given a transport specific message, request or response (into
a Transport Message if appropriate).
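As an illustrative, non-normative sketch of these tasks, the following Python
fragment composes a minimal hypothetical `Event`, validates it, and encodes it
into a structured-mode message that a transport could carry. The `Event` class
and helper names are assumptions for illustration, not part of any SDK:

```python
from dataclasses import dataclass, field
import json
import uuid

@dataclass
class Event:
    """Hypothetical canonical CloudEvent; illustrative only."""
    type: str
    source: str
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    specversion: str = "1.0"
    data: object = None

    def validate(self):
        # The spec makes id, source, type and specversion REQUIRED.
        for attr in ("id", "source", "type", "specversion"):
            if not getattr(self, attr):
                raise ValueError(f"missing required attribute: {attr}")

def to_structured_message(event: Event):
    """Event (-> Message): returns (content type, payload) for a transport."""
    event.validate()
    body = {k: v for k, v in vars(event).items() if v is not None}
    return "application/cloudevents+json", json.dumps(body).encode("utf-8")

# Compose an Event, then hand the encoded Message to the chosen transport.
content_type, payload = to_structured_message(
    Event(type="com.example.someevent", source="/mycontext"))
```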
#### Compose an Event
Provide a convenient way to compose both a single message and many messages.
Implementers will need a way to quickly build up and convert their event data
into a CloudEvents-encoded Event. In practice there tend to be two aspects to
event composition:
1. Event Creation
- "I have this data that is not formatted as a CloudEvent and I want it to be."
1. Event Mutation
- "I have a CloudEvents formatted Event and I need it to be a different Event."
- "I have a CloudEvents formatted Event and I need to mutate the Event."
Event creation is highly idiomatic to the SDK language.
Event mutation tends to be solved with an accessor pattern, like getters and
setters. But direct key access could be leveraged, or named-key accessor
functions.
In either case, there MUST be a method for validating the resulting Event object
based on the parameters set, most importantly the CloudEvents spec version.
#### Encode/Decode an Event
Each SDK MUST support encoding and decoding an Event with regards to a transport
and encoding:
- Each SDK MUST support structured-mode messages for each transport that it
supports.
- Each SDK SHOULD support binary-mode messages for each transport that it
supports.
- Each SDK SHOULD support batch-mode messages for each transport that it
supports (where the event format and transport combination supports batch mode).
- Each SDK SHOULD indicate which modes it supports for each supported event
format, both in the [table below](#feature-support) and in any SDK-specific
documentation provided.
Note that when decoding an event, media types MUST be matched
case-insensitively, as specified in
[RFC 2045](https://tools.ietf.org/html/rfc2045).
#### Data
Data access from the event has some considerations: the Event at rest could
be encoded in `base64` form, as structured data, or in a wire format like
`json`. An SDK MUST provide a method for unpacking the data from these formats
into a native format.
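A non-normative Python sketch of such an unpacking method follows; the
function name and the JSON-detection heuristic are assumptions for
illustration:

```python
import base64
import json

def unpack_data(data, datacontenttype=None, is_base64=False):
    """Unpack event data from its at-rest form into a native representation."""
    if is_base64 and isinstance(data, str):
        data = base64.b64decode(data)   # base64 text -> raw bytes
    if datacontenttype and "json" in datacontenttype and isinstance(data, (str, bytes)):
        return json.loads(data)         # JSON wire format -> native dict/list
    return data                         # already structured, or opaque bytes

print(unpack_data('{"level": 3}', "application/json"))            # {'level': 3}
print(unpack_data("aGVsbG8=", "application/octet-stream", True))  # b'hello'
```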
#### Extensions
Support for CloudEvents extensions is again idiomatic to each language, but a
method that mirrors the data access pattern tends to work well.
#### Validation
Validation MUST be possible on an individual Event. Validation MUST take into
account the spec version, and all the requirements put in-place by the spec at
each version.
SDKs SHOULD perform validation on context attribute values provided to it by
the SDK user. This will help ensure that only valid CloudEvents are generated.
## Documentation
Each SDK must provide examples using at least HTTP transport of:
- Composing an Event.
- Encoding and sending a composed Event.
- Receiving and decoding an Event.
## Feature Support
Each SDK must update the following "support table" periodically to ensure it
accurately reflects the status of each SDK's support for the stated features.
<!--
Do these commands in vi with the cursor after these comments.
Easiest to edit table by first doing this:
:g/^|/s/ :heavy_check_mark: / :y: /g
and making the window wide enough that lines don't wrap. Then it should look nice.
Undo it when done:
:g/^|/s/ :y: / :heavy_check_mark: /g
-->
| Feature | C# | Go | Java | JS | PHP | PS | Python | Ruby | Rust |
|:----------------------------------------------------------------------------------------------------------------------------------------------| :-: | :-: | :--: | :-: | :-: | :-: | :----: | :--: | :--: |
| **[v1.0](https://github.com/cloudevents/spec/tree/v1.0)** |
| [CloudEvents Core](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Event Formats |
| [Avro](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/avro-format.md) | :heavy_check_mark: | | :x: | :x: | | | | :x: | :x: |
| [Avro Compact](https://github.com/cloudevents/spec/blob/main/cloudevents/working-drafts/avro-compact-format.md) | :heavy_check_mark: | | :x: | :x: | | | | | :x: |
| [JSON](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Protobuf ](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/protobuf-format.md) | :heavy_check_mark: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| Bindings / Content Modes |
| [AMQP Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/amqp-protocol-binding.md#31-binary-content-mode) | :heavy_check_mark: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| [AMQP Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/amqp-protocol-binding.md#32-structured-content-mode) | :heavy_check_mark: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| [HTTP Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/http-protocol-binding.md#31-binary-content-mode) | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [HTTP Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/http-protocol-binding.md#32-structured-content-mode) | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [HTTP Batch](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/http-protocol-binding.md#33-batched-content-mode) | :heavy_check_mark: | | :x: | :x: | | | | :heavy_check_mark: | :heavy_check_mark: |
| [Kafka Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/kafka-protocol-binding.md#32-binary-content-mode) | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :x: | :heavy_check_mark: |
| [Kafka Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/kafka-protocol-binding.md#33-structured-content-mode) | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :x: | :heavy_check_mark: |
| [MQTT v5 Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/mqtt-protocol-binding.md#31-binary-content-mode) | :x: | | :x: | :heavy_check_mark: | | | | :x: | :x: |
| [MQTT Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/mqtt-protocol-binding.md#32-structured-content-mode) | :heavy_check_mark: | | :x: | :heavy_check_mark: | | | | :x: | :x: |
| [NATS Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/nats-protocol-binding.md) | :x: | | :x: | :x: | | | | :x: | :heavy_check_mark: |
| [NATS Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/nats-protocol-binding.md) | :x: | | :x: | :x: | | | | :x: | :heavy_check_mark: |
| [WebSockets Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/websockets-protocol-binding.md) | :x: | | :x: | :heavy_check_mark: | | | | :x: | :x: |
| [WebSockets Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/websockets-protocol-binding.md) | :x: | | :x: | :heavy_check_mark: | | | | :x: | :x: |
| Proprietary Bindings |
| [RocketMQ](https://github.com/apache/rocketmq-externals/blob/master/rocketmq-cloudevents-binding/rocketmq-transport-binding.md) | :x: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| [RabbitMQ](https://github.com/knative-extensions/eventing-rabbitmq/blob/main/cloudevents-protocol-spec/spec.md) | :x: | | | | | | | | |
| |
| **[v0.3](https://github.com/cloudevents/spec/tree/v0.3)** |
| [CloudEvents Core](https://github.com/cloudevents/spec/blob/v0.3/spec.md) | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Event Formats |
| [AMQP](https://github.com/cloudevents/spec/blob/v0.3/amqp-format.md) | :x: | | :x: | :x: | | | | :x: | :x: |
| [JSON](https://github.com/cloudevents/spec/blob/v0.3/json-format.md) | :x: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [Protobuf](https://github.com/cloudevents/spec/blob/v0.3/protobuf-format.md) | :x: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| Bindings / Content Modes |
| [AMQP Binary](https://github.com/cloudevents/spec/blob/v0.3/amqp-transport-binding.md#31-binary-content-mode) | :x: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| [AMQP Structured](https://github.com/cloudevents/spec/blob/v0.3/amqp-transport-binding.md#32-structured-content-mode) | :x: | | :heavy_check_mark: | :x: | | | | :x: | :x: |
| [HTTP Binary](https://github.com/cloudevents/spec/blob/v0.3/http-transport-binding.md) | :x: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [HTTP Structured](https://github.com/cloudevents/spec/blob/v0.3/http-transport-binding.md) | :x: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| [HTTP Batch](https://github.com/cloudevents/spec/blob/v0.3/http-transport-binding.md) | :x: | | :x: | :x: | | | | :heavy_check_mark: | :heavy_check_mark: |
| [Kafka Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/kafka-protocol-binding.md#32-binary-content-mode) | :x: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :x: | :heavy_check_mark: |
| [Kafka Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/kafka-protocol-binding.md#33-structured-content-mode) | :x: | | :heavy_check_mark: | :heavy_check_mark: | | | :heavy_check_mark: | :x: | :heavy_check_mark: |
| [MQTT v5 Binary](https://github.com/cloudevents/spec/blob/v0.3/mqtt-transport-binding.md) | :x: | | :x: | :x: | | | | :x: | :x: |
| [MQTT Structured](https://github.com/cloudevents/spec/blob/v0.3/mqtt-transport-binding.md) | :x: | | :x: | :x: | | | | :x: | :x: |
| [NATS Binary](https://github.com/cloudevents/spec/blob/v0.3/nats-transport-binding.md) | :x: | | :x: | :x: | | | | :x: | :heavy_check_mark: |
| [NATS Structured](https://github.com/cloudevents/spec/blob/v0.3/nats-transport-binding.md) | :x: | | :x: | :x: | | | | :x: | :heavy_check_mark: |
| [WebSockets Binary](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/websockets-protocol-binding.md) | :x: | | :x: | :heavy_check_mark: | | | | :x: | :x: |
| [WebSockets Structured](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/bindings/websockets-protocol-binding.md) | :x: | | :x: | :heavy_check_mark: | | | | :x: | :x: |
| Proprietary Bindings |
| [RocketMQ](https://github.com/apache/rocketmq-externals/blob/master/rocketmq-cloudevents-binding/rocketmq-transport-binding.md) | :x: | | :heavy_check_mark: | :x: | | | | :x: | :x: |


@ -1,16 +0,0 @@
# CloudEvents Adapters
<!-- no verify-specs -->
Not all event producers will produce CloudEvents natively. As a result,
some "adapter" might be needed to convert these events into CloudEvents.
This will typically mean extracting metadata from the events to be used as
CloudEvents attributes. In order to promote interoperability across multiple
implementations of these adapters, the following documents show the proposed
algorithms that should be used:
- [AWS S3](./aws-s3.md)
- [AWS SNS](./aws-sns.md)
- [CouchDB](./couchdb.md)
- [GitHub](./github.md)
- [GitLab](./gitlab.md)


@ -1,34 +0,0 @@
# AWS Simple Storage Service CloudEvents Adapter
This document describes how to convert
[AWS S3 events](https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html)
into CloudEvents.
AWS S3 event documentation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
All S3 events are converted into CloudEvents using the
same pattern as described in the following table:
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------- |
| `id` | "responseElements.x-amz-request-id" + `.` + "responseElements.x-amz-id-2" |
| `source` | "eventSource" value + `.` + "awsRegion" value + `.` + "s3.bucket.name" value |
| `specversion` | `1.0` |
| `type` | `com.amazonaws.s3.` + "eventName" value |
| `datacontenttype` | S3 event type (e.g. `application/json`) |
| `dataschema` | Omit |
| `subject` | "s3.object.key" value |
| `time` | "eventTime" value |
| `data` | S3 event |
Comments:
- While the "eventSource" value will always be static (`aws:s3`) when the
  event is coming from S3, if some other cloud provider supports the S3 event
  format, this value is expected to be something specific to their
  environment rather than `aws:s3`.
- Consumers of these events will therefore be able to know if the event
is an S3 type of event (regardless of whether it is coming from S3 or
an S3-compatible provider) by detecting the `com.amazonaws.s3` prefix
on the `type` attribute.
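A non-normative Python sketch of the mapping above, for a single record taken
from an S3 notification payload (field names follow the S3 notification
structure linked above):

```python
def s3_record_to_cloudevent(record: dict) -> dict:
    """Build CloudEvents context attributes from one S3 notification record."""
    resp = record["responseElements"]
    return {
        "id": resp["x-amz-request-id"] + "." + resp["x-amz-id-2"],
        "source": ".".join([record["eventSource"],
                            record["awsRegion"],
                            record["s3"]["bucket"]["name"]]),
        "specversion": "1.0",
        "type": "com.amazonaws.s3." + record["eventName"],
        "datacontenttype": "application/json",
        "subject": record["s3"]["object"]["key"],
        "time": record["eventTime"],
        "data": record,                 # the S3 event itself
    }
```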


@ -1,53 +0,0 @@
# Amazon Simple Notification Service CloudEvents Adapter
This document describes how to convert [AWS SNS messages][sns-messages] into CloudEvents.
Amazon SNS MAY send a subscription confirmation, notification, or unsubscribe confirmation
message to your HTTP/HTTPS endpoints.
Each section below describes how to determine the CloudEvents attributes
based on the specified type of SNS messages.
### Subscription Confirmation
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------- |
| `id` | "x-amz-sns-message-id" value |
| `source` | "x-amz-sns-topic-arn" value |
| `specversion` | `1.0` |
| `type` | `com.amazonaws.sns.` + "x-amz-sns-message-type" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | Omit |
| `time` | "Timestamp" value |
| `data` | HTTP payload |
### Notification
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------- |
| `id` | "x-amz-sns-message-id" value |
| `source` | "x-amz-sns-subscription-arn" value |
| `specversion` | `1.0` |
| `type` | `com.amazonaws.sns.` + "x-amz-sns-message-type" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "Subject" value (if present) |
| `time` | "Timestamp" value |
| `data` | HTTP payload |
### Unsubscribe Confirmation
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------- |
| `id` | "x-amz-sns-message-id" value |
| `source` | "x-amz-sns-subscription-arn" value |
| `specversion` | `1.0` |
| `type` | `com.amazonaws.sns.` + "x-amz-sns-message-type" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | Omit |
| `time` | "Timestamp" value |
| `data` | HTTP payload |
[sns-messages]: https://docs.aws.amazon.com/sns/latest/dg/sns-message-and-json-formats.html


@ -1,74 +0,0 @@
# CouchDB CloudEvents Adapter
This document describes how to convert:
- [CouchDB Document events](http://docs.couchdb.org/en/stable/api/database/changes.html) and
- [CouchDB Database events](http://docs.couchdb.org/en/stable/api/server/common.html#db-updates)
into CloudEvents.
Each section below describes how to determine the CloudEvents attributes
based on the event type.
## /db/\_changes
### Update
| CloudEvents Attribute | Value |
| :-------------------- | :------------------------------------ |
| `id` | The event sequence identifier (`seq`) |
| `source` | The server URL / `db` |
| `specversion` | `1.0` |
| `type` | `org.apache.couchdb.document.updated` |
| `datacontenttype` | `application/json` |
| `subject` | The document identifier (`id`) |
| `time` | Current time |
| `data` | `changes` value (array of `revs`) |
### Delete
| CloudEvents Attribute | Value |
| :-------------------- | :------------------------------------ |
| `id` | The event sequence identifier (`seq`) |
| `source` | The server URL / `db` |
| `specversion` | `1.0` |
| `type` | `org.apache.couchdb.document.deleted` |
| `datacontenttype` | `application/json` |
| `subject` | The document identifier (`id`) |
| `time` | Current time |
| `data` | `changes` value (array of `revs`) |
## /\_db_updates
### Create
| CloudEvents Attribute | Value |
| :-------------------- | :------------------------------------ |
| `id` | The event sequence identifier (`seq`) |
| `source` | The server URL |
| `specversion` | `1.0` |
| `type` | `org.apache.couchdb.database.created` |
| `subject` | The database name (`db_name`) |
| `time` | Current time |
### Update
| CloudEvents Attribute | Value |
| :-------------------- | :------------------------------------ |
| `id` | The event sequence identifier (`seq`) |
| `source` | The server URL |
| `specversion` | `1.0` |
| `type` | `org.apache.couchdb.database.updated` |
| `subject` | The database name (`db_name`) |
| `time` | Current time |
### Delete
| CloudEvents Attribute | Value |
| :-------------------- | :------------------------------------ |
| `id` | The event sequence identifier (`seq`) |
| `source` | The server URL |
| `specversion` | `1.0` |
| `type` | `org.apache.couchdb.database.deleted` |
| `subject` | The database name (`db_name`) |
| `time` | Current time |

File diff suppressed because it is too large.


@ -1,166 +0,0 @@
# GitLab CloudEvents Adapter
This document describes how to convert
[GitLab webhook events](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events)
into CloudEvents.
GitLab webhook event documentation:
https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events
Each section below describes how to determine the CloudEvents attributes
based on the specified event.
## Push Event
| CloudEvents Attribute | Value |
| :-------------------- | :--------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "repository.homepage" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.push` |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "checkout_sha" value |
| `time` | Current time |
| `data` | Content of HTTP request body |
## Tag Event
| CloudEvents Attribute | Value |
| :-------------------- | :--------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "repository.homepage" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.tag_push` |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "ref" value |
| `time` | Current time |
| `data` | Content of HTTP request body |
## Issue Event
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "repository.homepage" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.issue.` + "object_attributes.state" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.iid" value |
| `time` | Current time |
| `data` | Content of HTTP request body |
## Comment on Commit Event
| CloudEvents Attribute | Value |
| :-------------------- | :--------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "commit.url" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.note.commit` |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.id" value |
| `time` | "object_attributes.created_at" value |
| `data` | Content of HTTP request body |
## Comment on Merge Request Event
| CloudEvents Attribute | Value |
| :-------------------- | :----------------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "object_attributes.url" value, without the `#note\_...` part |
| `specversion` | `1.0` |
| `type` | `com.gitlab.note.merge_request` |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.id" value |
| `time` | "object_attributes.created_at" value |
| `data` | Content of HTTP request body |
## Comment on Issue Event
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "object_attributes.url" value without the `#note\_...` part |
| `specversion` | `1.0` |
| `type` | `com.gitlab.note.issue` |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.id" value |
| `time` | "object_attributes.created_at" value |
| `data` | Content of HTTP request body |
## Comment on Code Snippet Event
| CloudEvents Attribute | Value |
| :-------------------- | :---------------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "object_attributes.url" value without the `#note\_...` part |
| `specversion` | `1.0` |
| `type` | `com.gitlab.note.snippet` |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.id" value |
| `time` | "object_attributes.created_at" value |
| `data` | Content of HTTP request body |
## Merge Request Event
| CloudEvents Attribute | Value |
| :-------------------- | :------------------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "repository.homepage" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.merge_request.` + "object_attributes.action" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.iid" value |
| `time` | "object_attributes.created_at" value |
| `data` | Content of HTTP request body |
## Wiki Page Event
| CloudEvents Attribute | Value |
| :-------------------- | :--------------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "project.web_url" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.wiki_page.` + "object_attributes.action" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.slug" value |
| `time` | Current time |
| `data` | Content of HTTP request body |
## Pipeline Event
| CloudEvents Attribute | Value |
| :-------------------- | :-------------------------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "project.web_url" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.pipeline.` + "object_attributes.status" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "object_attributes.id" value |
| `time` | Current time |
| `data` | Content of HTTP request body |
## Job Event
| CloudEvents Attribute | Value |
| :-------------------- | :--------------------------------------- |
| `id` | Generate a new unique value, e.g. a UUID |
| `source` | "repository.homepage" value |
| `specversion` | `1.0` |
| `type` | `com.gitlab.job.` + "job_status" value |
| `datacontenttype` | `application/json` |
| `dataschema` | Omit |
| `subject` | "job_id" value |
| `time` | Current time |
| `data` | Content of HTTP request body |


@ -1,345 +0,0 @@
# AMQP Protocol Binding for CloudEvents - Version 1.0.3-wip
## Abstract
The AMQP Protocol Binding for CloudEvents defines how events are mapped to
OASIS AMQP 1.0 ([OASIS][oasis-amqp-1.0]; ISO/IEC 19464:2014) messages.
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to AMQP](#12-relation-to-amqp)
- 1.3. [Content Modes](#13-content-modes)
- 1.4. [Event Formats](#14-event-formats)
- 1.5. [Security](#15-security)
2. [Use of CloudEvents Attributes](#2-use-of-cloudevents-attributes)
3. [AMQP Message Mapping](#3-amqp-message-mapping)
- 3.1. [Binary Content Mode](#31-binary-content-mode)
- 3.2. [Structured Content Mode](#32-structured-content-mode)
4. [References](#4-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be used in
[AMQP][oasis-amqp-1.0] messages.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to AMQP
This specification does not prescribe rules constraining transfer or settlement
of event messages with AMQP; it solely defines how CloudEvents are expressed as
AMQP 1.0 messages.
AMQP-based messaging and eventing infrastructures often provide higher-level
programming-level abstractions that do not expose all AMQP protocol elements, or
map AMQP protocol elements or names to proprietary constructs. This
specification uses AMQP terminology, and implementers can refer to the
respective infrastructure's AMQP documentation to determine the mapping into a
programming-level abstraction.
This specification assumes use of the default AMQP [message
format][message-format].
### 1.3. Content Modes
The CloudEvents specification defines three content modes for transferring
events: _structured_, _binary_ and _batch_. The AMQP protocol binding does not
currently support the batch content mode. Every compliant implementation SHOULD
support both structured and binary modes.
In the _structured_ content mode, event metadata attributes and event data are
placed into the AMQP message's [application data][data] section using an
event format as defined in the CloudEvents [spec][ce].
In the _binary_ content mode, the value of the event `data` is placed into the
AMQP message's [application data][data] section as-is, with the
`datacontenttype` attribute value declaring its media type mapped to the AMQP
`content-type` message property; all other event attributes are mapped to the
AMQP [application-properties][app-properties] section.
### 1.4. Event Formats
Event formats, used with the _structured_ content mode, define how an event is
expressed in a particular data format. All implementations of this specification
that support the _structured_ content mode MUST support the [JSON event
format][json-format].
### 1.5. Security
This specification does not introduce any new security features for AMQP, or
mandate specific existing features to be used.
## 2. Use of CloudEvents Attributes
This specification does not further define any of the [CloudEvents][ce] event
attributes.
One event attribute, `datacontenttype`, is handled specially in _binary_
content mode and mapped onto the AMQP content-type message property. All other
attributes are transferred as metadata without further interpretation.
This mapping is intentionally robust against changes, including the addition and
removal of event attributes, and also accommodates vendor extensions to the
event metadata. Any mention of event attributes other than `datacontenttype`
is exemplary.
## 3. AMQP Message Mapping
The content mode is chosen by the sender of the event, which is either the
requesting or the responding party. Protocol interaction patterns that might
allow solicitation of events using a particular content mode might be defined by
an application, but are not defined here.
The receiver of the event can distinguish between the two modes by inspecting
the `content-type` message property field. If the value is prefixed with the
CloudEvents media type `application/cloudevents` (matched case-insensitively),
indicating the use of a known [event format](#14-event-formats), the receiver
uses _structured_ mode, otherwise it defaults to _binary_ mode.
If a receiver detects the CloudEvents media type, but with an event format that
it cannot handle, for instance `application/cloudevents+avro`, it MAY still
treat the event as binary and forward it to another party as-is.
When the `content-type` message property is not prefixed with the CloudEvents
media type, knowing when the message ought to be parsed as a CloudEvent can be
a challenge. While this specification can not mandate that senders do not
include any of the CloudEvents message properties when the message is not a
CloudEvent, it would be reasonable for a receiver to assume that if the
message has all of the mandatory CloudEvents attributes as message properties
then it's probably a CloudEvent. However, as with all CloudEvent messages, if
it does not adhere to all of the normative language of this specification then
it is not a valid CloudEvent.
### 3.1. Binary Content Mode
The _binary_ content mode accommodates any shape of event data, and allows
for efficient transfer without transcoding effort.
#### 3.1.1. AMQP content-type
For the _binary_ mode, the AMQP `content-type` property field value maps
directly to the CloudEvents `datacontenttype` attribute.
#### 3.1.2. Event Data Encoding
Event data is assumed to contain opaque application data that is
encoded as declared by the `datacontenttype` attribute.
An application is free to hold the information in any in-memory representation
of its choosing, but as it is transposed into AMQP as defined in this
specification, the assumption is that the event data is made available as a
sequence of bytes. The byte sequence is used as the AMQP
[application-data][data] section.
Example:
If the declared `datacontenttype` is `application/json;charset=utf-8`, the
expectation is that the event data is made available as [UTF-8][rfc3629] encoded
JSON text for use in AMQP.
#### 3.1.3. Metadata Headers
All [CloudEvents][ce] attributes with exception of `datacontenttype` MUST be
individually mapped to and from the AMQP
[application-properties][app-properties] section.
CloudEvents extensions that define their own attributes MAY define a secondary
mapping to AMQP properties for those attributes, also in different message
sections, especially if specific attributes or their names need to align with
AMQP features or with other specifications that have explicit AMQP header
bindings. However, they MUST also include the previously defined primary
mapping.
An extension specification that defines a secondary mapping rule for AMQP, and
any revision of such a specification, MUST also define explicit mapping rules
for all other protocol bindings that are part of the CloudEvents core at the
time of the submission or revision.
##### 3.1.3.1 AMQP Application Property Names
CloudEvent attributes MUST be prefixed with either "cloudEvents_" or
"cloudEvents:" for use in the application-properties section.
The '\_' separator character SHOULD be preferred in the interest of
compatibility with JMS 2.0 clients and JMS message selectors where the ':'
separator is not permitted for property identifiers (see section 3.8.1.1 of
[JMS2.0][JMS20]). Any single message MUST use the same separator for all
CloudEvents attributes, but a single queue MAY contain messages which use
different separators.
CloudEvents AMQP consumers SHOULD understand the "cloudEvents" prefix with both
the '\_' and the ':' separators as permitted within the constraints of the
client model. JMS 2.0 AMQP consumers MUST understand the '\_' separator; they
cannot understand the ':' separator as per the cited JMS constraints.
Examples:
* `time` maps to `cloudEvents_time`
* `id` maps to `cloudEvents_id`
* `specversion` maps to `cloudEvents_specversion`
##### 3.1.3.2 AMQP Application Property Values
The value for each AMQP application property is constructed from the respective
attribute's AMQP type representation.
The CloudEvents type system MUST be mapped to AMQP types as follows, with
additional notes below.
| CloudEvents | AMQP |
| ------------- | --------------------------- |
| Boolean | [boolean][amqp-boolean] |
| Integer | [long][amqp-long] |
| String | [string][amqp-string] |
| Binary | [binary][amqp-binary] |
| URI | [string][amqp-string] |
| URI-reference | [string][amqp-string] |
| Timestamp | [timestamp][amqp-timestamp] |
All attribute values in an AMQP binary message MUST either be represented using
the native AMQP types above or the canonical string form.
An implementation
- MUST be able to interpret both forms on an incoming AMQP message
- MAY further relax the requirements for incoming messages (for example
accepting numeric types other than AMQP long), but MUST be strict for outgoing
messages.
- SHOULD use the native AMQP form on outgoing AMQP messages when it is efficient
to do so, but MAY forward values as canonical strings
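As a non-normative illustration of the name and value mapping above, the
following Python sketch builds the application-properties for a message; a
real implementation would use its AMQP library's native types where efficient:

```python
def to_amqp_application_properties(attributes: dict, separator: str = "_") -> dict:
    """Map CloudEvents context attributes to AMQP application-properties."""
    assert separator in ("_", ":")      # one consistent separator per message
    props = {}
    for name, value in attributes.items():
        if name == "datacontenttype":
            continue                    # carried in the AMQP content-type property
        # Native AMQP types (boolean, long, binary, ...) MAY be used; the
        # canonical string form is always acceptable, so fall back to str().
        if not isinstance(value, (bool, int, bytes)):
            value = str(value)
        props["cloudEvents" + separator + name] = value
    return props

print(to_amqp_application_properties(
    {"specversion": "1.0", "id": "1234-1234-1234",
     "datacontenttype": "application/json"}))
# {'cloudEvents_specversion': '1.0', 'cloudEvents_id': '1234-1234-1234'}
```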
#### 3.1.4 Examples
This example shows the _binary_ mode mapping of an event into the [bare
message][message-format] sections of AMQP:
```text
--------------- properties ------------------
to: myqueue
content-type: application/json; charset=utf-8
----------- application-properties -----------
cloudEvents:specversion: 1.0
cloudEvents:type: com.example.someevent
cloudEvents:time: 2018-04-05T03:56:24Z
cloudEvents:id: 1234-1234-1234
cloudEvents:source: /mycontext/subcontext
.... further attributes ...
------------- application-data ---------------
{
... application data ...
}
----------------------------------------------
```
### 3.2. Structured Content Mode
The _structured_ content mode keeps event metadata and data together in the
payload, allowing simple forwarding of the same event across multiple routing
hops, and across multiple protocols.
#### 3.2.1. AMQP Content-Type
The [AMQP `content-type`][content-type] property field is set to the media type
of an [event format](#14-event-formats).
Example for the [JSON format][json-format]:
```text
content-type: application/cloudevents+json; charset=UTF-8
```
#### 3.2.2. Event Data Encoding
The chosen [event format](#14-event-formats) defines how all attributes
and `data` are represented.
The event metadata and data is then rendered in accordance with the event format
specification and the resulting data becomes the AMQP application [data][data]
section.
#### 3.2.3. Metadata Headers
Implementations MAY include the same AMQP application-properties as defined for
the [binary mode](#313-metadata-headers).
#### 3.2.4 Examples
This example shows a JSON event format encoded event:
```text
--------------- properties ------------------------------
to: myqueue
content-type: application/cloudevents+json; charset=utf-8
----------- application-properties ----------------------
------------- application-data --------------------------
{
"specversion" : "1.0",
"type" : "com.example.someevent",
... further attributes omitted ...
"data" : {
... application data ...
}
}
---------------------------------------------------------
```
## 4. References
- [RFC2046][rfc2046] Multipurpose Internet Mail Extensions (MIME) Part Two:
Media Types
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC3629][rfc3629] UTF-8, a transformation format of ISO 10646
- [RFC4627][rfc4627] The application/json Media Type for JavaScript Object
Notation (JSON)
- [RFC7159][rfc7159] The JavaScript Object Notation (JSON) Data Interchange
Format
- [OASIS-AMQP-1.0][oasis-amqp-1.0] OASIS Advanced Message Queuing Protocol
(AMQP) Version 1.0
- [JMS20][JMS20] JSR-343 Java Message Service 2.0
[ce]: ../spec.md
[json-format]: ../formats/json-format.md
[content-type]: https://tools.ietf.org/html/rfc7231#section-3.1.1.5
[json-value]: https://tools.ietf.org/html/rfc7159#section-3
[rfc2046]: https://tools.ietf.org/html/rfc2046
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3629]: https://tools.ietf.org/html/rfc3629
[rfc4627]: https://tools.ietf.org/html/rfc4627
[rfc6839]: https://tools.ietf.org/html/rfc6839#section-3.1
[rfc7159]: https://tools.ietf.org/html/rfc7159
[oasis-amqp-1.0]: http://docs.oasis-open.org/amqp/core/v1.0/amqp-core-overview-v1.0.html
[message-format]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#section-message-format
[data]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-data
[app-properties]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-messaging-v1.0-os.html#type-application-properties
[amqp-boolean]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#type-boolean
[amqp-long]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#type-long
[amqp-binary]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#type-binary
[amqp-string]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#type-string
[amqp-timestamp]: http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html#type-timestamp
[jms20]: https://jcp.org/aboutJava/communityprocess/final/jsr343/index.html


@ -1,540 +0,0 @@
# HTTP Protocol Binding for CloudEvents - Version 1.0.3-wip
## Abstract
The HTTP Protocol Binding for CloudEvents defines how events are mapped to HTTP
1.1 request and response messages.
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to HTTP](#12-relation-to-http)
- 1.3. [Content Modes](#13-content-modes)
- 1.4. [Event Formats](#14-event-formats)
- 1.5. [Security](#15-security)
2. [Use of CloudEvents Attributes](#2-use-of-cloudevents-attributes)
- 2.1. [datacontenttype Attribute](#21-datacontenttype-attribute)
- 2.2. [data](#22-data)
3. [HTTP Message Mapping](#3-http-message-mapping)
- 3.1. [Binary Content Mode](#31-binary-content-mode)
- 3.2. [Structured Content Mode](#32-structured-content-mode)
- 3.3. [Batched Content Mode](#33-batched-content-mode)
4. [References](#4-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be used in [HTTP
1.1][rfc7230] requests and response messages.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to HTTP
This specification does not prescribe rules constraining the use or handling of
specific [HTTP methods][rfc7231-section-4], and it also does not constrain the
[HTTP target resource][rfc7230-section-5-1] that is used for transferring or
soliciting events.
Events can be transferred with all standard or application-defined HTTP
request methods that support payload body transfers. Events can also be
transferred in HTTP responses and with all HTTP status codes that permit
payload body transfers.
All examples herein that show HTTP methods, HTTP target URIs, and HTTP status
codes are non-normative illustrations.
This specification also applies equivalently to HTTP/2 ([RFC7540][rfc7540]),
which is compatible with HTTP 1.1 semantics.
### 1.3. Content Modes
The CloudEvents specification defines three content modes for transferring
events: _structured_, _binary_ and _batch_. The HTTP protocol binding supports
all three content modes. Every compliant implementation SHOULD
support both structured and binary modes.
In the _binary_ content mode, the value of the event `data` is placed into the
HTTP request, or response, body as-is, with the `datacontenttype` attribute
value declaring its media type in the HTTP `Content-Type` header; all other
event attributes are mapped to HTTP headers.
In the _structured_ content mode, event metadata attributes and event data are
placed into the HTTP request or response body using an
[event format](#14-event-formats) that supports
[structured-mode messages][ce-message].
In the _batched_ content mode, event metadata attributes and event data of
multiple events are batched into a single HTTP request or response body using
an [event format](#14-event-formats) that supports batching
[structured-mode messages][ce-message].
### 1.4. Event Formats
Event formats, used with the _structured_ content mode, define how an event is
expressed in a particular data format. All implementations of this specification
that support the _structured_ content mode MUST support the non-batching [JSON
event format][json-format], but MAY support any additional, including
proprietary, formats.
Event formats MAY additionally define how a batch of events is expressed. Those
can be used with the _batched_ content mode.
### 1.5. Security
This specification does not introduce any new security features for HTTP, or
mandate specific existing features to be used. This specification applies
identically to [HTTP over TLS][rfc2818].
## 2. Use of CloudEvents Attributes
This specification does not further define any of the core [CloudEvents][ce]
event attributes.
This mapping is intentionally robust against changes, including the addition and
removal of event attributes, and also accommodates vendor extensions to the
event metadata.
### 2.1. datacontenttype Attribute
The `datacontenttype` attribute is assumed to contain a [RFC2046][rfc2046]
compliant media-type expression.
### 2.2. data
`data` is assumed to contain opaque application data that is encoded as declared
by the `datacontenttype` attribute.
An application is free to hold the information in any in-memory representation
of its choosing, but as the value is transposed into HTTP as defined in this
specification, the assumption is that the `data` value is made available as a
sequence of bytes.
For instance, if the declared `datacontenttype` is
`application/json;charset=utf-8`, the expectation is that the `data` value is
made available as [UTF-8][rfc3629] encoded JSON text to HTTP.
## 3. HTTP Message Mapping
The event binding is identical for both HTTP request and response messages.
The content mode is chosen by the sender of the event, which is either the
requesting or the responding party. Gestures that might allow solicitation of
events using a particular mode might be defined by an application, but are not
defined here. The _batched_ mode MUST NOT be used unless solicited, and the
gesture SHOULD allow the receiver to choose the maximum size of a batch.
The receiver of the event can distinguish between the three modes by inspecting
the `Content-Type` header value. If the value is prefixed with the CloudEvents
media type `application/cloudevents` (matched case-insensitively), indicating
the use of a known [event format](#14-event-formats), the receiver uses
_structured_ mode. If the value is prefixed with `application/cloudevents-batch`,
the receiver uses the _batched_ mode. Otherwise it defaults to _binary_ mode.
If a receiver detects the CloudEvents media type, but with an event format that
it cannot handle, for instance `application/cloudevents+avro`, it MAY still
treat the event as binary and forward it to another party as-is.
When the `Content-Type` header value is not prefixed with the CloudEvents media
type, knowing when the message ought to be parsed as a CloudEvent can be a
challenge. While this specification can not mandate that senders do not include
any of the CloudEvents HTTP headers when the message is not a CloudEvent, it
would be reasonable for a receiver to assume that if the message has all of the
mandatory CloudEvents attributes as HTTP headers then it's probably a
CloudEvent. However, as with all CloudEvent messages, if it does not adhere to
all of the normative language of this specification then it is not a valid
CloudEvent.
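A non-normative Python sketch of this receiver-side dispatch (the function
name is illustrative):

```python
def detect_content_mode(content_type):
    """Choose the content mode from the Content-Type header value."""
    ct = (content_type or "").strip().lower()  # prefix match is case-insensitive
    if ct.startswith("application/cloudevents-batch"):
        return "batched"
    if ct.startswith("application/cloudevents"):   # checked after the batch prefix
        return "structured"
    return "binary"

assert detect_content_mode("application/cloudevents+json; charset=UTF-8") == "structured"
assert detect_content_mode("application/CloudEvents-Batch+json") == "batched"
assert detect_content_mode("application/json") == "binary"
assert detect_content_mode(None) == "binary"
```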
### 3.1. Binary Content Mode
The _binary_ content mode accommodates any shape of event data, and allows
for efficient transfer without transcoding effort.
#### 3.1.1. HTTP Content-Type
For the _binary_ mode, the HTTP `Content-Type` header value corresponds to
(MUST be populated from or written to) the CloudEvents `datacontenttype`
attribute. Note that a `ce-datacontenttype` HTTP header MUST NOT also be
present in the message.
#### 3.1.2. Event Data Encoding
The [`data`](#22-data) byte-sequence is used as the HTTP message body.
#### 3.1.3. Metadata Headers
All other [CloudEvents][ce] attributes, including extensions, MUST be
individually mapped to and from distinct HTTP message headers.
CloudEvents extensions that define their own attributes MAY define a secondary
mapping to HTTP headers for those attributes, especially if specific attributes
need to align with HTTP features or with other specifications that have explicit
HTTP header bindings. Note that these attributes MUST also still appear in the
HTTP message as HTTP headers with the `ce-` prefix as noted in
[HTTP Header Names](#3131-http-header-names).
##### 3.1.3.1. HTTP Header Names
Except where noted, all CloudEvents context attributes, including extensions,
MUST be mapped to HTTP headers with the same name as the attribute name but
prefixed with `ce-`.
Examples:
* `time` maps to `ce-time`
* `id` maps to `ce-id`
* `specversion` maps to `ce-specversion`
Note: per the [HTTP](https://tools.ietf.org/html/rfc7230#section-3.2)
specification, header names are case-insensitive.
##### 3.1.3.2. HTTP Header Values
The value for each HTTP header is constructed from the respective attribute
type's [canonical string representation][ce-types].
Some CloudEvents metadata attributes can contain arbitrary UTF-8 string content,
and per [RFC7230, section 3][rfc7230-section-3], HTTP headers MUST only use
printable characters from the US-ASCII character set, and are terminated by a
CRLF sequence with OPTIONAL whitespace around the header value.
When encoding a CloudEvent as an HTTP message, string values
represented as HTTP header values MUST be percent-encoded as
described below. This is compatible with [RFC3986, section
2.1][rfc3986-section-2-1] but is more specific about what needs
encoding. The resulting string SHOULD NOT be further encoded.
(Rationale: quoted string escaping is unnecessary when every space
and double-quote character is already percent-encoded.)
When decoding an HTTP message into a CloudEvent, any HTTP header
value MUST first be unescaped with respect to double-quoted strings,
as described in [RFC7230, section 3.2.6][rfc7230-section-3-2-6]. A single
round of percent-decoding MUST then be performed as described
below. HTTP headers for CloudEvent attribute values do not support
parenthetical comments, so the initial unescaping only needs to handle
double-quoted values, including processing backslash escapes within
double-quoted values. Header values produced via the
percent-encoding described here will never include double-quoted
values, but they MUST be supported when receiving events, for
compatibility with older versions of this specification which did
not require double-quote and space characters to be percent-encoded.
Percent encoding is performed by considering each Unicode character
within the attribute's canonical string representation. Any
character represented in memory as a [Unicode surrogate
pair][surrogate-pair] MUST be treated as a single Unicode character.
The following characters MUST be percent-encoded:
- Space (U+0020)
- Double-quote (U+0022)
- Percent (U+0025)
- Any characters outside the printable ASCII range of U+0021-U+007E
inclusive
Attribute values are already constrained to prohibit characters in
the range U+0000-U+001F inclusive and U+007F-U+009F inclusive;
however for simplicity and to account for potential future changes,
it is RECOMMENDED that any HTTP header encoding implementation treats
such characters as requiring percent-encoding.
Space and double-quote are encoded to avoid requiring any further
quoting. Percent is encoded to avoid ambiguity with percent-encoding
itself.
Steps to encode a Unicode character:
- Encode the character using UTF-8, to obtain a byte sequence.
- Encode each byte within the sequence as `%xy` where `x` is a
hexadecimal representation of the most significant 4 bits of the byte,
and `y` is a hexadecimal representation of the least significant 4
bits of the byte.
Percent-encoding SHOULD be performed using upper-case for values A-F,
but decoding MUST accept lower-case values.
When performing percent-decoding (when decoding an HTTP message to a
CloudEvent), values that have been unnecessarily percent-encoded MUST be
accepted, but encoded byte sequences which are invalid in UTF-8 MUST be
rejected. (For example, "%C0%A0" is an overlong encoding of U+0020, and
MUST be rejected.)
Example: a header value of "Euro € 😀" SHOULD be encoded as follows:
- The characters, 'E', 'u', 'r', 'o' do not require encoding
- Space, the Euro symbol, and the grinning face emoji require encoding.
They are characters U+0020, U+20AC and U+1F600 respectively.
- The encoded HTTP header value is therefore "Euro%20%E2%82%AC%20%F0%9F%98%80"
where "%20" is the encoded form of space, "%E2%82%AC" is the encoded form
of the Euro symbol, and "%F0%9F%98%80" is the encoded form of the
grinning face emoji.
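A non-normative Python sketch of the encoding rules above (decoding reverses
this with one round of percent-decoding, rejecting byte sequences that are
invalid UTF-8):

```python
def encode_header_value(value: str) -> str:
    """Percent-encode an attribute's canonical string for use as an HTTP header.

    Note: this is not a general URL-encoder; only space, double-quote, percent
    and characters outside printable ASCII are escaped. Python strings are
    sequences of code points, so surrogate pairs need no special handling here.
    """
    out = []
    for ch in value:
        if ch in ' "%' or not ("\u0021" <= ch <= "\u007e"):
            out.extend("%{:02X}".format(b) for b in ch.encode("utf-8"))
        else:
            out.append(ch)
    return "".join(out)

assert encode_header_value("Euro \u20ac \U0001f600") == "Euro%20%E2%82%AC%20%F0%9F%98%80"
```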
#### 3.1.4. Examples
This example shows the _binary_ mode mapping of an event with an HTTP POST
request:
```text
POST /someresource HTTP/1.1
Host: webhook.example.com
ce-specversion: 1.0
ce-type: com.example.someevent
ce-time: 2018-04-05T03:56:24Z
ce-id: 1234-1234-1234
ce-source: /mycontext/subcontext
.... further attributes ...
Content-Type: application/json; charset=utf-8
Content-Length: nnnn
{
... application data ...
}
```
This example shows a response containing an event:
```text
HTTP/1.1 200 OK
ce-specversion: 1.0
ce-type: com.example.someevent
ce-time: 2018-04-05T03:56:24Z
ce-id: 1234-1234-1234
ce-source: /mycontext/subcontext
.... further attributes ...
Content-Type: application/json; charset=utf-8
Content-Length: nnnn
{
... application data ...
}
```
### 3.2. Structured Content Mode
The _structured_ content mode keeps event metadata and data together in the
payload, allowing simple forwarding of the same event across multiple routing
hops, and across multiple protocols.
#### 3.2.1. HTTP Content-Type
The [HTTP `Content-Type`][content-type] header MUST be set to the media type of
an [event format](#14-event-formats).
Example for the [JSON format][json-format]:
```text
Content-Type: application/cloudevents+json; charset=UTF-8
```
#### 3.2.2. Event Data Encoding
The chosen [event format](#14-event-formats) defines how all attributes, and
`data`, are represented.
The event metadata and data is then rendered in accordance with the event format
specification and the resulting data becomes the HTTP message body.
#### 3.2.3. Metadata Headers
Implementations MAY include the same HTTP headers as defined for the
[binary mode](#313-metadata-headers).
All CloudEvents metadata attributes MUST be mapped into the payload, even if
they are also mapped into HTTP headers.
#### 3.2.4. Examples
This example shows a JSON event format encoded event, sent with a PUT request:
```text
PUT /myresource HTTP/1.1
Host: webhook.example.com
Content-Type: application/cloudevents+json; charset=utf-8
Content-Length: nnnn
{
"specversion" : "1.0",
"type" : "com.example.someevent",
... further attributes omitted ...
"data" : {
... application data ...
}
}
```
This example shows a JSON encoded event returned in a response:
```text
HTTP/1.1 200 OK
Content-Type: application/cloudevents+json; charset=utf-8
Content-Length: nnnn
{
"specversion" : "1.0",
"type" : "com.example.someevent",
... further attributes omitted ...
"data" : {
... application data ...
}
}
```
### 3.3. Batched Content Mode
In the _batched_ content mode several events are batched into a single HTTP
request or response body. The chosen [event format](#14-event-formats) MUST
define how a batch is represented, including a suitable media type.
#### 3.3.1. HTTP Content-Type
The [HTTP `Content-Type`][content-type] header MUST be set to the media type of
the batch mode for the [event format](#14-event-formats).
Example for the [JSON Batch format][json-batch-format]:
```text
Content-Type: application/cloudevents-batch+json; charset=UTF-8
```
#### 3.3.2. Event Data Encoding
The chosen [event format](#14-event-formats) defines how a batch of events and
all event attributes, and `data`, are represented.
The batch of events is then rendered in accordance with the event format
specification and the resulting data becomes the HTTP message body.
#### 3.3.3. Examples
This example shows two batched CloudEvents, sent with a PUT request:
```text
PUT /myresource HTTP/1.1
Host: webhook.example.com
Content-Type: application/cloudevents-batch+json; charset=utf-8
Content-Length: nnnn
[
{
"specversion" : "1.0",
"type" : "com.example.someevent",
... further attributes omitted ...
"data" : {
... application data ...
}
},
{
"specversion" : "1.0",
"type" : "com.example.someotherevent",
... further attributes omitted ...
"data" : {
... application data ...
}
}
]
```
This example shows two batched CloudEvents returned in a response:
```text
HTTP/1.1 200 OK
Content-Type: application/cloudevents-batch+json; charset=utf-8
Content-Length: nnnn
[
{
"specversion" : "1.0",
"type" : "com.example.someevent",
... further attributes omitted ...
"data" : {
... application data ...
}
},
{
"specversion" : "1.0",
"type" : "com.example.someotherevent",
... further attributes omitted ...
"data" : {
... application data ...
}
}
]
```
## 4. References
- [RFC2046][rfc2046] Multipurpose Internet Mail Extensions (MIME) Part Two:
Media Types
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC2818][rfc2818] HTTP over TLS
- [RFC3629][rfc3629] UTF-8, a transformation format of ISO 10646
- [RFC3986][rfc3986] Uniform Resource Identifier (URI): Generic Syntax
- [RFC4627][rfc4627] The application/json Media Type for JavaScript Object
Notation (JSON)
- [RFC4648][rfc4648] The Base16, Base32, and Base64 Data Encodings
- [RFC6839][rfc6839] Additional Media Type Structured Syntax Suffixes
- [RFC7159][rfc7159] The JavaScript Object Notation (JSON) Data Interchange
Format
- [RFC7230][rfc7230] Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and
Routing
- [RFC7231][rfc7231] Hypertext Transfer Protocol (HTTP/1.1): Semantics and
Content
- [RFC7540][rfc7540] Hypertext Transfer Protocol Version 2 (HTTP/2)
[ce]: ../spec.md
[ce-message]: ../spec.md#message
[ce-types]: ../spec.md#type-system
[json-format]: ../formats/json-format.md
[json-batch-format]: ../formats/json-format.md#4-json-batch-format
[content-type]: https://tools.ietf.org/html/rfc7231#section-3.1.1.5
[json-value]: https://tools.ietf.org/html/rfc7159#section-3
[json-array]: https://tools.ietf.org/html/rfc7159#section-5
[rfc2046]: https://tools.ietf.org/html/rfc2046
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc2818]: https://tools.ietf.org/html/rfc2818
[rfc3629]: https://tools.ietf.org/html/rfc3629
[rfc3986]: https://tools.ietf.org/html/rfc3986
[rfc3986-section-2-1]: https://tools.ietf.org/html/rfc3986#section-2.1
[rfc4627]: https://tools.ietf.org/html/rfc4627
[rfc4648]: https://tools.ietf.org/html/rfc4648
[rfc6839]: https://tools.ietf.org/html/rfc6839#section-3.1
[rfc7159]: https://tools.ietf.org/html/rfc7159
[rfc7230]: https://tools.ietf.org/html/rfc7230
[rfc7230-section-3]: https://tools.ietf.org/html/rfc7230#section-3
[rfc7230-section-3-2-6]: https://tools.ietf.org/html/rfc7230#section-3.2.6
[rfc7230-section-5-1]: https://tools.ietf.org/html/rfc7230#section-5.1
[rfc7231]: https://tools.ietf.org/html/rfc7231
[rfc7231-section-4]: https://tools.ietf.org/html/rfc7231#section-4
[rfc7540]: https://tools.ietf.org/html/rfc7540
[surrogate-pair]: http://unicode.org/glossary/#surrogate_pair


@ -1,351 +0,0 @@
# Kafka Protocol Binding for CloudEvents - Version 1.0.3-wip
## Abstract
The [Kafka][kafka] Protocol Binding for CloudEvents defines how events are
mapped to [Kafka messages][kafka-message-format].
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to Kafka](#12-relation-to-kafka)
- 1.3. [Content Modes](#13-content-modes)
- 1.4. [Event Formats](#14-event-formats)
- 1.5. [Security](#15-security)
2. [Use of CloudEvents Attributes](#2-use-of-cloudevents-attributes)
- 2.1. [data](#21-data)
3. [Kafka Message Mapping](#3-kafka-message-mapping)
- 3.1. [Key Mapping](#31-key-mapping)
- 3.2. [Binary Content Mode](#32-binary-content-mode)
- 3.3. [Structured Content Mode](#33-structured-content-mode)
4. [References](#4-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be used in the Kafka
protocol as [Kafka messages][kafka-message-format] (aka Kafka records).
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to Kafka
This specification does not prescribe rules constraining transfer or settlement
of event messages with Kafka; it solely defines how CloudEvents are expressed in
the Kafka protocol as [Kafka messages][kafka-message-format].
The Kafka documentation uses "message" and "record" somewhat interchangeably and
therefore the terms are to be considered synonyms in this specification as well.
Conceptually, Kafka is a log-oriented store for records, each holding a singular
key/value pair. The store is commonly partitioned, and the partition for a
record is typically chosen based on the key's value. Kafka clients accomplish
this by using a hash function.
This binding specification defines how attributes and data of a CloudEvent are
mapped to the value and headers sections of a Kafka record.
Generally, the user SHOULD configure the key and/or the partition of the Kafka
record in whatever way makes sense for the use case (e.g. streaming
applications), in order to co-partition values, define relationships between
events, etc. This spec provides an OPTIONAL definition for mapping the key
section of the Kafka record, without requiring the user to implement or use it.
An example use case for this definition is when the sink of the event is a
Kafka topic, but the source is another transport (e.g. HTTP), and the user
needs a way to key the record. As a counterexample, it does not make sense to
use it when both the sink and source are Kafka topics, because this might cause
re-keying of the records.
### 1.3. Content Modes
The CloudEvents specification defines three content modes for transferring
events: _structured_, _binary_ and _batch_. The Kafka protocol binding does not
currently support the batch content mode. Every compliant implementation SHOULD
support both structured and binary modes.
In the _structured_ content mode, event metadata attributes and event data are
placed into the Kafka message value section using an
[event format](#14-event-formats).
In the _binary_ content mode, the value of the event `data` MUST be placed into
the Kafka message's value section as-is, with the `content-type` header value
declaring its media type; all other event attributes MUST be mapped to the Kafka
message's [header section][kafka-message-header].
Implementations that use Kafka 0.11.0.0 and above MAY use either _binary_ or
_structured_ modes. Implementations that use Kafka 0.10.x.x and below MUST only
use _structured_ mode. This is because older versions of Kafka lacked
support for message-level headers.
### 1.4. Event Formats
Event formats, used with the _structured_ content mode, define how an event is
expressed in a particular data format. All implementations of this specification
that support the _structured_ content mode MUST support the [JSON event
format][json-format].
### 1.5. Security
This specification does not introduce any new security features for Kafka, or
mandate specific existing features to be used.
## 2. Use of CloudEvents Attributes
This specification does not further define any of the [CloudEvents][ce] event
attributes.
### 2.1. data
`data` is assumed to contain opaque application data that is encoded as declared
by the `datacontenttype` attribute.
An application is free to hold the information in any in-memory representation
of its choosing, but as the value is transposed into Kafka as defined in this
specification, core Kafka provides data available as a sequence of bytes.
For instance, if the declared `datacontenttype` is
`application/json;charset=utf-8`, the expectation is that the `data` value is
made available as [UTF-8][rfc3629] encoded JSON text.
## 3. Kafka Message Mapping
With Kafka 0.11.0.0 and above, the content mode is chosen by the sender of the
event. Protocol usage patterns that might allow solicitation of events using a
particular content mode might be defined by an application, but are not defined
here.
The receiver of the event can distinguish between the two content modes by
inspecting the `content-type` [Header][kafka-message-header] of the Kafka
message. If the header is present and its value is prefixed with the CloudEvents
media type `application/cloudevents` (matched case-insensitively),
indicating the use of a known [event format](#14-event-formats), the receiver
uses _structured_ mode, otherwise it defaults to _binary_ mode.
If a receiver finds a CloudEvents media type as per the above rule, but with an
event format that it cannot handle, for instance `application/cloudevents+avro`,
it MAY still treat the event as binary and forward it to another party as-is.
When the `content-type` header value is not prefixed with the CloudEvents media
type, knowing when the message ought to be parsed as a CloudEvent can be a
challenge. While this specification can not mandate that senders do not include
any of the CloudEvents headers when the message is not a CloudEvent, it would be
reasonable for a receiver to assume that if the message has all of the mandatory
CloudEvents attributes as headers then it's probably a CloudEvent. However, as
with all CloudEvent messages, if it does not adhere to all of the normative
language of this specification then it is not a valid CloudEvent.
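For illustration only, a receiver's mode detection might look like the
following Python sketch, assuming the record's headers are exposed as a
name-to-string mapping (an assumption about the client library, not a
requirement of this binding).
```python
def detect_content_mode(headers):
    """Classify an incoming Kafka record as 'structured' or 'binary' (sketch)."""
    content_type = headers.get("content-type", "")
    # Structured mode is signaled by the CloudEvents media type prefix,
    # matched case-insensitively; everything else defaults to binary mode.
    if content_type.lower().startswith("application/cloudevents"):
        return "structured"
    return "binary"
```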
### 3.1. Key Mapping
Every implementation MUST, by default, map the user-provided record key to the
Kafka record key.
The 'key' of the Kafka message MAY be populated by a "Key Mapper" function,
which might map the key directly from one of the CloudEvent's attributes, but
might also use information from the application environment, from the
CloudEvent's data or other sources.
The shape and configuration of the "Key Mapper" function is implementation
specific.
Every implementation SHOULD provide an opt-in "Key Mapper" implementation that
maps the [Partitioning](../extensions/partitioning.md) `partitionkey` attribute
value to the 'key' of the Kafka message as-is, if present.
A mapping function MUST NOT modify the CloudEvent. This means that the
aforementioned `partitionkey` attribute MUST still be included with the
transmitted event, if present. It also means that a mapping function that uses
key information from an out-of-band source, like a parameter or configuration
setting, MUST NOT add an attribute to the CloudEvent.
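As a non-normative sketch, the opt-in "Key Mapper" described above could be
implemented as follows; the function name and dictionary-based event
representation are assumptions for illustration.
```python
def partitionkey_mapper(event, fallback=None):
    """Opt-in key mapper: use the 'partitionkey' extension attribute, if
    present, as the Kafka record key. The event itself is never modified."""
    key = event.get("partitionkey", fallback)
    return key.encode("utf-8") if isinstance(key, str) else key
```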
### 3.2. Binary Content Mode
The _binary_ content mode accommodates any shape of event data, and allows for
efficient transfer without transcoding effort.
#### 3.2.1. Content Type
For the _binary_ mode, the header `content-type` property MUST be mapped
directly to the CloudEvents `datacontenttype` attribute.
#### 3.2.2. Event Data Encoding
The [`data`](#21-data) byte-sequence MUST be used as the value of the Kafka
message.
In binary mode, the Kafka representation of a CloudEvent with no `data` is a
Kafka message with no value. In a topic with log compaction enabled, any such
message will represent a _tombstone_ record, as described in the
[Kafka compaction documentation][kafka-log-compaction].
#### 3.2.3. Metadata Headers
All [CloudEvents][ce] attributes and
[CloudEvent Attributes
Extensions](../primer.md#cloudevents-extension-attributes)
with exception of `data` MUST be individually mapped to and from the Header
fields in the Kafka message. Both header keys and header values MUST be encoded
as UTF-8 strings.
##### 3.2.3.1 Property Names
CloudEvent attributes are prefixed with `ce_` for use in the
[message-headers][kafka-message-header] section.
Examples:
* `time` maps to `ce_time`
* `id` maps to `ce_id`
* `specversion` maps to `ce_specversion`
##### 3.2.3.2 Property Values
The value for each Kafka header is constructed from the respective attribute's
Kafka representation, compliant with the [Kafka message
format][kafka-message-format] specification.
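Putting the naming and value rules together, a binary-mode sender might
assemble the record parts as in this sketch. It is library-neutral; the
`(name, bytes)` tuple list is an assumption for illustration, though it matches
the header shape several Kafka clients accept.
```python
def to_kafka_binary(attributes, data):
    """Map CloudEvent attributes (dict) plus raw data bytes to Kafka
    record parts for binary mode (sketch)."""
    headers = []
    for name, value in attributes.items():
        if name == "datacontenttype":
            # datacontenttype maps to the plain 'content-type' header.
            headers.append(("content-type", str(value).encode("utf-8")))
        else:
            # Every other attribute is prefixed with 'ce_', UTF-8 encoded.
            headers.append(("ce_" + name, str(value).encode("utf-8")))
    # An event with no data becomes a record with no value, i.e. a
    # tombstone on a log-compacted topic.
    return headers, (data if data else None)
```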
#### 3.2.4 Example
This example shows the _binary_ mode mapping of an event into the Kafka message.
All other CloudEvents attributes are mapped to Kafka Header fields with prefix
`ce_`.
Mind that `content-type` here refers to the event `data` content carried in the
payload.
```text
------------------ Message -------------------
Topic Name: mytopic
------------------- key ----------------------
Key: mykey
------------------ headers -------------------
ce_specversion: "1.0"
ce_type: "com.example.someevent"
ce_source: "/mycontext/subcontext"
ce_id: "1234-1234-1234"
ce_time: "2018-04-05T03:56:24Z"
content-type: application/avro
.... further attributes ...
------------------- value --------------------
... application data encoded in Avro ...
-----------------------------------------------
```
### 3.3. Structured Content Mode
The _structured_ content mode keeps event metadata and data together in the
payload, allowing simple forwarding of the same event across multiple routing
hops, and across multiple protocols.
#### 3.3.1. Kafka Content-Type
If present, the Kafka message header property `content-type` MUST be set to the
media type of an [event format](#14-event-formats).
Example for the [JSON format][json-format]:
```text
content-type: application/cloudevents+json; charset=UTF-8
```
#### 3.3.2. Event Data Encoding
The chosen [event format](#14-event-formats) defines how all attributes, and
`data`, are represented.
The event metadata and data are then rendered in accordance with the
[event format](#14-event-formats) specification and the resulting data becomes
the Kafka application [data](#21-data) section.
In structured mode, the Kafka representation of a CloudEvent with no `data`
is a Kafka message which still has a data section (containing the attributes
of the CloudEvent). Such a message does _not_ represent a tombstone record in
a topic with log compaction enabled, unlike the representation in binary mode.
#### 3.3.3. Metadata Headers
Implementations MAY include the same Kafka headers as defined for the
[binary mode](#32-binary-content-mode).
#### 3.3.4 Example
This example shows a JSON event format encoded event:
```text
------------------ Message -------------------
Topic Name: mytopic
------------------- key ----------------------
Key: mykey
------------------ headers -------------------
content-type: application/cloudevents+json; charset=UTF-8
------------------- value --------------------
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext/subcontext",
"id" : "1234-1234-1234",
"time" : "2018-04-05T03:56:24Z",
"datacontenttype" : "application/xml",
... further attributes omitted ...
"data" : {
... application data encoded in XML ...
}
}
-----------------------------------------------
```
## 4. References
- [Kafka][kafka] The distributed stream platform
- [Kafka-Message-Format][kafka-message-format] The Kafka format message
- [RFC2046][rfc2046] Multipurpose Internet Mail Extensions (MIME) Part Two:
Media Types
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC3629][rfc3629] UTF-8, a transformation format of ISO 10646
- [RFC7159][rfc7159] The JavaScript Object Notation (JSON) Data Interchange
Format
[ce]: ../spec.md
[json-format]: ../formats/json-format.md
[kafka]: https://kafka.apache.org
[kafka-message-format]: https://kafka.apache.org/documentation/#messageformat
[kafka-message-header]: https://kafka.apache.org/documentation/#recordheader
[kafka-log-compaction]: https://kafka.apache.org/documentation/#design_compactionbasics
[json-value]: https://tools.ietf.org/html/rfc7159#section-3
[rfc2046]: https://tools.ietf.org/html/rfc2046
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3629]: https://tools.ietf.org/html/rfc3629
[rfc7159]: https://tools.ietf.org/html/rfc7159


@ -1,333 +0,0 @@
# MQTT Protocol Binding for CloudEvents - Version 1.0.3-wip
## Abstract
The MQTT Protocol Binding for CloudEvents defines how events are mapped to MQTT
3.1.1 ([OASIS][oasis-mqtt-3.1.1]; ISO/IEC 20922:2016) and MQTT 5.0
([OASIS][oasis-mqtt-5]) messages.
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to MQTT](#12-relation-to-mqtt)
- 1.3. [Content Modes](#13-content-modes)
- 1.4. [Event Formats](#14-event-formats)
- 1.5. [Security](#15-security)
2. [Use of CloudEvents Attributes](#2-use-of-cloudevents-attributes)
- 2.1. [datacontenttype Attribute](#21-datacontenttype-attribute)
- 2.2. [data](#22-data)
3. [MQTT PUBLISH Message Mapping](#3-mqtt-publish-message-mapping)
- 3.1. [Binary Content Mode](#31-binary-content-mode)
- 3.2. [Structured Content Mode](#32-structured-content-mode)
4. [References](#4-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be used in MQTT PUBLISH
([3.1.1][3-publish], [5.0][5-publish]) messages.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to MQTT
This specification does not prescribe rules constraining transfer or settlement
of event messages with MQTT; it solely defines how CloudEvents are expressed as
MQTT PUBLISH messages ([3.1.1][3-publish], [5.0][5-publish]).
### 1.3. Content Modes
The CloudEvents specification defines three content modes for transferring
events: _structured_, _binary_ and _batch_. The MQTT protocol binding does not
currently support the batch content mode. Every compliant implementation SHOULD
support both structured and binary modes.
The _binary_ mode _only_ applies to MQTT 5.0, because of MQTT 3.1.1's lack of
support for custom metadata.
In the _structured_ content mode, event metadata attributes and event data are
placed into the MQTT PUBLISH message payload section using an
[event format](#14-event-formats).
In the _binary_ content mode, the value of the event `data` is placed
into the MQTT PUBLISH message's payload section as-is, with the
`datacontenttype` attribute value declaring its media type in the MQTT
PUBLISH message's [`Content Type`][5-content-type] property; all other
event attributes are mapped to User Property fields.
### 1.4. Event Formats
Event formats, used with the _structured_ content mode, define how an event is
expressed in a particular data format. All implementations of this specification
that support the _structured_ content mode MUST support the [JSON event
format][json-format].
### 1.5. Security
This specification does not introduce any new security features for MQTT, or
mandate specific existing features to be used.
## 2. Use of CloudEvents Attributes
This specification does not further define any of the [CloudEvents][ce] event
attributes.
This mapping is intentionally robust against changes, including the addition and
removal of event attributes, and also accommodates vendor extensions to the
event metadata.
### 2.1. datacontenttype Attribute
The `datacontenttype` attribute is assumed to contain a [RFC2046][rfc2046]
compliant media-type expression.
### 2.2. data
`data` is assumed to contain opaque application data that is
encoded as declared by the `datacontenttype` attribute.
An application is free to hold the information in any in-memory representation
of its choosing, but as the value is transposed into MQTT as defined in this
specification, the assumption is that the `data` value is made
available as a sequence of bytes.
For instance, if the declared `datacontenttype` is
`application/json;charset=utf-8`, the expectation is that the `data`
value is made available as [UTF-8][rfc3629] encoded JSON text for use in MQTT.
## 3. MQTT PUBLISH Message Mapping
With MQTT 5.0, the content mode is chosen by the sender of the event. Protocol
usage patterns that might allow solicitation of events using a particular
content mode might be defined by an application, but are not defined here.
The receiver of the event can distinguish between the two content modes by
inspecting the `Content Type` property of the MQTT PUBLISH message. If the value
of the `Content Type` property is prefixed with the CloudEvents media type
`application/cloudevents`, indicating the use of a known
[event format](#14-event-formats), the receiver uses _structured_ mode,
otherwise it defaults to _binary_ mode.
If a receiver finds a CloudEvents media type as per the above rule, but with an
event format that it cannot handle, for instance `application/cloudevents+avro`,
it MAY still treat the event as binary and forward it to another party as-is.
When the `Content Type` header value is not prefixed with the CloudEvents media
type, knowing when the message ought to be parsed as a CloudEvent can be a
challenge. While this specification can not mandate that senders do not include
any of the CloudEvents properties when the message is not a CloudEvent, it would
be reasonable for a receiver to assume that if the message has all of the
mandatory CloudEvents attributes as message properties then it's probably a
CloudEvent. However, as with all CloudEvent messages, if it does not adhere to
all of the normative language of this specification then it is not a valid
CloudEvent.
With MQTT 3.1.1, the content mode is always _structured_ and the message payload
MUST use the [JSON event format][json-format].
### 3.1. Binary Content Mode
The _binary_ content mode accommodates any shape of event data, and allows for
efficient transfer without transcoding effort.
#### 3.1.1. MQTT PUBLISH Content Type
For the _binary_ mode, the MQTT PUBLISH message's
[`Content Type`][5-content-type] property MUST be mapped directly to the
CloudEvents `datacontenttype` attribute.
#### 3.1.2. Event Data Encoding
The [`data`](#22-data) byte-sequence MUST be used as the
payload of the MQTT PUBLISH message.
#### 3.1.3. Metadata Headers
All other [CloudEvents][ce] context attributes, including extensions, MUST be
individually mapped to and from the User Property fields in the MQTT
PUBLISH message.
CloudEvents extensions that define their own attributes MAY define a secondary
mapping to MQTT user properties or features for those attributes, especially if
specific attributes need to align with MQTT features, or with other
specifications that have explicit MQTT header bindings. However, they MUST
also include the previously defined primary mapping.
##### 3.1.3.1 User Property Names
CloudEvents attribute names MUST be used unchanged in each mapped User Property
in the MQTT PUBLISH message.
##### 3.1.3.2 User Property Values
The value for each MQTT PUBLISH User Property MUST be constructed from the
respective CloudEvents attribute type's [canonical string
representation][ce-types].
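To make the mapping concrete, the following Python sketch splits a CloudEvent
into the parts of an MQTT 5.0 PUBLISH packet. The dictionary-based event
representation is an assumption for illustration, and wiring the result into a
concrete client (e.g. its PUBLISH properties object) is left client-specific.
```python
def to_mqtt_publish(attributes, data):
    """Split a CloudEvent into MQTT 5.0 PUBLISH parts for binary mode (sketch)."""
    # datacontenttype maps to the PUBLISH 'Content Type' property ...
    content_type = attributes.get("datacontenttype")
    # ... and each remaining attribute becomes a User Property, with the
    # attribute name unchanged and its canonical string representation.
    user_properties = [
        (name, str(value))
        for name, value in attributes.items()
        if name != "datacontenttype"
    ]
    return content_type, user_properties, data
```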
#### 3.1.4 Examples
This example shows the _binary_ mode mapping of an event into the MQTT 5.0
PUBLISH message. The CloudEvents `datacontenttype` attribute is mapped to the
MQTT PUBLISH `Content Type` field; all other CloudEvents attributes are mapped
to MQTT PUBLISH User Property fields. The `Topic name` is chosen by the MQTT
client and not derived from the CloudEvents event data.
Mind that `Content Type` here does refer to the event `data` content carried in
the payload.
```text
------------------ PUBLISH -------------------
Topic Name: mytopic
Content Type: application/json; charset=utf-8
------------- User Properties ----------------
specversion: 1.0
type: com.example.someevent
time: 2018-04-05T03:56:24Z
id: 1234-1234-1234
source: /mycontext/subcontext
datacontenttype: application/json; charset=utf-8
.... further attributes ...
------------------ payload -------------------
{
... application data ...
}
-----------------------------------------------
```
### 3.2. Structured Content Mode
The _structured_ content mode keeps event metadata and data together in the
payload, allowing simple forwarding of the same event across multiple routing
hops, and across multiple protocols. This is the only supported mode for MQTT
3.1.1.
#### 3.2.1. MQTT Content Type
For MQTT 5.0, the [MQTT PUBLISH message's `Content Type`][5-content-type]
property MUST be set to the media type of an [event format](#14-event-formats).
For MQTT 3.1.1, the media type of the [JSON event format][json-format] is always
implied.
Example for the [JSON format][json-format]:
```text
content-type: application/cloudevents+json; charset=utf-8
```
#### 3.2.2. Event Data Encoding
The chosen [event format](#14-event-formats) defines how all attributes,
and `data`, are represented.
The event metadata and data MUST then be rendered in accordance with the event
format specification and the resulting data becomes the MQTT PUBLISH payload.
#### 3.2.3. Metadata Headers
For MQTT 5.0, implementations MAY include the same MQTT PUBLISH User Properties
as defined for the [binary mode](#313-metadata-headers).
#### 3.2.4. Examples
The first example shows a JSON event format encoded event with MQTT 5.0
```text
------------------ PUBLISH -------------------
Topic Name: mytopic
Content Type: application/cloudevents+json; charset=utf-8
------------------ payload -------------------
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"time" : 2018-04-05T03:56;24Z,
"id" : 1234-1234-1234,
"source" : "/mycontext/subcontext",
"datacontenttype" : "application/json; charset=utf-8",
... further attributes omitted ...
"data" : {
... application data ...
}
}
-----------------------------------------------
```
For MQTT 3.1.1, the example looks nearly identical, but `Content Type` is absent
because it is not supported in that version of the MQTT specification;
`application/cloudevents+json` is therefore implied:
```text
------------------ PUBLISH -------------------
Topic Name: mytopic
------------------ payload -------------------
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"time" : 2018-04-05T03:56;24Z,
"id" : 1234-1234-1234,
"source" : "/mycontext/subcontext",
"datacontenttype" : "application/json; charset=utf-8",
... further attributes omitted ...
"data" : {
... application data ...
}
}
-----------------------------------------------
```
## 4. References
- [MQTT 3.1.1][oasis-mqtt-3.1.1] MQTT Version 3.1.1
- [MQTT 5.0][oasis-mqtt-5] MQTT Version 5.0
- [RFC2046][rfc2046] Multipurpose Internet Mail Extensions (MIME) Part Two:
Media Types
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC3629][rfc3629] UTF-8, a transformation format of ISO 10646
- [RFC4627][rfc4627] The application/json Media Type for JavaScript Object
Notation (JSON)
- [RFC7159][rfc7159] The JavaScript Object Notation (JSON) Data Interchange
Format
[ce]: ../spec.md
[ce-types]: ../spec.md#type-system
[json-format]: ../formats/json-format.md
[oasis-mqtt-3.1.1]: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.html
[oasis-mqtt-5]: http://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html
[3-publish]: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/errata01/os/mqtt-v3.1.1-errata01-os-complete.html#_Toc442180850
[5-content-type]: http://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html#_Toc502667341
[5-publish]: https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901100
[json-value]: https://tools.ietf.org/html/rfc7159#section-3
[rfc2046]: https://tools.ietf.org/html/rfc2046
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3629]: https://tools.ietf.org/html/rfc3629
[rfc4627]: https://tools.ietf.org/html/rfc4627
[rfc7159]: https://tools.ietf.org/html/rfc7159


@ -1,330 +0,0 @@
# NATS Protocol Binding for CloudEvents - Version 1.0.3-wip
## Abstract
The [NATS][nats] Protocol Binding for CloudEvents defines how events are mapped
to [NATS messages][nats-msg-proto].
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1 [Conformance](#11-conformance)
- 1.2 [Relation to NATS](#12-relation-to-nats)
- 1.3 [Content Modes](#13-content-modes)
- 1.4 [Event Formats](#14-event-formats)
- 1.5 [Security](#15-security)
2. [Use of CloudEvents Attributes](#2-use-of-cloudevents-attributes)
- 2.1 [datacontenttype Attribute](#21-datacontenttype-attribute)
- 2.2 [data](#22-data)
3. [NATS Message Mapping](#3-nats-message-mapping)
- 3.1 [Binary Content Mode](#31-binary-content-mode)
- 3.2 [Structured Content Mode](#32-structured-content-mode)
4. [References](#4-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be used in the NATS
protocol as client [produced][nats-pub-proto] and [consumed][nats-msg-proto]
messages.
### 1.1 Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2 Relation to NATS
This specification does not prescribe rules constraining transfer or settlement
of event messages with NATS; it solely defines how CloudEvents are expressed in
the NATS protocol as client messages that are [produced][nats-pub-proto] and
[consumed][nats-msg-proto].
### 1.3 Content Modes
The CloudEvents specification defines three content modes for transferring
events: _structured_, _binary_ and _batch_. The NATS protocol binding does not
currently support the batch content mode. Every compliant implementation SHOULD
support both structured and binary modes.
In the _binary_ content mode, event metadata attributes are placed in message
headers and the event data is placed in the NATS message payload. Binary mode
is supported as of [NATS 2.2][nats22], which introduced message headers.
In the _structured_ content mode, event metadata attributes and event data
are placed into the NATS message payload using an [event format](#14-event-formats).
### 1.4 Event Formats
Event formats, used with the _structured_ content mode, define how an event is
expressed in a particular data format. All implementations of this specification
MUST support the [JSON event format][json-format].
### 1.5 Security
This specification does not introduce any new security features for NATS, or
mandate specific existing features to be used.
## 2. Use of CloudEvents Attributes
This specification does not further define any of the [CloudEvents][ce] event
attributes.
### 2.1 datacontenttype Attribute
The `datacontenttype` attribute is assumed to contain a media-type expression
compliant with [RFC2046][rfc2046].
### 2.2 data
`data` is assumed to contain opaque application data that is
encoded as declared by the `datacontenttype` attribute.
An application is free to hold the information in any in-memory representation
of its choosing, but as the value is transposed into NATS as defined in this
specification, core NATS provides data available as a sequence of bytes.
For instance, if the declared `datacontenttype` is
`application/json;charset=utf-8`, the expectation is that the `data`
value is made available as [UTF-8][rfc3629] encoded JSON text.
## 3. NATS Message Mapping
The content mode is chosen by the sender of the event, which is either the
requesting or the responding party. Protocol usage patterns that might allow
solicitation of events using a particular mode might be defined by an
application, but are not defined here.
The receiver of the event can distinguish between the two modes using two
conditions:
- If the server is a version earlier than NATS 2.2, the content mode is
always _structured_.
- If the server is version 2.2 or above and a `Content-Type` header value
prefixed with the CloudEvents media type `application/cloudevents` is present
(matched case-insensitively), then the message is in _structured_ mode;
otherwise it is in _binary_ mode.
If the content mode is _structured_, the NATS message payload MUST be the event
serialized with the [JSON event format][json-format] as [UTF-8][rfc3629]
encoded JSON text.
### 3.1 Binary Content Mode
The _binary_ content mode accommodates any shape of event data, and allows for
efficient transfer without transcoding effort.
#### 3.1.1 Event Data Encoding
The [`data`](#22-data) byte-sequence is used as the message body.
#### 3.1.2 Metadata Headers
All [CloudEvents][ce] attributes, including extensions, MUST be individually
mapped to and from distinct NATS message headers.
CloudEvents extensions that define their own attributes MAY define a secondary
mapping to NATS headers for those attributes, especially if specific attributes
need to align with NATS features or with other specifications that have explicit
NATS header bindings. Note that these attributes MUST also still appear in the
NATS message as NATS headers with the `ce-` prefix as noted in
[NATS Header Names](#3121-nats-header-names).
##### 3.1.2.1 NATS Header Names
Except where noted, all CloudEvents context attributes, including extensions,
MUST be mapped to NATS headers with the same name as the attribute name but
prefixed with `ce-`.
Examples:
* `time` maps to `ce-time`
* `id` maps to `ce-id`
* `specversion` maps to `ce-specversion`
* `datacontenttype` maps to `ce-datacontenttype`
Note: per the [NATS][nats-message-headers] design specification, header names are
case-insensitive.
##### 3.1.2.2 NATS Header Values
The value for each NATS header is constructed from the respective attribute
type's [canonical string representation][ce-types].
Some CloudEvents metadata attributes can contain arbitrary UTF-8 string content,
and per [RFC7230, section 3][rfc7230-section-3], NATS headers MUST only use
printable characters from the US-ASCII character set, and are terminated by a
CRLF sequence with OPTIONAL whitespace around the header value.
When encoding a CloudEvent as a NATS message, string values
represented as NATS header values MUST be percent-encoded as
described below. This is compatible with [RFC3986, section
2.1][rfc3986-section-2-1] but is more specific about what needs
encoding. The resulting string SHOULD NOT be further encoded.
(Rationale: quoted string escaping is unnecessary when every space
and double-quote character is already percent-encoded.)
When decoding a NATS message into a CloudEvent, any NATS header
value MUST first be unescaped with respect to double-quoted strings,
as described in [RFC7230, section 3.2.6][rfc7230-section-3-2-6]. A single
round of percent-decoding MUST then be performed as described
below. NATS headers for CloudEvent attribute values do not support
parenthetical comments, so the initial unescaping only needs to handle
double-quoted values, including processing backslash escapes within
double-quoted values. Header values produced via the
percent-encoding described here will never include double-quoted
values, but they MUST be supported when receiving events, for
compatibility with older versions of this specification which did
not require double-quote and space characters to be percent-encoded.
Percent encoding is performed by considering each Unicode character
within the attribute's canonical string representation. Any
character represented in memory as a [Unicode surrogate
pair][surrogate-pair] MUST be treated as a single Unicode character.
The following characters MUST be percent-encoded:
- Space (U+0020)
- Double-quote (U+0022)
- Percent (U+0025)
- Any characters outside the printable ASCII range of U+0021-U+007E
inclusive
Attribute values are already constrained to prohibit characters in
the range U+0000-U+001F inclusive and U+007F-U+009F inclusive;
however for simplicity and to account for potential future changes,
it is RECOMMENDED that any NATS header encoding implementation treats
such characters as requiring percent-encoding.
Space and double-quote are encoded to avoid requiring any further
quoting. Percent is encoded to avoid ambiguity with percent-encoding
itself.
Steps to encode a Unicode character:
- Encode the character using UTF-8, to obtain a byte sequence.
- Encode each byte within the sequence as `%xy` where `x` is a
hexadecimal representation of the most significant 4 bits of the byte,
and `y` is a hexadecimal representation of the least significant 4
bits of the byte.
Percent-encoding SHOULD be performed using upper-case for values A-F,
but decoding MUST accept lower-case values.
When performing percent-decoding (when decoding a NATS message to a
CloudEvent), values that have been unnecessarily percent-encoded MUST be
accepted, but encoded byte sequences which are invalid in UTF-8 MUST be
rejected. (For example, "%C0%A0" is an overlong encoding of U+0020, and
MUST be rejected.)
Example: a header value of "Euro € 😀" SHOULD be encoded as follows:
- The characters, 'E', 'u', 'r', 'o' do not require encoding
- Space, the Euro symbol, and the grinning face emoji require encoding.
They are characters U+0020, U+20AC and U+1F600 respectively.
- The encoded NATS header value is therefore "Euro%20%E2%82%AC%20%F0%9F%98%80"
where "%20" is the encoded form of space, "%E2%82%AC" is the encoded form
of the Euro symbol, and "%F0%9F%98%80" is the encoded form of the
grinning face emoji.
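The steps above translate directly into code. The following non-normative
Python sketch implements the encoder; iterating over a Python `str` yields
whole code points, so surrogate pairs need no special handling.
```python
def encode_header_value(value: str) -> str:
    """Percent-encode a CloudEvents attribute value for a NATS header (sketch)."""
    out = []
    for ch in value:
        # Encode space, double-quote, percent, and anything outside the
        # printable US-ASCII range U+0021..U+007E, as UTF-8 %XY sequences.
        if ch in ' "%' or not ("\u0021" <= ch <= "\u007e"):
            out.extend("%{:02X}".format(b) for b in ch.encode("utf-8"))
        else:
            out.append(ch)
    return "".join(out)

assert encode_header_value("Euro € 😀") == "Euro%20%E2%82%AC%20%F0%9F%98%80"
```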
#### 3.1.4 Example
This example shows the _binary_ mode mapping of an event in client messages that
are [produced][nats-pub-proto] and [consumed][nats-msg-proto].
```text
------------------ Message -------------------
Subject: mySubject
------------------ header --------------------
ce-specversion: 1.0
ce-type: com.example.someevent
ce-time: 2018-04-05T03:56:24Z
ce-id: 1234-1234-1234
ce-source: /mycontext/subcontext
ce-datacontenttype: application/json
------------------ payload -------------------
{
... application data ...
}
-----------------------------------------------
```
### 3.2 Structured Content Mode
The chosen [event format](#14-event-formats) defines how all attributes,
including the payload, are represented.
The event metadata and data MUST then be rendered in accordance with the event
format specification and the resulting data becomes the payload.
#### 3.2.1 Example
This example shows a JSON event format encoded event in client messages that are
[produced][nats-pub-proto] and [consumed][nats-msg-proto].
```text
------------------ Message -------------------
Subject: mySubject
------------------ payload -------------------
{
"specversion" : "1.0",
"type" : "com.example.someevent",
... further attributes omitted ...
"data" : {
... application data ...
}
}
-----------------------------------------------
```
## 4. References
- [NATS][nats] The NATS Messaging System
- [NATS-PUB-PROTO][nats-pub-proto] The NATS protocol for messages published by a
client
- [NATS-MSG-PROTO][nats-msg-proto] The NATS protocol for messages received by a
client
- [RFC2046][rfc2046] Multipurpose Internet Mail Extensions (MIME) Part Two:
Media Types
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC3629][rfc3629] UTF-8, a transformation format of ISO 10646
- [RFC7159][rfc7159] The JavaScript Object Notation (JSON) Data Interchange
Format
[ce]: ../spec.md
[ce-types]: ../spec.md#type-system
[json-format]: ../formats/json-format.md
[json-value]: https://tools.ietf.org/html/rfc7159#section-3
[nats]: https://nats.io
[nats22]: https://docs.nats.io/release-notes/whats_new/whats_new_22#message-headers
[nats-message-headers]: https://github.com/nats-io/nats-architecture-and-design/blob/main/adr/ADR-4.md#nats-message-headers
[nats-msg-proto]: https://docs.nats.io/reference/reference-protocols/nats-protocol#protocol-messages
[nats-pub-proto]: https://docs.nats.io/reference/reference-protocols/nats-protocol#pub
[rfc2046]: https://tools.ietf.org/html/rfc2046
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3629]: https://tools.ietf.org/html/rfc3629
[rfc3986-section-2-1]: https://tools.ietf.org/html/rfc3986#section-2.1
[rfc7159]: https://tools.ietf.org/html/rfc7159
[rfc7230]: https://tools.ietf.org/html/rfc7230
[rfc7230-section-3]: https://tools.ietf.org/html/rfc7230#section-3
[rfc7230-section-3-2-6]: https://tools.ietf.org/html/rfc7230#section-3.2.6
[surrogate-pair]: http://unicode.org/glossary/#surrogate_pair


@ -1,166 +0,0 @@
# WebSockets Protocol Binding for CloudEvents - Version 1.0.3-wip
## Abstract
The WebSockets Protocol Binding for CloudEvents defines how to establish and use
full-duplex CloudEvents streams using [WebSockets][rfc6455].
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to WebSockets](#12-relation-to-websockets)
- 1.3. [Content Modes](#13-content-modes)
- 1.4. [Handshake](#14-handshake)
- 1.5. [CloudEvents Subprotocols](#15-cloudevents-subprotocols)
- 1.6. [Security](#16-security)
2. [Use of CloudEvents Attributes](#2-use-of-cloudevents-attributes)
3. [WebSocket Message Mapping](#3-websocket-message-mapping)
- 3.1. [Event Data Encoding](#31-event-data-encoding)
4. [References](#4-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be used in
[WebSockets][rfc6455], in order to establish and use a full-duplex CloudEvents
stream.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to WebSockets
This specification does not prescribe rules constraining the use or handling of
a specific [HTTP target resource][rfc7230-section-5-1] to establish the WebSocket
upgrade.
This specification prescribes rules constraining the [WebSockets
Subprotocols][rfc6455-section-5-1] in order to reach agreement on the event
format to use when sending and receiving serialized CloudEvents.
Events are sent as WebSocket messages, serialized using an [event
format][ce-event-format].
### 1.3. Content Modes
The [CloudEvents specification][ce-message] defines three content modes for
transferring events: _structured_, _binary_ and _batch_.
Because of the nature of WebSockets messages, this specification supports only
the _structured_ content mode; event metadata attributes and event data are
sent in WebSocket messages using an [event format][ce-event-format].
The [event format][ce-event-format] to be used in a full-duplex WebSocket stream
is agreed during the [handshake](#14-handshake) and cannot change during the
same stream.
### 1.4. Handshake
The [opening handshake][rfc6455-section-1-3] MUST follow the set of rules
specified in [RFC6455][rfc6455-section-4].
In addition, the client MUST include, in the opening handshake, the
[`Sec-WebSocket-Protocol` header][rfc6455-section-1-9]. The client MUST include
in this header one or more
[CloudEvents subprotocols](#15-cloudevents-subprotocols), depending on the
subprotocols the client supports.
The server MUST reply with the chosen CloudEvents subprotocol using the
`Sec-WebSocket-Protocol` header. If the server doesn't support any of the
subprotocols included in the opening handshake, the server response SHOULD NOT
contain any `Sec-WebSocket-Protocol` header.
#### 1.4.1 Example
Example client request to begin the opening handshake:
```text
GET /events HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: cloudevents.json, cloudevents.avro
Sec-WebSocket-Version: 13
Origin: http://example.com
```
Example server response:
```text
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: cloudevents.json
```
### 1.5. CloudEvents Subprotocols
This specification maps a WebSocket subprotocol to each defined event format in
the CloudEvents specification, following the guidelines discussed in
[RFC6455][rfc6455-section-1-9]. For each subprotocol, senders MUST use the
specified WebSocket frame type:
| Subprotocol         | Event format                           | Frame Type |
| ------------------- | -------------------------------------- | ---------- |
| `cloudevents.json`  | [JSON event format][json-format]       | Text       |
| `cloudevents.avro`  | [AVRO event format][avro-format]       | Binary     |
| `cloudevents.proto` | [Protobuf event format][proto-format]  | Binary     |
All implementations of this specification MUST support the [JSON event
format][json-format]. This specification does not support the [JSON batch
format][json-batch-format].
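As a non-normative illustration, this Python sketch negotiates the
`cloudevents.json` subprotocol and sends a structured event. It assumes the
third-party `websockets` package, which is only one of many clients that
expose subprotocol negotiation.
```python
import asyncio
import json

import websockets  # third-party package, used here purely for illustration

async def send_event(uri, event):
    # Offer the subprotocols this client supports; the server picks one.
    async with websockets.connect(uri, subprotocols=["cloudevents.json"]) as ws:
        if ws.subprotocol != "cloudevents.json":
            raise RuntimeError("server did not accept a CloudEvents subprotocol")
        # cloudevents.json events travel as Text frames in the JSON format.
        await ws.send(json.dumps(event))

# asyncio.run(send_event("wss://server.example.com/events", {...}))
```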
### 1.6. Security
This specification does not introduce any new security features for WebSockets,
or mandate specific existing features to be used.
## 2. Use of CloudEvents Attributes
This specification does not further define any of the [CloudEvents][ce] event
attributes.
## 3. WebSocket Message Mapping
Because the content mode is always _structured_, a WebSocket message just
contains a CloudEvent serialized using the agreed event format.
### 3.1 Event Data Encoding
The chosen [event format][ce-event-format] defines how all attributes, including
the payload, are represented.
The event metadata and data MUST be rendered in accordance with the event
format specification and the resulting data becomes the payload.
## 4. References
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC6455][rfc6455] The WebSocket Protocol
[ce]: ../spec.md
[ce-message]: ../spec.md#message
[ce-event-format]: ../spec.md#event-format
[json-format]: ../formats/json-format.md
[json-batch-format]: ../formats/json-format.md#4-json-batch-format
[avro-format]: ../formats/avro-format.md
[proto-format]: ../formats/protobuf-format.md
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc6455]: https://tools.ietf.org/html/rfc6455
[rfc6455-section-1-3]: https://tools.ietf.org/html/rfc6455#section-1.3
[rfc6455-section-4]: https://tools.ietf.org/html/rfc6455#section-4
[rfc6455-section-1-9]: https://tools.ietf.org/html/rfc6455#section-1.9
[rfc7230-section-5-1]: https://datatracker.ietf.org/doc/html/rfc7230#section-5.1
[rfc6455-section-5-1]: https://datatracker.ietf.org/doc/html/rfc6455#section-5.1


@ -1,55 +0,0 @@
# CloudEvents Extension Attributes
The [CloudEvents specification](../spec.md) defines a set of metadata
attributes that can be used when transforming a generic event into a
CloudEvent. The list of attributes specified in that document represents the
minimal set that the specification authors deemed most likely to be used in a
majority of situations.
This document defines some additional attributes that, while not as commonly used
as the ones specified in the [CloudEvents specification](../spec.md), could
still benefit from being formally specified in the hopes of providing some
degree of interoperability. This also allows for attributes to be defined in an
experimental manner and tested prior to being considered for inclusion in the
[CloudEvents specification](../spec.md).
Implementations of the [CloudEvents specification](../spec.md) are not
mandated to limit their use of extension attributes to just the ones specified
in this document. The attributes defined in this document have no official
standing and might be changed, or removed, at any time. As such, inclusion of
an attribute in this document does not need to meet the same level of maturity,
or popularity, as attributes defined in the
[CloudEvents specification](../spec.md). To be
included in this document, aside from the normal PR review process, the
attribute needs to have at least two
[Voting](../../docs/GOVERNANCE.md#membership) member organizations stating
their support for its inclusion as comments in the PR. If the author of the PR
is also a Voting member, then they are allowed to be one of the two.
## Usage
Support for any extension is OPTIONAL. When an extension definition uses
[RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) keywords (e.g. MUST, SHOULD,
MAY), this usage only applies to events that use the extension.
Extension attributes, while not defined by the core CloudEvents specifications,
MUST follow the same serialization rules as defined by the format and protocol
binding specifications. See
[Extension Context Attributes](../spec.md#extension-context-attributes)
for more information.
## Known Extensions
- [Auth Context](authcontext.md)
- [BAM](bam.md)
- [Data Classification](data-classification.md)
- [Dataref (Claim Check Pattern)](dataref.md)
- [Deprecation](deprecation.md)
- [Distributed Tracing](distributed-tracing.md)
- [Expiry Time](expirytime.md)
- [OPC UA](opcua.md)
- [Partitioning](partitioning.md)
- [Recorded Time](recordedtime.md)
- [Sampling](sampledrate.md)
- [Sequence](sequence.md)
- [Severity](severity.md)


@ -1,65 +0,0 @@
# Auth Context
This extension embeds information about the principal which triggered an
occurrence. This allows consumers of the
CloudEvent to perform user-dependent actions without requiring the user ID to
be embedded in the `data` or `source` field.
This extension is purely informational and is not intended to secure
CloudEvents.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### authtype
- Type: `String`
- Description: An enum representing the type of principal that triggered the
occurrence. Valid values are:
- `app_user`: An end user of an application. Examples include an Amazon Cognito,
Google Cloud Identity Platform, or Azure Active Directory user.
- `user`: A user account registered in the infrastructure. Examples include
developer accounts secured by IAM in AWS, Google Cloud Platform, or Azure.
- `service_account`: A non-user principal used to identify a service.
- `api_key`: A non-user API key
- `system`: An obscured identity used when a cloud platform or other system
service triggers an event. Examples include a database record which
was deleted based on a TTL.
- `unauthenticated`: No credentials were used to authenticate the change that
triggered the occurrence.
- `unknown`: The type of principal cannot be determined and is unknown.
- Constraints
- REQUIRED
- This specification defines the following values, and it is RECOMMENDED that
they be used. However, implementations MAY define additional values.
### authid
- Type: `String`
- Description: A unique identifier of the principal that triggered the
occurrence. This specification makes no statement as to what this value
ought to be, however including personally identifiable information (PII)
in a CloudEvent is often considered inappropriate, so some indirect reference
(e.g. a hash or label of an API key) might be considered.
- Constraints
- OPTIONAL
### authclaims
- Type: `String`
- Description: A JSON string representing claims of the principal that triggered
the event.
- Constraints
- OPTIONAL
- MUST NOT contain actual credentials sufficient for the Consumer to
impersonate the principal directly.
- MAY contain enough information that a Consumer can authenticate against an
identity service to mint a credential impersonating the original principal.


@ -1,127 +0,0 @@
# Business Activity Monitoring (BAM) Extension
The term Business Activity Monitoring (BAM) was originally coined by analysts
at Gartner, and refers to the aggregation, analysis, and presentation of
real-time information about activities inside organizations and involving
customers and partners.
Activity monitoring is based on a model of a business process, which can
consist of multiple transactions (e.g. order, payment, invoice), and these
transactions can have multiple steps. The technical processing, represented by
a transaction instance `bamtxid`, is then correlated with the steps of those
transactions in the business process.
This extension defines attributes that can be included within a CloudEvent
to describe the business activity that the event is associated with.
Producers and consumers are free to define an out-of-band agreement on the
semantic meaning, or valid values, for the attribute.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### bamtxid (BAM Transaction ID)
- Type: `String`
- Description: A unique identifier for the instance of a transaction.
This identifier connects the actual processing in the distributed
system (e.g. payment, invoice, warehouse) with the model of this process.
- Constraints
- REQUIRED
- MUST be a non-empty string
- RECOMMENDED to be a monotonically increasing, contiguous identifier
that is lexicographically sortable.
### bampid (BAM Process ID)
- Type: `String`
- Description: A unique identifier for the model of the business process
that is associated with the transaction instance `bamtxid`.
A business process is a collection of transactions. These transactions
can run in sequence or parallel (e.g. payment, invoice, warehouse).
- Constraints
- REQUIRED
- MUST be a non-empty string
- RECOMMENDED to be a string containing only alphanumeric characters, hyphens,
underscores, and periods, with no whitespace.
### bamptxid (BAM Process Transaction ID)
- Type: `String`
- Description: A unique identifier for the model of a transaction that
constructs a business process (e.g. payment, invoice, warehouse).
- Constraints
- REQUIRED
- MUST be a non-empty string
- RECOMMENDED to be a string containing only alphanumeric characters, hyphens,
underscores, and periods, with no whitespace.
### bamptxsid (BAM Process Transaction Step ID)
- Type: `String`
- Description: A unique identifier for the specific step in a business process
transaction (e.g. start, processing, finish).
- Constraints
- REQUIRED
- MUST be a non-empty string
- RECOMMENDED to be a string containing only alphanumeric characters, hyphens,
underscores, and periods, with no whitespace.
### bamptxsstatus (BAM Transaction Step Status)
- Type: `String`
- Description: The status of the specific step in a business process
transaction (e.g. success, waiting, failure).
- Constraints
- OPTIONAL
- If present, MUST be a non-empty string
- RECOMMENDED to be a string containing only alphanumeric characters, hyphens,
underscores, and periods, with no whitespace.
### bamptxcompleted (BAM Process Transaction Completed)
- Type: `Boolean`
- Description: Indicates whether the transaction instance (`bamtxid`) has
actually been completed, or whether the transaction has somehow failed.
This is a mechanism to indicate a final completion or failure that is
not captured by the model of the business process.
- Constraints
- OPTIONAL
- If present, MUST be a boolean value
## Usage
When this extension is used, producers MUST set the value of
the `bamtxid`, `bampid`, `bamptxid`, and `bamptxsid` attributes
to the unique identifiers of the transaction instance, business process,
process transaction, and transaction step associated with the event.
Intermediaries MUST NOT change the value of the `bamtxid`,
`bampid`, `bamptxid`, and `bamptxsid` attributes.
## Use cases
This extension can be used in cases in which a business activity monitoring
system tracks the progress of a business process through the events generated
by that process. Such systems usually have their own modelling language to
describe the business process.
## References
- [Gartner Business Activity Monitoring](https://www.gartner.com/en/information-technology/glossary/bam-business-activity-monitoring)
- [Business Activity Monitoring](https://en.wikipedia.org/wiki/Business_activity_monitoring)
- [What is Business Activity Monitoring (BAM)?](https://www.ibm.com/topics/business-activity-monitoring)
- [Business Activity Monitoring (BAM)](https://learn.microsoft.com/en-us/biztalk/core/business-activity-monitoring-bam)


@ -1,228 +0,0 @@
# Correlation
This extension defines attributes for tracking occurrence relationships and
causality in distributed systems, enabling comprehensive traceability through
correlation and causation identifiers.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
## Attributes
### correlationid
- Type: `String`
- Description: An identifier that groups related events within the same logical
flow or business transaction. All events sharing the same correlation ID are
part of the same workflow.
- Constraints
- OPTIONAL
- If present, MUST be a non-empty string
### causationid
- Type: `String`
- Description: The unique identifier of the event that directly caused this
event to be generated. This SHOULD be the `id` value of the causing event.
- Constraints
- OPTIONAL
- If present, MUST be a non-empty string
## Usage
The Correlation extension provides two complementary mechanisms for tracking
event relationships:
1. **Correlation ID**: Groups all events that are part of the same logical flow,
regardless of their causal relationships
2. **Causation ID**: Tracks the direct parent-child relationships between events
in a causal chain
These attributes can be used independently or together, depending on the correlation
requirements of your system.
### Correlation vs Causation
Understanding the distinction between these two concepts is crucial:
- **Correlation ID** answers: "Which events are part of the same business
transaction?"
- **Causation ID** answers: "Which specific event directly triggered this
event?"
### Example Scenario
Consider an e-commerce order processing flow:
1. User initiates checkout (correlation ID: "txn-abc-123" is created)
2. Order is placed (Event A)
3. Payment is processed (Event B, caused by A)
4. Inventory is checked (Event C, caused by A)
5. Shipping is scheduled (Event D, caused by C)
6. Notification is sent (Event E, caused by D)
In this scenario:
- All events share the same `correlationid`: "txn-abc-123"
- Each event has a `causationid` pointing to its direct trigger:
  - Events B and C have `causationid`: "order-123" (Event A's ID)
  - Event D has `causationid`: "inventory-456" (Event C's ID)
  - Event E has `causationid`: "shipping-012" (Event D's ID)
## Examples
### Example 1: Complete Correlation Chain
Initial Order Event:
```json
{
"specversion": "1.0",
"type": "com.example.order.placed",
"source": "https://example.com/orders",
"id": "order-123",
"correlationid": "txn-abc-123",
"data": {
"orderId": "123",
"customerId": "456"
}
}
```
Payment Processing (triggered by order):
```json
{
"specversion": "1.0",
"type": "com.example.payment.processed",
"source": "https://example.com/payments",
"id": "payment-789",
"correlationid": "txn-abc-123",
"causationid": "order-123",
"data": {
"amount": 150.0,
"currency": "USD"
}
}
```
Inventory Check (also triggered by order):
```json
{
"specversion": "1.0",
"type": "com.example.inventory.checked",
"source": "https://example.com/inventory",
"id": "inventory-456",
"correlationid": "txn-abc-123",
"causationid": "order-123",
"data": {
"items": ["sku-001", "sku-002"],
"available": true
}
}
```
Shipping Scheduled (triggered by inventory check):
```json
{
"specversion": "1.0",
"type": "com.example.shipping.scheduled",
"source": "https://example.com/shipping",
"id": "shipping-012",
"correlationid": "txn-abc-123",
"causationid": "inventory-456",
"data": {
"carrier": "FastShip",
"estimatedDelivery": "2024-01-15"
}
}
```
### Example 2: Error Handling with Correlation
When an error occurs, the correlation attributes help identify both the affected
transaction and the specific trigger:
```json
{
"specversion": "1.0",
"type": "com.example.payment.failed",
"source": "https://example.com/payments",
"id": "error-345",
"correlationid": "txn-abc-123",
"causationid": "payment-789",
"data": {
"error": "Insufficient funds",
"retryable": true
}
}
```
### Example 3: Fan-out Pattern
A single event can cause multiple downstream events:
```json
{
"specversion": "1.0",
"type": "com.example.order.fulfilled",
"source": "https://example.com/fulfillment",
"id": "fulfillment-567",
"correlationid": "txn-abc-123",
"causationid": "shipping-012",
"data": {
"completedAt": "2024-01-14T10:30:00Z"
}
}
```
This might trigger multiple notification events, all with the same `causationid`:
```json
{
"specversion": "1.0",
"type": "com.example.notification.email",
"source": "https://example.com/notifications",
"id": "notify-email-890",
"correlationid": "txn-abc-123",
"causationid": "fulfillment-567",
"data": {
"recipient": "customer@example.com",
"template": "order-fulfilled"
}
}
```
```json
{
"specversion": "1.0",
"type": "com.example.notification.sms",
"source": "https://example.com/notifications",
"id": "notify-sms-891",
"correlationid": "txn-abc-123",
"causationid": "fulfillment-567",
"data": {
"recipient": "+1234567890",
"message": "Your order has been fulfilled!"
}
}
```
## Best Practices
1. **Correlation ID Generation**: Generate correlation IDs at the entry point of
your system (e.g., API gateway, UI interaction)
2. **Causation ID Propagation**: Always set the causation ID to the `id` of the
event that directly triggered the current event
3. **Consistent Usage**: If you start using these attributes in a flow, use them
consistently throughout
4. **ID Format**: Use globally unique identifiers (e.g., UUIDs) to avoid
collisions across distributed systems
5. **Retention**: Consider the retention implications when designing queries
based on these attributes


@ -1,110 +0,0 @@
# Data Classification Extension
CloudEvents might contain payloads which are subject to data protection
regulations like GDPR or HIPAA. For intermediaries and consumers, knowing how
event payloads are classified, which data protection regulations apply, and how
payloads are categorized enables compliant processing of events.
This extension defines attributes that describe to
[consumers](../spec.md#consumer) or [intermediaries](../spec.md#intermediary)
how an event and its payload are classified, how the payload is categorized,
and which data protection regulations apply.
These attributes are intended for classification at an event and payload level
and not at a `data` field level. Classification at a field level is best defined
in the schema specified via the `dataschema` attribute.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is used.
For example, an attribute being marked as "REQUIRED" does not mean it needs to
be in all CloudEvents, rather it needs to be included only when this extension
is being used.
## Attributes
### dataclassification
- Type: `String`
- Description: Data classification level for the event payload within the
context of a `dataregulation`. In situations where `dataregulation` is
undefined or the data protection regulation does not define any labels, then
RECOMMENDED labels are: `public`, `internal`, `confidential`, or
`restricted`.
- Constraints:
- REQUIRED
### dataregulation
- Type: `String`
- Description: A comma-delimited list of applicable data protection regulations.
For example: `GDPR`, `HIPAA`, `PCI-DSS`, `ISO-27001`, `NIST-800-53`, `CCPA`.
- Constraints:
- OPTIONAL
- if present, MUST be a non-empty string without internal spaces. Leading and
trailing spaces around each entry MUST be ignored.
### datacategory
- Type: `String`
- Description: Data category of the event payload within the context of a
`dataregulation` and `dataclassification`. For GDPR personal data typical
labels are: `non-sensitive`, `standard`, `sensitive`, `special-category`. For
US personal data this could be: `sensitive-pii`, `non-sensitive-pii`,
`non-pii`. And for personal health information under HIPAA: `phi`.
- Constraints:
- OPTIONAL
- if present, MUST be a non-empty string
## Usage
When this extension is used, producers MUST set the value of the
`dataclassification` attribute. When applicable the `dataregulation` and
`datacategory` attributes MAY be set to provide additional details on the
classification context.
When an implementation supports this extension, then intermediaries and
consumers MUST take these attributes into account and act accordingly to data
regulations and/or internal policies in processing the event and payload. If
intermediaries or consumers cannot meet such requirements, they MUST reject and
report an error through a protocol-level mechanism.
If intermediaries or consumers are unsure on how to interpret these attributes,
for example when they encounter an unknown classification level or data
regulation, they MUST assume they cannot meet requirements and MUST reject the
event and report an error through a protocol-level mechanism.
Intermediaries SHOULD NOT modify the `dataclassification`, `dataregulation`, and
`datacategory` attributes.
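For illustration, here is a sketch of an event classified under HIPAA,
serialized as JSON; the event metadata and payload are hypothetical, while the
`restricted` and `phi` labels are among those suggested above:
```json
{
  "specversion": "1.0",
  "type": "com.example.patient.admitted",
  "source": "https://example.com/admissions",
  "id": "A234-1234-1234",
  "dataclassification": "restricted",
  "dataregulation": "HIPAA",
  "datacategory": "phi",
  "data": {
    "patientId": "12345"
  }
}
```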
## Use cases
Examples where data classification of events can be useful are:
- When an event contains PII or restricted information and therefore processing
by intermediaries or consumers need to adhere to certain policies. For example
having separate processing pipelines by sensitivity or having logging,
auditing and access policies based upon classification.
- When an event payload is subjected to regulation and therefore retention
policies apply. For example, having event retention policies based upon data
classification or to enable automated data purging of durable topics.
## Appendix: Data Protection and Privacy Regulations
For reference purposes, a catalog of common data protection and privacy
regulations and abbreviations is available from [UNCTAD
(United Nations Conference on Trade and
Development)](https://unctad.org/page/data-protection-and-privacy-legislation-worldwide),
under the `DOWNLOAD FULL DATA` button ([direct
link](https://unctad.org/system/files/information-document/DP.xlsx)). Others
might exist.
Some examples include:
- `GDPR` - General Data Protection Regulation, Europe
- `HIPAA` - Health Insurance Portability and Accountability Act, United States
- `NDPR` - Nigeria Data Protection Regulation, Nigeria


@ -1,91 +0,0 @@
# Dataref (Claim Check Pattern)
As defined by the term [Data](../spec.md#data), CloudEvents MAY include
domain-specific information about the occurrence. When present, this information
will be encapsulated within `data`.
The `dataref` attribute MAY be used to reference another location where this
information is stored. The information, whether accessed via `data` or `dataref`
MUST be identical.
Both `data` and the `dataref` attribute MAY exist at the same time. A middleware
MAY drop `data` when the `dataref` attribute exists, it MAY add
the `dataref` attribute and drop the `data` attribute, or it MAY add the `data`
attribute by using the `dataref` attribute. Note that since the CloudEvents
specification does not define a mechanism by which a sender can know if the
receiver supports any particular CloudEvent extension, removing the `data`
attribute in favor of just having the `dataref` attribute could yield
unexpected results. As such, removing the `data` attribute SHOULD only be done
when the sender is confident that all receivers support the `dataref`
attribute - via some out-of-band agreement.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### dataref
- Type: `URI-reference`
- Description: A reference to a location where the event payload is stored. The
location might not be accessible without further information (e.g. a pre-shared
secret).
Known as the "Claim Check Pattern", this attribute MAY be used for a variety
of purposes, including:
- If the [Data](../spec.md#data) is too large to be included in the
message, the `data` is not present, and the consumer can retrieve it using
this attribute.
- If the consumer wants to verify that the [Data](../spec.md#data)
has not been tampered with, it can retrieve it from a trusted source using
this attribute.
- If the [Data](../spec.md#data) MUST only be viewed by trusted
consumers (e.g. personally identifiable information), only a trusted
consumer can retrieve it using this attribute and a pre-shared secret.
If this attribute is used, the information SHOULD be accessible long enough
for all consumers to retrieve it, but might not be stored for an extended period
of time.
- Constraints:
- REQUIRED
## Examples
The following example shows a CloudEvent in which the event producer has included
both `data` and `dataref` (serialized as JSON):
```JSON
{
"specversion" : "1.0",
"type" : "com.github.pull_request.opened",
"source" : "https://github.com/cloudevents/spec/pull/123",
"id" : "A234-1234-1234",
"datacontenttype" : "text/xml",
"data" : "<much wow=\"xml\"/>",
"dataref" : "https://github.com/cloudevents/spec/pull/123/events/A234-1234-1234.xml"
}
```
The following example shows a CloudEvent in which a middleware has replaced the
`data` with a `dataref` (serialized as JSON):
```JSON
{
"specversion" : "1.0",
"type" : "com.github.pull_request.opened",
"source" : "https://github.com/cloudevents/spec/pull/123",
"id" : "A234-1234-1234",
"datacontenttype" : "text/xml",
"dataref" : "https://tenant123.middleware.com/events/data/A234-1234-1234.xml"
}
```


@ -1,86 +0,0 @@
# Deprecation extension
This specification defines attributes that can be included in CloudEvents to
indicate to [consumers](../spec.md#consumer) or
[intermediaries](../spec.md#intermediary) the deprecation of events. These
attributes inform CloudEvents consumers about upcoming changes or removals,
facilitating smoother transitions and proactive adjustments.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean it
needs to be in all CloudEvents, rather it needs to be included only when this
extension is being used.
## Attributes
### deprecated
- Type: `Boolean`
- Description: Indicates whether the event type is deprecated.
- Constraints
- MUST be `true`
- REQUIRED
- Example: `"deprecated": true`
### deprecationfrom
- Type: `Timestamp`
- Description: Specifies the date and time when the event type was
officially marked as deprecated.
- Constraints
- OPTIONAL
- The `deprecationfrom` timestamp SHOULD remain stable once set and SHOULD
reflect a point in the past or present. Pre-announcing deprecation by
setting a future date is not encouraged.
- Example: `"deprecationfrom": "2024-10-11T00:00:00Z"`
### deprecationsunset
- Type: `Timestamp`
- Description: Specifies the future date and time when the event type will
become unsupported.
- Constraints
- OPTIONAL
- The timestamp MUST be later than or the same as the one given in the
`deprecationfrom` field, if present. It MAY be extended to a later date but
MUST NOT be shortened once set.
- Example: `"deprecationsunset": "2024-11-12T00:00:00Z"`
### deprecationmigration
- Type: `URI`
- Description: Provides a link to documentation or resources that describe
the migration path from the deprecated event to an alternative. This helps
consumers transition away from the deprecated event.
- Constraints
- OPTIONAL
- The URI SHOULD point to a valid and accessible resource that helps
consumers understand what SHOULD replace the deprecated event.
- Example: `"deprecationmigration": "https://example.com/migrate-to-new-evt"`
## Usage
When this extension is used, producers MUST set the value of the `deprecated`
attribute to `true`. This gives consumers a heads-up that they SHOULD begin
migrating to a new event or version.
Consumers SHOULD make efforts to switch to the suggested replacement before the
specified `deprecationsunset` timestamp. It is advisable to begin transitioning
as soon as the event is marked as deprecated to ensure a smooth migration and
avoid potential disruptions after the sunset date.
If an event is received after the `deprecationsunset` timestamp, consumers
SHOULD choose to stop processing such events, especially if unsupported events
can cause downstream issues.
Producers SHOULD stop emitting deprecated events after the `deprecationsunset`
timestamp. They SHOULD also provide detailed documentation via the
`deprecationmigration` attribute to guide consumers toward the correct replacement
event.
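Combining the attribute examples above, a deprecated event might look like this
when serialized as JSON; the event metadata is illustrative:
```json
{
  "specversion": "1.0",
  "type": "com.example.order.created.v1",
  "source": "https://example.com/orders",
  "id": "A234-1234-1234",
  "deprecated": true,
  "deprecationfrom": "2024-10-11T00:00:00Z",
  "deprecationsunset": "2024-11-12T00:00:00Z",
  "deprecationmigration": "https://example.com/migrate-to-new-evt"
}
```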


@ -1,89 +0,0 @@
# Distributed Tracing extension
This extension embeds context from
[W3C TraceContext](https://www.w3.org/TR/trace-context/) into a CloudEvent.
The goal of this extension is to offer means to carry context when instrumenting
CloudEvents based systems with OpenTelemetry.
The [OpenTelemetry](https://opentelemetry.io/) project is a collection
of tools, APIs and SDKs that can be used to instrument, generate, collect,
and export telemetry data (metrics, logs, and traces) to help you
analyze your software's performance and behavior.
The OpenTelemetry specification defines both
[Context](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/context/context.md#overview)
and
[Distributed Tracing](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/overview.md#tracing-signal)
as:
> A `Context` is a propagation mechanism which carries execution-scoped values across
API boundaries and between logically associated execution units. Cross-cutting
concerns access their data in-process using the same shared `Context` object.
>
> A `Distributed Trace` is a set of events, triggered as a result of a single
logical operation, consolidated across various components of an application.
A distributed trace contains events that cross process, network and security boundaries.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### traceparent
- Type: `String`
- Description: Contains a version, trace ID, span ID, and trace options as
defined in [section 3.2](https://w3c.github.io/trace-context/#traceparent-header)
- Constraints
- REQUIRED
### tracestate
- Type: `String`
- Description: a comma-delimited list of key-value pairs, defined by
[section 3.3](https://w3c.github.io/trace-context/#tracestate-header).
- Constraints
- OPTIONAL
## Using the Distributed Tracing Extension
The Distributed Tracing Extension is not intended to replace protocol-specific tracing headers,
like the ones described in [W3C Trace Context](https://w3c.github.io/trace-context/) for HTTP.
Given a single-hop event transmission (from source to sink directly), the Distributed Tracing Extension,
if used, MUST carry the same trace information contained in protocol-specific tracing headers.
Given a multi-hop event transmission, the Distributed Tracing Extension, if used, MUST
carry the trace information of the starting trace of the transmission.
In other words, it MUST NOT carry trace information of each individual hop, since this information is usually
carried using protocol-specific headers, understood by tools like [OpenTelemetry](https://opentelemetry.io/).
The
[OpenTelemetry Semantic Conventions for CloudEvents](https://opentelemetry.io/docs/specs/semconv/cloudevents/cloudevents-spans/)
define the trace structure to follow when instrumenting CloudEvents systems,
the scenarios in which this extension can be used, and how to use it to achieve said structure.
Middleware between the source and the sink of the event may add the Distributed Tracing Extension
if the source didn't include one, in order to provide the sink with the starting trace of the transmission.
An example with HTTP:
```bash
curl -X POST example/webhook.json \
  -H 'ce-id: 1' \
  -H 'ce-specversion: 1.0' \
  -H 'ce-type: example' \
  -H 'ce-source: http://localhost' \
  -H 'ce-traceparent: 00-0af7651916cd43dd8448eb211c80319c-b9c7c989f97918e1-01' \
  -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \
  -H 'tracestate: rojo=00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01,congo=lZWRzIHRoNhcm5hbCBwbGVhc3VyZS4'
```


@ -1,69 +0,0 @@
# Expiry Time Extension
This extension provides a mechanism to hint to [consumers](../spec.md#consumer)
or [intermediaries](../spec.md#intermediary) a timestamp after which an
[event](../spec.md#event) can be ignored.
In distributed systems with message delivery guarantees, events might be delivered
to a consumer some significant amount of time after an event has been sent.
In this situation, it might be desirable to ignore events that
are no longer relevant. The [`time` attribute](../spec.md#time) could be used
to handle this on the consumer side, but that can be tricky if the logic varies
depending on the event type or producer.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### expirytime
- Type: `Timestamp`
- Description: Timestamp indicating an event is no longer useful after the
indicated time.
- Constraints:
- REQUIRED
- SHOULD be equal to or later than the `time` attribute, if present
## Usage
When this extension is used, producers MUST set the value of the `expirytime`
attribute.
Intermediaries and consumers MAY ignore and discard an event that has an
`expirytime` at or before the current timestamp at the time of any checks.
Any system that directly or indirectly interacts with a consumer SHOULD NOT
make any assumptions on whether a consumer will
keep or discard an event based on this extension alone. The reasoning for this
is that time-keeping can be inaccurate between any two given systems.
Intermediaries MAY modify the `expirytime` attribute, however, they MUST NOT
remove it.
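For illustration, here is a sketch of a sensor reading that stops being useful
five minutes after it occurred, serialized as JSON; the event metadata and
payload are hypothetical:
```json
{
  "specversion": "1.0",
  "type": "com.example.sensor.reading",
  "source": "/sensors/tn-1234567",
  "id": "A234-1234-1234",
  "time": "2024-01-14T10:30:00Z",
  "expirytime": "2024-01-14T10:35:00Z",
  "data": {
    "temperature": 21.3
  }
}
```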
## Potential Scenarios
### Web dashboard for sensors
A series of sensors produce CloudEvents at regular intervals that vary per
sensor. Each sensor can pick an `expirytime` that suits its configured sample
rate. In the event that an intermediary delays delivery of events to a
consumer, older events can be skipped to avoid excessive processing or UI
updates upon resuming delivery.
### Jobs triggered by Continuous Integration
A Continuous Integration (CI) system uses CloudEvents to delegate a job to a
runner machine. The job has a set deadline and needs to complete before that time
has elapsed to be considered successful. The CI system can set the
`expirytime` to match the deadline. The job runner would ignore/reject the job
if the `expirytime` has elapsed, since the CI system has likely already
determined the job state.


@ -1,336 +0,0 @@
# OPC UA
This extension defines the mapping of [OPC UA](https://reference.opcfoundation.org/Core/Part1/v105/docs/)
[PubSub](https://reference.opcfoundation.org/Core/Part14/v105/docs/) datasets to
CloudEvents to allow seamless routing of OPC UA dataset messages via different
protocols. It therefore provides a recommendation for mapping the known REQUIRED
and OPTIONAL attributes (reusing other extensions where appropriate) and defines
its own extension attributes.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is used.
For example, an attribute being marked as "REQUIRED" does not mean it needs to
be in all CloudEvents, rather it needs to be included only when this extension
is being used.
## Mapping of REQUIRED Attributes
### id
MUST map to [Network Message
Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.3#Table163)
field `MessageId`.
### source
MUST either map to [Application
Description](https://reference.opcfoundation.org/Core/Part4/v104/docs/7.1) field
`applicationUri` of the OPC UA server or to a customer configured identifier
like a unified namespace path.
### type
MUST map to [Data Set Message Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164)
field `MessageType`.
## Mapping of OPTIONAL Attributes
### datacontenttype
MUST be `application/json` for OPC UA PubSub JSON payload and MAY be appended with
`+gzip` when the payload is gzip compressed.
### dataschema
OPC UA provides type information as part of PubSub metadata messages. For
non-OPC UA consumers, or when a different payload encoding like Avro is used, it
is REQUIRED to provide schema information (based on the metadata information) in
a separate format like [JSON schema](https://json-schema.org/specification) or
[Avro schema](https://avro.apache.org/docs/1.11.1/specification/) or others. For
those cases the attribute references the schema and is used for versioning.
### subject
For metadata, event and data messages (type one of `ua-metadata`, `ua-keyframe`,
`ua-deltaframe`, `ua-event`), `subject` MUST map to either [Data Set Message
Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164)
field `DataSetWriterId` or `DataSetWriterName`.
For event messages (type equal to `ua-event`), `subject` MUST be appended with
"/" and [Base Event Type](https://reference.opcfoundation.org/Core/Part5/v104/docs/6.4.2)
field `EventId`.
### time
MUST map to [Data Set Message
Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164)
field `Timestamp`.
## Mapping for other extensions
The following well-known extension attributes MUST be used for data messages and
event messages (type one of `ua-keyframe`, `ua-deltaframe`, `ua-event`).
### sequence
Attribute as defined by [sequence extensions](./sequence.md) MUST map to [Data
Set Message Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164)
field `SequenceNumber`.
### traceparent
Attribute as defined by the [distributed-tracing extension](./distributed-tracing.md),
used to allow tracing from event publisher to consumer.
### tracestate
Attribute as defined by [distributed-tracing extension](./distributed-tracing.md)
MAY be used to allow tracing from event publisher towards consumer.
### recordedtime
Attribute as defined by the [recordedtime extension](./recordedtime.md), used to
determine the latency between event publisher and consumer.
## Attributes
### opcuametadatamajorversion
- Type: `Integer`
- Description: Links dataset message to the current version of the metadata.
Contains value from `MajorVersion` of [Data Set Message Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164) field `MetaDataVersion`.
- Constraints
- OPTIONAL but MUST NOT be present if `dataschema` is used
### opcuametadataminorversion
- Type: `Integer`
- Description: Links dataset message to the current version of the metadata.
Contains value from `MinorVersion` of [Data Set Message
Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164)
field `MetaDataVersion`.
- Constraints
- OPTIONAL but MUST NOT be present if `dataschema` is used
### opcuastatus
- Type: `Integer`
- Description: Defines the overall status of the data set message, maps to
[Data Set Message Header](https://reference.opcfoundation.org/Core/Part14/v105/docs/7.2.5.4#Table164) field `Status`.
- Constraints
- OPTIONAL
- REQUIRED if status is not _Good_
- MAY be omitted if status is _Good_
## General Constraints
- OPC UA messages MUST use `binary-mode` of CloudEvents.
- OPC UA PubSub JSON messages MUST be encoded using non-reversible encoding as
the decoding information is contained in metadata messages or by schema
referenced via `dataschema` attribute.
- Payload of OPC UA PubSub JSON messages MUST NOT contain Network Message Header
and Data Set Header as that information is mapped into CloudEvents attributes.
- OPC UA PubSub JSON messages MUST contain exactly one dataset message.
## Examples
### Metadata message
The metadata message helps Cloud applications to understand the semantics and
structure of dataset messages.
```text
------------------ PUBLISH -------------------
Topic Name: opcua/json/DataSetMetaData/publisher-on-ot-edge
Content Type: application/json; charset=utf-8
------------- User Properties ----------------
specversion: 1.0
type: ua-metadata
time: 2024-03-28T21:56:24Z
id: 1234-1234-1234
source: urn:factory:aggregationserver:opcua
datacontenttype: application/json; charset=utf-8
subject: energy-consumption-asset
.... further attributes ...
------------------ payload -------------------
{
... application data (OPC UA PubSub metadata) ...
"ConfigurationVersion": {
"MajorVersion": 672338910,
"MinorVersion": 672341762
}
...
}
-----------------------------------------------
```
### Telemetry message
The telemetry or data messages contain values of all OPC UA nodes that had
changed in a given period of time (`ua-deltaframe`) or contain values for all
OPC UA nodes that were monitored (`ua-keyframe`).
The complete list of monitored OPC UA nodes as well as the related type
information is defined in the metadata message. The attributes
`opcuametadatamajorversion` and `opcuametadataminorversion` are used to
reference the correct metadata message. The `ua-deltaframe` messages will be
used for hot and/or cold path processing, and `ua-keyframe` messages can
additionally be used to update last-known-value tables.
```text
------------------ PUBLISH -------------------
Topic Name: opcua/json/DataSetMessage/publisher-on-ot-edge
Content Type: application/json; charset=utf-8
------------- User Properties ----------------
specversion: 1.0
type: ua-deltaframe
time: 2024-03-28T21:56:42Z
id: 1235-1235-1235
source: urn:factory:aggregationserver:opcua
datacontenttype: application/json; charset=utf-8
subject: energy-consumption-asset
sequence: 7
traceparent: 4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00000011
recordedtime: 2024-03-28T21:56:43Z
opcuametadatamajorversion: 672338910
opcuametadataminorversion: 672341762
.... further attributes ...
------------------ payload -------------------
{
... application data
(OPC UA PubSub JSON single dataset Message)...
}
-----------------------------------------------
```
#### OPC UA PubSub JSON single dataset Message
Using CloudEvents and modeling the OPC UA PubSub header information as CloudEvent
attributes enables integration into various systems (independent of the protocols
used) and simplifies the payload structure.
```text
{
  "IsRunning": {
    "Value": true,
    "SourceTimestamp": "2024-03-29T07:31:19.555Z"
  },
  "EnergyConsumption": {
    "Value": 31,
    "SourceTimestamp": "2024-03-29T07:31:37.546Z",
    "StatusCode": {
      "Code": 1073741824,
      "Symbol": "Uncertain"
    }
  },
  "EnergyPeak": {
    "Value": 54,
    "SourceTimestamp": "2024-03-29T07:31:06.978Z"
  },
  "EnergyLow": {
    "Value": 22,
    "SourceTimestamp": "2024-03-29T07:31:17.582Z"
  }
}
```
### Event message
The event message will contain a single event and the identifier of this event is
added to the `subject` to allow routing it into different systems without parsing
the payload. Events are routed for example in systems like Manufacturing Execution
Systems (MES), Supervisory Control and Data Acquisition systems (SCADA),
Alerting Systems or Operation Technology Operator Terminals (HMI Clients) and
also in hot and/or cold path processing. The attributes
`opcuametadatamajorversion` and `opcuametadataminorversion` are used to
reference the correct metadata message.
```text
------------------ PUBLISH -------------------
Topic Name: opcua/json/DataSetMessage/publisher-on-ot-edge
Content Type: application/json; charset=utf-8
------------- User Properties ----------------
specversion: 1.0
type: ua-event
time: 2024-03-28T21:57:01Z
id: 1236-1237-1238
source: urn:factory:aggregationserver:opcua
datacontenttype: application/json; charset=utf-8
subject: energy-consumption-asset/444321
sequence: 18
traceparent: caffef3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00000011
recordedtime: 2024-03-28T21:57:01Z
opcuametadatamajorversion: 672338910
opcuametadataminorversion: 672341762
.... further attributes ...
------------------ payload -------------------
{
... application data
(OPC UA PubSub JSON Single Event Message)...
}
-----------------------------------------------
```
### Telemetry message with different Encoding
One major benefit of CloudEvents for OPC UA is that it is possible to support
other encodings and external schemas, while keeping the same OPC UA information for
routing.
The example below uses an Avro binary encoded payload, with the corresponding schema
referenced by `dataschema`. The `source` is defined by a customer-defined
hierarchical path.
```text
------------------ PUBLISH -------------------
Topic Name: bottling-company/amsterdam/FillingArea1/FillingLine9/Cell1/Conveyor
Content Type: application/avro
------------- User Properties ----------------
specversion: 1.0
type: ua-keyframe
time: 2024-03-28T23:59:59Z
id: 6235-7235-8235
source: bottling-company/amsterdam/FillingArea1/FillingLine9/Cell1/Conveyor
datacontenttype: application/avro
subject: energy-consumption-asset
dataschema: http://example.com/schemas/energy-consumption-asset/v1.8
sequence: 3141
traceparent: 22222f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00000011
recordedtime: 2024-03-28T23:59:59Z
.... further attributes ...
------------------ payload -------------------
... application data
(OPC UA PubSub Single DataSet Message as AVRO binary)...
-----------------------------------------------
```


@ -1,40 +0,0 @@
# Partitioning extension
This extension defines an attribute for use by message brokers and their clients
that support partitioning of events, typically for the purpose of scaling.
Often in large scale systems, during times of heavy load, events being received
need to be partitioned into multiple buckets so that each bucket can be
separately processed in order for the system to manage the incoming load. A
partitioning key can be used to determine which bucket each event goes into. The
entity sending the events can ensure that events that need to be placed into the
same bucket are grouped accordingly by using the same partition key on those events.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### partitionkey
- Type: `String`
- Description: A partition key for the event, typically for the purposes of
defining a causal relationship/grouping between multiple events. In cases
where the CloudEvent is delivered to an event consumer via multiple hops,
it is possible that the value of this attribute might change, or even be
removed, due to protocol semantics or business processing logic within
each hop.
- Examples:
- The ID of the entity that the event is associated with
- Constraints:
- REQUIRED
- MUST be a non-empty string
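For illustration, here is a sketch of an event that uses the ID of the
associated entity as its partition key, serialized as JSON; the event metadata
and key value are hypothetical:
```json
{
  "specversion": "1.0",
  "type": "com.example.order.updated",
  "source": "https://example.com/orders",
  "id": "A234-1234-1234",
  "partitionkey": "order-12345"
}
```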


@ -1,81 +0,0 @@
# Recorded Time Extension
This extension defines an attribute that represents the time when an
[_occurrence_](../spec.md#occurrence)
was recorded in a particular
[_event_](../spec.md#event),
which is the time when the CloudEvent was created by a producer.
This attribute is distinct from the [`time`
attribute](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#time),
which, according to the CloudEvents specification, SHOULD be the time when the
occurrence happened, if it can be determined.
This attribute makes it possible to represent
[bitemporal](https://en.wikipedia.org/wiki/Bitemporal_modeling) data with
CloudEvents so that, for every event, both of the following times can be known
by consumers:
- _Occurrence time_: timestamp of when the occurrence recorded in the event
happened, which corresponds to the `time` attribute.
- _Recorded time_: the timestamp of when the occurrence was recorded in a
specific CloudEvent instance, which is represented by this extension.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### recordedtime
- Type: `Timestamp`
- Description: Timestamp of when the occurrence was recorded in this CloudEvent,
i.e. when the CloudEvent was created by a producer.
- Constraints:
- REQUIRED
- If present, MUST adhere to the format specified in
[RFC 3339](https://tools.ietf.org/html/rfc3339)
- SHOULD be equal to or later than the _occurrence time_.
## Usage
When this extension is used, producers MUST set the value of the `recordedtime`
attribute to the timestamp of when they create the owning CloudEvent.
If the same occurrence MUST be recorded differently, or the event data or
attributes of a previous record of the occurrence MUST be amended or redacted,
then the new CloudEvent with the necessary changes SHOULD have a different
`recordedtime` attribute value than the previous record of the occurrence.
Intermediaries MUST NOT change the value of the `recordedtime` attribute.
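For illustration, here is a sketch of an event whose occurrence happened two
seconds before the producer recorded it, serialized as JSON; the event metadata
is hypothetical:
```json
{
  "specversion": "1.0",
  "type": "com.example.door.opened",
  "source": "/sensors/door-1",
  "id": "A234-1234-1234",
  "time": "2024-01-14T10:30:00Z",
  "recordedtime": "2024-01-14T10:30:02Z"
}
```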
## Use cases
Examples of when an occurrence might need to be recorded differently are:
- When incompatible changes to the event data schema are made, and there are
systems that can only process the new schema.
- When a previous record contains incorrect information.
- When a previous record contains personal information that can no longer be
kept because of regulatory or statutory reasons and needs to be redacted.
Having bitemporal data makes it easier to get reproducible datasets for
analytics and data science, as the datasets can be created by placing
constraints on both the `time` and `recordedtime` attributes of events.
Knowing when an occurrence was recorded in a particular event also makes it
possible to determine latency between event producers and consumers. It also
makes it possible to do operations which are sensitive to the time when an event
was recorded, such as capturing events into time-intervalled files.
The recorded time also makes it easier to differentiate different records of the
same occurrence in analytical data stores.


@ -1,49 +0,0 @@
# Sampled Rate Extension
There are many cases in an Event's life when a system (either the system
creating the event or a system transporting the event) might wish to only emit a
portion of the events that actually happened. In a high throughput system where
creating the event is costly, a system might wish to only create an event for
1/100 of the times that something happened. Additionally, during the
transmission of an event from the source to the eventual recipient, any step
along the way might choose to only pass along a fraction of the events it
receives.
In order for the system receiving the event to understand what is actually
happening in the system that generated the event, information about how many
similar events happened would need to be included in the event itself. This
field provides a place for a system generating an event to indicate that the
emitted event represents a given number of other similar events. It also
provides a place for intermediary transport systems to modify the event when
they impose additional sampling.
This specification does not mandate which component (e.g. event source, event
producer) is responsible for doing the sampling. Rather, it only specifies that
if sampling is done, the attributes defined below are where the metadata
appears within the CloudEvent.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### sampledrate
- Type: `Integer`
- Description: The rate at which this event has already been sampled. Represents
the number of similar events that happened but were not sent plus this event.
For example, if a system sees 30 occurrences and emits a single event,
`sampledrate` would be 30 (29 not sent and 1 sent). A value of `1` is the
equivalent of this extension not being used at all.
- Constraints
- REQUIRED
- The rate MUST be greater than zero.
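Matching the description above, an event standing in for 30 occurrences (29 not
sent plus this one) might look like this when serialized as JSON; the event
metadata is hypothetical:
```json
{
  "specversion": "1.0",
  "type": "com.example.cache.miss",
  "source": "https://example.com/cache",
  "id": "A234-1234-1234",
  "sampledrate": 30
}
```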


@ -1,46 +0,0 @@
# Sequence
This extension defines an attribute that can be included within a CloudEvent
to describe the position of an event in the ordered sequence of events produced
by a unique event source.
The `sequence` attribute represents the value of this event's order in the
stream of events. This specification does not define the meaning or set of
valid values of this attribute; rather, it only mandates that the value be
a string that can be lexicographically compared to other `sequence` values
to determine which one comes first. The `sequence` with a lower lexicographical
value comes first.
Producers and consumers are free to define an out-of-band agreement on the
semantic meaning, or valid values, for the attribute.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
### sequence
- Type: `String`
- Description: Value expressing the relative order of the event. This enables
interpretation of data supersedence.
- Constraints
- REQUIRED
- MUST be a non-empty lexicographically-orderable string
- RECOMMENDED as monotonically increasing and contiguous
The entity creating the CloudEvent MUST ensure that the `sequence` values
used are formatted such that across the entire set of values used a receiver
can determine the order of the events via a simple string-compare type of
operation. This means that it might be necessary for the value to include
some kind of padding (e.g. leading zeros in the case of the value being the
string representation of an integer).
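For illustration, here is a sketch of an event whose `sequence` is a zero-padded
integer, serialized as JSON; the event metadata and chosen padding width are
hypothetical. Padding to a fixed width ensures, for example, that "000000042"
sorts before "000000100" under a plain string compare:
```json
{
  "specversion": "1.0",
  "type": "com.example.reading.recorded",
  "source": "/sensors/tn-1234567",
  "id": "A234-1234-1234",
  "sequence": "000000042"
}
```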


@ -1,80 +0,0 @@
# Severity Extension
## Abstract
This extension defines attributes that MAY be included within a CloudEvent
to describe the "severity" or "level" of an event in relation to other events.
Often systems produce events in the form of logs, and these types of events
usually share a common concept of "log-level". This extension aims to provide a
standard way of describing this property in a language-agnostic form.
Sharing a common way to describe severity of events allows for better
monitoring systems, tooling and general log consumption.
This extension is heavily inspired by the
[OpenTelemetry Severity Fields](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#severity-fields)
and is intended to interoperate with them.
## Notational Conventions
As with the main [CloudEvents specification](../spec.md), the key words "MUST",
"MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT",
"RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as
described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
However, the scope of these key words is limited to when this extension is
used. For example, an attribute being marked as "REQUIRED" does not mean
it needs to be in all CloudEvents, rather it needs to be included only when
this extension is being used.
## Attributes
When both attributes are used, all `severitytext` values which MAY be produced
in the context of a `source` SHOULD be in a
[one-to-one and onto](https://en.wikipedia.org/wiki/Bijection) relationship
with all `severitynumber` values which MAY be produced by the same `source`.
### severitytext
- Type: `String`
- Description: Human readable text representation of the event severity (also
known as log level name).
This is the original string representation of the severity as it is known
at the source. If this field is missing and `severitynumber` is present then
the [short name](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#displaying-severity)
that corresponds to the `severitynumber` MAY be used as a substitution.
- Constraints
- OPTIONAL
- if present, MUST be a non-empty string
- SHOULD be uppercase
- RECOMMENDED values are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, and
`FATAL`, but others MAY be used.
### severitynumber
- Type: `Integer`
- Description: Numerical representation of the event severity (also known as
log level number), normalized to values described in this document.
Severity of all values MUST be numerically ascending from least-severe
to most-severe. An event with a lower numerical value (such as a debug event)
MUST be less severe than an event with a higher numerical value (such as
an error event).
See OpenTelemetry for [exact severity number meanings](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#field-severitynumber)
- Constraints
- REQUIRED
  - MUST NOT be negative
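For illustration, here is a sketch of an error-level log event, serialized as
JSON, assuming the OpenTelemetry mapping in which `ERROR` corresponds to a
`severitynumber` of 17; the event metadata and payload are hypothetical:
```json
{
  "specversion": "1.0",
  "type": "com.example.worker.log",
  "source": "https://example.com/workers/7",
  "id": "A234-1234-1234",
  "severitytext": "ERROR",
  "severitynumber": 17,
  "data": {
    "message": "Failed to connect to database"
  }
}
```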
## References
- [Mapping of SeverityNumber](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#mapping-of-severitynumber)
- [Reverse Mapping](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#reverse-mapping)
- [Error Semantics](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#error-semantics)
- [Displaying Severity](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#displaying-severity)
- [Comparing Severity](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#comparing-severity)
- [Mapping of existing log formats to severity levels](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md#appendix-a-example-mappings)


@ -1,187 +0,0 @@
# Avro Event Format for CloudEvents - Version 1.0.3-wip
## Abstract
The Avro Format for CloudEvents defines how events are expressed in
the [Avro 1.9.0 Specification][avro-spec].
## Table of Contents
1. [Introduction](#1-introduction)
2. [Attributes](#2-attributes)
3. [Data](#3-data)
4. [Transport](#4-transport)
5. [Examples](#5-examples)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how
CloudEvents are to be represented in [Avro 1.9.0][avro-spec].
The [Attributes](#2-attributes) section describes the naming conventions and
data type mappings for CloudEvents attributes for use as Avro message
properties.
This specification does not define an envelope format. The Avro type system's
intent is primarily to provide a consistent type system for Avro itself and not
for message payloads.
The Avro event format does not currently define a batch mode format.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
## 2. Attributes
This section defines how CloudEvents attributes are mapped to the Avro
type-system. This specification explicitly maps each attribute.
### 2.1 Type System Mapping
The CloudEvents type system MUST be mapped to Avro types as follows.
| CloudEvents | Avro |
|---------------|------------------------------------------------------------------------|
| Boolean | [boolean][avro-primitives] |
| Integer | [int][avro-primitives] |
| String | [string][avro-primitives] |
| Binary | [bytes][avro-primitives] |
| URI | [string][avro-primitives] following [RFC 3986 §4.3][rfc3986-section43] |
| URI-reference | [string][avro-primitives] following [RFC 3986 §4.1][rfc3986-section41] |
| Timestamp | [string][avro-primitives] following [RFC 3339][rfc3339] (ISO 8601) |
Extension specifications MAY define secondary mapping rules for the values of
attributes they define, but MUST also include the previously defined primary
mapping.
### 2.2 OPTIONAL Attributes
CloudEvents Spec defines OPTIONAL attributes. The Avro format defines that these
fields MUST use the `null` type and the actual type through the
[union][avro-unions].
Example:
```json
["null", "string"]
```
### 2.3 Definition
Users of Avro MUST use a message whose binary encoding is identical to the one
described by the [CloudEvent Avro Schema](cloudevents.avsc):
```json
{
"namespace": "io.cloudevents",
"type": "record",
"name": "CloudEvent",
"version": "1.0",
"doc": "Avro Event Format for CloudEvents",
"fields": [
{
"name": "attribute",
"type": {
"type": "map",
"values": ["null", "boolean", "int", "string", "bytes"]
}
},
{
"name": "data",
"type": [
"bytes",
"null",
"boolean",
{
"type": "map",
"values": [
"null",
"boolean",
{
"type": "record",
"name": "CloudEventData",
"doc": "Representation of a JSON Value",
"fields": [
{
"name": "value",
"type": {
"type": "map",
"values": [
"null",
"boolean",
{ "type": "map", "values": "CloudEventData" },
{ "type": "array", "items": "CloudEventData" },
"double",
"string"
]
}
}
]
},
"double",
"string"
]
},
{ "type": "array", "items": "CloudEventData" },
"double",
"string"
]
}
]
}
```
## 3. Data
Before encoding, the Avro serializer MUST first determine the runtime data type
of the content. This can be determined by examining the data for invalid UTF-8
sequences or by consulting the `datacontenttype` attribute.
If the implementation determines that the type of the data is binary, the value
MUST be stored in the `data` field using the `bytes` type.
For other types (non-binary data without a `datacontenttype` attribute), the
implementation MUST translate the data value into a representation of the JSON
value using the union types described for the `data` record.
## 4. Transport
Transports that support content identification MUST use the following designation:
```text
application/cloudevents+avro
```
## 5. Examples
The following table shows exemplary mappings:
| CloudEvents | Type | Exemplary Avro Value |
|-----------------|--------|-------------------------------------------|
| id | string | `7a0dc520-c870-4193c8` |
| source | string | `https://github.com/cloudevents` |
| specversion | string | `1.0` |
| type | string | `com.example.object.deleted.v2` |
| datacontenttype | string | `application/octet-stream` |
| dataschema | string | `http://registry.com/schema/v1/much.json` |
| subject | string | `mynewfile.jpg` |
| time | string | `2019-06-05T23:45:00Z` |
| data | bytes | `[bytes]` |
## References
- [Avro 1.9.0][avro-spec] Apache Avro™ 1.9.0 Specification
[avro-spec]: http://avro.apache.org/docs/1.9.0/spec.html
[avro-primitives]: http://avro.apache.org/docs/1.9.0/spec.html#schema_primitive
[avro-logical-types]: http://avro.apache.org/docs/1.9.0/spec.html#Logical+Types
[avro-unions]: http://avro.apache.org/docs/1.9.0/spec.html#Unions
[ce]: ../spec.md
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3986-section41]: https://tools.ietf.org/html/rfc3986#section-4.1
[rfc3986-section43]: https://tools.ietf.org/html/rfc3986#section-4.3
[rfc3339]: https://tools.ietf.org/html/rfc3339


@ -1,64 +0,0 @@
{
"namespace":"io.cloudevents",
"type":"record",
"name":"AvroCloudEvent",
"version":"1.0",
"doc":"Avro Event Format for CloudEvents",
"fields":[
{
"name":"attribute",
"type":{
"type":"map",
"values":[
"null",
"boolean",
"int",
"string",
"bytes"
]
}
},
{
"name": "data",
"type": [
"bytes",
"null",
"boolean",
{
"type": "map",
"values": [
"null",
"boolean",
{
"type": "record",
"name": "AvroCloudEventData",
"doc": "Representation of a JSON Value",
"fields": [
{
"name": "value",
"type": {
"type": "map",
"values": [
"null",
"boolean",
{ "type": "map", "values": "AvroCloudEventData" },
{ "type": "array", "items": "AvroCloudEventData" },
"double",
"string"
]
}
}
]
},
"double",
"string"
]
},
{ "type": "array", "items": "AvroCloudEventData" },
"double",
"string"
]
}
]
}


@ -1,128 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "CloudEvents Specification JSON Schema",
"type": "object",
"properties": {
"id": {
"description": "Identifies the event.",
"$ref": "#/definitions/iddef",
"examples": [
"A234-1234-1234"
]
},
"source": {
"description": "Identifies the context in which an event happened.",
"$ref": "#/definitions/sourcedef",
"examples" : [
"https://github.com/cloudevents",
"mailto:cncf-wg-serverless@lists.cncf.io",
"urn:uuid:6e8bc430-9c3a-11d9-9669-0800200c9a66",
"cloudevents/spec/pull/123",
"/sensors/tn-1234567/alerts",
"1-555-123-4567"
]
},
"specversion": {
"description": "The version of the CloudEvents specification which the event uses.",
"$ref": "#/definitions/specversiondef",
"examples": [
"1.0"
]
},
"type": {
"description": "Describes the type of event related to the originating occurrence.",
"$ref": "#/definitions/typedef",
"examples" : [
"com.github.pull_request.opened",
"com.example.object.deleted.v2"
]
},
"datacontenttype": {
"description": "Content type of the data value. Must adhere to RFC 2046 format.",
"$ref": "#/definitions/datacontenttypedef",
"examples": [
"text/xml",
"application/json",
"image/png",
"multipart/form-data"
]
},
"dataschema": {
"description": "Identifies the schema that data adheres to.",
"$ref": "#/definitions/dataschemadef"
},
"subject": {
"description": "Describes the subject of the event in the context of the event producer (identified by source).",
"$ref": "#/definitions/subjectdef",
"examples": [
"mynewfile.jpg"
]
},
"time": {
"description": "Timestamp of when the occurrence happened. Must adhere to RFC 3339.",
"$ref": "#/definitions/timedef",
"examples": [
"2018-04-05T17:31:00Z"
]
},
"data": {
"description": "The event payload.",
"$ref": "#/definitions/datadef",
"examples": [
"<much wow=\"xml\"/>"
]
},
"data_base64": {
"description": "Base64 encoded event payload. Must adhere to RFC4648.",
"$ref": "#/definitions/data_base64def",
"examples": [
"Zm9vYg=="
]
}
},
"required": ["id", "source", "specversion", "type"],
"definitions": {
"iddef": {
"type": "string",
"minLength": 1
},
"sourcedef": {
"type": "string",
"format": "uri-reference",
"minLength": 1
},
"specversiondef": {
"type": "string",
"minLength": 1
},
"typedef": {
"type": "string",
"minLength": 1
},
"datacontenttypedef": {
"type": ["string", "null"],
"minLength": 1
},
"dataschemadef": {
"type": ["string", "null"],
"format": "uri",
"minLength": 1
},
"subjectdef": {
"type": ["string", "null"],
"minLength": 1
},
"timedef": {
"type": ["string", "null"],
"format": "date-time",
"minLength": 1
},
"datadef": {
"type": ["object", "string", "number", "array", "boolean", "null"]
},
"data_base64def": {
"type": ["string", "null"],
"contentEncoding": "base64"
}
}
}


@ -1,69 +0,0 @@
/**
 * CloudEvent Protobuf Format
 *
 * - Required context attributes are explicitly represented.
 * - Optional and Extension context attributes are carried in a map structure.
 * - Data may be represented as binary, text, or protobuf messages.
 */

syntax = "proto3";

package io.cloudevents.v1;

import "google/protobuf/any.proto";
import "google/protobuf/timestamp.proto";

option csharp_namespace = "CloudNative.CloudEvents.V1";
option go_package = "cloudevents.io/genproto/v1";
option java_package = "io.cloudevents.v1.proto";
option java_multiple_files = true;
option php_namespace = "Io\\CloudEvents\\V1\\Proto";
option ruby_package = "Io::CloudEvents::V1::Proto";

message CloudEvent {

  // -- CloudEvent Context Attributes

  // Required Attributes
  string id = 1;
  string source = 2; // URI-reference
  string spec_version = 3;
  string type = 4;

  // Optional & Extension Attributes
  map<string, CloudEventAttributeValue> attributes = 5;

  // -- CloudEvent Data (Bytes, Text, or Proto)
  oneof data {
    bytes binary_data = 6;
    string text_data = 7;
    google.protobuf.Any proto_data = 8;
  }

  /**
   * The CloudEvent specification defines
   * seven attribute value types...
   */
  message CloudEventAttributeValue {
    oneof attr {
      bool ce_boolean = 1;
      int32 ce_integer = 2;
      string ce_string = 3;
      bytes ce_bytes = 4;
      string ce_uri = 5;
      string ce_uri_ref = 6;
      google.protobuf.Timestamp ce_timestamp = 7;
    }
  }
}

/**
 * CloudEvent Protobuf Batch Format
 *
 */
message CloudEventBatch {
  repeated CloudEvent events = 1;
}

View File

@ -1,517 +0,0 @@
# JSON Event Format for CloudEvents - Version 1.0.3-wip
## Abstract
The JSON Format for CloudEvents defines how events are expressed in JavaScript
Object Notation (JSON) Data Interchange Format ([RFC8259][rfc8259]).
## Table of Contents
1. [Introduction](#1-introduction)
2. [Attributes](#2-attributes)
3. [Envelope](#3-envelope)
4. [JSON Batch Format](#4-json-batch-format)
5. [References](#5-references)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are to be represented in the
JavaScript Object Notation (JSON) Data Interchange Format ([RFC8259][rfc8259]).
The [Attributes](#2-attributes) section describes the naming conventions and
data type mappings for CloudEvents attributes.
The [Envelope](#3-envelope) section defines a JSON container for CloudEvents
attributes and an associated media type.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
## 2. Attributes
This section defines how CloudEvents attributes are mapped to JSON. This
specification does not explicitly map each attribute, but provides a generic
mapping model that applies to all current and future CloudEvents attributes,
including extensions.
For clarity, extension attributes are serialized using the same rules as core
attributes, including their syntax and placement within the JSON object. In
particular, extensions MUST be serialized as top-level JSON properties. There
were many reasons for this design decision; they are covered in more detail in
the [Primer](../primer.md#json-extensions).
### 2.1. Base Type System
The core [CloudEvents specification][ce] defines a minimal abstract type system,
which this mapping leans on.
### 2.2. Type System Mapping
The [CloudEvents type system][ce-types] MUST be mapped to JSON types as follows,
with exceptions noted below.
| CloudEvents | JSON |
| ------------- | -------------------------------------------------------------- |
| Boolean | [boolean][json-bool] |
| Integer       | [number][json-number]; only the integer component, optionally prefixed with a minus sign, is permitted |
| String | [string][json-string] |
| Binary | [string][json-string], [Base64-encoded][base64] binary |
| URI | [string][json-string] following [RFC 3986][rfc3986] |
| URI-reference | [string][json-string] following [RFC 3986][rfc3986] |
| Timestamp | [string][json-string] following [RFC 3339][rfc3339] (ISO 8601) |
Unset attributes MAY be encoded to the JSON value of `null`. When decoding
attributes, a `null` value MUST be treated as the equivalent of unset or
omitted.
Extension specifications MAY define secondary mapping rules for the values of
attributes they define, but MUST also include the previously defined primary
mapping.
For instance, the attribute value might be a data structure defined in a
standard outside of CloudEvents, with a formal JSON mapping, and there might be
risk of translation errors or information loss when the original format is not
preserved.
An extension specification that defines a secondary mapping rule for JSON, and
any revision of such a specification, MUST also define explicit mapping rules
for all other event formats that are part of the CloudEvents core at the time of
the submission or revision.
If necessary, the CloudEvents type can be determined by inference using the rules
from the mapping table, whereby the only potentially ambiguous JSON data type is
`string`. The value is compatible with the respective CloudEvents type when the
mapping rules are fulfilled.
### 2.3. Examples
The following table shows exemplary attribute mappings:
| CloudEvents | Type | Exemplary JSON Value |
| --------------- | ---------------- | ----------------------- |
| type | String | "com.example.someevent" |
| specversion | String | "1.0" |
| source | URI-reference | "/mycontext" |
| subject | String | "larger-context" |
| subject | String (null) | null |
| id | String | "1234-1234-1234" |
| time | Timestamp | "2018-04-05T17:31:00Z" |
| time | Timestamp (null) | null |
| datacontenttype | String | "application/json" |
### 2.4. JSONSchema Validation
The CloudEvents [JSONSchema](http://json-schema.org) for the spec is located
[here](cloudevents.json) and contains the definitions for validating events in
JSON.
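For illustration, an event instance can be checked against this schema with any
JSON Schema validator. The sketch below uses the third-party
everit-org/json-schema library purely as an example; the library choice and the
schema's file location are assumptions, not part of this specification:
```java
import java.nio.file.Files;
import java.nio.file.Paths;

import org.everit.json.schema.Schema;
import org.everit.json.schema.ValidationException;
import org.everit.json.schema.loader.SchemaLoader;
import org.json.JSONObject;

public class EventValidation {
    public static void main(String[] args) throws Exception {
        // Load the CloudEvents JSON Schema (assumed to be on disk as cloudevents.json).
        JSONObject rawSchema = new JSONObject(Files.readString(Paths.get("cloudevents.json")));
        Schema schema = SchemaLoader.load(rawSchema);

        // A minimal event carrying only the REQUIRED attributes.
        String event = "{\"specversion\":\"1.0\",\"type\":\"com.example.someevent\","
                + "\"source\":\"/mycontext\",\"id\":\"A234-1234-1234\"}";
        try {
            schema.validate(new JSONObject(event)); // throws on schema violation
            System.out.println("valid CloudEvent");
        } catch (ValidationException e) {
            System.out.println("invalid: " + e.getAllMessages());
        }
    }
}
```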
## 3. Envelope
Each CloudEvents event can be wholly represented as a JSON object.
Such a representation MUST use the media type `application/cloudevents+json`.
All REQUIRED attributes of the given event, and all of its OPTIONAL attributes
that are not omitted, MUST become members of the JSON object, with the
respective JSON object member name matching the attribute name, and the
member's type and value being mapped using the
[type system mapping](#22-type-system-mapping).
OPTIONAL attributes that are not omitted MAY be represented as a `null` JSON
value.
### 3.1. Handling of "data"
The JSON representation of the event "data" payload is determined by the runtime
type of the `data` content and the value of the [`datacontenttype`
attribute][datacontenttype].
#### 3.1.1. Payload Serialization
Before taking action, a JSON serializer MUST first determine the runtime data
type of the `data` content.
If the implementation determines that the type of data is `Binary`, the value
MUST be represented as a [JSON string][json-string] expression containing the
[Base64][base64] encoded binary value, and use the member name `data_base64` to
store it inside the JSON representation. If present, the `datacontenttype` MUST
reflect the format of the original binary data. If a `datacontenttype` value is
not provided, no assumptions can be made as to the format of the data and
therefore the `datacontenttype` attribute MUST NOT be present in the resulting
CloudEvent.
Note: Definition of `data_base64` is a JSON-specific marshaling rule and not
part of the formal CloudEvents context attributes definition. This means the
rules governing CloudEvent attribute names do not apply to this JSON member.
If the type of data is not `Binary`, the implementation will next determine
whether the value of the `datacontenttype` attribute declares the `data` to
contain JSON-formatted content. Such a content type is defined as one having a
[media subtype][rfc2045-sec5] equal to `json` or ending with a `+json` format
extension. That is, a `datacontenttype` declares JSON-formatted content if its
media type, when stripped of parameters, has the form `*/json` or `*/*+json`.
If the `datacontenttype` is unspecified, processing SHOULD proceed as if the
`datacontenttype` had been specified explicitly as `application/json`.
If the `datacontenttype` declares the data to contain JSON-formatted content, a
JSON serializer MUST translate the data value to a [JSON value][json-value], and
use the member name `data` to store it inside the JSON representation. The data
value MUST be stored directly as a JSON value, rather than as an encoded JSON
document represented as a string. An implementation MAY fail to serialize the
event if it is unable to translate the runtime value to a JSON value.
Otherwise, if the `datacontenttype` does not declare JSON-formatted data
content, a JSON serializer MUST store a string representation of the data value,
properly encoded according to the `datacontenttype`, in the `data` member of the
JSON representation. An implementation MAY fail to serialize the event if it is
unable to represent the runtime value as a properly encoded string.
It follows that the presence of the `data` and `data_base64` members is
mutually exclusive in a JSON-serialized CloudEvent.
Furthermore, unlike attributes, for which value types are restricted by the
[type-system mapping](#22-type-system-mapping), the `data` member
[JSON value][json-value] is unrestricted, and MAY contain any valid JSON if the
`datacontenttype` declares the data to be JSON-formatted. In particular, the
`data` member MAY have a value of `null`, representing an explicit `null`
payload as distinct from the absence of the `data` member.
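To make the serialization rules above concrete, the following Java sketch
(using the Jackson library; the `writeData` helper and its surrounding class
are hypothetical, not defined by this specification) chooses between the
`data` and `data_base64` members:
```java
import java.util.Base64;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class DataSerializer {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // True for media types of the form */json or */*+json, parameters stripped.
    static boolean isJsonContentType(String contentType) {
        String mediaType = contentType.split(";", 2)[0].trim();
        String subtype = mediaType.substring(mediaType.indexOf('/') + 1);
        return subtype.equals("json") || subtype.endsWith("+json");
    }

    // envelope: JSON object already holding the context attributes.
    // datacontenttype may be null, in which case application/json is assumed.
    static void writeData(ObjectNode envelope, Object data, String datacontenttype) {
        if (data instanceof byte[]) {
            // Binary data: Base64-encode into the "data_base64" member.
            envelope.put("data_base64", Base64.getEncoder().encodeToString((byte[]) data));
        } else if (datacontenttype == null || isJsonContentType(datacontenttype)) {
            // JSON-formatted content: store directly as a JSON value, not as a string.
            envelope.set("data", MAPPER.valueToTree(data));
        } else {
            // Non-JSON content: store an encoded string representation (toString()
            // stands in for proper encoding according to the datacontenttype).
            envelope.put("data", data.toString());
        }
    }
}
```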
#### 3.1.2. Payload Deserialization
When a CloudEvent is deserialized from JSON, the presence of the `data_base64`
member clearly indicates that the value is Base64-encoded binary data, which
the deserializer MUST decode into a binary runtime data type.
MAY further interpret this binary data according to the `datacontenttype`. If
the `datacontenttype` attribute is absent, the decoding MUST NOT make an
assumption of JSON-formatted data (as described below for the `data` member).
When a `data` member is present, the decoding behavior is dependent on the value
of the `datacontenttype` attribute. If the `datacontenttype` declares the `data`
to contain JSON-formatted content (that is, its subtype is `json` or has a
`+json` format extension), then the `data` member MUST be treated directly as a
[JSON value][json-value] and decoded using an appropriate JSON type mapping for
the runtime. Note: if the `data` member is a string, a JSON deserializer MUST
interpret it directly as a [JSON String][json-string] value; it MUST NOT further
deserialize the string as a JSON document.
If the `datacontenttype` does not declare JSON-formatted data content, then the
`data` member SHOULD be treated as an encoded content string. An implementation
MAY fail to deserialize the event if the `data` member is not a string, or if it
is unable to interpret the `data` with the `datacontenttype`.
When a `data` member is present, if the `datacontenttype` attribute is absent, a
JSON deserializer SHOULD proceed as if it were set to `application/json`, which
declares the data to contain JSON-formatted content. Thus, it SHOULD treat the
`data` member directly as a [JSON value][json-value] as specified above.
Furthermore, if a JSON-formatted event with no `datacontenttype` attribute is
deserialized and then re-serialized using a different format or protocol
binding, the `datacontenttype` in the re-serialized event SHOULD be set
explicitly to the implied `application/json` content type to preserve the
semantics of the event.
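As a matching sketch for these decoding rules (again using Jackson, with
hypothetical helper names), a deserializer might recover the payload like
this:
```java
import java.util.Base64;

import com.fasterxml.jackson.databind.JsonNode;

public class DataDeserializer {
    // True for media types of the form */json or */*+json, parameters stripped.
    static boolean isJsonContentType(String contentType) {
        String mediaType = contentType.split(";", 2)[0].trim();
        String subtype = mediaType.substring(mediaType.indexOf('/') + 1);
        return subtype.equals("json") || subtype.endsWith("+json");
    }

    // Returns byte[] for data_base64, a JsonNode for JSON-formatted data,
    // or the encoded content string otherwise; null when no payload is present.
    static Object readData(JsonNode envelope, String datacontenttype) {
        JsonNode b64 = envelope.get("data_base64");
        if (b64 != null) {
            // Base64-encoded binary; no JSON formatting is inferred for the bytes.
            return Base64.getDecoder().decode(b64.asText());
        }
        JsonNode data = envelope.get("data");
        if (data == null) {
            return null; // no payload member at all
        }
        if (datacontenttype == null || isJsonContentType(datacontenttype)) {
            // JSON-formatted: use the JSON value directly; a JSON string stays a
            // string value and is not parsed again as a document.
            return data;
        }
        if (!data.isTextual()) {
            // Non-JSON content must arrive as an encoded content string.
            throw new IllegalArgumentException("non-JSON data must be a string");
        }
        return data.asText();
    }
}
```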
### 3.2. Examples
Example event with `Binary`-valued data:
```JSON
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext",
"id" : "A234-1234-1234",
"time" : "2018-04-05T17:31:00Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"datacontenttype" : "application/vnd.apache.thrift.binary",
"data_base64" : "... base64 encoded string ..."
}
```
The above example re-encoded using [HTTP Binary Content Mode][http-binary]:
```
ce-specversion: 1.0
ce-type: com.example.someevent
ce-source: /mycontext
ce-id: A234-1234-1234
ce-time: 2018-04-05T17:31:00Z
ce-comexampleextension1: value
ce-comexampleothervalue: 5
content-type: application/vnd.apache.thrift.binary
...raw binary bytes...
```
Example event with a serialized XML document as the `String` (i.e. non-`Binary`)
valued `data`, and an XML (i.e. non-JSON-formatted) content type:
```JSON
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext",
"id" : "B234-1234-1234",
"time" : "2018-04-05T17:31:00Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"unsetextension": null,
"datacontenttype" : "application/xml",
"data" : "<much wow=\"xml\"/>"
}
```
The above example re-encoded using [HTTP Binary Content Mode][http-binary]:
```
ce-specversion: 1.0
ce-type: com.example.someevent
ce-source: /mycontext
ce-id: B234-1234-1234
ce-time: 2018-04-05T17:31:00Z
ce-comexampleextension1: value
ce-comexampleothervalue: 5
content-type: application/xml
<much wow="xml"/>
```
Example event with [JSON Object][json-object]-valued `data` and a content type
declaring JSON-formatted data:
```JSON
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext",
"subject": null,
"id" : "C234-1234-1234",
"time" : "2018-04-05T17:31:00Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"datacontenttype" : "application/json",
"data" : {
"appinfoA" : "abc",
"appinfoB" : 123,
"appinfoC" : true
}
}
```
The above example re-encoded using [HTTP Binary Content Mode][http-binary]:
```
ce-specversion: 1.0
ce-type: com.example.someevent
ce-source: /mycontext
ce-id: C234-1234-1234
ce-time: 2018-04-05T17:31:00Z
ce-comexampleextension1: value
ce-comexampleothervalue: 5
content-type: application/json
{
"appinfoA" : "abc",
"appinfoB" : 123,
"appinfoC" : true
}
```
Example event with [JSON Number][json-number]-valued `data` and a content type
declaring JSON-formatted data:
```JSON
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext",
"subject": null,
"id" : "C234-1234-1234",
"time" : "2018-04-05T17:31:00Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"datacontenttype" : "application/json",
"data" : 1.5
}
```
The above example re-encoded using [HTTP Binary Content Mode][http-binary]:
```
ce-specversion: 1.0
ce-type: com.example.someevent
ce-source: /mycontext
ce-id: C234-1234-1234
ce-time: 2018-04-05T17:31:00Z
ce-comexampleextension1: value
ce-comexampleothervalue: 5
content-type: application/json
1.5
```
Example event with a literal JSON string as the non-`Binary`-valued `data` and
no `datacontenttype`. The data is implicitly treated as if the `datacontenttype`
were set to `application/json`:
```JSON
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext",
"subject": null,
"id" : "D234-1234-1234",
"time" : "2018-04-05T17:31:00Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"data" : "I'm just a string"
}
```
The above example re-encoded using [HTTP Binary Content Mode][http-binary].
Note that the Content Type is explicitly set to the `application/json` value
that was implicit in JSON format. Note also that the content is quoted to
indicate that it is a literal JSON string. If the quotes were missing, this
would have been an invalid event because the content could not be decoded as
`application/json`:
```
ce-specversion: 1.0
ce-type: com.example.someevent
ce-source: /mycontext
ce-id: D234-1234-1234
ce-time: 2018-04-05T17:31:00Z
ce-comexampleextension1: value
ce-comexampleothervalue: 5
content-type: application/json
"I'm just a string"
```
Example event with a `Binary`-valued `data_base64` but no `datacontenttype`.
Even though the data happens to be a valid JSON document when interpreted as
text, no content type is inferred.
```JSON
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext",
"id" : "D234-1234-1234",
"data_base64" : "eyAieHl6IjogMTIzIH0="
}
```
The above example re-encoded using [HTTP Binary Content Mode][http-binary].
Note that there is no `content-type` header present.
```
ce-specversion: 1.0
ce-type: com.example.someevent
ce-source: /mycontext
ce-id: D234-1234-1234
{ "xyz": 123 }
```
## 4. JSON Batch Format
In the _JSON Batch Format_ several CloudEvents are batched into a single JSON
document. The document is a JSON array filled with CloudEvents in the [JSON
Event format][json-format].
### 4.1. Mapping CloudEvents
This section defines how a batch of CloudEvents is mapped to JSON.
The outermost JSON element is a [JSON Array][json-array], which contains as
elements CloudEvents rendered in accordance with the [JSON event
format][json-format] specification.
### 4.2. Envelope
A JSON Batch of CloudEvents MUST use the media type
`application/cloudevents-batch+json`.
### 4.3. Examples
An example containing two CloudEvents: The first with `Binary`-valued data, the
second with JSON data.
```JSON
[
{
"specversion" : "1.0",
"type" : "com.example.someevent",
"source" : "/mycontext/4",
"id" : "B234-1234-1234",
"time" : "2018-04-05T17:31:00Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"datacontenttype" : "application/vnd.apache.thrift.binary",
"data_base64" : "... base64 encoded string ..."
},
{
"specversion" : "1.0",
"type" : "com.example.someotherevent",
"source" : "/mycontext/9",
"id" : "C234-1234-1234",
"time" : "2018-04-05T17:31:05Z",
"comexampleextension1" : "value",
"comexampleothervalue" : 5,
"datacontenttype" : "application/json",
"data" : {
"appinfoA" : "abc",
"appinfoB" : 123,
"appinfoC" : true
}
}
]
```
An example of an empty batch of CloudEvents (typically used in a response, but
also valid in a request):
```JSON
[]
```
## 5. References
- [RFC2046][rfc2046] Multipurpose Internet Mail Extensions (MIME) Part Two:
Media Types
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC4627][rfc4627] The application/json Media Type for JavaScript Object
Notation (JSON)
- [RFC4648][rfc4648] The Base16, Base32, and Base64 Data Encodings
- [RFC6839][rfc6839] Additional Media Type Structured Syntax Suffixes
- [RFC8259][rfc8259] The JavaScript Object Notation (JSON) Data Interchange
Format
[base64]: https://tools.ietf.org/html/rfc4648#section-4
[ce]: ../spec.md
[ce-types]: ../spec.md#type-system
[content-type]: https://tools.ietf.org/html/rfc7231#section-3.1.1.5
[datacontenttype]: ../spec.md#datacontenttype
[http-binary]: ../bindings/http-protocol-binding.md#31-binary-content-mode
[json-format]: ../formats/json-format.md
[json-geoseq]:https://www.iana.org/assignments/media-types/application/geo+json-seq
[json-object]: https://tools.ietf.org/html/rfc7159#section-4
[json-seq]: https://www.iana.org/assignments/media-types/application/json-seq
[json-bool]: https://tools.ietf.org/html/rfc7159#section-3
[json-number]: https://tools.ietf.org/html/rfc7159#section-6
[json-string]: https://tools.ietf.org/html/rfc7159#section-7
[json-value]: https://tools.ietf.org/html/rfc7159#section-3
[json-array]: https://tools.ietf.org/html/rfc7159#section-5
[rfc2045-sec5]: https://tools.ietf.org/html/rfc2045#section-5
[rfc2046]: https://tools.ietf.org/html/rfc2046
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3986]: https://tools.ietf.org/html/rfc3986
[rfc4627]: https://tools.ietf.org/html/rfc4627
[rfc4648]: https://tools.ietf.org/html/rfc4648
[rfc6839]: https://tools.ietf.org/html/rfc6839#section-3.1
[rfc8259]: https://tools.ietf.org/html/rfc8259
[rfc3339]: https://www.ietf.org/rfc/rfc3339.txt

View File

@ -1,247 +0,0 @@
# Protobuf Event Format for CloudEvents - Version 1.0.3-wip
## Abstract
[Protocol Buffers][proto-home] is a mechanism for marshalling structured data,
this document defines how CloudEvents are represented using [version 3][proto-3]
of that specification.
In this document the terms *Protocol Buffers*, *protobuf*, and *proto* are used
interchangeably.
## Table of Contents
1. [Introduction](#1-introduction)
2. [Attributes](#2-attributes)
3. [Data](#3-data)
4. [Transport](#4-transport)
5. [Batch Format](#5-batch-format)
6. [Examples](#6-examples)
## 1. Introduction
[CloudEvents][ce] is a standardized and protocol-agnostic definition of the
structure and metadata description of events. This specification defines how the
elements defined in the CloudEvents specification are represented using
a protobuf schema.
The [Attributes](#2-attributes) section describes the naming conventions and
data type mappings for CloudEvent attributes for use as protobuf message
properties.
The [Data](#3-data) section describes how the event payload is carried.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2 Content-Type
There is no official IANA *media-type* designation for protobuf; as such, this
specification uses 'application/protobuf' to identify such content.
## 2. Attributes
This section defines how CloudEvents attributes are represented in the protobuf
[schema][proto-schema].
### 2.1 Type System
The CloudEvents type system is mapped to protobuf as follows:
| CloudEvents | protobuf |
| ------------- | ---------------------------------------------------------------------- |
| Boolean | [boolean][proto-scalars] |
| Integer | [int32][proto-scalars] |
| String | [string][proto-scalars] |
| Binary | [bytes][proto-scalars] |
| URI           | [string][proto-scalars] following [RFC 3986 §4.3][rfc3986-section43] |
| URI-reference | [string][proto-scalars] following [RFC 3986 §4.1][rfc3986-section41] |
| Timestamp | [Timestamp][proto-timestamp] |
### 2.2 REQUIRED Attributes
REQUIRED attributes are represented explicitly as protobuf fields.
### 2.3 OPTIONAL Attributes & Extensions
OPTIONAL and extension attributes are represented using a map construct enabling
direct support of the CloudEvent [type system][ce-types].
```proto
map<string, CloudEventAttributeValue> attributes = 1;

message CloudEventAttributeValue {
  oneof attr {
    bool ce_boolean = 1;
    int32 ce_integer = 2;
    string ce_string = 3;
    bytes ce_bytes = 4;
    string ce_uri = 5;
    string ce_uri_ref = 6;
    google.protobuf.Timestamp ce_timestamp = 7;
  }
}
```
In this model an attribute's name is used as the map *key* and is
associated with its *value* stored in the appropriately typed property.
This approach allows attributes to be represented and transported
with no loss of *type* information.
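As a sketch, assuming Java classes generated from the CloudEvents
[proto schema][proto-schema] (standard protobuf codegen produces the
`putAttributes` and `setCeTimestamp` methods used here), a typed `time`
attribute could be attached as follows:
```java
import com.google.protobuf.Timestamp;

import io.cloudevents.v1.proto.CloudEvent;
import io.cloudevents.v1.proto.CloudEvent.CloudEventAttributeValue;

public class AttributeMapExample {
    // Stores a typed "time" attribute in the event's attribute map.
    public static CloudEvent.Builder withTime(CloudEvent.Builder ceBuilder, long epochSeconds) {
        CloudEventAttributeValue time = CloudEventAttributeValue.newBuilder()
                .setCeTimestamp(Timestamp.newBuilder().setSeconds(epochSeconds).build())
                .build();
        return ceBuilder.putAttributes("time", time);
    }
}
```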
## 3. Data
The specification allows for data payloads of the following types to be explicitly represented:
* string
* bytes
* protobuf object/message
```proto
oneof data {
  // Binary data
  bytes binary_data = 2;

  // String data
  string text_data = 3;

  // Protobuf Message data
  google.protobuf.Any proto_data = 4;
}
```
* When the data is a protobuf message, it MUST be stored in the `proto_data`
  property.
  * `datacontenttype` MAY be populated with `application/protobuf`.
  * `dataschema` SHOULD be populated with the type URL of the protobuf data
    message.
* When the type of the data is text, the value MUST be stored in the
  `text_data` property.
  * `datacontenttype` SHOULD be populated with the appropriate media-type.
* When the type of the data is binary, the value MUST be stored in the
  `binary_data` property.
  * `datacontenttype` SHOULD be populated with the appropriate media-type.
## 4. Transport
Transports that support content identification MUST use the following designation:
```text
application/cloudevents+protobuf
```
## 5. Batch Format
In the _Protobuf Batch Format_ several CloudEvents are batched into a single Protobuf
message. The message contains a repeated field filled with independent CloudEvent messages
in the structured mode Protobuf event format.
### 5.1 Envelope
The enveloping container is a _CloudEventBatch_ protobuf message containing a
repeating set of _CloudEvent_ message(s):
```proto
message CloudEventBatch {
  repeated CloudEvent events = 1;
}
```
### 5.2 Batch Media Type
A compliant protobuf batch representation is identified using the following media-type:
```text
application/cloudevents-batch+protobuf
```
## 6. Examples
The following code-snippets show how proto representations might be constructed
assuming the availability of some convenience methods.
### 6.1 Plain Text event data
```java
public static CloudEvent plainTextExample() {
  CloudEvent.Builder ceBuilder = CloudEvent.newBuilder();
  ceBuilder
      //-- REQUIRED Attributes.
      .setId(UUID.randomUUID().toString())
      .setSpecVersion("1.0")
      .setType("io.cloudevent.example")
      .setSource("producer-1")
      //-- Data.
      .setTextData("This is a plain text message");

  //-- OPTIONAL Attributes
  withCurrentTime(ceBuilder, "time");
  withAttribute(ceBuilder, "datacontenttype", "text/plain");

  // Build it.
  return ceBuilder.build();
}
```
### 6.2 Proto message as event data
Where the event data payload is itself a protobuf message (with its own schema)
a protocol buffer idiomatic method can be used to carry the data.
```java
private static Spec.CloudEvent protoExample() {
  //-- Build an event data protobuf object.
  Test.SomeData.Builder dataBuilder = Test.SomeData.newBuilder();
  dataBuilder
      .setSomeText("this is an important message")
      .setIsImportant(true);

  //-- Build the CloudEvent.
  Spec.CloudEvent.Builder ceBuilder = Spec.CloudEvent.newBuilder();
  ceBuilder
      .setId(UUID.randomUUID().toString())
      .setSpecVersion("1.0")
      .setType("io.cloudevent.example")
      .setSource("producer-2")
      // Add the proto data into the CloudEvent envelope.
      .setProtoData(Any.pack(dataBuilder.build()));

  // Add the proto type URL
  withAttribute(ceBuilder, "dataschema", ceBuilder.getProtoData().getTypeUrl());
  // Set Content-Type (OPTIONAL)
  withAttribute(ceBuilder, "datacontenttype", "application/protobuf");

  //-- Done.
  return ceBuilder.build();
}
```
## References
* [Protocol Buffer 3 Specification][proto-3]
* [CloudEvents Protocol Buffers format schema][proto-schema]
[proto-3]: https://developers.google.com/protocol-buffers/docs/reference/proto3-spec
[proto-home]: https://developers.google.com/protocol-buffers
[proto-scalars]: https://developers.google.com/protocol-buffers/docs/proto3#scalar
[proto-wellknown]: https://developers.google.com/protocol-buffers/docs/reference/google.protobuf
[proto-timestamp]: https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.Timestamp
[proto-schema]: ./cloudevents.proto
[ce]: ../spec.md
[ce-types]: ../spec.md#type-system
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3986-section41]: https://tools.ietf.org/html/rfc3986#section-4.1
[rfc3986-section43]: https://tools.ietf.org/html/rfc3986#section-4.3
[rfc3339]: https://tools.ietf.org/html/rfc3339

View File

@ -1,380 +0,0 @@
# HTTP 1.1 Web Hooks for Event Delivery - Version 1.0.3-wip
## Abstract
"Webhooks" are a popular pattern to deliver notifications between applications
and via HTTP endpoints. In spite of pattern usage being widespread, there is no
formal definition for Web Hooks. This specification aims to provide such a
definition for use with [CNCF CloudEvents][ce], but is considered generally
usable beyond the scope of CloudEvents.
## Table of Contents
1. [Introduction](#1-introduction)
- 1.1. [Conformance](#11-conformance)
- 1.2. [Relation to HTTP](#12-relation-to-http)
2. [Delivering notifications](#2-delivering-notifications)
3. [Authorization](#3-authorization)
4. [Abuse Protection](#4-abuse-protection)
5. [References](#5-references)
## 1. Introduction
["Webhooks"][webhooks] are a popular pattern to deliver notifications between
applications and via HTTP endpoints. Applications that make notifications
available, allow for other applications to register an HTTP endpoint to which
notifications are delivered.
This specification defines a HTTP method by how notifications are delivered by
the sender, an authorization model for event delivery to protect the delivery
target, and a registration handshake that protects the sender from being abused
for flooding arbitrary HTTP sites with requests.
### 1.1. Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
interpreted as described in [RFC2119][rfc2119].
### 1.2. Relation to HTTP
This specification prescribes rules constraining the use and handling of
specific [HTTP methods][rfc7231-section-4] and headers.
This specification also applies equivalently to HTTP/2 ([RFC7540][rfc7540]),
which is compatible with HTTP 1.1 semantics.
## 2. Delivering notifications
### 2.1. Delivery request
Notifications are delivered using an HTTP request. The response indicates the
resulting status of the delivery.
HTTP-over-TLS (HTTPS) [RFC2818][rfc2818] MUST be used for the connection.
The HTTP method for the delivery request MUST be [POST][post].
The [`Content-Type`][content-type] header MUST be carried and the request MUST
carry a notification payload of the given content type. Requests without
payloads, e.g. where the notification is entirely expressed in HTTP headers, are
not permitted by this specification.
This specification does not further constrain the content of the notification,
and it also does not prescribe the [HTTP target resource][rfc7230-section-5-1]
that is used for delivery.
If the delivery target supports and requires [Abuse
Protection](#4-abuse-protection), the delivery request MUST include the
`WebHook-Request-Origin` header. The `WebHook-Request-Origin` header value is a
DNS name expression that identifies the sending system.
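For illustration only, a complete delivery request might look as follows; the
`/notify` resource, host, and event payload are hypothetical, since this
specification prescribes none of them:
```text
POST /notify HTTP/1.1
Host: consumer.example.com
Content-Type: application/cloudevents+json
WebHook-Request-Origin: eventemitter.example.com
Authorization: Bearer mF_9.B5f-4.1JqM

{
    "specversion" : "1.0",
    "type" : "com.example.someevent",
    "source" : "/mycontext",
    "id" : "A234-1234-1234"
}
```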
### 2.2. Delivery response
The delivery response MAY contain a payload providing detail status information
in the case of handling errors. This specification does not attempt to define
such a payload.
The response MUST NOT use any of the [3xx HTTP Redirect status codes][3xx] and
the client MUST NOT follow any such redirection.
If the delivery has been accepted and processed, and if the response carries a
payload with processing details, the response MUST have the [200 OK][200] or
[201 Created][201] status code. In this case, the response MUST carry a
[`Content-Type`][content-type] header.
If the delivery has been accepted and processed, but carries no payload, the
response MUST have the [201 Created][201] or [204 No Content][204] status code.
If the delivery has been accepted, but has not yet been processed or if the
processing status is unknown, the response MUST have the [202 Accepted][202]
status code.
If a delivery target has been retired, but the HTTP site still exists, the site
SHOULD return a [410 Gone][410] status code and the sender SHOULD refrain from
sending any further notifications.
If the delivery target is unable to process the request due to exceeding a
request rate limit, it SHOULD return a [429 Too Many Requests][429] status code
and MUST include the [`Retry-After`][retry-after] header. The sender MUST
observe the value of the Retry-After header and refrain from sending further
requests until the indicated time.
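For example, a delivery target that is throttling a sender might respond as
follows (the retry interval is illustrative):
```text
HTTP/1.1 429 Too Many Requests
Retry-After: 60
```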
If the delivery cannot be accepted because the notification format has not been
understood, the service MUST respond with status code [415 Unsupported Media
Type][415].
All further error status codes apply as specified in [RFC7231][rfc7231].
## 3. Authorization
The delivery request MUST use one of the following two methods, both of which
lean on the OAuth 2.0 Bearer Token [RFC6750][rfc6750] model.
The delivery target MUST support both methods.
The client MAY use any token-based authorization scheme. The token can take any
shape, and can be a standardized token format or a simple key expression.
Challenge-based schemes MUST NOT be used.
### 3.1. Authorization Request Header Field
The access token is sent in the [`Authorization`][authorization] request header
field defined by HTTP/1.1.
For [OAuth 2.0 Bearer][bearer] tokens, the "Bearer" scheme MUST be used.
Example:
```text
POST /resource HTTP/1.1
Host: server.example.com
Authorization: Bearer mF_9.B5f-4.1JqM
```
### 3.2. URI Query Parameter
When sending the access token in the HTTP request URI, the client adds the
access token to the request URI query component as defined by "Uniform Resource
Identifier (URI): Generic Syntax" [RFC3986][rfc3986], using the "access_token"
parameter.
For example, the client makes the following HTTP request:
```text
POST /resource?access_token=mF_9.B5f-4.1JqM HTTP/1.1
Host: server.example.com
```
The HTTP request URI query MAY include other request-specific parameters, in
which case the "access_token" parameter MUST be properly separated from the
request-specific parameters using "&" character(s) (ASCII code 38).
For example:
https://server.example.com/resource?access_token=mF_9.B5f-4.1JqM&p=q
Clients using the URI Query Parameter method SHOULD also send a Cache-Control
header containing the "no-store" option. Server success (2XX status) responses
to these requests SHOULD contain a Cache-Control header with the "private"
option.
Because of the security weaknesses associated with the URI method (see [RFC6750,
Section 5][rfc6750]), including the high likelihood that the URL containing the
access token will be logged, it SHOULD NOT be used unless it is impossible to
transport the access token in the "Authorization" request header field or the
HTTP request entity-body. All further caveats cited in [RFC6750][rfc6750] apply
equivalently.
## 4. Abuse Protection
Any system that allows registration of and delivery of notifications to
arbitrary HTTP endpoints can potentially be abused such that someone maliciously
or inadvertently registers the address of a system that does not expect such
requests and for which the registering party is not authorized to perform such a
registration. In extreme cases, a notification infrastructure could be abused to
launch denial-of-service attacks against an arbitrary web-site.
To protect the sender from being abused in such a way, a legitimate delivery
target needs to indicate that it agrees with notifications being delivered to
it.
Reaching the delivery agreement is realized using the following validation
handshake. The handshake can either be executed immediately at registration time
or as a "pre-flight" request immediately preceding a delivery.
It is important to understand that the handshake does not aim to establish an
authentication or authorization context. It only serves to protect the sender
from being told to push to a destination that is not expecting the traffic.
While this specification mandates use of an authorization model, this mandate is
not sufficient to protect any arbitrary website from unwanted traffic if that
website doesn't implement access control and therefore ignores the
`Authorization` header.
Delivery targets SHOULD support the abuse protection feature. If a target does
not support the feature, the sender MAY choose not to send to the target at
all, or to send only at a very low request rate.
### 4.1. Validation request
The validation request uses the HTTP [OPTIONS][options] method. The request is
directed to the exact resource target URI that is being registered.
With the validation request, the sender asks the target for permission to send
notifications, and it can declare a desired request rate (requests per minute).
The delivery target will respond with a permission statement and the permitted
request rate.
The following header fields are for inclusion in the validation request.
#### 4.1.1. WebHook-Request-Origin
The `WebHook-Request-Origin` header MUST be included in the validation request.
It requests permission to send notifications from this sender, and contains a
DNS expression that identifies the sending system, for example
"eventemitter.example.com". The value is meant to summarily identify all sender
instances that act on behalf of a certain system, rather than an individual
host.
After the handshake and if permission has been granted, the sender MUST use the
`WebHook-Request-Origin` request header for each delivery request, with the value matching that
of this header.
Example:
```text
WebHook-Request-Origin: eventemitter.example.com
```
#### 4.1.2. WebHook-Request-Callback
The `WebHook-Request-Callback` header is OPTIONAL and augments the
`WebHook-Request-Origin` header. It allows the delivery target to grant send
permission asynchronously, via a simple HTTPS callback.
If the receiving application does not explicitly support the handshake
described here, an administrator could nevertheless find the callback URL in
the logs, call it manually, and thereby grant access.
The delivery target grants permission by issuing an HTTPS GET or POST request
against the given URL. The HTTP GET request can be performed manually using a
browser client. If the WebHook-Request-Callback header is used, the callback
target MUST support both methods.
The delivery target MAY include the `WebHook-Allowed-Rate` response header in
the callback.
The URL is not formally constrained, but it SHOULD contain an identifier for the
delivery target along with a secret key that makes the URL difficult to guess so
that 3rd parties cannot spoof the delivery target.
For example:
```text
WebHook-Request-Callback: https://example.com/confirm?id=12345&key=...base64...
```
#### 4.1.3. WebHook-Request-Rate
The `WebHook-Request-Rate` header MAY be included in the request and asks for
permission to send notifications from this sender at the specified rate. The
value is the string representation of a positive integer number greater than
zero and expresses the request rate in "requests per minute".
For example, the following header asks for permission to send 120 requests per
minute:
```text
WebHook-Request-Rate: 120
```
### 4.2. Validation response
If and only if the delivery target does allow delivery of the events, it MUST
reply to the request by including the `WebHook-Allowed-Origin` and
`WebHook-Allowed-Rate` headers.
If the delivery target chooses to grant permission by callback, it withholds the
response headers.
If the delivery target does not allow delivery of the events or does not expect
delivery of events and nevertheless handles the HTTP OPTIONS method, the
existing response ought not to be interpreted as consent, and therefore the
handshake cannot rely on status codes. If the delivery target otherwise does not
handle the HTTP OPTIONS method, it SHOULD respond with HTTP status code 405, as
if OPTIONS were not supported.
The OPTIONS response SHOULD include the [Allow][allow] header indicating the
[POST][post] method being permitted. Other methods MAY be permitted on the
resource, but their function is outside the scope of this specification.
#### 4.2.1. WebHook-Allowed-Origin
The `WebHook-Allowed-Origin` header MUST be returned when the delivery target
agrees to notification delivery by the origin service. Its value MUST either be
the origin name supplied in the `WebHook-Request-Origin` header, or a singular
asterisk character ('\*'), indicating that the delivery target supports
notifications from all origins.
```text
WebHook-Allowed-Origin: eventemitter.example.com
```
or
```text
WebHook-Allowed-Origin: *
```
#### 4.2.2. WebHook-Allowed-Rate
The `WebHook-Allowed-Rate` header MUST be returned alongside
`WebHook-Allowed-Origin` if the request contained the `WebHook-Request-Rate`
header; otherwise, it SHOULD be returned.
For the callback model, the `WebHook-Allowed-Rate` header SHOULD be included
in the callback request. If the header is not included, for instance when a
callback is issued through a browser as a GET request, the allowed rate SHOULD
correspond to the requested rate.
The header grants permission to send notifications at the specified rate. The
value is either an asterisk character or the string representation of a positive
integer number greater than zero. The asterisk indicates that there is no rate
limitation. An integer number expresses the permitted request rate in "requests
per minute". For request rates exceeding the granted notification rate, the
sender ought to expect request throttling. Throttling is indicated by requests
being rejected using HTTP status code [429 Too Many Requests][429].
For example, the following header permits sending 100 requests per minute:
```text
WebHook-Allowed-Rate: 100
```
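Taken together, a successful validation handshake might look like the following
exchange; the `/notify` resource and the rate values are hypothetical:
```text
OPTIONS /notify HTTP/1.1
Host: consumer.example.com
WebHook-Request-Origin: eventemitter.example.com
WebHook-Request-Rate: 120

HTTP/1.1 200 OK
Allow: POST
WebHook-Allowed-Origin: eventemitter.example.com
WebHook-Allowed-Rate: 120
```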
## 5. References
- [RFC2119][rfc2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC2818][rfc2818] HTTP over TLS
- [RFC6750][rfc6750] The OAuth 2.0 Authorization Framework: Bearer Token Usage
- [RFC6585][rfc6585] Additional HTTP Status Codes
- [RFC7230][rfc7230] Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and
Routing
- [RFC7231][rfc7231] Hypertext Transfer Protocol (HTTP/1.1): Semantics and
Content
- [RFC7235][rfc7235] Hypertext Transfer Protocol (HTTP/1.1): Authentication
- [RFC7540][rfc7540] Hypertext Transfer Protocol Version 2 (HTTP/2)
[ce]: ./spec.md
[webhooks]: https://progrium.github.io/blog/2007/05/03/web-hooks-to-revolutionize-the-web/index.html
[content-type]: https://tools.ietf.org/html/rfc7231#section-3.1.1.5
[retry-after]: https://tools.ietf.org/html/rfc7231#section-7.1.3
[authorization]: https://tools.ietf.org/html/rfc7235#section-4.2
[allow]: https://tools.ietf.org/html/rfc7231#section-7.4.1
[post]: https://tools.ietf.org/html/rfc7231#section-4.3.3
[options]: https://tools.ietf.org/html/rfc7231#section-4.3.7
[3xx]: https://tools.ietf.org/html/rfc7231#section-6.4
[200]: https://tools.ietf.org/html/rfc7231#section-6.3.1
[201]: https://tools.ietf.org/html/rfc7231#section-6.3.2
[202]: https://tools.ietf.org/html/rfc7231#section-6.3.3
[204]: https://tools.ietf.org/html/rfc7231#section-6.3.5
[410]: https://tools.ietf.org/html/rfc7231#section-6.5.9
[415]: https://tools.ietf.org/html/rfc7231#section-6.5.13
[429]: https://tools.ietf.org/html/rfc6585#section-4
[bearer]: https://tools.ietf.org/html/rfc6750#section-2.1
[rfc2119]: https://tools.ietf.org/html/rfc2119
[rfc3986]: https://tools.ietf.org/html/rfc3986
[rfc2818]: https://tools.ietf.org/html/rfc2818
[rfc6585]: https://tools.ietf.org/html/rfc6585
[rfc6750]: https://tools.ietf.org/html/rfc6750
[rfc7159]: https://tools.ietf.org/html/rfc7159
[rfc7230]: https://tools.ietf.org/html/rfc7230
[rfc7230-section-3]: https://tools.ietf.org/html/rfc7230#section-3
[rfc7231-section-4]: https://tools.ietf.org/html/rfc7231#section-4
[rfc7230-section-5-1]: https://tools.ietf.org/html/rfc7230#section-5.1
[rfc7231]: https://tools.ietf.org/html/rfc7231
[rfc7235]: https://tools.ietf.org/html/rfc7235
[rfc7540]: https://tools.ietf.org/html/rfc7540

View File

@ -1,2 +0,0 @@
# CloudEvents Specification - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../README.md) in the meantime.

View File

@ -1,2 +0,0 @@
# CloudEvents Release Notes
This document has not yet been translated. Please use [the English version of the document](../../RELEASE_NOTES.md) in the meantime.

View File

@ -1,2 +0,0 @@
# CloudEvents SDK Requirements
This document has not yet been translated. Please use [the English version of the document](../../SDK.md) in the meantime.

View File

@ -1,2 +0,0 @@
# CloudEvents Adapters
This document has not yet been translated. Please use [the English version of the document](../../../adapters/README.md) in the meantime.

View File

@ -1,2 +0,0 @@
# AWS Simple Storage Service CloudEvents Adapter
This document has not yet been translated. Please use [the English version of the document](../../../adapters/aws-s3.md) in the meantime.

View File

@ -1,2 +0,0 @@
# Amazon Simple Notification Service CloudEvents Adapter
This document has not yet been translated. Please use [the English version of the document](../../../adapters/aws-sns.md) in the meantime.

View File

@ -1,2 +0,0 @@
# CouchDB CloudEvents Adapter
This document has not yet been translated. Please use [the English version of the document](../../../adapters/couchdb.md) in the meantime.

View File

@ -1,2 +0,0 @@
# GitHub CloudEvents Adapter
This document has not yet been translated. Please use [the English version of the document](../../../adapters/github.md) in the meantime.

View File

@ -1,2 +0,0 @@
# GitLab CloudEvents Adapter
This document has not yet been translated. Please use [the English version of the document](../../../adapters/gitlab.md) in the meantime.

View File

@ -1,2 +0,0 @@
# AMQP Protocol Binding for CloudEvents - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../../bindings/amqp-protocol-binding.md) in the meantime.

View File

@ -1,2 +0,0 @@
# HTTP Protocol Binding for CloudEvents - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../../bindings/http-protocol-binding.md) in the meantime.

View File

@ -1,2 +0,0 @@
# Kafka Protocol Binding for CloudEvents - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../../bindings/kafka-protocol-binding.md) in the meantime.

View File

@ -1,2 +0,0 @@
# MQTT Protocol Binding for CloudEvents - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../../bindings/mqtt-protocol-binding.md) in the meantime.

View File

@ -1,2 +0,0 @@
# NATS Protocol Binding for CloudEvents - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../../bindings/nats-protocol-binding.md) in the meantime.

View File

@ -1,2 +0,0 @@
# WebSockets Protocol Binding for CloudEvents - Version 1.0.3-wip
This document has not yet been translated. Please use [the English version of the document](../../../bindings/websockets-protocol-binding.md) in the meantime.

View File

@ -1,2 +0,0 @@
# CloudEvents Extension Attributes
This document has not yet been translated. Please use [the English version of the document](../../../extensions/README.md) in the meantime.

View File

@ -1,2 +0,0 @@
# Auth Context
This document has not yet been translated. Please use [the English version of the document](../../../extensions/authcontext.md) in the meantime.

Some files were not shown because too many files have changed in this diff.