mirror of https://github.com/knative/docs.git
General eventing and kafka updates (#4028)
* General eventing and kafka updates
* additional cleanup
* fix indentation
* additional cleanup
* Update docs/eventing/samples/kafka/channel/README.md
  Co-authored-by: Samia Nneji <snneji@vmware.com>
* Update docs/eventing/samples/kafka/channel/README.md
  Co-authored-by: Samia Nneji <snneji@vmware.com>
* Update docs/eventing/samples/kafka/channel/README.md
  Co-authored-by: Samia Nneji <snneji@vmware.com>
* Update docs/eventing/samples/kafka/channel/README.md
  Co-authored-by: Samia Nneji <snneji@vmware.com>
* Update docs/eventing/samples/kafka/channel/README.md
  Co-authored-by: Samia Nneji <snneji@vmware.com>
* Update docs/eventing/samples/kafka/channel/README.md
* Update docs/eventing/samples/kafka/channel/README.md
* Update docs/eventing/samples/kafka/channel/README.md
* Update docs/eventing/samples/kafka/channel/README.md
* cleanup rebase
* PR comments
* remove broker fluff, remove old strimzi docs
* fix link

Co-authored-by: Samia Nneji <snneji@vmware.com>
This commit is contained in:
parent
85ef1c3619
commit
1b25cf85de
@ -210,7 +210,6 @@ nav:
      - GO: eventing/samples/helloworld/helloworld-go/README.md
      - Python: eventing/samples/helloworld/helloworld-python/README.md
      - Apache Kafka:
-       - Overview: eventing/samples/kafka/README.md
        - Binding Example: eventing/samples/kafka/binding/README.md
        - Channel Example: eventing/samples/kafka/channel/README.md
        - ResetOffset Example: eventing/samples/kafka/resetoffset/README.md
@ -50,18 +50,8 @@ Follow the procedure for the Channel of your choice:

=== "Apache Kafka Channel"

-    1. [Install Apache Kafka for Kubernetes](../../../eventing/samples/kafka/README.md).
+    1. Install [Strimzi](https://strimzi.io/quickstarts/).
+    1. Install the Apache Kafka Channel for Knative from the [`knative-sandbox` repository](https://github.com/knative-sandbox/eventing-kafka).
-    1. Install the Apache Kafka Channel by running the command:
-
-        ```bash
-        curl -L "{{ artifact(org="knative-sandbox",repo="eventing-kafka",file="channel-consolidated.yaml")}}" \
-          | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \
-          | kubectl apply -f -
-        ```
-
-    !!! tip
-        To learn more, try the [Apache Kafka Channel sample](../../../eventing/samples/kafka/channel/README.md).

=== "Google Cloud Pub/Sub Channel"
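As an aside, the `curl … | sed … | kubectl apply` pipeline removed in the hunk above works by rewriting a placeholder bootstrap address before the manifest reaches the cluster. A minimal sketch of just that substitution step (the `bootstrapServers:` field name here is illustrative, not taken from the actual manifest):

```bash
# Rewrite the placeholder bootstrap address, as the sed stage of the
# install pipeline does (the YAML field name here is illustrative):
printf 'bootstrapServers: REPLACE_WITH_CLUSTER_URL\n' \
  | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/'
# prints: bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092
```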
@ -1,9 +1,3 @@
----
-title: "Event sources"
-weight: 20
-type: "docs"
----
-
# Event sources

An event source is a Kubernetes custom resource (CR), created by a developer or cluster administrator, that acts as a link between an event producer and an event _sink_.
@ -36,7 +30,7 @@ All Sources are part of the `sources` category.
| [AWS SQS](https://github.com/knative-sandbox/eventing-awssqs/tree/main/samples) | v1alpha1 | Knative | Brings [AWS Simple Queue Service](https://aws.amazon.com/sqs/) messages into Knative. The AwsSqsSource fires a new event each time an event is published on an [AWS SQS topic](https://aws.amazon.com/sqs/). |
| [Apache Camel](apache-camel-source/README.md) | N/A | Apache Software Foundation | Enables use of [Apache Camel](https://github.com/apache/camel) components for pushing events into Knative. Camel sources are now provided via [Kamelets](https://camel.apache.org/camel-kamelets/latest/) as part of the [Apache Camel K](https://camel.apache.org/camel-k/latest/installation/installation.html) project. |
| [Apache CouchDB](https://github.com/knative-sandbox/eventing-couchdb/blob/main/source) | v1alpha1 | Knative | Brings [Apache CouchDB](https://couchdb.apache.org/) messages into Knative. |
-| [Apache Kafka](../../../eventing/samples/kafka/README.md) | v1beta1 | Knative | Brings [Apache Kafka](https://kafka.apache.org/) messages into Knative. The KafkaSource reads events from an Apache Kafka Cluster, and passes these events to a sink so that they can be consumed. See the [Kafka Source](https://github.com/knative-sandbox/eventing-kafka/blob/main/pkg/source) example for more details. |
+| [Apache Kafka](kafka-source/README.md) | v1beta1 | Knative | Brings [Apache Kafka](https://kafka.apache.org/) messages into Knative. The KafkaSource reads events from an Apache Kafka Cluster, and passes these events to a sink so that they can be consumed. See the [Kafka Source](https://github.com/knative-sandbox/eventing-kafka/blob/main/pkg/source) example for more details. |
| [ContainerSource](containersource/README.md) | v1 | Knative | The ContainerSource will instantiate container image(s) that can generate events until the ContainerSource is deleted. This may be used, for example, to poll an FTP server for new files or generate events at a set time interval. Given a `spec.template` with at least a container image specified, ContainerSource will keep a `Pod` running with the specified image(s). `K_SINK` (destination address) and `KE_CE_OVERRIDES` (JSON CloudEvents attributes) environment variables are injected into the running image(s). It is used by multiple other Sources as underlying infrastructure. Refer to the [Container Source](../../../eventing/samples/container-source/README.md) example for more details. |
| [GitHub](../../../eventing/samples/github-source/README.md) | v1alpha1 | Knative | Registers for events of the specified types on the specified GitHub organization or repository, and brings those events into Knative. The GitHubSource fires a new event for selected [GitHub event types](https://developer.github.com/v3/activity/events/types/). See the [GitHub Source](../../../eventing/samples/github-source/README.md) example for more details. |
| [GitLab](../../../eventing/samples/gitlab-source/README.md) | v1alpha1 | Knative | Registers for events of the specified types on the specified GitLab repository, and brings those events into Knative. The GitLabSource creates a webhook for specified [event types](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events), listens for incoming events, and passes them to a consumer. See the [GitLab Source](../../../eventing/samples/gitlab-source/README.md) example for more details. |
@ -1,11 +1,3 @@
----
-title: "Knative Eventing code samples"
-linkTitle: "Code samples"
-weight: 100
-type: "docs"
-showlandingtoc: "true"
----
-
# Knative Eventing code samples

Use the following code samples to help you understand the various use cases for
@ -1,104 +0,0 @@
----
-title: "Apache Kafka examples"
-linkTitle: "Apache Kafka"
-weight: 10
-type: "docs"
----
-
-# Apache Kafka examples
-
-The following examples will help you understand how to use the different Apache
-Kafka components for Knative.
-
-## Prerequisites
-
-All examples require:
-
-- A Kubernetes cluster with
-  - Knative Eventing v0.9+
-  - Knative Serving v0.9+
-- An Apache Kafka cluster
-
-### Setting up Apache Kafka
-
-If you want to run the Apache Kafka cluster on Kubernetes, the simplest option
-is to install it by using [Strimzi](https://strimzi.io).
-
-1. Create a namespace for your Apache Kafka installation, like `kafka`:
-   ```bash
-   kubectl create namespace kafka
-   ```
-1. Install the Strimzi operator, like:
-   ```bash
-   curl -L "https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.16.2/strimzi-cluster-operator-0.16.2.yaml" \
-     | sed 's/namespace: .*/namespace: kafka/' \
-     | kubectl -n kafka apply -f -
-   ```
-1. Describe the size of your Apache Kafka installation in `kafka.yaml`, like:
-   ```yaml
-   apiVersion: kafka.strimzi.io/v1beta2
-   kind: Kafka
-   metadata:
-     name: my-cluster
-   spec:
-     kafka:
-       version: 2.4.0
-       replicas: 1
-       listeners:
-         plain: {}
-         tls: {}
-       config:
-         offsets.topic.replication.factor: 1
-         transaction.state.log.replication.factor: 1
-         transaction.state.log.min.isr: 1
-         log.message.format.version: "2.4"
-       storage:
-         type: ephemeral
-     zookeeper:
-       replicas: 3
-       storage:
-         type: ephemeral
-     entityOperator:
-       topicOperator: {}
-       userOperator: {}
-   ```
-1. Deploy the Apache Kafka cluster
-   ```
-   $ kubectl apply -n kafka -f kafka.yaml
-   ```
-
-This will install a small, non-production, cluster of Apache Kafka. To verify
-your installation, check if the pods for Strimzi are all up, in the `kafka`
-namespace:
-
-```bash
-$ kubectl get pods -n kafka
-NAME                                          READY   STATUS    RESTARTS   AGE
-my-cluster-entity-operator-65995cf856-ld2zp   3/3     Running   0          102s
-my-cluster-kafka-0                            2/2     Running   0          2m8s
-my-cluster-zookeeper-0                        2/2     Running   0          2m39s
-my-cluster-zookeeper-1                        2/2     Running   0          2m49s
-my-cluster-zookeeper-2                        2/2     Running   0          2m59s
-strimzi-cluster-operator-77555d4b69-sbrt4     1/1     Running   0          3m14s
-```
-
-> NOTE: For production ready installs check [Strimzi](https://strimzi.io).
-
-### Installation script
-
-If you want to install the latest version of Strimzi, in just one step, we have
-a [script](kafka_setup.sh) for your convenience, which does exactly the same
-steps that are listed above:
-
-```bash
-$ ./kafka_setup.sh
-```
-
-## Examples of Apache Kafka and Knative
-
-A number of different examples, showing the `KafkaSource`, `KafkaChannel` and
-`KafkaBinding` can be found here:
-
-- [`KafkaSource` to `Service`](../../sources/kafka-source)
-- [`KafkaChannel` and Broker](channel/)
-- [`KafkaBinding`](binding/)
@ -1,9 +0,0 @@
-apiVersion: serving.knative.dev/v1
-kind: Service
-metadata:
-  name: broker-kafka-display
-spec:
-  template:
-    spec:
-      containers:
-        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
@ -1,37 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: events-sa
-  namespace: default
-
----
-
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: event-watcher
-rules:
-  - apiGroups:
-      - ""
-    resources:
-      - events
-    verbs:
-      - get
-      - list
-      - watch
-
----
-
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: k8s-ra-event-watcher
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: event-watcher
-subjects:
-  - kind: ServiceAccount
-    name: events-sa
-    namespace: default
@ -1,16 +0,0 @@
-apiVersion: sources.knative.dev/v1
-kind: ApiServerSource
-metadata:
-  name: testevents-kafka-03
-  namespace: default
-spec:
-  serviceAccountName: events-sa
-  mode: Resource
-  resources:
-    - apiVersion: v1
-      kind: Event
-  sink:
-    ref:
-      apiVersion: eventing.knative.dev/v1
-      kind: Broker
-      name: default
@ -1,12 +0,0 @@
-apiVersion: eventing.knative.dev/v1
-kind: Trigger
-metadata:
-  name: testevents-trigger
-  namespace: default
-spec:
-  broker: default
-  subscriber:
-    ref:
-      apiVersion: serving.knative.dev/v1
-      kind: Service
-      name: broker-kafka-display
@ -1,25 +1,16 @@
----
-title: "Apache Kafka Channel Example"
-linkTitle: "Channel Example"
-weight: 20
-type: "docs"
----
-
# Apache Kafka Channel Example

-You can install and configure the Apache Kafka CRD (`KafkaChannel`) as the
-default channel configuration in Knative Eventing.
+You can install and configure the Apache Kafka Channel as the
+default Channel configuration for Knative Eventing.

## Prerequisites

-- Ensure that you meet the
-  [prerequisites listed in the Apache Kafka overview](../).
- A Kubernetes cluster with
-  [Knative Kafka Channel installed](../../../../admin/install/).
+  [Knative Eventing](../../../../admin/install/eventing/install-eventing-with-yaml.md), as well as the optional Broker and Kafka Channel components.

-## Creating a `KafkaChannel` channel CRD
+## Creating a Kafka Channel

-1. Create a new object by configuring the YAML file as follows:
+1. Create a Kafka Channel that contains the following YAML:

    ```yaml
    apiVersion: messaging.knative.dev/v1beta1
|
||||||
```
|
```
|
||||||
Where `<filename>` is the name of the file you created in the previous step.
|
Where `<filename>` is the name of the file you created in the previous step.
|
||||||
|
|
||||||
|
## Specifying Kafka as the default Channel implementation
|
||||||
|
<!--TODO: Move to admin guide-->
|
||||||
|
|
||||||
## Specifying the default channel configuration
|
1. To configure Kafka Channel as the [default channel configuration](../../../channels/channel-types-defaults), modify the `default-ch-webhook` ConfigMap so that it contains the following YAML:
|
||||||
|
|
||||||
1. To configure the usage of the `KafkaChannel` CRD as the
|
|
||||||
[default channel configuration](../../../channels/channel-types-defaults),
|
|
||||||
edit the `default-ch-webhook` ConfigMap as follows:
|
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
|
|
@ -67,13 +56,12 @@ default channel configuration in Knative Eventing.
    ```bash
    kubectl apply -f <filename>.yaml
    ```

    Where `<filename>` is the name of the file you created in the previous step.

-## Creating an Apache Kafka channel using the default channel configuration
+## Creating an Apache Kafka channel

-1. Now that `KafkaChannel` is set as the default channel configuration,
-   use the `channels.messaging.knative.dev` CRD to create a new Apache Kafka
-   channel, using the generic `Channel` below:
+1. After `KafkaChannel` is set as the default Channel type, you can create a Kafka Channel by creating a generic Channel object that contains the following YAML:

    ```yaml
    apiVersion: messaging.knative.dev/v1
@ -81,20 +69,24 @@
    metadata:
      name: testchannel-one
    ```

1. Apply the YAML file by running the command:

    ```bash
    kubectl apply -f <filename>.yaml
    ```

    Where `<filename>` is the name of the file you created in the previous step.

-2. Check Kafka for a `testchannel-one` topic. With Strimzi this can be done by
-   using the command:
+1. Verify that the Channel was created properly by checking that your Kafka cluster has a `testchannel-one` Topic. If you are using Strimzi, you can run the command:

    ```bash
    kubectl -n kafka exec -it my-cluster-kafka-0 -- bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --list
    ```

-   The result is:
-   ```
+   The output looks similar to the following:
+
+   ```bash
    ...
    __consumer_offsets
    knative-messaging-kafka.default.my-kafka-channel
@ -102,24 +94,105 @@ default channel configuration in Knative Eventing.
    ...
    ```

-   The Apache Kafka topic that is created by the channel implementation contains
-   the name of the namespace, `default` in this example, followed by the actual
-   name of the channel. In the consolidated channel implementation, it is also
-   prefixed with `knative-messaging-kafka` to indicate that it is an Apache Kafka
-   channel from Knative.
+   The Kafka Topic that is created by the Channel contains the name of the namespace, `default` in this example, followed by the name of the Channel. In the consolidated Channel implementation, it is also prefixed with `knative-messaging-kafka` to indicate that it is a Kafka Channel from Knative.
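The topic-naming scheme described above (prefix, then namespace, then Channel name in the consolidated implementation) can be illustrated with a quick sketch; the namespace and Channel name are the ones used in this example:

```bash
# Compose the Topic name the consolidated KafkaChannel implementation uses:
# <prefix>.<namespace>.<channel-name>
ns=default
channel=testchannel-one
printf 'knative-messaging-kafka.%s.%s\n' "$ns" "$channel"
# prints: knative-messaging-kafka.default.testchannel-one
```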
    !!! note
-       The topic of a Kafka channel is an implementation detail and records from it should not be consumed from different applications.
+       The topic of a Kafka Channel is an implementation detail and records from it should not be consumed from different applications.

-## Configuring the Knative broker for Apache Kafka channels
+## Creating a Service and Trigger that use the Apache Kafka Broker

-1. To setup a broker that will use the new default Kafka channels,
-   create a new _default_ broker by copying the YAML below into a file:
+The following example uses an ApiServerSource to publish events to an existing Broker, and a Trigger that routes those events to a Knative Service.
+<!--TODO: Not sure this example makes sense, why would you have an event source AND channels?-->
+
+1. Create a Knative Service:
+
+    ```yaml
+    apiVersion: serving.knative.dev/v1
+    kind: Service
+    metadata:
+      name: broker-kafka-display
+    spec:
+      template:
+        spec:
+          containers:
+            - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
+    ```
+
+1. Apply the YAML file by running the command:
+
    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+
+    Where `<filename>` is the name of the file you created in the previous step.
+
+1. Create a ServiceAccount, ClusterRole, and ClusterRoleBinding for the ApiServerSource:
+
+    ```yaml
+    apiVersion: v1
+    kind: ServiceAccount
+    metadata:
+      name: events-sa
+      namespace: default
+
+    ---
+
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRole
+    metadata:
+      name: event-watcher
+    rules:
+      - apiGroups:
+          - ""
+        resources:
+          - events
+        verbs:
+          - get
+          - list
+          - watch
+
+    ---
+
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRoleBinding
+    metadata:
+      name: k8s-ra-event-watcher
+    roleRef:
+      apiGroup: rbac.authorization.k8s.io
+      kind: ClusterRole
+      name: event-watcher
+    subjects:
+      - kind: ServiceAccount
+        name: events-sa
+        namespace: default
+    ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+
+    Where `<filename>` is the name of the file you created in the previous step.
+
+1. Create an ApiServerSource that sends events to the default Broker:
+
+    ```yaml
+    apiVersion: sources.knative.dev/v1
+    kind: ApiServerSource
+    metadata:
+      name: testevents-kafka-03
+      namespace: default
+    spec:
+      serviceAccountName: events-sa
+      mode: Resource
+      resources:
+        - apiVersion: v1
+          kind: Event
+      sink:
+        ref:
          apiVersion: eventing.knative.dev/v1
          kind: Broker
-         metadata:
          name: default
    ```
@ -128,51 +201,36 @@ channel from Knative.
    ```bash
    kubectl apply -f <filename>.yaml
    ```

    Where `<filename>` is the name of the file you created in the previous step.

-   This will give you two pods, such as:
-   ```
-   default-broker-filter-64658fc79f-nf596     1/1   Running   0   15m
-   default-broker-ingress-ff79755b6-vj9jt     1/1   Running   0   15
-   ```
-   Inside the Apache Kafka cluster you should see two new topics, such as:
-   ```
-   ...
-   knative-messaging-kafka.default.default-kn2-ingress
-   knative-messaging-kafka.default.default-kn2-trigger
-   ...
-   ```
-
-   !!! note
-       The topic of a Kafka channel is an implementation detail and records from it should not be consumed from different applications.
-
-## Creating a service and trigger to use the Apache Kafka broker
-
-To use the Apache Kafka based broker, let's take a look at a simple demo. Use
-the `ApiServerSource` to publish events to the broker as well as the `Trigger`
-API, which then routes events to a Knative `Service`.
-
-1. Install `ksvc`, using the command:
-   ```bash
-   kubectl apply -f 000-ksvc.yaml
-   ```
-2. Install a source that publishes to the default broker
-   ```bash
-   kubectl apply -f 020-k8s-events.yaml
-   ```
-3. Create a trigger that routes the events to the `ksvc`:
-   ```bash
-   kubectl apply -f 030-trigger.yaml
-   ```
-
-## Verifying your Apache Kafka channel and broker
-
-Now that your Eventing cluster is configured for Apache Kafka, you can verify
-your configuration with the following options.
-
-### Receive events via Knative
-
-1. Observe the events in the log of the `ksvc` using the command:
+1. Create a Trigger that filters events from the Broker to the Service:
+
+    ```yaml
+    apiVersion: eventing.knative.dev/v1
+    kind: Trigger
+    metadata:
+      name: testevents-trigger
+      namespace: default
+    spec:
+      broker: default
+      subscriber:
+        ref:
+          apiVersion: serving.knative.dev/v1
+          kind: Service
+          name: broker-kafka-display
+    ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+
+    Where `<filename>` is the name of the file you created in the previous step.
+
+1. Verify that the Kafka Channel is working by observing events in the log of the Service, by running the command:

    ```bash
    kubectl logs --selector='serving.knative.dev/service=broker-kafka-display' -c user-container
    ```
@ -182,21 +240,24 @@ your configuration with the following options.
In production environments it is common that the Apache Kafka cluster is
secured using [TLS](http://kafka.apache.org/documentation/#security_ssl)
or [SASL](http://kafka.apache.org/documentation/#security_sasl). This section
-shows how to configure the `KafkaChannel` to work against a protected Apache
+shows how to configure a Kafka Channel to work against a protected Apache
Kafka cluster, with the two supported TLS and SASL authentication methods.

!!! note
-    Kafka channels require certificates to be in `.pem` format. If your files
+    Kafka Channels require certificates to be in `.pem` format. If your files
    are in a different format, you must convert them to `.pem`.
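As the note above says, certificates must be supplied in `.pem` format. A minimal sketch of converting common formats with `openssl` (the input file names are hypothetical; substitute your own certificate material):

```bash
# Hypothetical input files; replace with your own certificate material.

# Convert a DER-encoded CA certificate to PEM:
openssl x509 -inform der -in ca-cert.der -out ca-cert.pem

# Extract certificate and key from a PKCS#12 bundle as PEM:
openssl pkcs12 -in user-keystore.p12 -nodes -out user.pem
```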
### TLS authentication

-1. Edit your config-kafka ConfigMap:
+1. Edit the `config-kafka` ConfigMap:

    ```bash
    kubectl -n knative-eventing edit configmap config-kafka
    ```

-2. Set the TLS.Enable field to `true`, for example
-   ```
+1. Set the `TLS.Enable` field to `true`:
+
+   ```yaml
    ...
    data:
      sarama: |
|
||||||
Enable: true
|
Enable: true
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
3. Optional. If using a custom CA certificate, place your certificate data
|
|
||||||
into the ConfigMap in the data.sarama.config.Net.TLS.Config.RootPEMs field,
|
1. Optional: If you are using a custom CA certificate, add your certificate data to the ConfigMap in the `data.sarama.config.Net.TLS.Config.RootPEMs` field:
|
||||||
for example:
|
|
||||||
```
|
```yaml
|
||||||
...
|
...
|
||||||
data:
|
data:
|
||||||
sarama: |
|
sarama: |
|
||||||
|
|
@ -237,12 +298,15 @@ To use SASL authentication, you will need the following information:
!!! note
    It is recommended to also enable TLS as described in the previous section.

-1. Edit your config-kafka ConfigMap:
+1. Edit the `config-kafka` ConfigMap:

    ```bash
    kubectl -n knative-eventing edit configmap config-kafka
    ```

-2. Set the SASL.Enable field to `true`, for example:
-   ```
+1. Set the `SASL.Enable` field to `true`:
+
+   ```yaml
    ...
    data:
      sarama: |
@@ -252,7 +316,9 @@ To use SASL authentication, you will need the following information:
           Enable: true
     ...
     ```
-3. Create a secret with the username, password, and SASL mechanism, for example:
+1. Create a secret that uses the username, password, and SASL mechanism:
 
    ```bash
    kubectl create secret --namespace <namespace> generic <kafka-auth-secret> \
      --from-literal=password="SecretPassword" \
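The hunk above truncates the `kubectl create secret` command after the password flag. A hedged sketch of the full command — the `saslType` and `username` literals are assumptions about the remaining flags, not taken from this diff:

```bash
# Sketch only: flags after --from-literal=password are assumed, not from the diff.
kubectl create secret --namespace <namespace> generic <kafka-auth-secret> \
    --from-literal=password="SecretPassword" \
    --from-literal=saslType="PLAIN" \
    --from-literal=username="my-sasl-user"
```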
@@ -262,10 +328,9 @@ To use SASL authentication, you will need the following information:
 
 ### All authentication methods
 
-1. If you have created a secret for your desired authentication method by
-   using the previous steps, reference the secret and the namespace of the
-   secret in the `config-kafka` ConfigMap:
+1. If you have created a secret for your desired authentication method by using the previous steps, reference the secret and the namespace of the secret in the `config-kafka` ConfigMap:
 
-   ```
+    ```yaml
     ...
     data:
       eventing-kafka: |
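The hunk above shows only the top of the `eventing-kafka` block. A hedged sketch of what the secret reference might look like — the `authSecretName` and `authSecretNamespace` key names are assumptions modeled on the eventing-kafka configuration layout, not copied from this diff:

```yaml
# Sketch only: key names are assumed, not shown in the diff above.
data:
  eventing-kafka: |
    kafka:
      authSecretName: <kafka-auth-secret>
      authSecretNamespace: <namespace>
```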
@@ -276,19 +341,16 @@ To use SASL authentication, you will need the following information:
     ```
 
 !!! note
-    The default secret name and namespace are `kafka-cluster` and
-    `knative-eventing` respectively. If you reference a secret in a different
-    namespace, be sure you configure your roles and bindings so that the
-    knative-eventing pods can access it.
+    The default secret name and namespace are `kafka-cluster` and `knative-eventing` respectively. If you reference a secret in a different namespace, make sure you configure your roles and bindings so that the `knative-eventing` Pods can access it.
 
 ## Channel configuration
 
-The `config-kafka` ConfigMap allows for a variety of channel options such as:
+The `config-kafka` ConfigMap allows for a variety of Channel options such as:
 
 - CPU and Memory requests and limits for the dispatcher (and receiver for
-  the distributed channel type) deployments created by the controller
+  the distributed Channel type) deployments created by the controller
 
-- Kafka topic default values (number of partitions, replication factor, and
+- Kafka Topic default values (number of partitions, replication factor, and
   retention time)
 
 - Maximum idle connections/connections per host for Knative cloudevents
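The Channel options listed above could appear in the `eventing-kafka` block roughly as follows. This is a hedged sketch: every key name and value here is an assumption modeled on the eventing-kafka configuration layout, since the diff lists the options only in prose.

```yaml
# Sketch only: key names and values are assumptions, not from the diff above.
data:
  eventing-kafka: |
    receiver:
      cpuRequest: 100m      # receiver applies to the distributed Channel type
      memoryRequest: 50Mi
    dispatcher:
      cpuRequest: 300m
      memoryRequest: 50Mi
    kafka:
      topic:
        defaultNumPartitions: 4
        defaultReplicationFactor: 1
        defaultRetentionMillis: 604800000  # 7 days
```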
@@ -1,25 +0,0 @@
-apiVersion: kafka.strimzi.io/v1beta2
-kind: Kafka
-metadata:
-  name: my-cluster
-spec:
-  kafka:
-    version: 2.4.0
-    replicas: 1
-    listeners:
-      plain: {}
-      tls: {}
-    config:
-      offsets.topic.replication.factor: 1
-      transaction.state.log.replication.factor: 1
-      transaction.state.log.min.isr: 1
-      log.message.format.version: "2.4"
-    storage:
-      type: ephemeral
-  zookeeper:
-    replicas: 3
-    storage:
-      type: ephemeral
-  entityOperator:
-    topicOperator: {}
-    userOperator: {}
@@ -1,27 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# Turn colors in this script off by setting the NO_COLOR variable in your
-# environment to any value:
-#
-# $ NO_COLOR=1 test.sh
-NO_COLOR=${NO_COLOR:-""}
-if [ -z "$NO_COLOR" ]; then
-  header=$'\e[1;33m'
-  reset=$'\e[0m'
-else
-  header=''
-  reset=''
-fi
-strimzi_version=`curl https://github.com/strimzi/strimzi-kafka-operator/releases/latest | awk -F 'tag/' '{print $2}' | awk -F '"' '{print $1}' 2>/dev/null`
-function header_text {
-  echo "$header$*$reset"
-}
-header_text "Using Strimzi Version: ${strimzi_version}"
-header_text "Strimzi install"
-kubectl create namespace kafka
-curl -L "https://github.com/strimzi/strimzi-kafka-operator/releases/download/${strimzi_version}/strimzi-cluster-operator-${strimzi_version}.yaml" \
-  | sed 's/namespace: .*/namespace: kafka/' \
-  | kubectl -n kafka apply -f -
-header_text "Applying Strimzi Cluster file"
-kubectl -n kafka apply -f "https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/${strimzi_version}/examples/kafka/kafka-ephemeral-single.yaml"
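The deleted install script above scrapes the Strimzi "latest release" page and extracts the version tag by splitting on `tag/` and then on `"` with `awk`. The same extraction can be illustrated offline; the sample HTML string and the `0.22.1` version below are hypothetical, chosen only to demonstrate the parsing:

```python
# Offline illustration of the script's awk-based version extraction:
# split the page text on 'tag/' and take everything up to the next quote.
import re

# Hypothetical fragment of the GitHub releases page (not fetched live).
html = '<a href="https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.22.1">'

match = re.search(r'tag/([^"]+)"', html)
version = match.group(1) if match else None
print(version)  # -> 0.22.1
```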
Reference in New Issue