Apache kafka (#1712)

* Adding Kafka samples for channel and source

Co-Authored-By: RichieEscarez <rescarez@google.com>

* fixing selector bug ... 🔥
This commit is contained in:
Matthias Wessendorf 2019-10-04 22:17:07 +02:00 committed by Knative Prow Robot
parent 1359104439
commit 69b071c47d
11 changed files with 573 additions and 0 deletions


@ -0,0 +1,19 @@
The following examples will help you understand how to use the different Apache Kafka components for Knative.
## Prerequisites
All examples require:
- A Kubernetes cluster with
- Knative Eventing v0.9+
- Knative Serving v0.9+
- An Apache Kafka cluster
If you want to run the Apache Kafka cluster on Kubernetes, the simplest option is to install it by using Strimzi. Check out the [Quickstart](https://strimzi.io/quickstarts/) guides for both Minikube and OpenShift. You can also install Apache Kafka directly on a host outside of Kubernetes.
## Examples
A number of examples showing the `KafkaSource` and the `KafkaChannel` can be found here:
- [`KafkaSource` to `Service`](./source/README.md)
- [`KafkaChannel` and Broker](./channel/README.md)


@ -0,0 +1,8 @@
---
title: "Apache Kafka examples"
linkTitle: "Apache Kafka"
weight: 10
type: "docs"
---
{{% readfile file="README.md" relative="true" markdown="true" %}}


@ -0,0 +1,9 @@
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: broker-kafka-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display@sha256:1d6ddc00ab3e43634cd16b342f9663f9739ba09bc037d5dea175dc425a1bb955


@ -0,0 +1,37 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: events-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-watcher
rules:
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-ra-event-watcher
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-watcher
subjects:
  - kind: ServiceAccount
    name: events-sa
    namespace: default


@ -0,0 +1,15 @@
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: ApiServerSource
metadata:
  name: testevents-kafka-03
  namespace: default
spec:
  serviceAccountName: events-sa
  mode: Resource
  resources:
    - apiVersion: v1
      kind: Event
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Broker
    name: default


@ -0,0 +1,11 @@
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: testevents-trigger
  namespace: default
spec:
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: broker-kafka-display


@ -0,0 +1,145 @@
# Apache Kafka CRD default channel
You can install and configure the Apache Kafka CRD (`KafkaChannel`) as the default channel configuration in Knative Eventing.
## Prerequisites
You must ensure that you meet the [prerequisites listed in the Apache Kafka overview](../README.md).
You must also have the following tools installed:
- `curl`
- `sed`
## Creating a `KafkaChannel` channel CRD
Install the `KafkaChannel` sub-component on your Knative Eventing cluster:
```
curl -L "https://github.com/knative/eventing-contrib/releases/download/v0.9.0/kafka-channel.yaml" \
| sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \
| kubectl apply --filename -
```
> Note: The above assumes that you have Apache Kafka installed in the `kafka` namespace, as discussed [here](../README.md)!
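To preview what the `sed` substitution in the install pipeline does, without touching the cluster, you can run it on a sample line. The placeholder name comes from the released manifest; the bootstrap address is the Strimzi default used above:

```shell
# Preview the substitution performed by the install pipeline above.
# REPLACE_WITH_CLUSTER_URL is the placeholder in the released kafka-channel.yaml.
echo 'bootstrapServers: REPLACE_WITH_CLUSTER_URL' \
  | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/'
# prints: bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092
```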
Once the `KafkaChannel` API is available, create a new object by configuring the YAML file as follows:
```
cat <<-EOF | kubectl apply -f -
---
apiVersion: messaging.knative.dev/v1alpha1
kind: KafkaChannel
metadata:
  name: my-kafka-channel
spec:
  numPartitions: 1
  replicationFactor: 3
EOF
```
You can now set the `KafkaChannel` CRD as the default channel configuration.
## Specifying the default channel configuration
To configure the usage of the `KafkaChannel` CRD as the [default channel configuration](channels/default-channels.md), edit the `default-ch-webhook` ConfigMap as follows:
```
cat <<-EOF | kubectl apply -f -
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
data:
  # Configuration for defaulting channels that do not specify CRD implementations.
  default-ch-config: |
    clusterDefault:
      apiVersion: messaging.knative.dev/v1alpha1
      kind: KafkaChannel
EOF
```
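The same ConfigMap can also scope defaults per namespace. A sketch of the `data` section (the `namespaceDefaults` key and the example namespace are assumptions based on the default channels documentation):

```yaml
data:
  default-ch-config: |
    clusterDefault:
      apiVersion: messaging.knative.dev/v1alpha1
      kind: KafkaChannel
    namespaceDefaults:
      my-namespace:                # hypothetical namespace
        apiVersion: messaging.knative.dev/v1alpha1
        kind: InMemoryChannel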
## Creating an Apache Kafka channel using the default channel configuration
Now that `KafkaChannel` is set as the default channel configuration, you can create a new Apache Kafka channel by using the generic `Channel` object from the `channels.messaging.knative.dev` CRD:
```
cat <<-EOF | kubectl apply -f -
---
apiVersion: messaging.knative.dev/v1alpha1
kind: Channel
metadata:
  name: testchannel-one
EOF
```
Check Kafka for a `testchannel-one` topic. With Strimzi this can be done by using the command:
```
kubectl -n kafka exec -it my-cluster-kafka-0 -- bin/kafka-topics.sh --zookeeper localhost:2181 --list
```
The result is:
```
...
knative-messaging-kafka.default.testchannel-one
...
```
The Apache Kafka topic that is created by the channel implementation is prefixed with `knative-messaging-kafka`. This indicates it is an Apache Kafka channel from Knative. It contains the name of the namespace, `default` in this example, followed by the actual name of the channel.
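The naming convention above can be sketched as a simple string template:

```shell
# Compose the topic name a KafkaChannel maps to:
# knative-messaging-kafka.<namespace>.<channel-name>
ns="default"
channel="testchannel-one"
topic="knative-messaging-kafka.${ns}.${channel}"
echo "${topic}"
# prints: knative-messaging-kafka.default.testchannel-one
```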
## Configuring the Knative broker for Apache Kafka channels
To set up a broker that will use the new default Kafka channels, you must inject a new _default_ broker, using the command:
```
kubectl label namespace default knative-eventing-injection=enabled
```
This will give you two pods, such as:
```
default-broker-filter-64658fc79f-nf596 1/1 Running 0 15m
default-broker-ingress-ff79755b6-vj9jt 1/1 Running 0 15m
```
Inside the Apache Kafka cluster you should see two new topics, such as:
```
...
knative-messaging-kafka.default.default-kn2-ingress
knative-messaging-kafka.default.default-kn2-trigger
...
```
## Creating a service and trigger to use the Apache Kafka broker
To use the Apache Kafka based broker, let's take a look at a simple demo: the `ApiServerSource` publishes events to the broker, and the `Trigger` API then routes them to a Knative `Service`.
1. Install `ksvc`, using the command:
```
kubectl apply -f 000-ksvc.yaml
```
2. Install a source that publishes to the default broker:
```
kubectl apply --filename 020-k8s-events.yaml
```
3. Create a trigger that routes the events to the `ksvc`:
```
kubectl apply -f 030-trigger.yaml
```
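The trigger in `030-trigger.yaml` has no `filter`, so it forwards every event on the broker to the service. As a sketch of narrowing it down, a filter on the CloudEvent type could look like the following (the `filter.attributes` stanza follows the Trigger API; the event type value is an assumption):

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: filtered-trigger
  namespace: default
spec:
  filter:
    attributes:
      type: dev.knative.apiserver.resource.add   # assumed CloudEvent type; adjust to your events
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: broker-kafka-display
```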
## Verifying your Apache Kafka channel and broker
Now that your Eventing cluster is configured for Apache Kafka, you can verify
your configuration with the following options.
### Receive events via Knative
Now you can see the events in the log of the `ksvc` using the command:
```
kubectl logs --selector='serving.knative.dev/service=broker-kafka-display' -c user-container
```


@ -0,0 +1,202 @@
# Apache Kafka - Source Example
Tutorial on how to build and deploy a `KafkaSource` [Eventing source](../../../sources/README.md) using a Knative Serving `Service`.
## Prerequisites
You must ensure that you meet the [prerequisites listed in the Apache Kafka overview](../README.md).
## Creating a `KafkaSource` source CRD
1. Install the `KafkaSource` sub-component to your Knative cluster:
```
kubectl apply -f https://github.com/knative/eventing-contrib/releases/download/v0.9.0/kafka-importer.yaml
```
2. Check that the `kafka-controller-manager-0` pod is running.
```
kubectl get pods --namespace knative-sources
NAME READY STATUS RESTARTS AGE
kafka-controller-manager-0 1/1 Running 0 42m
```
3. Check the `kafka-controller-manager-0` pod logs.
```
$ kubectl logs kafka-controller-manager-0 -n knative-sources
2019/03/19 22:25:54 Registering Components.
2019/03/19 22:25:54 Setting up Controller.
2019/03/19 22:25:54 Adding the Apache Kafka Source controller.
2019/03/19 22:25:54 Starting Apache Kafka controller.
```
### Apache Kafka Topic (Optional)
1. If using Strimzi, you can create a topic by modifying `source/kafka-topic.yaml` with your desired:
- Topic
- Cluster Name
- Partitions
- Replicas
```yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: knative-demo-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
```
2. Deploy the `KafkaTopic`
```shell
$ kubectl apply -f kafka/source/samples/strimzi-topic.yaml
kafkatopic.kafka.strimzi.io/knative-demo-topic created
```
3. Ensure the `KafkaTopic` was created.
```shell
$ kubectl -n kafka get kafkatopics.kafka.strimzi.io
NAME AGE
knative-demo-topic 16s
```
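As a sanity check on the topic config above, `retention.ms: 7200000` is expressed in milliseconds; converting it to hours:

```shell
# retention.ms is in milliseconds: 7200000 ms -> hours
echo $(( 7200000 / 1000 / 60 / 60 ))
# prints: 2
```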
### Create the Event Display service
1. Build and deploy the Event Display Service.
```
$ kubectl apply --filename source/samples/event-display.yaml
...
service.serving.knative.dev/event-display created
```
1. Ensure that the Service pod is running. The pod name will be prefixed with
`event-display`.
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
event-display-00001-deployment-5d5df6c7-gv2j4 2/2 Running 0 72s
...
```
### Apache Kafka Event Source
1. Modify `source/event-source.yaml` with your bootstrap servers, topics, and other settings:
```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group
  bootstrapServers: my-cluster-kafka-bootstrap.kafka:9092 # note the kafka namespace
  topics: knative-demo-topic
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```
1. Deploy the event source.
```
$ kubectl apply -f kafka/source/samples/event-source.yaml
...
kafkasource.sources.eventing.knative.dev/kafka-source created
```
1. Check that the event source pod is running. The pod name will be prefixed
with `kafka-source`.
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-source-xlnhq-5544766765-dnl5s 1/1 Running 0 40m
```
1. Ensure the Apache Kafka Event Source started with the necessary
configuration.
```
$ kubectl logs --selector='knative-eventing-source-name=kafka-source'
{"level":"info","ts":"2019-04-01T19:09:32.164Z","caller":"receive_adapter/main.go:97","msg":"Starting Apache Kafka Receive Adapter...","Bootstrap Server":"...","Topics":".","ConsumerGroup":"...","SinkURI":"...","TLS":false}
```
### Verify
1. Produce a message (`{"msg": "This is a test!"}`) to the Apache Kafka topic, as shown below:
```
kubectl -n kafka run kafka-producer -ti --image=strimzi/kafka:0.14.0-kafka-2.3.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic knative-demo-topic
If you don't see a command prompt, try pressing enter.
>{"msg": "This is a test!"}
```
1. Check that the Apache Kafka Event Source consumed the message and sent it to
its sink properly.
```
$ kubectl logs --selector='knative-eventing-source-name=kafka-source'
...
{"level":"info","ts":"2019-04-15T20:37:24.702Z","caller":"receive_adapter/main.go:99","msg":"Starting Apache Kafka Receive Adapter...","bootstrap_server":"...","Topics":"knative-demo-topic","ConsumerGroup":"knative-group","SinkURI":"...","TLS":false}
{"level":"info","ts":"2019-04-15T20:37:24.702Z","caller":"adapter/adapter.go:100","msg":"Starting with config: ","bootstrap_server":"...","Topics":"knative-demo-topic","ConsumerGroup":"knative-group","SinkURI":"...","TLS":false}
{"level":"info","ts":1553034726.546107,"caller":"adapter/adapter.go:154","msg":"Successfully sent event to sink"}
```
1. Ensure the Event Display received the message sent to it by the Event Source.
```
$ kubectl logs --selector='serving.knative.dev/service=event-display' -c user-container
☁️ CloudEvent: valid ✅
Context Attributes,
SpecVersion: 0.2
Type: dev.knative.kafka.event
Source: dubee
ID: partition:0/offset:333
Time: 2019-03-19T22:32:06.535321588Z
ContentType: application/json
Extensions:
key:
Transport Context,
URI: /
Host: event-display.default.svc.cluster.local
Method: POST
Data,
{
"msg": "This is a test!"
}
```
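If you want to pull just the message payload out of a captured log line, a small offline sketch (the sample line mirrors the `Data` section shown above):

```shell
# Extract the "msg" field from a captured event-display log line.
log='  { "msg": "This is a test!" }'
msg=$(echo "$log" | sed -n 's/.*"msg": "\([^"]*\)".*/\1/p')
echo "$msg"
# prints: This is a test!
```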
## Teardown Steps
1. Remove the Apache Kafka Event Source
```
$ kubectl delete -f source/event-source.yaml
kafkasource.sources.eventing.knative.dev "kafka-source" deleted
```
2. Remove the Event Display
```
$ kubectl delete -f source/event-display.yaml
service.serving.knative.dev "event-display" deleted
```
3. Remove the Apache Kafka Event Controller
```
$ kubectl delete -f https://github.com/knative/eventing-contrib/releases/download/v0.9.0/kafka-importer.yaml
serviceaccount "kafka-controller-manager" deleted
clusterrole.rbac.authorization.k8s.io "eventing-sources-kafka-controller" deleted
clusterrolebinding.rbac.authorization.k8s.io "eventing-sources-kafka-controller" deleted
customresourcedefinition.apiextensions.k8s.io "kafkasources.sources.eventing.knative.dev" deleted
service "kafka-controller" deleted
statefulset.apps "kafka-controller-manager" deleted
```
4. (Optional) Remove the Apache Kafka Topic
```shell
$ kubectl delete -f kafka/source/samples/kafka-topic.yaml
kafkatopic.kafka.strimzi.io "knative-demo-topic" deleted
```


@ -0,0 +1,25 @@
# Copyright 2019 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display@sha256:1d6ddc00ab3e43634cd16b342f9663f9739ba09bc037d5dea175dc425a1bb955


@ -0,0 +1,75 @@
# Copyright 2019 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Replace the following before applying this file:
# KAFKA_CONSUMER_GROUP_NAME: Name of Kafka consumer group
# KAFKA_BOOTSTRAP_SERVERS: Comma-separated list of bootstrap servers
# KAFKA_TOPICS: Comma-separated list of topics
# KAFKA_SASL_ENABLE: Truthy value to enable/disable SASL, disabled by default (optional)
# KAFKA_SASL_USER_SECRET_NAME: Name of secret containing SASL user (optional)
# KAFKA_SASL_USER_SECRET_KEY: Key within secret containing SASL user (optional)
# KAFKA_SASL_PASSWORD_SECRET_NAME: Name of secret containing SASL password (optional)
# KAFKA_SASL_PASSWORD_SECRET_KEY: Key within secret containing SASL password (optional)
# KAFKA_TLS_ENABLE: Truthy to enable TLS, disabled by default (optional)
# KAFKA_TLS_CERT_SECRET_NAME: Name of secret containing client cert to use when connecting with TLS (optional)
# KAFKA_TLS_CERT_SECRET_KEY: Key within secret containing client cert to use when connecting with TLS (optional)
# KAFKA_TLS_KEY_SECRET_NAME: Name of secret containing client key to use when connecting with TLS (optional)
# KAFKA_TLS_KEY_SECRET_KEY: Key within secret containing client key to use when connecting with TLS (optional)
# KAFKA_TLS_CA_CERT_SECRET_NAME: Name of secret containing server CA cert to use when connecting with TLS (optional)
# KAFKA_TLS_CA_CERT_SECRET_KEY: Key within secret containing server CA cert to use when connecting with TLS (optional)
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: KAFKA_CONSUMER_GROUP_NAME
  bootstrapServers: KAFKA_BOOTSTRAP_SERVERS
  topics: KAFKA_TOPICS
  net:
    sasl:
      enable: KAFKA_SASL_ENABLE
      user:
        secretKeyRef:
          name: KAFKA_SASL_USER_SECRET_NAME
          key: KAFKA_SASL_USER_SECRET_KEY
      password:
        secretKeyRef:
          name: KAFKA_SASL_PASSWORD_SECRET_NAME
          key: KAFKA_SASL_PASSWORD_SECRET_KEY
    tls:
      enable: KAFKA_TLS_ENABLE
      cert:
        secretKeyRef:
          name: KAFKA_TLS_CERT_SECRET_NAME
          key: KAFKA_TLS_CERT_SECRET_KEY
      key:
        secretKeyRef:
          name: KAFKA_TLS_KEY_SECRET_NAME
          key: KAFKA_TLS_KEY_SECRET_KEY
      caCert:
        secretKeyRef:
          name: KAFKA_TLS_CA_CERT_SECRET_NAME
          key: KAFKA_TLS_CA_CERT_SECRET_KEY
  resources:
    limits:
      cpu: 250m
      memory: 512Mi
    requests:
      cpu: 250m
      memory: 512Mi
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display


@ -0,0 +1,27 @@
# Copyright 2019 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: knative-demo-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824