adding katacoda tutorials (#3142)

* adding katacoda scenarios

Katacoda scenarios let people try out Knative without installing
anything on their local machines. The goal is to add a number of basic as
well as advanced scenarios that serve as a guided tutorial for
Knative. We start off by adding a serving-intro tutorial.

* move up the scenario

* fixing the repo structure to enable multiple scenarios

* upgrading to knative 0.19.0

* adding yaml getting started tutorial

* don't install kn cli for yaml tutorial

* fix revision name

* fix curl command for yaml demo

* check status of the service

* update the title

* adding eventing intro scenario

* fix indent for katacoda markdown

* fix more markdown

* polish

* fix api version

* fix typos

* add a step for sequence

* add broker tutorial

* polish

* adding pathway file

* add order to tutorials

* incorporate feedback

* more feedback

* Apply suggestions from code review

Co-authored-by: Mike Petersen <mike.petersen@ibm.com>

* remove hardcoded version

* adding missing newlines

* fixing trailing whitespace

* more trailing whitespace

* Apply suggestions from code review

Co-authored-by: Mike Petersen <mike.petersen@ibm.com>

* reformat sentence

* review feedback

Co-authored-by: Carlos Santana <csantana23@gmail.com>

* review feedback

dark revision, full migration to green

* quote around T2 does not work

* update apiversion

* cosmetic fixes

* bump timeout to 90 seconds

* blue to green for yaml

Co-authored-by: Mike Petersen <mike.petersen@ibm.com>
Co-authored-by: Carlos Santana <csantana23@gmail.com>
Swapnil Bawaskar 2021-03-30 10:30:21 -07:00 committed by GitHub
parent e204743905
commit cc325fa475
37 changed files with 1033 additions and 0 deletions

katacoda.yaml (new file)

@ -0,0 +1 @@
scenario_root: tutorials/katacoda


@ -0,0 +1,35 @@
{
"title": "Getting Started with Knative Serving (kn cli)",
"description": "Introduction to Knative by installing knative serving and deploying an application (If you want to script your workflow, or really don't like yaml)",
"difficulty": "Beginner",
"time": "20",
"details": {
"steps": [
{
"title": "Step 1",
"text": "step1.md"
},
{
"title": "Step 2",
"text": "step2.md"
},
{
"title": "Step 3",
"text": "step3.md"
}
],
"intro": {
"code": "scripts/install-dependencies.sh",
"text": "intro.md"
},
"finish": {
"text": "finish.md"
}
},
"environment": {
"uilayout": "terminal"
},
"backend": {
"imageid": "minikube-running"
}
}


@ -0,0 +1,7 @@
## What is Knative?
Knative brings the "serverless" experience to Kubernetes. It also tries to codify common patterns and best practices for running applications while hiding the complexity of doing so on Kubernetes. It does so by providing two components:
- Eventing - Management and delivery of events
- Serving - Request-driven compute that can scale to zero
## What will we learn in this tutorial?
This tutorial will serve as an introduction to Knative. We will install Knative (Serving only), deploy an application, watch Knative's "scale down to zero" feature in action, then deploy a second version of the application and watch the traffic being split between the two versions.


@ -0,0 +1,12 @@
echo "Installing kn cli..."
wget https://storage.googleapis.com/knative-nightly/client/latest/kn-linux-amd64 -O kn
chmod +x kn
mv kn /usr/local/bin/
echo "Done"
echo "Waiting for Kubernetes to start. This may take a few moments, please wait..."
until minikube status &>/dev/null; do sleep 1; done
echo "Kubernetes Started"
export latest_version=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/knative/serving/tags?per_page=1 | jq -r .[0].name)
echo "Latest knative version is: ${latest_version}"


@ -0,0 +1,20 @@
## Installation
> The startup script running on the right will install the `kn` CLI and wait for Kubernetes to start. Once you see a prompt, you can click on the commands below at your own pace, and they will be copied and run for you in the terminal on the right.
1. Install Knative Serving's core components
```
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-core.yaml
```{{execute}}
1. Install Contour as the networking layer. (Knative also supports Gloo, Istio, and Kourier as options)
```
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/contour.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/net-contour.yaml
```{{execute}}
1. Configure Knative Serving to use Contour by default
```
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
```{{execute}}
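Before moving on, you can optionally confirm that the Serving control plane came up; all pods in the `knative-serving` namespace should eventually report `Running` (a read-only check):
```
kubectl get pods --namespace knative-serving
```{{execute}}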


@ -0,0 +1,26 @@
## Deploy and autoscale application
We are going to deploy the [Hello world sample web application](https://knative.dev/docs/serving/samples/hello-world/helloworld-go/). This basic web application reads in an env variable `TARGET` and prints `Hello ${TARGET}!`. If `TARGET` is not specified, it defaults to `World`.
We will now deploy the application by specifying the image location and the `TARGET` env variable set to `blue`.
Knative defines a `service.serving.knative.dev` CRD to control the lifecycle of the application (not to be confused with a Kubernetes Service). We will use the `kn` CLI to create the Knative service (this may take up to a minute):
```
kn service create demo --image gcr.io/knative-samples/helloworld-go --env TARGET=blue --autoscale-window 15s
```{{execute}}
We can now invoke the application using `curl`. We first need to figure out the IP address of minikube and the ingress port.
```
MINIKUBE_IP=$(minikube ip)
INGRESS_PORT=$(kubectl get svc envoy --namespace contour-external --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```{{execute}}
Then invoke the application using curl:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: demo.default.example.com'
```{{execute T1}}
### Scale down to zero
You can run `watch kubectl get pods`{{execute T2}} (may need two clicks) in a new terminal tab to see a pod created to serve the requests. Knative will scale this pod down to zero if there are no incoming requests; we configured this window to be 15 seconds with the `--autoscale-window 15s` flag above.
You can wait for the pods to scale down to zero and then issue the above `curl` again to see the pod spin up and serve the request.
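If you want to see exactly which pods back the service, they carry the `serving.knative.dev/service` label; running this read-only check before and after the 15-second window elapses should show the pod appear and then terminate:
```
kubectl get pods -l serving.knative.dev/service=demo
```{{execute T1}}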


@ -0,0 +1,47 @@
## Blue/Green deploy
The Knative `service` resource creates additional resources (route, configuration, and revision) to manage the lifecycle of the application.
- revision: just like a git revision, any change to the Service's `spec.template` results in a new revision
- route: controls traffic across one or more revisions
You can list these resources by running `kubectl get ksvc,configuration,route,revision` or by using the `kn` CLI, as sketched below.
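As a sketch, the equivalent read-only `kn` listings are:
```
kn service list
kn revision list
kn route list
```{{execute T1}}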
We will now update the service to change the `TARGET` env variable to `green`.
But before we do that, let us set the revision name to "demo-v1", so that we can direct traffic to it later:
```
kn service update demo --revision-name="demo-v1"
```{{execute T1}}
Now, let's update the env variable to `green`, but let's do it as a dark launch, i.e. zero traffic will go to this new revision:
```
kn service update demo --env TARGET=green --revision-name="demo-v2" --traffic demo-v1=100,demo-v2=0
```{{execute T1}}
This will result in a new `revision` being created. Verify this by running `kn revision list`{{execute T1}}.
Both revisions are capable of serving requests. Let's tag the `green` revision so that it gets a custom hostname we can use to reach it directly.
```
kn service update demo --tag demo-v2=v2
```{{execute T1}}
You can now test the `green` revision like so (the hostname can be listed with the `kn route describe demo` command):
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: v2-demo.default.example.com'
```{{execute T1}}
We can now split the traffic between the two revisions:
```
kn service update demo --traffic demo-v1=50,@latest=50
```{{execute T1}}
Then issue the curl command multiple times to see the traffic being split between the two revisions:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: demo.default.example.com'
```{{execute T1}}
We can now make all traffic go to the `green` revision:
```
kn service update demo --traffic @latest=100
```{{execute T1}}
Then issue the curl command multiple times to see that all traffic is now routed to the `green` revision:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: demo.default.example.com'
```{{execute T1}}
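Optionally, confirm the final traffic assignment from the route itself:
```
kn route describe demo
```{{execute T1}}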


@ -0,0 +1,35 @@
{
"title": "Getting Started with Knative Serving (yaml)",
"description": "Introduction to Knative by installing knative serving and deploy an application",
"difficulty": "Beginner",
"time": "20",
"details": {
"steps": [
{
"title": "Step 1",
"text": "step1.md"
},
{
"title": "Step 2",
"text": "step2.md"
},
{
"title": "Step 3",
"text": "step3.md"
}
],
"intro": {
"code": "scripts/install-dependencies.sh",
"text": "intro.md"
},
"finish": {
"text": "finish.md"
}
},
"environment": {
"uilayout": "terminal"
},
"backend": {
"imageid": "minikube-running"
}
}


@ -0,0 +1,7 @@
## What is Knative?
Knative brings the "serverless" experience to Kubernetes. It also tries to codify common patterns and best practices for running applications while hiding the complexity of doing so on Kubernetes. It does so by providing two components:
- Eventing - Management and delivery of events
- Serving - Request-driven compute that can scale to zero
## What will we learn in this tutorial?
This tutorial will serve as an introduction to Knative. We will install Knative (Serving only), deploy an application, watch Knative's "scale down to zero" feature in action, then deploy a second version of the application and watch the traffic being split between the two versions.


@ -0,0 +1,6 @@
echo "Waiting for Kubernetes to start. This may take a few moments, please wait..."
until minikube status &>/dev/null; do sleep 1; done
echo "Kubernetes Started"
export latest_version=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/knative/serving/tags?per_page=1 | jq -r .[0].name)
echo "Latest knative version is: ${latest_version}"


@ -0,0 +1,20 @@
## Installation
> The startup script running on the right will wait for Kubernetes to start. Once you see a prompt, you can click on the commands below at your own pace, and they will be copied and run for you in the terminal on the right.
1. Install Knative Serving's core components
```
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-core.yaml
```{{execute}}
1. Install Contour as the networking layer. (Knative also supports Gloo, Istio, and Kourier as options)
```
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/contour.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/net-contour.yaml
```{{execute}}
1. Configure Knative Serving to use Contour by default
```
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
```{{execute}}
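Before moving on, you can optionally confirm that the Serving control plane came up; all pods in the `knative-serving` namespace should eventually report `Running` (a read-only check):
```
kubectl get pods --namespace knative-serving
```{{execute}}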


@ -0,0 +1,43 @@
## Deploy and autoscale application
We are going to deploy the [Hello world sample application](https://knative.dev/docs/serving/samples/hello-world/helloworld-go/). This application reads in an env variable `TARGET` and prints `Hello ${TARGET}!`. If `TARGET` is not specified, it defaults to `World`.
We will now deploy the application by specifying the image location and the `TARGET` env variable.
Knative defines a `service.serving.knative.dev` CRD to control the lifecycle of the application (not to be confused with a Kubernetes Service). We will create the Knative service using the YAML below:
```
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      name: helloworld-go-blue
    spec:
      containers:
        - env:
            - name: TARGET
              value: blue
          image: gcr.io/knative-samples/helloworld-go
EOF
```{{execute}}
Check the status of the service by running `watch kubectl get ksvc`{{execute T2}}. When `READY` becomes `True`, the service is ready to serve traffic.
We can now invoke the application using `curl`. We first need to figure out the IP address of minikube and the ingress port.
```
MINIKUBE_IP=$(minikube ip)
INGRESS_PORT=$(kubectl get svc envoy --namespace contour-external --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')
```{{execute}}
Then invoke the application using curl:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: helloworld-go.default.example.com'
```{{execute T1}}
### Scale down to zero
You can run `watch kubectl get pods`{{execute T2}} in a new terminal tab to see a pod created to serve the requests. Knative will scale this pod down to zero if there are no incoming requests for 60 seconds (the default window).
You can wait for the pods to scale down to zero and then issue the above `curl` again to see the pod spin up and serve the request.
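If you want to see exactly which pods back the service, they carry the `serving.knative.dev/service` label; with no traffic for 60 seconds, this read-only listing should eventually come back empty:
```
kubectl get pods -l serving.knative.dev/service=helloworld-go
```{{execute T1}}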


@ -0,0 +1,98 @@
## Blue/Green deploy
The Knative `service` resource creates additional resources (route, configuration, and revision) to manage the lifecycle of the application.
- revision: just like a git revision, any change to the Service's `spec.template` results in a new revision
- route: controls traffic across one or more revisions
You can list these resources by running `kubectl get ksvc,configuration,route,revision`{{execute T1}} or by using the `kn` CLI.
We will now update the service to change the `TARGET` env variable to `green`. Note that the name of the service stays the same; we have only updated the value of the environment
variable and `.spec.template.metadata.name`.
```
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      name: helloworld-go-green
    spec:
      containers:
        - env:
            - name: TARGET
              value: green
          image: gcr.io/knative-samples/helloworld-go
EOF
```{{execute T1}}
This will result in a new `revision` being created. Verify this by running `kubectl get revisions`{{execute T1}}.
Both revisions are capable of serving requests. By default, all traffic is routed to the latest revision; you can test that by running the `curl` command again, as shown below.
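For convenience, here is that curl again; once the new revision is ready, it should print `Hello green!`:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: helloworld-go.default.example.com'
```{{execute T1}}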
We will now split our traffic between the two revisions by using the `traffic` block in the Service definition.
```
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      name: helloworld-go-green
    spec:
      containers:
        - env:
            - name: TARGET
              value: green
          image: gcr.io/knative-samples/helloworld-go
  traffic:
    - tag: current
      revisionName: helloworld-go-green
      percent: 50
    - tag: candidate
      revisionName: helloworld-go-blue
      percent: 50
    - tag: latest
      latestRevision: true
      percent: 0
EOF
```{{execute T1}}
Then issue the curl command multiple times to see the traffic being split between the two revisions:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: helloworld-go.default.example.com'
```{{execute T1}}
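Each `tag` in the traffic block also gets its own hostname of the form `<tag>-<service>.<namespace>.<domain>`, which lets you test a single revision in isolation; for example, the `candidate` tag should always answer with `Hello blue!`:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: candidate-helloworld-go.default.example.com'
```{{execute T1}}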
Once you are satisfied with the new revision, all the traffic can be moved to the new `green` revision:
```
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      name: helloworld-go-green
    spec:
      containers:
        - env:
            - name: TARGET
              value: green
          image: gcr.io/knative-samples/helloworld-go
  traffic:
    - tag: latest
      latestRevision: true
      percent: 100
EOF
```{{execute T1}}
Then issue the curl command multiple times to see that the traffic now goes to the new revision:
```
curl http://$MINIKUBE_IP:$INGRESS_PORT/ -H 'Host: helloworld-go.default.example.com'
```{{execute T1}}
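If you want to double-check the split without curl, the live traffic assignment is also recorded in the route's status (a read-only check):
```
kubectl get route helloworld-go --output jsonpath='{.status.traffic}'
```{{execute T1}}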



@ -0,0 +1,39 @@
{
"title": "Getting Started with Knative Eventing 1",
"description": "Introduction to Knative by installing knative eventing and implementing various event-processing topologies",
"difficulty": "Beginner",
"time": "20",
"details": {
"steps": [
{
"title": "Step 1",
"text": "step1.md"
},
{
"title": "Step 2",
"text": "step2.md"
},
{
"title": "Step 3",
"text": "step3.md"
},
{
"title": "Step 4",
"text": "step4.md"
}
],
"intro": {
"code": "scripts/install-dependencies.sh",
"text": "intro.md"
},
"finish": {
"text": "finish.md"
}
},
"environment": {
"uilayout": "terminal"
},
"backend": {
"imageid": "minikube-running"
}
}


@ -0,0 +1,10 @@
## What is Knative?
Knative brings the "serverless" experience to Kubernetes. It also tries to codify common patterns and best practices for
running applications while hiding away the complexity of doing so on Kubernetes. It does so by providing two
components:
- Eventing - Management and delivery of events
- Serving - Request-driven compute that can scale to zero
## What will we learn in this tutorial?
This tutorial will serve as an introduction to Knative Eventing. We will install Knative Eventing and go through
various event delivery scenarios an application may have, introducing Channels, Subscriptions, and Sequences along the way.


@ -0,0 +1,18 @@
echo "Starting Kubernetes and installing Knative Serving. This may take a few moments, please wait..."
until minikube status &>/dev/null; do sleep 1; done
echo "Kubernetes Started."
export latest_version=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/knative/serving/tags?per_page=1 | jq -r .[0].name)
echo "Latest knative version is: ${latest_version}"
echo "Installing Knative Serving..."
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-core.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/contour.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/net-contour.yaml
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
echo "Knative Serving Installed."


@ -0,0 +1,19 @@
## Installation
> The startup script running on the right will wait for Kubernetes to start and for Knative Serving to be installed. (Although Serving is not required for Eventing to work, we install it here so that we can create consumers succinctly.)
> Once you see a prompt, you can click on the commands below at your own pace, and they will be copied and run for you in the terminal on the right.
1. Install Knative Eventing's core components
```
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/eventing-crds.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/eventing-core.yaml
```{{execute}}
## Event Driven Architecture
In an event-driven architecture, microservices and functions execute business logic when they are triggered by an event.
The source that generates the event is called the "Producer" of the event, and the microservice/function that handles it is the "Consumer".
The microservices/functions in an event-driven architecture are constantly reacting to events and producing events themselves.
Producers should send their event data in a specific format, like [CloudEvents](https://cloudevents.io/), to make it easier
for consumers to handle the data. By default, Knative Eventing works with CloudEvents as a standard format for event data.
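As an illustration of the format (a sketch only, nothing to run yet; the receiver URL is a hypothetical placeholder), a CloudEvent delivered over HTTP in binary mode is just a POST whose `Ce-*` headers carry the event attributes and whose body carries the event data:
```
curl -v "http://<some-receiver-url>/" \
  -X POST \
  -H "Ce-Id: 123" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: example" \
  -H "Ce-Source: intro" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello CloudEvents!"}'
```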
In the next section we will look at various eventing topologies.


@ -0,0 +1,93 @@
## Event Topologies
We will walk through event topologies, from the simplest to the more complicated.
### 1:1 Event Delivery
The most straightforward use case is that whenever events are produced, you want some code to handle that event.
![1to1](./assets/1to1.png)
Looking at the diagram above, we'll create the components in reverse order.
Let's create a consumer that will display the events that are sent to it:
```
cat <<EOF | kubectl create -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
EOF
```{{execute}}
For the producer, we will use a PingSource, which creates events every minute.
The “sink” element describes where to send events. In this case, events are sent to the service named “event-display”,
which means there is a tight coupling between the producer and consumer.
```
cat <<EOF | kubectl create -f -
apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: test-ping-source
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "1 to 1 delivery!"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF
```{{execute}}
To verify event delivery, you can check the logs of the consumer with the following command (you will not see an event there until up to a minute after creating the producer):
```
# the pod may still be starting, or may have been scaled down to zero
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=event-display --timeout=90s
# get the logs
kubectl logs -l serving.knative.dev/service=event-display -c user-container --since=10m --tail=50
```{{execute}}
### N:1 Event Delivery
With a standard format for events, like CloudEvents, your function already knows how to handle
receiving the event, and you'll only need to update your business logic to handle processing the
new data type. As an extension of the previous example, a second producer can send events to the
same consumer. In the diagram below, you can see an updated drawing where a new producer (Producer2) and event (Event2) have been added.
![Nto1](./assets/Nto1.png)
Let us create the second producer:
```
cat <<EOF | kubectl create -f -
apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: test-ping-source2
spec:
  schedule: "*/1 * * * *"
  data: '{"note": "multiple events to the same function works!"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF
```{{execute}}
To verify event delivery, you can check the logs of the consumer with the following command (you will not see an event there until up to a minute after creating the producer):
```
# the pod may still be starting, or may have been scaled down to zero
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=event-display --timeout=90s
# see the logs
kubectl logs -l serving.knative.dev/service=event-display -c user-container --since=10m --tail=50
```{{execute}}
### Caveats
- There is a tight coupling between the producer and the consumer.
- Both retries and timeouts depend entirely on the producer. Events might get dropped due to errors or timeouts, and
cancellations may occur while handling the request.


@ -0,0 +1,116 @@
### 1:N Event Delivery
Similar to the first scenario, you can have multiple consumers that are all interested in the same event. You might want
to run multiple checks or update multiple systems with the new event. There are two ways you can handle this scenario.
The first way to handle this is an extension of the first diagram. Each consumer is directly tied to the producer.
![1toN](./assets/1toN.png)
This manner of handling multiple consumers for an event, also called the fanout pattern, introduces a number of complex
problems for our application architecture. What if the producer crashes after delivering an event to only a subset of consumers?
What if a consumer was temporarily unavailable; how does it get the messages it missed?
Rather than burdening the producers to handle these problems, Knative Eventing introduces the concept of a Channel.
![channel](./assets/channel.png)
### Channel
Channels help de-couple Producers from Consumers. The Producer only publishes to the channel, and the consumer registers a `Subscription` to get events from the channel.
There are several kinds of channels, but they all implement the capability to deliver events to all consumers and to persist the events. This resolves both of the problems (producer crashing, clients temporarily offline) mentioned above.
When you create the channel, you can choose which kind of channel is most appropriate for your use case.
For development, an “in memory” channel may be sufficient, but for production you may need persistence, retry, and replay capabilities for reliability and/or compliance.
### Subscription
Consumers of the events need to let the channel know they're interested in receiving events by creating a subscription.
Let's see this in action now. First, install the in-memory channel implementation. (Knative also supports the Apache Kafka, Google Cloud Pub/Sub, and NATS channels as options)
```
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/in-memory-channel.yaml
```{{execute}}
Now, create an in-memory channel. InMemory channels are great for testing because they add very little overhead and require
almost no resources; the downside, though, is that you get no persistence and no retries. For this example, an InMemory channel is well suited.
```
cat <<EOF | kubectl create -f -
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: pingevents
EOF
```{{execute}}
We will now create 3 consumers:
```
for i in 1 2 3; do
cat <<EOF | kubectl create -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display${i}
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
EOF
done
```{{execute}}
Now that the channel and the consumers exist, we need to create the subscriptions
to make sure the consumers can get the messages.
```
for i in 1 2 3; do
cat <<EOF | kubectl create -f -
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: subscriber-${i}
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: pingevents
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display${i}
EOF
done
```{{execute}}
Finally, we create the producer. As before, it is a PingSource. The “sink” element describes where to send
events. Rather than sending the events to a service, events are sent to the channel named “pingevents”, which means
there's no longer a tight coupling between the producer and consumer.
```
cat <<EOF | kubectl create -f -
apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: test-ping-source-channel
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "Message from Channel!"}'
  sink:
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
      name: pingevents
EOF
```{{execute}}
To verify event delivery, you can check the logs of all three consumers (you will not see an event there until up to a minute after creating the producer):
```
# the pods may still be starting, or may have been scaled down to zero
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=event-display1 --timeout=90s
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=event-display2 --timeout=90s
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=event-display3 --timeout=90s
# see the logs
kubectl logs -l serving.knative.dev/service=event-display1 -c user-container --since=10m --tail=50
kubectl logs -l serving.knative.dev/service=event-display2 -c user-container --since=10m --tail=50
kubectl logs -l serving.knative.dev/service=event-display3 -c user-container --since=10m --tail=50
```{{execute}}
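Optionally, before moving on, you can delete the PingSources created so far so that their once-a-minute events stop flowing (assuming you created all three in the previous steps):
```
kubectl delete pingsource test-ping-source test-ping-source2 test-ping-source-channel
```{{execute}}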


@ -0,0 +1,101 @@
## Sequences
The Unix philosophy states:
> Write programs that do one thing and do it well. Write programs to work together.
In an event-driven architecture too, it is useful to split up functionality into smaller chunks. These chunks can then be implemented as either microservices
or functions. We then need to pass the events from the producer through a series of consumers. This time, rather than creating several channels and subscriptions, we
will create a `Sequence` instead. A Sequence lets us define an in-order list of functions that will be invoked. Sequence creates `Channel`s and `Subscription`s under the hood.
Let's see this in action now. We will create the following topology, where the event is passed from the Producer to Consumer 1, which transforms the event and then passes it
along to Consumer 2, which displays the event.
![seq](./assets/sequence.png)
First, let's create the final consumer:
```
cat <<EOF | kubectl create -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display-chain
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
EOF
```{{execute}}
Then create the first consumer which will modify the message:
```
cat <<EOF | kubectl create -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: first
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/appender
          env:
            - name: MESSAGE
              value: " - Updating this message a little..."
EOF
```{{execute}}
Now, let's create the Sequence to execute the service "first" and then pass its result to the service "event-display-chain":
```
cat <<EOF | kubectl create -f -
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: sequence
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: first
  reply:
    ref:
      kind: Service
      apiVersion: serving.knative.dev/v1
      name: event-display-chain
EOF
```{{execute}}
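Since a Sequence is implemented on top of Channels and Subscriptions, you can peek at what it created under the hood with a read-only listing:
```
kubectl get inmemorychannels,subscriptions
```{{execute}}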
Finally, create the producer to create the event:
```
cat <<EOF | kubectl create -f -
apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: ping-source-sequence
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "Hello Sequence!"}'
  sink:
    ref:
      apiVersion: flows.knative.dev/v1
      kind: Sequence
      name: sequence
EOF
```{{execute}}
Verify that the sequence was executed and the message was updated by running the command below
(you will not see an event there until up to a minute after creating the producer):
```
# the pod may still be starting, or may have been scaled down to zero
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=event-display-chain --timeout=90s
# see the logs
kubectl -n default logs -l serving.knative.dev/service=event-display-chain -c user-container --since=10m --tail=50
```{{execute}}



@ -0,0 +1,35 @@
{
"title": "Getting Started with Knative Eventing 2",
"description": "Introduction to Broker-Trigger model in Knative Eventing",
"difficulty": "Beginner",
"time": "20",
"details": {
"steps": [
{
"title": "Step 1",
"text": "step1.md"
},
{
"title": "Step 2",
"text": "step2.md"
},
{
"title": "Step 3",
"text": "step3.md"
}
],
"intro": {
"code": "scripts/install-dependencies.sh",
"text": "intro.md"
},
"finish": {
"text": "finish.md"
}
},
"environment": {
"uilayout": "terminal"
},
"backend": {
"imageid": "minikube-running"
}
}


@ -0,0 +1,9 @@
## What is Knative?
Knative brings the "serverless" experience to Kubernetes. It also tries to codify common patterns and best practices for
running applications while hiding away the complexity of doing so on Kubernetes. It does so by providing two
components:
- Eventing - Management and delivery of events
- Serving - Request-driven compute that can scale to zero
## What will we learn in this tutorial?
Knative Eventing supports multiple modes of usage. This tutorial will serve as an introduction to the Broker-Trigger model in Knative Eventing.


@ -0,0 +1,18 @@
echo "Starting Kubernetes and installing Knative Serving. This may take a few moments, please wait..."
until minikube status &>/dev/null; do sleep 1; done
echo "Kubernetes Started."
export latest_version=$(curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/knative/serving/tags?per_page=1 | jq -r .[0].name)
echo "Latest knative version is: ${latest_version}"
echo "Installing Knative Serving..."
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/${latest_version}/serving-core.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/contour.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/${latest_version}/net-contour.yaml
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
echo "Knative Serving Installed."


@ -0,0 +1,28 @@
## Installation
> The startup script running on the right will wait for Kubernetes to start and for Knative Serving to be installed. (Although Serving is not required for Eventing to work, we install it here so that we can create consumers succinctly.)
> Once you see a prompt, you can click on the commands below at your own pace, and they will be copied and run for you in the terminal on the right.
1. Install Knative Eventing's core components
```
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/eventing-crds.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/eventing-core.yaml
```{{execute}}
1. Install an in-memory channel. (Knative also supports the Apache Kafka, Google Cloud Pub/Sub, and NATS channels as options)
```
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/in-memory-channel.yaml
```{{execute}}
1. Install a Broker
```
kubectl apply --filename https://github.com/knative/eventing/releases/download/${latest_version}/mt-channel-broker.yaml
```{{execute}}
## Event Driven Architecture
In an event-driven architecture, microservices and functions execute business logic when they are triggered by an event.
The source that generates the event is called the "Producer" of the event, and the microservice/function that handles it is the "Consumer".
The microservices/functions in an event-driven architecture are constantly reacting to events and producing events themselves.
Producers should send their event data in a specific format, like [CloudEvents](https://cloudevents.io/), to make it easier
for consumers to handle the data. By default, Knative Eventing works with CloudEvents as a standard format for event data.
In the next section we will look at the Broker-Trigger model.


@ -0,0 +1,111 @@
## Use Case
The Broker and Trigger model is useful for complex event delivery topologies like N:M:Z, i.e. a multitude of Sources sending events to Functions that consume and transform them,
which are then processed by even more Functions, and so on. With Channels and Subscriptions alone, it can get unwieldy to keep track of which Channel carries which events. Also,
sometimes you might only want to consume specific types of events; you would have to receive all the events and throw out the ones you're not interested in. Broker and Trigger
are meant to provide a more straightforward user experience, where the user only has to declare which events they are interested in and where to send them.
### Broker
Producers POST events to the Broker. Once an event has entered the Broker, it can be forwarded to interested consumers by using Triggers. This event delivery mechanism hides the
details of event routing from the event producer and the event consumer.
### Trigger
A Trigger describes a filter on event attributes; only matching events are delivered to the subscriber. This allows the consumer to receive just the subset of events it cares about.
### Example
Suppose we want to implement the following workflow for events:
![broker-eg](./assets/broker-eg.png)
We can implement this with Channels and Subscriptions, but to do so we'd need 6 Channels and two instances of Consumer1 (because we would need it to output
to two different channels to separate Red and Yellow without Consumer2 and Consumer4 having to filter out the events they didn't want). We would furthermore need 6
Subscription objects. By using Broker / Trigger, we only have to declare that Consumer1 is interested in Blue and Orange events, Consumer2 is interested in Red events, and so forth. The topology now becomes:
![broker](./assets/broker.png)
To see this in action, let us first create a Broker:
```
kubectl create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
EOF
```{{execute}}
Now we will create consumers that will simply log the event. (We will only create two consumers; the rest would be very similar.)
```
for i in 1 2; do
cat <<EOF | kubectl create -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: consumer${i}
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
EOF
done
```{{execute}}
Let us now create Triggers that will send each consumer only the events it is interested in.
For `consumer2`, we create a filter for Red events only:
```
kubectl apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: consumer2-red
spec:
  broker: default
  filter:
    attributes:
      type: red
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: consumer2
EOF
```{{execute}}
For `consumer1`, we will create two Triggers: one for blue and one for orange events.
```
kubectl apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: consumer1-blue
spec:
  broker: default
  filter:
    attributes:
      type: blue
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: consumer1
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: consumer1-orange
spec:
  broker: default
  filter:
    attributes:
      type: orange
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: consumer1
EOF
```{{execute}}
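Before publishing any events, you can verify that all three Triggers are ready and see which filter each one carries:
```
kubectl get triggers
```{{execute}}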


@ -0,0 +1,79 @@
## Publishing Events
Finally, let's publish some events. The Broker can only be accessed from within the cluster where Knative Eventing is installed, so we will create a pod in that cluster to
act as an event producer and execute the curl commands from it.
```
kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: post-event
  name: post-event
spec:
  containers:
    # This could be any image that we can exec into and that has curl available.
    - image: radial/busyboxplus:curl
      imagePullPolicy: IfNotPresent
      name: post-event
      resources: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      tty: true
EOF
# wait for it to be ready
kubectl wait pods/post-event --for=condition=ContainersReady --timeout=90s
```{{execute}}
Let us publish some [CloudEvents](https://cloudevents.io/). The commands below add the CloudEvents-specific headers to the curl request.
Publish `red` events using the command:
```
kubectl exec post-event -- curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
-X POST \
-H "Ce-Id: test" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: red" \
-H "Ce-Source: kata-tutorial" \
-H "Content-Type: application/json" \
-d '{"msg":"The event is red!"}'
```{{execute}}
Publish `blue` events using the command:
```
kubectl exec post-event -- curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
-X POST \
-H "Ce-Id: test" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: blue" \
-H "Ce-Source: kata-tutorial" \
-H "Content-Type: application/json" \
-d '{"msg":"The event is blue!"}'
```{{execute}}
Publish `orange` events using the command:
```
kubectl exec post-event -- curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
-X POST \
-H "Ce-Id: test" \
-H "Ce-Specversion: 1.0" \
-H "Ce-Type: orange" \
-H "Ce-Source: kata-tutorial" \
-H "Content-Type: application/json" \
-d '{"msg":"The event is orange!"}'
```{{execute}}
Verify that consumer1 receives blue and orange events and consumer2 receives red events:
```
# the pods may still be starting, or may have been scaled down to zero
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=consumer1 --timeout=90s
kubectl wait --for=condition=ready pod -l serving.knative.dev/service=consumer2 --timeout=90s
# see the logs
kubectl logs -l serving.knative.dev/service=consumer1 -c user-container --since=10m --tail=50
kubectl logs -l serving.knative.dev/service=consumer2 -c user-container --since=10m --tail=50
```{{execute}}
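When you are done experimenting, the helper pod can be removed:
```
kubectl delete pod post-event
```{{execute}}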