Fixing broken links and general cleanup (#3722)

* Fixing broken links

* fixing up the python sample a bit

* cleanup golang sample

* fix links

* move kafka source

* fix links

* fix links
Ashleigh Brennan, 2021-06-04 09:53:45 -05:00, committed by GitHub
parent 7da688bc18
commit f21bd81918
12 changed files with 380 additions and 449 deletions

View File

@ -31,9 +31,7 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-go
## Before you begin
- A Kubernetes cluster with [Knative Eventing](../../../../admin/install) installed.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
@ -42,105 +40,105 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-go
1. Create a new file named `helloworld.go` and paste the following code. This
code creates a basic web server which listens on port 8080:
```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
	"github.com/google/uuid"
)

func receive(ctx context.Context, event cloudevents.Event) (*cloudevents.Event, cloudevents.Result) {
	// Here is where your code to process the event will go.
	// In this example we will log the event msg.
	log.Printf("Event received. \n%s\n", event)
	data := &HelloWorld{}
	if err := event.DataAs(data); err != nil {
		log.Printf("Error while extracting cloudevent Data: %s\n", err.Error())
		return nil, cloudevents.NewHTTPResult(400, "failed to convert data: %s", err)
	}
	log.Printf("Hello World Message from received event %q", data.Msg)

	// Respond with another event (optional).
	// This is intended to show how to respond back with another event after processing.
	// The response will go back into the Knative Eventing system just like any other event.
	newEvent := cloudevents.NewEvent()
	newEvent.SetID(uuid.New().String())
	newEvent.SetSource("knative/eventing/samples/hello-world")
	newEvent.SetType("dev.knative.samples.hifromknative")
	if err := newEvent.SetData(cloudevents.ApplicationJSON, HiFromKnative{Msg: "Hi from helloworld-go app!"}); err != nil {
		return nil, cloudevents.NewHTTPResult(500, "failed to set response data: %s", err)
	}
	log.Printf("Responding with event\n%s\n", newEvent)
	return &newEvent, nil
}

func main() {
	log.Print("Hello world sample started.")
	c, err := cloudevents.NewDefaultClient()
	if err != nil {
		log.Fatalf("failed to create client, %v", err)
	}
	log.Fatal(c.StartReceiver(context.Background(), receive))
}
```
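If you do not need to reply with an event, the handler can instead return only a `cloudevents.Result`. A minimal sketch of that variant (the SDK accepts several handler signatures, including this one; the function name is illustrative):

```go
// receiveNoReply logs the event and acknowledges it without replying.
func receiveNoReply(ctx context.Context, event cloudevents.Event) cloudevents.Result {
	log.Printf("Event received. \n%s\n", event)
	return cloudevents.ResultACK
}
```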
1. Create a new file named `eventschemas.go` and paste the following code. This
defines the data schema of the CloudEvents.
```go
package main

// HelloWorld defines the Data of CloudEvent with type=dev.knative.samples.helloworld
type HelloWorld struct {
	// Msg holds the message from the event
	Msg string `json:"msg,omitempty,string"`
}

// HiFromKnative defines the Data of CloudEvent with type=dev.knative.samples.hifromknative
type HiFromKnative struct {
	// Msg holds the message from the event
	Msg string `json:"msg,omitempty,string"`
}
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For detailed instructions on dockerizing a Go app, see
[Deploying Go servers with Docker](https://blog.golang.org/docker).
```docker
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.14 as builder

# Copy local code to the container image.
WORKDIR /app

# Retrieve application dependencies using go modules.
# Allows container builds to reuse downloaded dependencies.
COPY go.* ./
RUN go mod download

# Copy local code to the container image.
COPY . ./

# Build the binary.
# -mod=readonly ensures immutable go.mod and go.sum in container builds.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o helloworld

# Use a Docker multi-stage build to create a lean production image.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates

# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/helloworld /helloworld

# Run the web service on container startup.
CMD ["/helloworld"]
```
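Optionally, verify the image locally before pushing it. A quick sketch, using a hypothetical local tag:

```shell
# Build the image and run it locally, exposing the receiver's port 8080.
docker build -t helloworld-go-local .
docker run --rm -p 8080:8080 helloworld-go-local
```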
1. Create a new file, `sample-app.yaml` and copy the following service
definition into the file. Make sure to replace `{username}` with your Docker Hub username.
@ -218,9 +216,9 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-go
1. Use the go tool to create a
[`go.mod`](https://github.com/golang/go/wiki/Modules#gomod) manifest.
```shell
go mod init github.com/knative/docs/docs/serving/samples/hello-world/helloworld-go
```
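The new `go.mod` has no dependency entries yet. A minimal local smoke test, assuming `helloworld.go` and `eventschemas.go` from the previous steps are in this directory:

```shell
# Record the two dependencies used by the sample in go.mod/go.sum.
go get github.com/cloudevents/sdk-go/v2 github.com/google/uuid

# Run the receiver locally; the CloudEvents default client listens on :8080.
go run . &

# Send a binary-mode CloudEvent to the local receiver.
curl -v http://localhost:8080 \
  -X POST \
  -H "Ce-Id: 1234" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.knative.samples.helloworld" \
  -H "Ce-Source: dev.knative.samples/helloworldsource" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello from a local test."}'
```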
## Building and deploying the sample
@ -231,25 +229,24 @@ folder) you're ready to build and deploy the sample app.
Docker Hub, run these commands replacing `{username}` with your Docker Hub
username:
```shell
# Build the container on your local machine
docker build -t {username}/helloworld-go .

# Push the container to the Docker registry
docker push {username}/helloworld-go
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the sample application into your cluster. Ensure that the
container image value in `sample-app.yaml` matches the container you built in
the previous step. Apply the configuration using `kubectl`:
```shell
kubectl apply --filename sample-app.yaml
```
1. The previous command creates a namespace called `knative-samples`, and creates a default Broker in it. Verify this using the following command:
```shell
kubectl get broker --namespace knative-samples
@ -258,9 +255,9 @@ folder) you're ready to build and deploy the sample app.
**Note:** you can also use injection based on labels with the
Eventing sugar controller.
For how to install the Eventing sugar controller, see
[Install optional Eventing extensions](../../../../admin/install/install-extensions#install-optional-eventing-extensions).
1. It deployed the helloworld-go app as a K8s Deployment and created a K8s Service named `helloworld-go`. Verify this using the following command:
```shell
@ -269,8 +266,9 @@ folder) you're ready to build and deploy the sample app.
kubectl --namespace knative-samples get svc helloworld-go
```
1. It created a Knative Eventing Trigger to route certain events to the helloworld-go application. Ensure that `Ready=true`:
```shell
kubectl --namespace knative-samples get trigger helloworld-go
```
@ -286,28 +284,31 @@ We can send an http request directly to the [Broker](../../../broker/)
with correct CloudEvent headers set.
1. Deploy a curl pod and SSH into it:

```bash
kubectl -n knative-samples run curl --image=radial/busyboxplus:curl -it
```
1. Get the Broker URL
```bash
kubectl -n knative-samples get broker default
```
1. Run the following command in the SSH terminal, replacing the URL with the URL of your default Broker:
```bash
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/knative-samples/default" \
  -X POST \
  -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.knative.samples.helloworld" \
  -H "Ce-Source: dev.knative.samples/helloworldsource" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello World from the curl pod."}'
exit
```
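If you prefer not to create a curl pod, an alternative sketch is to port-forward the Broker ingress Service (name and namespace taken from the URL above) and send the same request from your own machine:

```bash
# Forward local port 8080 to the broker ingress, then target the same path.
kubectl -n knative-eventing port-forward svc/broker-ingress 8080:80 &
curl -v "http://localhost:8080/knative-samples/default" \
  -X POST \
  -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: dev.knative.samples.helloworld" \
  -H "Ce-Source: dev.knative.samples/helloworldsource" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello World from the curl pod."}'
```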
### Verify that the event is received by the helloworld-go app
@ -316,13 +317,13 @@ back with another event.
1. Display the helloworld-go app logs:

```bash
kubectl --namespace knative-samples logs -l app=helloworld-go --tail=50
```
You should see something similar to:
```bash
Event received.
Validation: valid
Context Attributes,
@ -353,9 +354,7 @@ back with another event.
```
Play around with the CloudEvent attributes in the curl command and the trigger specification to [understand how triggers work](../../../broker/triggers).
## Verify reply from helloworld-go app
@ -366,8 +365,8 @@ mesh via the Broker and can be delivered to other services using a Trigger
1. Deploy a pod that receives any CloudEvent and logs the event to its output:

```yaml
kubectl -n knative-samples apply -f - <<EOF
# event-display app deployment
apiVersion: apps/v1
kind: Deployment
@ -402,65 +401,66 @@ mesh via the Broker and can be delivered to other services using a Trigger
- protocol: TCP
port: 80
targetPort: 8080
EOF
```
1. Create a trigger to deliver the event to the service created above:

```yaml
kubectl -n knative-samples apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display
  namespace: knative-samples
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.samples.hifromknative
      source: knative/eventing/samples/hello-world
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display
EOF
```
1. [Send a CloudEvent to the Broker](#send-cloudevent-to-the-broker)
1. Check the logs of the `event-display` service:

```bash
kubectl -n knative-samples logs -l app=event-display --tail=50
```

You should see something similar to:

```bash
cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.samples.hifromknative
  source: knative/eventing/samples/hello-world
  id: 8a7384b9-8bbe-4634-bf0f-ead07e450b2a
  time: 2019-10-04T22:53:39.844943931Z
  datacontenttype: application/json
Extensions,
  knativearrivaltime: 2019-10-04T22:53:39Z
  knativehistory: default-kn2-ingress-kn-channel.knative-samples.svc.cluster.local
  traceparent: 00-4b01db030b9ea04bb150b77c8fa86509-2740816590a7604f-00
Data,
  {
    "msg": "Hi from helloworld-go app!"
  }
```

**Note: You could use the above approach to test your applications too.**
## Removing the sample app deployment
To remove the sample app from your cluster, delete the service record:
```bash
kubectl delete -f sample-app.yaml
```

View File

@ -1,10 +1,3 @@
# Hello World - Python
A simple web app written in Python that you can use to test Knative Eventing. It shows how to consume a [CloudEvent](https://cloudevents.io/) in Knative Eventing, and optionally how to respond with another CloudEvent in the HTTP response, by adding the CloudEvents headers outlined in the CloudEvents specification.
@ -16,7 +9,7 @@ Follow the steps below to create the sample code and then deploy the app to your
cluster. You can also download a working copy of the sample, by running the
following commands:
```bash
# Clone the relevant branch version such as "release-0.13"
git clone -b "{{ branch }}" https://github.com/knative/docs knative-docs
cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
@ -24,7 +17,7 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
## Before you begin
- A Kubernetes cluster with [Knative Eventing](../../../../admin/install) installed.
- [Docker](https://www.docker.com) installed and running on your local machine,
and a Docker Hub account configured (we'll use it for a container registry).
@ -54,15 +47,14 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=8080)
```
1. Add a `requirements.txt` file containing the following contents:
```bash
Flask==1.1.1
```
1. In your project directory, create a file named `Dockerfile` and copy the code
block below into it. For general instructions on dockerizing an app, see
[Deploying Go servers with Docker](https://blog.golang.org/docker).
@ -84,9 +76,7 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
```
1. Create a new file, `sample-app.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub username.
```yaml
# Namespace for sample application with eventing enabled
@ -156,64 +146,73 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.
1. Use Docker to build the sample code into a container. To build and push with Docker Hub, run these commands replacing `{username}` with your Docker Hub username:
```bash
# Build the container on your local machine
docker build -t {username}/helloworld-python .
# Push the container to docker registry
docker push {username}/helloworld-python
```
1. After the build has completed and the container is pushed to Docker Hub, you
can deploy the sample application into your cluster. Ensure that the container image value in `sample-app.yaml` matches the container you built in the previous step. Apply the configuration using `kubectl`:
```bash
kubectl apply -f sample-app.yaml
```
1. The previous command creates a namespace `knative-samples` and labels it with `knative-eventing-injection=enabled`, to enable eventing in the namespace. Verify using the following command:
```bash
kubectl get ns knative-samples --show-labels
```
1. It deployed the `helloworld-python` app as a K8s Deployment and created a K8s Service named `helloworld-python`. Verify using the following command:
```bash
kubectl --namespace knative-samples get deployments helloworld-python
kubectl --namespace knative-samples get svc helloworld-python
```
1. It created a Knative Eventing Trigger to route certain events to the helloworld-python application. Make sure that `Ready=true`:
```bash
kubectl -n knative-samples get trigger helloworld-python
```
## Send and verify CloudEvents
After you have deployed the application, and have verified that the namespace, sample application and trigger are ready, you can send a CloudEvent.
### Send CloudEvent to the Broker
You can send an HTTP request directly to the Knative [broker](../../../broker) if the correct CloudEvent headers are set.
1. Deploy a curl pod and SSH into it:
```bash
kubectl --namespace knative-samples run curl --image=radial/busyboxplus:curl -it
```
1. Run the following command in the SSH terminal:
```bash
curl -v "default-broker.knative-samples.svc.cluster.local" \
-X POST \
-H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
-H "Ce-specversion: 0.3" \
-H "Ce-Type: dev.knative.samples.helloworld" \
-H "Ce-Source: dev.knative.samples/helloworldsource" \
-H "Content-Type: application/json" \
-d '{"msg":"Hello World from the curl pod."}'
exit
```
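You can also smoke-test the app locally before building the container. A minimal sketch, assuming the sample's Flask file (called `app.py` here) and its requirements are installed:

```bash
# Run the Flask app locally (it listens on :8080), then post a CloudEvent to it.
python app.py &
curl -v http://localhost:8080 \
  -X POST \
  -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
  -H "Ce-specversion: 0.3" \
  -H "Ce-Type: dev.knative.samples.helloworld" \
  -H "Ce-Source: dev.knative.samples/helloworldsource" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello World from local curl."}'
```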
### Verify that the event is received by the helloworld-python app

The helloworld-python app logs the context and the msg of the above event, and replies with another event.

1. Display the helloworld-python app logs:
```shell
kubectl --namespace knative-samples logs -l app=helloworld-python --tail=50
@ -243,14 +242,15 @@ Helloworld-python app logs the context and the msg of the above event, and repli
{"msg":"Hi from Knative!"}
```
Experiment with the CloudEvent attributes in the curl command and the trigger specification to understand how [triggers](../../../broker/triggers) work.
## Verify reply from helloworld-python app
The `helloworld-python` app replies with an event of type `dev.knative.samples.hifromknative` and source `knative/eventing/samples/hello-world`. The event enters the eventing mesh through the broker, and can be delivered to event sinks using a trigger.
1. Deploy a pod that receives any CloudEvent and logs the event to its output:
```yaml
kubectl -n knative-samples apply -f - <<EOF
# event-display app deployment
apiVersion: apps/v1
kind: Deployment
@ -284,11 +284,13 @@ The `helloworld-python` app replies with an event type `type= dev.knative.sample
- protocol: TCP
port: 80
targetPort: 8080
EOF
```
1. Create a trigger to deliver the event to the previously created service:
```yaml
kubectl -n knative-samples apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
@ -305,17 +307,20 @@ The `helloworld-python` app replies with an event type `type= dev.knative.sample
apiVersion: v1
kind: Service
name: event-display
EOF
```
1. [Send a CloudEvent to the Broker](#send-cloudevent-to-the-broker)
1. Check the logs of `event-display` Service:
```bash
kubectl -n knative-samples logs -l app=event-display --tail=50
```
Example output:
```bash
cloudevents.Event
Validation: valid
Context Attributes,
@ -333,15 +338,12 @@ The `helloworld-python` app replies with an event type `type= dev.knative.sample
{
"msg": "Hi from helloworld- app!"
}
```

**Note: You could use the above approach to test your applications too.**
## Removing the sample app deployment
To remove the sample app from your cluster, delete the service record:
```bash
kubectl delete -f sample-app.yaml
```

View File

@ -1,10 +1,3 @@
# Parallel Example
The following examples will help you understand how to use Parallel to describe
@ -19,12 +12,12 @@ All examples require:
- Knative Serving
All examples are using the
[default channel template](../../channels/create-default-channel).
## Examples
For each of the examples below, we'll use
[`PingSource`](../../sources/ping-source/) as the source of events.
We also use simple
[functions](https://github.com/lionelvillard/knative-functions) to perform

View File

@ -10,20 +10,20 @@ This page shows how to install and configure Apache Kafka Sink.
## Prerequisites
You must have a Kubernetes cluster with [Knative Eventing installed](../../../admin/install/).
## Installation
1. Install the Kafka controller:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox", repo="eventing-kafka-broker", file="eventing-kafka-controller.yaml") }}
```
1. Install the Kafka Sink data plane:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox", repo="eventing-kafka-broker", file="eventing-kafka-sink.yaml") }}
```
1. Verify that `kafka-controller` and `kafka-sink-receiver` are running:
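One way to check, assuming the Kafka components run in the default `knative-eventing` namespace:

```bash
kubectl get deployments.apps -n knative-eventing kafka-controller kafka-sink-receiver
```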

View File

@ -7,7 +7,7 @@ type: "docs"
# Apache Kafka Source Example
This tutorial shows how to build and deploy a `KafkaSource` event source.
## Background
@ -17,8 +17,7 @@ The `KafkaSource` reads all the messages, from all partitions, and sends those m
## Prerequisites
- Ensure that you meet the [prerequisites listed in the Apache Kafka overview](../).
- A Kubernetes cluster with [Knative Kafka Source installed](../../../admin/install/).
## Apache Kafka Topic (Optional)
@ -73,6 +72,7 @@ The `KafkaSource` reads all the messages, from all partitions, and sends those m
2. Build the Event Display Service (`event-display.yaml`)
```yaml
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
@ -85,23 +85,20 @@ The `KafkaSource` reads all the messages, from all partitions, and sends those m
- # This corresponds to
# https://github.com/knative/eventing/tree/main/cmd/event_display/main.go
image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
EOF
```
Example output:

```
service.serving.knative.dev/event-display created
```
1. Ensure that the Service pod is running. The pod name will be prefixed with
`event-display`.
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
event-display-00001-deployment-5d5df6c7-gv2j4 2/2 Running 0 72s
...
```
### Apache Kafka Event Source
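A hedged sketch of a minimal `KafkaSource` that delivers messages from a topic to the `event-display` Service created above (the topic and bootstrap server address are assumptions):

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092  # assumed Strimzi-style bootstrap address
  topics:
    - knative-demo-topic                     # assumed topic name
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```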

View File

@ -1,10 +1,3 @@
# Autoscaling concepts
This section covers conceptual information about which Autoscaler types are supported, as well as fundamental information about how autoscaling is configured.
@ -15,7 +8,7 @@ Knative Serving supports the implementation of Knative Pod Autoscaler (KPA) and
**IMPORTANT:** If you want to use Kubernetes Horizontal Pod Autoscaler (HPA),
you must install it after you install Knative Serving.
For how to install HPA, see [Install optional Serving extensions](../../../admin/install/install-extensions#install-optional-serving-extensions).
### Knative Pod Autoscaler (KPA)
@ -80,9 +73,6 @@ The type of Autoscaler implementation (KPA or HPA) can be configured by using th
pod-autoscaler-class: "kpa.autoscaling.knative.dev"
```
## Global versus per-revision settings
Autoscaling in Knative can be configured using either global or per-revision settings.
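For example, a sketch of the per-revision form, which sets the `autoscaling.knative.dev/class` annotation on a Service's revision template (the service name is illustrative):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    metadata:
      annotations:
        # Selects the KPA for this revision; use "hpa.autoscaling.knative.dev" for HPA.
        autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
```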

View File

@ -1,9 +1,3 @@
# Enabling requests to Knative services when additional authorization policies are enabled
Knative Serving system pods, such as the activator and autoscaler components, require access to your deployed Knative services.
@ -14,7 +8,7 @@ If you have configured additional security features, such as Istio's authorizati
You must meet the following prerequisites to use Istio AuthorizationPolicy:
- Istio must be used for your Knative Ingress.
See [Install a networking layer](../../../admin/install/install-serving-with-yaml#install-a-networking-layer).
- Istio sidecar injection must be enabled.
See the [Istio Documentation](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/).
@ -56,13 +50,13 @@ $ kubectl exec deployment/httpbin -c httpbin -it -- curl -s http://httpbin.knati
- In STRICT mode, requests will simply be rejected.
To understand when requests are forwarded through the activator, see [documentation](../load-balancing/target-burst-capacity/) on the `TargetBurstCapacity` setting.
This also means that many Istio AuthorizationPolicies won't work as expected. For example, if you set up a rule allowing requests from a particular source into a Knative service, you will see requests being rejected if they are forwarded by the activator.
For example, the following policy allows requests from within pods in the `serving-tests` namespace to other pods in the `serving-tests` namespace.
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
@ -80,7 +74,7 @@ Requests here will fail when forwarded by the activator, because the Istio proxy
Currently, the easiest way around this is to explicitly allow requests from the `knative-serving` namespace, for example by adding it to the list in the above policy:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
@ -96,11 +90,9 @@ spec:
## Health checking and metrics collection
In addition to allowing your application path, you'll need to configure Istio AuthorizationPolicy to allow health checking and metrics collection to your applications from system pods. You can allow access from system pods by paths.
### Allowing access from system pods by paths
Knative system pods access your application using the following paths:
@ -112,8 +104,8 @@ The `/healthz` path allows system pods to probe the service.
You can add the `/metrics` and `/healthz` paths to the AuthorizationPolicy as shown in the example:
```yaml
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:

View File

@ -67,7 +67,7 @@ You can learn more about the build configuration syntax
## Configuring the Service descriptor
Importantly, in the `service.yaml` file, **change the
image reference to match the repository, name, and version** specified
in the [build.sbt](./build.sbt) in the previous section.
@ -111,18 +111,17 @@ local Docker Repository.
=== "yaml"
Apply the [Service yaml definition](./service.yaml):
```bash
kubectl apply -f service.yaml
```
=== "kn"
With `kn` you can deploy the service by running:
```bash
kn service create helloworld-scala --image=docker.io/{username}/helloworld-scala --env TARGET="Scala Sample v1"
```
@ -145,16 +144,11 @@ local Docker Repository.
http://helloworld-scala.default.1.2.3.4.sslip.io
```
=== "kubectl"
Then find the service host:
```bash
kubectl get ksvc helloworld-scala \
--output=custom-columns=NAME:.metadata.name,URL:.status.url
@ -165,7 +159,7 @@ local Docker Repository.
Finally, to try your service, use the obtained URL:
```bash
curl -v http://helloworld-scala.default.1.2.3.4.sslip.io
```
@ -188,26 +182,16 @@ local Docker Repository.
curl -v http://helloworld-scala.default.1.2.3.4.sslip.io
```
## Cleanup
=== "kubectl"
```bash
kubectl delete -f service.yaml
```
=== "kn"
```bash
kn service delete helloworld-scala
```

View File

@ -1,10 +1,3 @@
# Simple Traffic Splitting Between Revisions
This sample builds on the [Creating a RESTful Service](../rest-api-go)
@ -13,13 +6,13 @@ splitting traffic between the two created Revisions.
## Prerequisites
1. Complete the Service creation steps in [Creating a RESTful Service](../rest-api-go).
1. Move into the docs directory:
```bash
cd $GOPATH/src/github.com/knative/docs
```
## Using the `traffic:` block
@ -39,42 +32,42 @@ us in the previous sample:
under `status:` we see a specific `revisionName:` here, which is what it has
resolved to (in this case the name we asked for).
```yaml
kubectl get ksvc -oyaml stock-service-example

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: stock-service-example
...
spec:
  template: ... # A defaulted version of what we provided.
  traffic:
  - latestRevision: true
    percent: 100
status:
  ...
  traffic:
  - percent: 100
    revisionName: stock-service-example-first
```
1. The `release_sample.yaml` in this directory overwrites the defaulted traffic
block with a block that pins traffic to the revision
`stock-service-example-first`, while keeping the latest ready revision
available via the sub-route "latest".
```bash
kubectl apply -f docs/serving/samples/traffic-splitting/release_sample.yaml
```
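A sketch of the shape of that `traffic:` block (the `release_sample.yaml` shipped with the sample is authoritative):

```yaml
traffic:
- tag: current
  revisionName: stock-service-example-first
  percent: 100
- tag: latest
  latestRevision: true
  percent: 0
```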
1. The `spec` of the Service should now show our `traffic` block with the
Revision name we specified above.
```bash
kubectl get ksvc stock-service-example --output yaml
```
## Updating the Service
@ -88,45 +81,44 @@ will result in a new Revision.
For comparison, you can diff the `release_sample.yaml` with the
`updated_sample.yaml`.
```bash
diff serving/samples/traffic-splitting/release_sample.yaml \
serving/samples/traffic-splitting/updated_sample.yaml
```
1. Execute the command below to update the Service, resulting in a new Revision.
```bash
kubectl apply --filename docs/serving/samples/traffic-splitting/updated_sample.yaml
```
1. With our `traffic` block, traffic will _not_ shift to the new Revision
automatically. However, it will be available via the URL associated with our
`latest` sub-route. This can be verified through the Service status, by
finding the entry of `status.traffic` for `latest`:
```bash
kubectl get ksvc stock-service-example --output yaml
```
1. The readiness of the Service can be verified through the Service Conditions.
When the Service conditions report it is ready again, you can access the new
Revision using the same method as found in the previous sample using the
Service hostname found above.
```bash
# Replace "latest" with whichever tag for which we want the hostname.
export LATEST_HOSTNAME=`kubectl get ksvc stock-service-example --output jsonpath="{.status.traffic[?(@.tag=='latest')].url}" | cut -d'/' -f 3`
curl --header "Host: ${LATEST_HOSTNAME}" http://${INGRESS_IP}
```
- Visiting the Service's domain will still hit the original Revision, since we
configured it to receive 100% of our main traffic (you can also use the
`current` sub-route).
```bash
curl --header "Host:${SERVICE_HOSTNAME}" http://${INGRESS_IP}
```
## Traffic Splitting
@ -136,28 +128,28 @@ extending our `traffic` list, and splitting the `percent` across them.
1. Execute the command below to update the Service, resulting in a 50/50 traffic
split.
```bash
kubectl apply -f docs/serving/samples/traffic-splitting/split_sample.yaml
```
1. Verify the deployment by checking the service status:
```bash
kubectl get ksvc --output yaml
```
1. Once updated, `curl` requests to the base domain should result in responses
split evenly between `Welcome to the share app!` and
`Welcome to the stock app!`.
```shell
curl --header "Host:${SERVICE_HOSTNAME}" http://${INGRESS_IP}
```
## Clean Up
To clean up the sample service, run the command:
```bash
kubectl delete -f docs/serving/samples/traffic-splitting/split_sample.yaml
```

View File

@ -1,17 +1,25 @@
# Enabling automatic TLS certificate provisioning
If you install and configure cert-manager, you can configure Knative to
automatically obtain new TLS certificates and renew existing ones for Knative
Services. To learn more about using secure connections in Knative, see
[Configuring HTTPS with TLS certificates](../using-a-tls-cert).
## Before you begin
The following must be installed on your Knative cluster:
- [Knative Serving](../../../admin/install/).
- A Networking layer such as Kourier, Istio with SDS v1.3 or higher, Contour v1.1 or higher, or Gloo v0.18.16 or higher. See [Install a networking layer](../../../admin/install/install-serving-with-yaml#install-a-networking-layer) or [Istio with SDS, version 1.3 or higher](../../../admin/install/installing-istio#installing-istio-with-SDS-to-secure-the-ingress-gateway).
!!! note
Currently, [Ambassador](https://github.com/datawire/ambassador) is unsupported for use with Auto TLS.
- [`cert-manager` version `1.0.0` or higher](../installing-cert-manager).
- Your Knative cluster must be configured to use a [custom domain](../using-a-custom-domain).
- Your DNS provider must be set up and configured for your domain.
- If you want to use the HTTP-01 challenge, you need to configure your custom
domain to map to the IP of the ingress. You can achieve this by adding a DNS A record that maps the domain to the IP, according to the instructions of your DNS provider.
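For the HTTP-01 case, a sketch of finding the ingress IP for that A record (the gateway Service name varies by networking layer; `istio-ingressgateway` is assumed here):

```bash
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```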
## Automatic TLS provision mode
@ -35,34 +43,12 @@ Knative supports the following Auto TLS modes:
- When using HTTP-01 challenge, **a certificate will be provisioned per Knative Service.**
- **HTTP-01 does not support provisioning a certificate per namespace.**
## Enabling Auto TLS
To enable support for Auto TLS in Knative:
### Create cert-manager ClusterIssuer
1. Create and add the `ClusterIssuer` configuration file to your Knative cluster to define who issues the TLS certificates, how requests are validated,
and which DNS provider validates those requests.
### ClusterIssuer for DNS-01 challenge
Use the cert-manager reference to determine how to configure your
`ClusterIssuer` file:
@ -78,7 +64,7 @@ and which DNS provider validates those requests.
the Let's Encrypt account info, required `DNS-01` challenge type, and
Cloud DNS provider info defined. For the complete Google Cloud DNS
example, see
[Configuring HTTPS with cert-manager and Google Cloud DNS](./using-cert-manager-on-gcp).
```yaml
apiVersion: cert-manager.io/v1
@ -106,11 +92,11 @@ and which DNS provider validates those requests.
key: key.json
```
### ClusterIssuer for HTTP-01 challenge
Run the following command to apply the ClusterIssuer for the HTTP-01 challenge:
```yaml
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
@ -130,8 +116,8 @@ and which DNS provider validates those requests.
1. Ensure that the ClusterIssuer is created successfully:
```bash
kubectl get clusterissuer <cluster-issuer-name> -o yaml
```
Result: The `Status.Conditions` should include `Ready=True`.

View File

@ -1,9 +1,3 @@
# Creating and using Subroutes
Subroutes are most effective when used with multiple revisions. When defining a Knative Service or Route, the traffic section of the spec can be split between the different revisions. For example:
@ -37,4 +31,4 @@ traffic:
In the above example, you can access the staging target by accessing `staging-<route name>.<namespace>.<domain>`. The targets for `bar` and `baz` can only be accessed using the main route, `<route name>.<namespace>.<domain>`.
When a traffic target gets tagged, a new Kubernetes service is created for it so that other services can also access it within the cluster. From the above example, a new Kubernetes service called `staging-<route name>` will be created in the same namespace. This service has the ability to override the visibility of this specific route by applying the label `networking.knative.dev/visibility` with value `cluster-local`. See the documentation on [private services](../../../developer/serving/services/private-services) for more information about how to restrict visibility on the specific route.
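For example, a sketch of applying that label with `kubectl` (substitute your actual tag-generated Service name for the placeholder):

```bash
# Make the tag-generated Service reachable only from inside the cluster.
kubectl label service staging-<route name> networking.knative.dev/visibility=cluster-local
```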

View File

@ -82,6 +82,7 @@ nav:
- Configuring scale bounds: serving/autoscaling/scale-bounds.md
- Additional autoscaling configuration for Knative Pod Autoscaler: serving/autoscaling/kpa-specific.md
- Autoscale Sample App - Go: serving/autoscaling/autoscale-go/index.md
# Administrator topics
- Administrator Topics:
- Deployment Configuration: serving/services/deployment.md
- Kubernetes services: serving/knative-kubernetes-services.md
@ -149,6 +150,7 @@ nav:
- Create a SinkBinding object: eventing/sources/sinkbinding/getting-started.md
- SinkBinding reference: eventing/sources/sinkbinding/reference.md
- Camel source: eventing/sources/apache-camel-source/README.md
- Kafka source: eventing/sources/kafka-source/README.md
- Creating an event source:
- Overview: eventing/sources/creating-event-sources/README.md
- Writing an event source using Javascript: eventing/sources/creating-event-sources/writing-event-source-easy-way/README.md
@ -207,7 +209,6 @@ nav:
- Overview: eventing/samples/kafka/README.md
- Binding Example: eventing/samples/kafka/binding/index.md
- Channel Example: eventing/samples/kafka/channel/README.md
- Parallel:
- Overview: eventing/samples/parallel/README.md
- Multiple Cases: eventing/samples/parallel/multiple-branches/README.md