Fixing broken links and general cleanup (#3722)

* Fixing broken links

* fixing up the python sample a bit

* cleanup golang sample

* fix links

* move kafka source

* fix links

* fix links
Ashleigh Brennan 2021-06-04 09:53:45 -05:00 committed by GitHub
parent 7da688bc18
commit f21bd81918
12 changed files with 380 additions and 449 deletions
@@ -31,9 +31,7 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-go

## Before you begin

- A Kubernetes cluster with [Knative Eventing](../../../../admin/install) installed.
- [Docker](https://www.docker.com) installed and running on your local machine,
  and a Docker Hub account configured (we'll use it for a container registry).
@@ -42,105 +40,105 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-go

1. Create a new file named `helloworld.go` and paste the following code. This
   code creates a basic web server which listens on port 8080:

    ```go
    package main

    import (
    	"context"
    	"log"

    	cloudevents "github.com/cloudevents/sdk-go/v2"
    	"github.com/google/uuid"
    )

    func receive(ctx context.Context, event cloudevents.Event) (*cloudevents.Event, cloudevents.Result) {
    	// Here is where your code to process the event will go.
    	// In this example we will log the event msg
    	log.Printf("Event received. \n%s\n", event)
    	data := &HelloWorld{}
    	if err := event.DataAs(data); err != nil {
    		log.Printf("Error while extracting cloudevent Data: %s\n", err.Error())
    		return nil, cloudevents.NewHTTPResult(400, "failed to convert data: %s", err)
    	}
    	log.Printf("Hello World Message from received event %q", data.Msg)

    	// Respond with another event (optional)
    	// This is optional and is intended to show how to respond back with another event after processing.
    	// The response will go back into the knative eventing system just like any other event
    	newEvent := cloudevents.NewEvent()
    	newEvent.SetID(uuid.New().String())
    	newEvent.SetSource("knative/eventing/samples/hello-world")
    	newEvent.SetType("dev.knative.samples.hifromknative")
    	if err := newEvent.SetData(cloudevents.ApplicationJSON, HiFromKnative{Msg: "Hi from helloworld-go app!"}); err != nil {
    		return nil, cloudevents.NewHTTPResult(500, "failed to set response data: %s", err)
    	}
    	log.Printf("Responding with event\n%s\n", newEvent)
    	return &newEvent, nil
    }

    func main() {
    	log.Print("Hello world sample started.")
    	c, err := cloudevents.NewDefaultClient()
    	if err != nil {
    		log.Fatalf("failed to create client, %v", err)
    	}
    	log.Fatal(c.StartReceiver(context.Background(), receive))
    }
    ```
1. Create a new file named `eventschemas.go` and paste the following code. This
   defines the data schema of the CloudEvents.

    ```go
    package main

    // HelloWorld defines the Data of CloudEvent with type=dev.knative.samples.helloworld
    type HelloWorld struct {
    	// Msg holds the message from the event
    	Msg string `json:"msg,omitempty,string"`
    }

    // HiFromKnative defines the Data of CloudEvent with type=dev.knative.samples.hifromknative
    type HiFromKnative struct {
    	// Msg holds the message from the event
    	Msg string `json:"msg,omitempty,string"`
    }
    ```
1. In your project directory, create a file named `Dockerfile` and copy the code
   block below into it. For detailed instructions on dockerizing a Go app, see
   [Deploying Go servers with Docker](https://blog.golang.org/docker).

    ```docker
    # Use the official Golang image to create a build artifact.
    # This is based on Debian and sets the GOPATH to /go.
    # https://hub.docker.com/_/golang
    FROM golang:1.14 as builder

    # Copy local code to the container image.
    WORKDIR /app

    # Retrieve application dependencies using go modules.
    # Allows container builds to reuse downloaded dependencies.
    COPY go.* ./
    RUN go mod download

    # Copy local code to the container image.
    COPY . ./

    # Build the binary.
    # -mod=readonly ensures immutable go.mod and go.sum in container builds.
    RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o helloworld

    # Use a Docker multi-stage build to create a lean production image.
    # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
    FROM alpine:3
    RUN apk add --no-cache ca-certificates

    # Copy the binary to the production image from the builder stage.
    COPY --from=builder /app/helloworld /helloworld

    # Run the web service on container startup.
    CMD ["/helloworld"]
    ```
1. Create a new file, `sample-app.yaml` and copy the following service
   definition into the file. Make sure to replace `{username}` with your Docker
@@ -218,9 +216,9 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-go

1. Use the go tool to create a
   [`go.mod`](https://github.com/golang/go/wiki/Modules#gomod) manifest.

    ```shell
    go mod init github.com/knative/docs/docs/serving/samples/hello-world/helloworld-go
    ```
## Building and deploying the sample

@@ -231,25 +229,24 @@ folder) you're ready to build and deploy the sample app.
   Docker Hub, run these commands replacing `{username}` with your Docker Hub
   username:

    ```shell
    # Build the container on your local machine
    docker build -t {username}/helloworld-go .

    # Push the container to docker registry
    docker push {username}/helloworld-go
    ```

1. After the build has completed and the container is pushed to Docker Hub, you
   can deploy the sample application into your cluster. Ensure that the
   container image value in `sample-app.yaml` matches the container you built in
   the previous step. Apply the configuration using `kubectl`:

    ```shell
    kubectl apply --filename sample-app.yaml
    ```

1. The previous command created a namespace `knative-samples` and a default
   Broker in it. Verify using the following command:

    ```shell
    kubectl get broker --namespace knative-samples
@@ -258,9 +255,9 @@ folder) you're ready to build and deploy the sample app.

    **Note:** you can also use injection based on labels with the
    Eventing sugar controller.
    For how to install the Eventing sugar controller, see
    [Install optional Eventing extensions](../../../../admin/install/install-extensions#install-optional-eventing-extensions).

1. It deployed the helloworld-go app as a K8s Deployment and created a K8s
   service named helloworld-go. Verify using the following command:

    ```shell
@@ -269,8 +266,9 @@ folder) you're ready to build and deploy the sample app.
    kubectl --namespace knative-samples get svc helloworld-go
    ```

1. It created a Knative Eventing Trigger to route certain events to the
   helloworld-go application. Make sure that `Ready=true`:

    ```shell
    kubectl --namespace knative-samples get trigger helloworld-go
    ```
@@ -286,28 +284,31 @@ We can send an http request directly to the [Broker](../../../broker/)
with correct CloudEvent headers set.

1. Deploy a curl pod and SSH into it:

    ```bash
    kubectl -n knative-samples run curl --image=radial/busyboxplus:curl -it
    ```

1. Get the Broker URL:

    ```bash
    kubectl -n knative-samples get broker default
    ```

1. Run the following in the SSH terminal, replacing the URL with the URL of
   the default broker:

    ```bash
    curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/knative-samples/default" \
      -X POST \
      -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
      -H "Ce-Specversion: 1.0" \
      -H "Ce-Type: dev.knative.samples.helloworld" \
      -H "Ce-Source: dev.knative.samples/helloworldsource" \
      -H "Content-Type: application/json" \
      -d '{"msg":"Hello World from the curl pod."}'
    exit
    ```
### Verify that event is received by helloworld-go app

@@ -316,13 +317,13 @@ back with another event.

1. Display helloworld-go app logs:

    ```bash
    kubectl --namespace knative-samples logs -l app=helloworld-go --tail=50
    ```

    You should see something similar to:

    ```bash
    Event received.
    Validation: valid
    Context Attributes,
@@ -353,9 +354,7 @@ back with another event.
    ```

Play around with the CloudEvent attributes in the curl command and the trigger specification to [understand how triggers work](../../../broker/triggers).
## Verify reply from helloworld-go app

@@ -366,8 +365,8 @@ mesh via the Broker and can be delivered to other services using a Trigger

1. Deploy a pod that receives any CloudEvent and logs the event to its output.

    ```yaml
    kubectl -n knative-samples apply -f - <<EOF
    # event-display app deployment
    apiVersion: apps/v1
    kind: Deployment
@@ -402,65 +401,66 @@ mesh via the Broker and can be delivered to other services using a Trigger
    - protocol: TCP
      port: 80
      targetPort: 8080
    EOF
    ```
1. Create a trigger to deliver the event to the above service:

    ```yaml
    kubectl -n knative-samples apply -f - <<EOF
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
      name: event-display
      namespace: knative-samples
    spec:
      broker: default
      filter:
        attributes:
          type: dev.knative.samples.hifromknative
          source: knative/eventing/samples/hello-world
      subscriber:
        ref:
          apiVersion: v1
          kind: Service
          name: event-display
    EOF
    ```
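A Trigger's `filter.attributes` block is an exact-match test over the event's context attributes: every listed attribute must equal the event's value for the event to be delivered to the subscriber. A rough stdlib-only sketch of that matching rule (an illustration, not the actual Knative implementation):

```go
package main

import "fmt"

// matches reports whether an event's attributes satisfy a Trigger filter:
// every filter key must be present in the event with exactly the same value.
func matches(filter, event map[string]string) bool {
	for k, want := range filter {
		if event[k] != want {
			return false
		}
	}
	return true
}

func main() {
	// The filter from the Trigger above.
	filter := map[string]string{
		"type":   "dev.knative.samples.hifromknative",
		"source": "knative/eventing/samples/hello-world",
	}

	// The reply event emitted by helloworld-go.
	reply := map[string]string{
		"type":   "dev.knative.samples.hifromknative",
		"source": "knative/eventing/samples/hello-world",
		"id":     "8a7384b9-8bbe-4634-bf0f-ead07e450b2a",
	}
	// The original event sent from the curl pod.
	original := map[string]string{
		"type":   "dev.knative.samples.helloworld",
		"source": "dev.knative.samples/helloworldsource",
	}

	fmt.Println(matches(filter, reply))    // true: delivered to event-display
	fmt.Println(matches(filter, original)) // false: filtered out
}
```

This is why event-display only logs the reply events, not the events you send from the curl pod.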
1. [Send a CloudEvent to the Broker](#send-cloudevent-to-the-broker)

1. Check the logs of the `event-display` service:

    ```bash
    kubectl -n knative-samples logs -l app=event-display --tail=50
    ```

    You should see something similar to:

    ```bash
    cloudevents.Event
    Validation: valid
    Context Attributes,
      specversion: 1.0
      type: dev.knative.samples.hifromknative
      source: knative/eventing/samples/hello-world
      id: 8a7384b9-8bbe-4634-bf0f-ead07e450b2a
      time: 2019-10-04T22:53:39.844943931Z
      datacontenttype: application/json
    Extensions,
      knativearrivaltime: 2019-10-04T22:53:39Z
      knativehistory: default-kn2-ingress-kn-channel.knative-samples.svc.cluster.local
      traceparent: 00-4b01db030b9ea04bb150b77c8fa86509-2740816590a7604f-00
    Data,
      {
        "msg": "Hi from helloworld-go app!"
      }
    ```

    **Note:** You could use the above approach to test your applications too.

## Removing the sample app deployment

To remove the sample app from your cluster, delete the service record:

```bash
kubectl delete -f sample-app.yaml
```

@@ -1,10 +1,3 @@
# Hello World - Python

A simple web app written in Python that you can use to test Knative Eventing. It shows how to consume a [CloudEvent](https://cloudevents.io/) in Knative Eventing, and optionally how to respond with another CloudEvent in the HTTP response, by adding the CloudEvents headers outlined in the CloudEvents standard definition.
@@ -16,7 +9,7 @@ Follow the steps below to create the sample code and then deploy the app to your
cluster. You can also download a working copy of the sample, by running the
following commands:

```bash
# Clone the relevant branch version such as "release-0.13"
git clone -b "{{ branch }}" https://github.com/knative/docs knative-docs
cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
```

@@ -24,7 +17,7 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python

## Before you begin

- A Kubernetes cluster with [Knative Eventing](../../../../admin/install) installed.
- [Docker](https://www.docker.com) installed and running on your local machine,
  and a Docker Hub account configured (we'll use it for a container registry).
@@ -54,15 +47,14 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
    if __name__ == '__main__':
        app.run(debug=True, host='0.0.0.0', port=8080)
    ```

1. Add a `requirements.txt` file containing the following contents:

    ```bash
    Flask==1.1.1
    ```

1. In your project directory, create a file named `Dockerfile` and copy the code
   block below into it. For detailed instructions on dockerizing a Go app, see
   [Deploying Go servers with Docker](https://blog.golang.org/docker).
@@ -84,9 +76,7 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
    ```

1. Create a new file, `sample-app.yaml` and copy the following service definition into the file. Make sure to replace `{username}` with your Docker Hub username.

    ```yaml
    # Namespace for sample application with eventing enabled
@@ -156,64 +146,73 @@ cd knative-docs/docs/eventing/samples/helloworld/helloworld-python
Once you have recreated the sample code files (or used the files in the sample
folder) you're ready to build and deploy the sample app.

1. Use Docker to build the sample code into a container. To build and push with Docker Hub, run these commands replacing `{username}` with your Docker Hub username:

    ```bash
    # Build the container on your local machine
    docker build -t {username}/helloworld-python .

    # Push the container to docker registry
    docker push {username}/helloworld-python
    ```

1. After the build has completed and the container is pushed to Docker Hub, you can deploy the sample application into your cluster. Ensure that the container image value in `sample-app.yaml` matches the container you built in the previous step. Apply the configuration using `kubectl`:

    ```bash
    kubectl apply -f sample-app.yaml
    ```

1. The previous command creates a namespace `knative-samples` and labels it with `knative-eventing-injection=enabled`, to enable eventing in the namespace. Verify using the following command:

    ```bash
    kubectl get ns knative-samples --show-labels
    ```

1. It deployed the `helloworld-python` app as a K8s Deployment and created a K8s service named `helloworld-python`. Verify using the following command:

    ```bash
    kubectl --namespace knative-samples get deployments helloworld-python
    kubectl --namespace knative-samples get svc helloworld-python
    ```

1. It created a Knative Eventing Trigger to route certain events to the helloworld-python application. Make sure that `Ready=true`:

    ```bash
    kubectl -n knative-samples get trigger helloworld-python
    ```
## Send and verify CloudEvents

After you have deployed the application, and have verified that the namespace, sample application and trigger are ready, you can send a CloudEvent.

### Send CloudEvent to the Broker

You can send an HTTP request directly to the Knative [broker](../../../broker) if the correct CloudEvent headers are set.

1. Deploy a curl pod and SSH into it:

    ```bash
    kubectl --namespace knative-samples run curl --image=radial/busyboxplus:curl -it
    ```

1. Run the following command in the SSH terminal:

    ```bash
    curl -v "default-broker.knative-samples.svc.cluster.local" \
      -X POST \
      -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
      -H "Ce-Specversion: 0.3" \
      -H "Ce-Type: dev.knative.samples.helloworld" \
      -H "Ce-Source: dev.knative.samples/helloworldsource" \
      -H "Content-Type: application/json" \
      -d '{"msg":"Hello World from the curl pod."}'
    exit
    ```
### Verify that event is received by helloworld-python app

The helloworld-python app logs the context and the msg of the above event, and replies with another event.

1. Display helloworld-python app logs:

    ```shell
    kubectl --namespace knative-samples logs -l app=helloworld-python --tail=50
@@ -243,14 +242,15 @@ Helloworld-python app logs the context and the msg of the above event, and repli
    {"msg":"Hi from Knative!"}
    ```

Try the CloudEvent attributes in the curl command and the trigger specification to understand how [triggers](../../../broker/triggers) work.

## Verify reply from helloworld-python app

The `helloworld-python` app replies with an event of type `dev.knative.samples.hifromknative` and source `knative/eventing/samples/hello-world`. The event enters the eventing mesh through the broker, and can be delivered to event sinks using a trigger.
1. Deploy a pod that receives any CloudEvent and logs the event to its output:

    ```yaml
    kubectl -n knative-samples apply -f - <<EOF
    # event-display app deployment
    apiVersion: apps/v1
    kind: Deployment
@@ -284,11 +284,13 @@ The `helloworld-python` app replies with an event type `type= dev.knative.sample
    - protocol: TCP
      port: 80
      targetPort: 8080
    EOF
    ```

1. Create a trigger to deliver the event to the previously created service:

    ```yaml
    kubectl -n knative-samples apply -f - <<EOF
    apiVersion: eventing.knative.dev/v1
    kind: Trigger
    metadata:
@@ -305,17 +307,20 @@ The `helloworld-python` app replies with an event type `type= dev.knative.sample
      apiVersion: v1
      kind: Service
      name: event-display
    EOF
    ```
1. [Send a CloudEvent to the Broker](#send-cloudevent-to-the-broker)

1. Check the logs of the `event-display` Service:

    ```bash
    kubectl -n knative-samples logs -l app=event-display --tail=50
    ```

    Example output:

    ```bash
    cloudevents.Event
    Validation: valid
    Context Attributes,
@@ -333,15 +338,12 @@ The `helloworld-python` app replies with an event type `type= dev.knative.sample
    {
      "msg": "Hi from helloworld- app!"
    }
    ```

## Removing the sample app deployment

To remove the sample app from your cluster, delete the service record:

```bash
kubectl delete -f sample-app.yaml
```

@@ -1,10 +1,3 @@
# Parallel Example

The following examples will help you understand how to use Parallel to describe
@@ -19,12 +12,12 @@ All examples require:
- Knative Serving

All examples are using the
[default channel template](../../channels/create-default-channel).

## Examples

For each of these examples below, we'll use
[`PingSource`](../../sources/ping-source/) as the source of events.
We also use simple
[functions](https://github.com/lionelvillard/knative-functions) to perform

@@ -10,20 +10,20 @@ This page shows how to install and configure Apache Kafka Sink.

## Prerequisites

You must have a Kubernetes cluster with [Knative Eventing installed](../../../admin/install/).

## Installation

1. Install the Kafka controller:

    ```bash
    kubectl apply -f {{ artifact(org="knative-sandbox", repo="eventing-kafka-broker", file="eventing-kafka-controller.yaml") }}
    ```

1. Install the Kafka Sink data plane:

    ```bash
    kubectl apply -f {{ artifact(org="knative-sandbox", repo="eventing-kafka-broker", file="eventing-kafka-sink.yaml") }}
    ```

1. Verify that `kafka-controller` and `kafka-sink-receiver` are running:
@ -7,7 +7,7 @@ type: "docs"
# Apache Kafka Source Example
Tutorial on how to build and deploy a `KafkaSource` event source.
## Background
@ -17,8 +17,7 @@ The `KafkaSource` reads all the messages, from all partitions, and sends those m
## Prerequisites
- A Kubernetes cluster with [Knative Kafka Source installed](../../../admin/install/).
## Apache Kafka Topic (Optional)
@ -73,6 +72,7 @@ The `KafkaSource` reads all the messages, from all partitions, and sends those m
2. Build the Event Display Service (`event-display.yaml`)
```yaml
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
@ -85,23 +85,20 @@ The `KafkaSource` reads all the messages, from all partitions, and sends those m
- # This corresponds to
  # https://github.com/knative/eventing/tree/main/cmd/event_display/main.go
  image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
EOF
```
Example output:
```
service.serving.knative.dev/event-display created
```
1. Ensure that the Service pod is running. The pod name will be prefixed with
`event-display`.
```bash
$ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
event-display-00001-deployment-5d5df6c7-gv2j4   2/2     Running   0          72s
```
### Apache Kafka Event Source
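For orientation, a minimal `KafkaSource` that forwards one topic to the Event Display Service might look like the sketch below. The API version and the Kafka bootstrap address are assumptions, so adjust them to match your installation:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group
  # Assumed address of a Kafka cluster running in the `kafka` namespace
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - knative-demo-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```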
@ -1,10 +1,3 @@
# Autoscaling concepts
This section covers conceptual information about which Autoscaler types are supported, as well as fundamental information about how autoscaling is configured.
@ -15,7 +8,7 @@ Knative Serving supports the implementation of Knative Pod Autoscaler (KPA) and
**IMPORTANT:** If you want to use Kubernetes Horizontal Pod Autoscaler (HPA),
you must install it after you install Knative Serving.
For how to install HPA, see [Install optional Serving extensions](../../../admin/install/install-extensions).
### Knative Pod Autoscaler (KPA)
@ -80,9 +73,6 @@ The type of Autoscaler implementation (KPA or HPA) can be configured by using th
pod-autoscaler-class: "kpa.autoscaling.knative.dev"
```
## Global versus per-revision settings
Autoscaling in Knative can be configured using either global or per-revision settings.
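As a concrete sketch, a per-revision setting is applied as an annotation on the revision template. The `autoscaling.knative.dev/target` annotation is a real Knative annotation; the service name below is illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go  # illustrative name
spec:
  template:
    metadata:
      annotations:
        # Per-revision setting: target 50 concurrent requests per pod
        autoscaling.knative.dev/target: "50"
```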
@ -1,9 +1,3 @@
# Enabling requests to Knative services when additional authorization policies are enabled
Knative Serving system pods, such as the activator and autoscaler components, require access to your deployed Knative services.
@ -14,7 +8,7 @@ If you have configured additional security features, such as Istio's authorizati
You must meet the following prerequisites to use Istio AuthorizationPolicy:
- Istio must be used for your Knative Ingress.
See [Install a networking layer](../../../admin/install/install-serving-with-yaml#install-a-networking-layer).
- Istio sidecar injection must be enabled.
See the [Istio Documentation](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/).
@ -56,13 +50,13 @@ $ kubectl exec deployment/httpbin -c httpbin -it -- curl -s http://httpbin.knati
- In STRICT mode, requests will simply be rejected.
To understand when requests are forwarded through the activator, see [documentation](../load-balancing/target-burst-capacity/) on the `TargetBurstCapacity` setting.
This also means that many Istio AuthorizationPolicies won't work as expected. For example, if you set up a rule allowing requests from a particular source into a Knative service, you will see requests being rejected if they are forwarded by the activator.
For example, the following policy allows requests from within pods in the `serving-tests` namespace to other pods in the `serving-tests` namespace.
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
@ -80,7 +74,7 @@ Requests here will fail when forwarded by the activator, because the Istio proxy
Currently, the easiest way around this is to explicitly allow requests from the `knative-serving` namespace, for example by adding it to the list in the above policy:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
@ -96,11 +90,9 @@ spec:
## Health checking and metrics collection
In addition to allowing your application path, you'll need to configure Istio AuthorizationPolicy to allow health checking and metrics collection to your applications from system pods. You can allow access from system pods by paths.
### Allowing access from system pods by paths
Knative system pods access your application using the following paths:
@ -112,8 +104,8 @@ The `/healthz` path allows system pods to probe the service.
You can add the `/metrics` and `/healthz` paths to the AuthorizationPolicy as shown in the example:
```yaml
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
@ -67,7 +67,7 @@ You can learn more about the build configuration syntax
## Configuring the Service descriptor
Importantly, in the `service.yaml` file, **change the
image reference to match up with the repository**, name, and version specified
in the [build.sbt](./build.sbt) in the previous section.
@ -111,18 +111,17 @@ local Docker Repository.
=== "yaml" === "yaml"
Apply the [Service yaml definition](./helloworld-scala.yaml): Apply the [Service yaml definition](./service.yaml):
```shell ```bash
kubectl apply --filename helloworld-scala.yaml kubectl apply -f service.yaml
``` ```
=== "kn" === "kn"
With `kn` you can deploy the service with With `kn` you can deploy the service with
```shell ```bash
kn service create helloworld-scala --image=docker.io/{username}/helloworld-scala --env TARGET="Scala Sample v1" kn service create helloworld-scala --image=docker.io/{username}/helloworld-scala --env TARGET="Scala Sample v1"
``` ```
@ -145,16 +144,11 @@ local Docker Repository.
http://helloworld-scala.default.1.2.3.4.sslip.io
```
=== "kubectl" === "kubectl"
Then find the service host: Then find the service host:
```shell ```bash
kubectl get ksvc helloworld-scala \ kubectl get ksvc helloworld-scala \
--output=custom-columns=NAME:.metadata.name,URL:.status.url --output=custom-columns=NAME:.metadata.name,URL:.status.url
@ -165,7 +159,7 @@ local Docker Repository.
Finally, to try your service, use the obtained URL:
```bash
curl -v http://helloworld-scala.default.1.2.3.4.sslip.io
```
@ -188,26 +182,16 @@ local Docker Repository.
curl -v http://helloworld-scala.default.1.2.3.4.sslip.io
```
## Cleanup
=== "kubectl"
```bash
kubectl delete -f service.yaml
```
=== "kn"
```bash
kn service delete helloworld-scala
```
@ -1,10 +1,3 @@
# Simple Traffic Splitting Between Revisions
This sample builds off of the [Creating a RESTful Service](../rest-api-go)
@ -13,13 +6,13 @@ splitting traffic between the two created Revisions.
## Prerequisites
1. Complete the Service creation steps in [Creating a RESTful Service](../rest-api-go).
1. Move into the docs directory:
```bash
cd $GOPATH/src/github.com/knative/docs
```
## Using the `traffic:` block
@ -39,42 +32,42 @@ us in the previous sample:
under `status:` we see a specific `revisionName:` here, which is what it has
resolved to (in this case the name we asked for).
```yaml
kubectl get ksvc -oyaml stock-service-example
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: stock-service-example
  ...
spec:
  template: ... # A defaulted version of what we provided.
  traffic:
  - latestRevision: true
    percent: 100
status:
  ...
  traffic:
  - percent: 100
    revisionName: stock-service-example-first
```
1. The `release_sample.yaml` in this directory overwrites the defaulted traffic
block with a block that fixes traffic to the revision
`stock-service-example-first`, while keeping the latest ready revision
available via the sub-route "latest".
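Based on that description, the `traffic:` block in `release_sample.yaml` has roughly the following shape. Treat this as a sketch consistent with the text above rather than the exact file contents:

```yaml
traffic:
- revisionName: stock-service-example-first
  percent: 100
- latestRevision: true
  tag: latest   # exposes the latest ready revision via the "latest" sub-route
  percent: 0
```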
```bash
kubectl apply -f docs/serving/samples/traffic-splitting/release_sample.yaml
```
1. The `spec` of the Service should now show our `traffic` block with the
Revision name we specified above.
```bash
kubectl get ksvc stock-service-example --output yaml
```
## Updating the Service
@ -88,45 +81,44 @@ will result in a new Revision.
For comparison, you can diff the `release_sample.yaml` with the
`updated_sample.yaml`.
```bash
diff serving/samples/traffic-splitting/release_sample.yaml \
serving/samples/traffic-splitting/updated_sample.yaml
```
1. Execute the command below to update the Service, resulting in a new Revision.
```bash
kubectl apply -f docs/serving/samples/traffic-splitting/updated_sample.yaml
```
1. With our `traffic` block, traffic will _not_ shift to the new Revision
automatically. However, it will be available via the URL associated with our
`latest` sub-route. This can be verified through the Service status, by
finding the entry of `status.traffic` for `latest`:
```bash
kubectl get ksvc stock-service-example --output yaml
```
1. The readiness of the Service can be verified through the Service Conditions.
When the Service conditions report it is ready again, you can access the new
Revision using the same method as in the previous sample, with the
Service hostname found above.
```bash
# Replace "latest" with the tag whose hostname you want.
export LATEST_HOSTNAME=`kubectl get ksvc stock-service-example --output jsonpath="{.status.traffic[?(@.tag=='latest')].url}" | cut -d'/' -f 3`
curl --header "Host: ${LATEST_HOSTNAME}" http://${INGRESS_IP}
```
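The `cut -d'/' -f 3` step above extracts the hostname from the URL: splitting `scheme://host/...` on `/` leaves the host in the third field. A standalone sketch, using an illustrative URL of the shape Knative produces:

```shell
# Illustrative URL, shaped like the values in .status.traffic[].url
url="http://latest-stock-service-example.default.1.2.3.4.sslip.io"
# Splitting on "/" yields "http:", "", then the host as field 3
host=$(echo "$url" | cut -d'/' -f 3)
echo "$host"
```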
- Visiting the Service's domain will still hit the original Revision, since we
configured it to receive 100% of our main traffic (you can also use the
`current` sub-route).
```bash
curl --header "Host:${SERVICE_HOSTNAME}" http://${INGRESS_IP}
```
## Traffic Splitting
@ -136,28 +128,28 @@ extending our `traffic` list, and splitting the `percent` across them.
1. Execute the command below to update the Service, resulting in a 50/50 traffic
split.
```bash
kubectl apply -f docs/serving/samples/traffic-splitting/split_sample.yaml
```
1. Verify the deployment by checking the service status:
```bash
kubectl get ksvc --output yaml
```
1. Once updated, `curl` requests to the base domain should result in responses
split evenly between `Welcome to the share app!` and
`Welcome to the stock app!`.
```bash
curl --header "Host:${SERVICE_HOSTNAME}" http://${INGRESS_IP}
```
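The 50/50 split is expressed by listing both revisions in the `traffic:` block. A sketch of what `split_sample.yaml` might contain; the second revision name is an assumption:

```yaml
traffic:
- revisionName: stock-service-example-first
  percent: 50
- revisionName: stock-service-example-second  # assumed name of the updated revision
  percent: 50
```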
## Clean Up
To clean up the sample service, run the command:
```bash
kubectl delete -f docs/serving/samples/traffic-splitting/split_sample.yaml
```
@ -1,17 +1,25 @@
# Enabling automatic TLS certificate provisioning
If you install and configure cert-manager, you can configure Knative to
automatically obtain new TLS certificates and renew existing ones for Knative
Services. To learn more about using secure connections in Knative, see
[Configuring HTTPS with TLS certificates](../using-a-tls-cert).
## Before you begin
The following must be installed on your Knative cluster:
- [Knative Serving](../../../admin/install/).
- A Networking layer such as Kourier, Istio with SDS v1.3 or higher, Contour v1.1 or higher, or Gloo v0.18.16 or higher. See [Install a networking layer](../../../admin/install/install-serving-with-yaml#install-a-networking-layer) or [Istio with SDS, version 1.3 or higher](../../../admin/install/installing-istio#installing-istio-with-SDS-to-secure-the-ingress-gateway).
!!! note
Currently, [Ambassador](https://github.com/datawire/ambassador) is unsupported for use with Auto TLS.
- [`cert-manager` version `1.0.0` or higher](../installing-cert-manager).
- Your Knative cluster must be configured to use a [custom domain](../using-a-custom-domain).
- Your DNS provider must be set up and configured for your domain.
- If you want to use the HTTP-01 challenge, you need to configure your custom
domain to map to the IP of the ingress. You can achieve this by adding a DNS A record that maps the domain to the IP, according to the instructions of your DNS provider.
## Automatic TLS provision mode
@ -35,34 +43,12 @@ Knative supports the following Auto TLS modes:
- When using the HTTP-01 challenge, **a certificate will be provisioned per Knative Service.**
- **HTTP-01 does not support provisioning a certificate per namespace.**
## Enabling Auto TLS
1. Create and add the `ClusterIssuer` configuration file to your Knative cluster to define who issues the TLS certificates, how requests are validated,
and which DNS provider validates those requests.
### ClusterIssuer for DNS-01 challenge
Use the cert-manager reference to determine how to configure your
`ClusterIssuer` file:
@ -78,7 +64,7 @@ and which DNS provider validates those requests.
the Let's Encrypt account info, required `DNS-01` challenge type, and
Cloud DNS provider info defined. For the complete Google Cloud DNS
example, see
[Configuring HTTPS with cert-manager and Google Cloud DNS](./using-cert-manager-on-gcp).
```yaml
apiVersion: cert-manager.io/v1
@ -106,11 +92,11 @@ and which DNS provider validates those requests.
key: key.json
```
### ClusterIssuer for HTTP-01 challenge
Run the following command to apply the ClusterIssuer for the HTTP-01 challenge:
```yaml
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
@ -130,8 +116,8 @@ and which DNS provider validates those requests.
1. Ensure that the ClusterIssuer is created successfully:
```bash
kubectl get clusterissuer <cluster-issuer-name> -o yaml
```
Result: The `Status.Conditions` should include `Ready=True`.
@ -1,9 +1,3 @@
# Creating and using Subroutes
Subroutes are most effective when used with multiple revisions. When defining a Knative service/route, the traffic section of the spec can split between the different revisions. For example:
@ -37,4 +31,4 @@ traffic:
In the above example, you can access the staging target by accessing `staging-<route name>.<namespace>.<domain>`. The targets for `bar` and `baz` can only be accessed using the main route, `<route name>.<namespace>.<domain>`.
When a traffic target gets tagged, a new Kubernetes service is created for it so that other services can also access it within the cluster. From the above example, a new Kubernetes service called `staging-<route name>` will be created in the same namespace. This service has the ability to override the visibility of this specific route by applying the label `networking.knative.dev/visibility` with value `cluster-local`. See the documentation on [private services](../../../developer/serving/services/private-services) for more information about how to restrict visibility on the specific route.
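For reference, a tagged traffic target of the shape discussed above might look like the following sketch; the revision names are illustrative:

```yaml
traffic:
- revisionName: my-service-v1
  percent: 100
- revisionName: my-service-v2
  tag: staging   # reachable at staging-<route name>.<namespace>.<domain>
  percent: 0
```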
@ -82,6 +82,7 @@ nav:
- Configuring scale bounds: serving/autoscaling/scale-bounds.md
- Additional autoscaling configuration for Knative Pod Autoscaler: serving/autoscaling/kpa-specific.md
- Autoscale Sample App - Go: serving/autoscaling/autoscale-go/index.md
# Administrator topics
- Administrator Topics:
- Deployment Configuration: serving/services/deployment.md
- Kubernetes services: serving/knative-kubernetes-services.md
@ -149,6 +150,7 @@ nav:
- Create a SinkBinding object: eventing/sources/sinkbinding/getting-started.md
- SinkBinding reference: eventing/sources/sinkbinding/reference.md
- Camel source: eventing/sources/apache-camel-source/README.md
- Kafka source: eventing/sources/kafka-source/README.md
- Creating an event source:
- Overview: eventing/sources/creating-event-sources/README.md
- Writing an event source using Javascript: eventing/sources/creating-event-sources/writing-event-source-easy-way/README.md
@ -207,7 +209,6 @@ nav:
- Overview: eventing/samples/kafka/README.md
- Binding Example: eventing/samples/kafka/binding/index.md
- Channel Example: eventing/samples/kafka/channel/README.md
- Parallel:
- Overview: eventing/samples/parallel/README.md
- Multiple Cases: eventing/samples/parallel/multiple-branches/README.md