sync main with release-1.2 (#4772)

Signed-off-by: Carlos Santana <csantana23@gmail.com>
This commit is contained in:
Carlos Santana 2022-02-22 09:58:03 -05:00 committed by GitHub
parent ede8a0ee64
commit d92d2eece2
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
121 changed files with 99233 additions and 1952 deletions

View File

@ -149,6 +149,7 @@ jobs:
git check-attr --stdin linguist-vendored | grep -Ev ': (set|true)$' | cut -d: -f1 |
git check-attr --stdin ignore-lint | grep -Ev ': (set|true)$' | cut -d: -f1 |
grep -Ev '^(vendor/|third_party/|.git)' |
grep -v '\.svg$' |
xargs misspell -i importas -error |
reviewdog -efm="%f:%l:%c: %m" \
-name="github.com/client9/misspell" \
@ -179,6 +180,7 @@ jobs:
git check-attr --stdin linguist-vendored | grep -Ev ': (set|true)$' | cut -d: -f1 |
git check-attr --stdin ignore-lint | grep -Ev ': (set|true)$' | cut -d: -f1 |
grep -Ev '^(vendor/|third_party/|.git)' |
grep -v '\.svg$' |
xargs grep -nE " +$" |
reviewdog -efm="%f:%l:%m" \
-name="trailing whitespace" \
@ -211,7 +213,8 @@ jobs:
git check-attr --stdin linguist-vendored | grep -Ev ': (set|true)$' | cut -d: -f1 |
git check-attr --stdin ignore-lint | grep -Ev ': (set|true)$' | cut -d: -f1 |
grep -Ev '^(vendor/|third_party/|.git)' |
grep -v '\.ai$')
grep -v '\.ai$' |
grep -v '\.svg$')
for x in $LINT_FILES; do
# Based on https://stackoverflow.com/questions/34943632/linux-check-if-there-is-an-empty-line-at-the-end-of-a-file

View File

@ -1,3 +1,4 @@
vendor/*
third_party/*
*-lock.json
*.svg

View File

@ -66,10 +66,11 @@ aliases:
- pierDipi
- vaikas
knative-admin:
- carlisia
- csantanapr
- dprotaso
- duglin
- evankanderson
- itsmurugappan
- julz
- knative-prow-releaser-robot
- knative-prow-robot
@ -77,14 +78,15 @@ aliases:
- knative-test-reporter-robot
- lance
- pmorie
- psschwei
- rhuss
- smoser-ibm
- spencerdillard
- thisisnotapril
- vaikas
- xtreme-sameer-vohra
knative-release-leads:
- dprotaso
- psschwei
- carlisia
- xtreme-sameer-vohra
knative-robots:
- knative-prow-releaser-robot
- knative-prow-robot
@ -150,6 +152,7 @@ aliases:
- evankanderson
- gerardo-lc
- kvmware
- mgencur
- shinigambit
productivity-wg-leads:
- chizhg
@ -187,6 +190,7 @@ aliases:
- lionelvillard
steering-committee:
- csantanapr
- itsmurugappan
- lance
- pmorie
- thisisnotapril
@ -197,8 +201,8 @@ aliases:
- julz
- rhuss
trademark-committee:
- duglin
- evankanderson
- smoser-ibm
- spencerdillard
ux-wg-leads:
- csantanapr

View File

@ -14,6 +14,7 @@ nav:
- Blog:
- index.md
- Releases:
- releases/announcing-knative-v1-2-release.md
- releases/announcing-knative-v1-1-release.md
- releases/announcing-knative-v1-0-release.md
- releases/announcing-knative-v0-26-release.md

(Binary image files added in this commit; contents not shown. Sizes range from 1.2 KiB to 403 KiB.)

View File

@ -27,15 +27,10 @@ Details on intention to join CNCF as incubating project.
[Read more :octicons-arrow-right-24:](steering/knative-cncf-donation.md){ .md-button }
## Welcome new members to Steering Committee
Details on the Steering Committee 2021 election results.
[Read more :octicons-arrow-right-24:](steering/2021-12-14-steering-elections-results.md){ .md-button }
## Knative 1.2 is out!
Details on the 1.2 release of the Knative project.
[Read more :octicons-arrow-right-24:](releases/announcing-knative-v1-2-release.md){ .md-button }
## Knative 1.0 is out!
Details on the 1.0 release of the Knative project.
[Read more :octicons-arrow-right-24:](articles/knative-1.0.md){ .md-button }
## Highlighting the value of Knative for the c-suite

View File

@ -0,0 +1,200 @@
---
title: "v1.2 release"
linkTitle: "v1.2 release"
Author: "Samia Nneji"
Author handle: https://github.com/snneji
date: 2022-01-25
description: "Knative v1.2 release announcement"
type: "blog"
---
### Announcing Knative v1.2 Release
A new version of Knative is now available across multiple components.
Follow the instructions in the
[Installing Knative](https://knative.dev/docs/install/) documentation for each component.
#### Table of Contents
- [Highlights](#highlights)
- [Serving v1.2](#serving-v12)
- [Eventing v1.2](#eventing-v12)
- [Eventing Extensions](#eventing-extensions)
- [Apache Kafka Broker v1.2](#apache-kafka-broker-v12)
- [RabbitMQ Broker and Source v1.2](#rabbitmq-broker-and-source-v12)
- `kn` [CLI v1.2](#client-v12)
- [Knative Operator v1.2](#operator-v12)
- [Thank you contributors](#thank-you-contributors)
### Highlights
- Minimum Kubernetes version is now v1.21.
- Serving added experimental support for PVC.
- Eventing ConfigMap element names are standardized to kebab case.
Use `data-max-size` and `channel-template-spec` instead of the previous camel case
elements, which have been deprecated.
For more information, see [Eventing v1.2](#eventing-v12).
- The `kn` client has added autocomplete for several commands. For more information, see [CLI v1.2](#client-v12).
### Serving v1.2
<!-- Original notes are here: https://github.com/knative/serving/releases/tag/knative-v1.2.0 -->
#### 🚨 Breaking or Notable Changes
- Our minimum Kubernetes version is now v1.21. ([#12509](https://github.com/knative/serving/pull/12509))
- PodDisruptionBudget updated to v1 API. ([#12548](https://github.com/knative/serving/pull/12548))
#### 💫 New Features & Changes
- Improves the error message when a DomainMapping cannot be reconciled because `autocreate-cluster-domain-claims` is false and the CDC does not exist. ([#12439](https://github.com/knative/serving/pull/12439))
- Utilizes Kubernetes's immediate trigger of readiness probes after startup. Restores default `periodSeconds` for readiness probe to Kubernetes default (10s). ([#12550](https://github.com/knative/serving/pull/12550))
#### 🐞 Bug Fixes
- Changes liveness probes to directly probe the user container rather than queue proxy. ([#12479](https://github.com/knative/serving/pull/12479))
#### 🧪 Experimental
- Adds PVC support behind the feature flags `kubernetes.podspec-persistent-volume-claim` and `kubernetes.podspec-persistent-volume-write`. ([#12458](https://github.com/knative/serving/pull/12458))
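The two feature flags named above are toggled in Serving's `config-features` ConfigMap. A minimal sketch of enabling them (ConfigMap name and namespace assume a default Knative Serving installation):

```yaml
# Sketch: enable the experimental PVC feature flags in Knative Serving.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-persistent-volume-claim: "enabled"
  kubernetes.podspec-persistent-volume-write: "enabled"
```

With both flags enabled, a Service's pod spec may reference a PersistentVolumeClaim volume and mount it read-write.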
### Eventing v1.2
<!-- Original notes are here: https://github.com/knative/eventing/releases/tag/knative-v1.2.0 -->
#### 🚨 Breaking or Notable Changes
- Change default Broker delivery spec. ([#6011](https://github.com/knative/eventing/pull/6011))
- Unify inconsistent ConfigMaps ([#5875](https://github.com/knative/eventing/pull/5875)):
- The Channel template in the ConfigMap that Brokers use to declare the underlying channel must be located under the `channel-template-spec` element. The previous `channelTemplateSpec` element has been deprecated.
- PingSource's ConfigMap element for maximum size has been redefined as `data-max-size`.
The previous `dataMaxSize` element has been deprecated.
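As a sketch of the new kebab-case form, the Broker's channel template moves under `channel-template-spec` (the ConfigMap name below assumes the default Eventing install; PingSource's `data-max-size` lives in its own ConfigMap):

```yaml
# Sketch: default Broker channel template using the new kebab-case element.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-default-channel   # assumed default name
  namespace: knative-eventing
data:
  channel-template-spec: |          # previously: channelTemplateSpec (deprecated)
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
```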
#### 💫 New Features & Changes
- Traces generated by PingSource now include some Kubernetes attributes: `k8s.namespace`, `k8s.name`, `k8s.resource`. ([#5928](https://github.com/knative/eventing/pull/5928))
- Add new `new-trigger-filters` experimental feature. When enabled, Triggers support a new `filters` field that conforms to the filters API field defined in the [CloudEvents Subscriptions API](https://github.com/cloudevents/spec/blob/main/subscriptions/spec.md#324-filters). It allows you to specify a set of powerful filter expressions, where each expression evaluates to either true or false for each event. ([#5995](https://github.com/knative/eventing/pull/5995))
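For illustration, a Trigger using the experimental `filters` field might look like the following (the `exact` dialect comes from the CloudEvents Subscriptions API; the names and event type are illustrative, and the `new-trigger-filters` feature must be enabled):

```yaml
# Sketch: Trigger with the experimental CloudEvents-style `filters` field.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created-trigger       # illustrative name
spec:
  broker: default
  filters:
    - exact:
        type: com.example.order.created   # illustrative event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler           # illustrative name
```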
#### 🐞 Bug Fixes
- Fixes the following vulnerabilities ([#6057](https://github.com/knative/eventing/pull/6057)):
- `github.com/knative/pkg` contains a dependency that is subject to DoS attack.
- `github.com/kubernetes/utils` contains a security issue that was discovered where a user might be able to create a container with subpath volume mounts to access files and directories outside of the volume, including on the host filesystem.
### Eventing Extensions
#### Apache Kafka Broker v1.2
<!-- Original notes are here: https://github.com/knative-sandbox/eventing-kafka-broker/releases/tag/knative-v1.2.0 -->
#### 💫 New Features & Changes
- An HTTP header is supplied to your event consumers when the Broker they are communicating with supports reply events.
This header is always sent by this Kafka Broker, since it supports handling reply events. ([#1771](https://github.com/knative-sandbox/eventing-kafka-broker/pull/1771))
- Apply back-pressure by limiting the number of in-flight dispatch requests in the unordered event consumption. ([#1750](https://github.com/knative-sandbox/eventing-kafka-broker/pull/1750))
- Support TLS for the metrics server. Now, the receiver and the dispatcher accept the following environment variables ([#1707](https://github.com/knative-sandbox/eventing-kafka-broker/pull/1707)):
- `METRICS_PEM_CERT_PATH`: TLS cert path
- `METRICS_PEM_KEY_PATH`: TLS key path
- `METRICS_HOST`: metrics server host
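Wired into the receiver or dispatcher container, those variables might look like this (the mount path and certificate file names are illustrative; only the env var names come from the release notes):

```yaml
# Sketch: container env enabling TLS on the metrics server.
env:
  - name: METRICS_PEM_CERT_PATH
    value: /etc/metrics-tls/tls.crt   # illustrative mount path
  - name: METRICS_PEM_KEY_PATH
    value: /etc/metrics-tls/tls.key
  - name: METRICS_HOST
    value: "0.0.0.0"
```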
#### RabbitMQ Broker and Source v1.2
<!-- Original notes are here: https://github.com/knative-sandbox/eventing-rabbitmq/releases/tag/knative-v1.2.0 -->
#### 💫 New Features & Changes
- Improved Broker's and Source's README docs, sample descriptions, and files. ([#555](https://github.com/knative-sandbox/eventing-rabbitmq/pull/555))
- Add publisher confirms to ingress. Return 200 only when RabbitMQ confirms receiving and storing the message. ([#568](https://github.com/knative-sandbox/eventing-rabbitmq/pull/568))
- Makefile-based workflow. Includes migrating GitHub Actions. ([#525](https://github.com/knative-sandbox/eventing-rabbitmq/pull/525), [#569](https://github.com/knative-sandbox/eventing-rabbitmq/pull/569), [#579](https://github.com/knative-sandbox/eventing-rabbitmq/pull/579))
- Various code refactoring and code health improvements. ([#552](https://github.com/knative-sandbox/eventing-rabbitmq/pull/552), [#572](https://github.com/knative-sandbox/eventing-rabbitmq/pull/572))
- Source adapter and trigger dispatcher homologation: the Source Adapter and the
Broker Dispatcher now share the same prefetch count behavior.
The Trigger's webhook was updated to validate that `prefetchCount` ([#536](https://github.com/knative-sandbox/eventing-rabbitmq/pull/536)):
- Defaults to 1 (FIFO behavior).
- Stays within the limits 1 ≤ prefetchCount ≤ 1000.
- All core Knative Eventing RabbitMQ Pods should now be able to run in the restricted Pod security standard profile. ([#541](https://github.com/knative-sandbox/eventing-rabbitmq/pull/541))
#### 🐞 Bug Fixes
- Removing the dead letter sink on a Trigger will now properly fall back to the
Broker's dead letter sink, if one is defined. ([#533](https://github.com/knative-sandbox/eventing-rabbitmq/pull/533))
- Messages sent to RabbitMQ are now marked as Persistent ([#560](https://github.com/knative-sandbox/eventing-rabbitmq/pull/560)):
- Messages sent into the RabbitMQ Broker are configured to be persistent, because the
queues used by the Broker are always durable.
- If the user configures the RabbitMQ Source Exchange and Queue to be durable,
the messages are also durable.
### Client v1.2
<!-- Original notes are here: https://github.com/knative/client/blob/main/CHANGELOG.adoc#v120-2022-01-25 -->
#### 💫 New Features & Changes
- Adds auto-completion for Eventing resource names. ([#1567](https://github.com/knative/client/pull/1567))
- Adds auto-completion for Domain name. ([#1562](https://github.com/knative/client/pull/1562))
- Adds auto-completion for Route name. ([#1561](https://github.com/knative/client/pull/1561))
- Adds auto-completion for Revision name. ([#1560](https://github.com/knative/client/pull/1560))
- Adds auto-completion for Broker name. ([#1559](https://github.com/knative/client/pull/1559))
- Adds auto-completion for Service name. ([#1547](https://github.com/knative/client/pull/1547))
- Removes deprecated Hugo frontmatter generation for docs. ([#1563](https://github.com/knative/client/pull/1563))
#### 🐞 Bug Fixes
- Fixes a file-not-found error message discrepancy on Windows. ([#1575](https://github.com/knative/client/pull/1575))
- Fixed panic in `kn channel list` command. ([#1568](https://github.com/knative/client/pull/1568))
### Operator v1.2
<!-- Original notes are here: https://github.com/knative/operator/releases/tag/knative-v1.2.0 -->
#### 💫 New Features & Changes
- Adds support for configuring resources based on deployment and container names. ([#893](https://github.com/knative/operator/pull/893))
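A sketch of what that looks like in a `KnativeServing` custom resource (the field layout follows the Operator's `deployments` override; deployment, container, and resource values here are illustrative):

```yaml
# Sketch: per-deployment, per-container resource overrides via the Operator.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  deployments:
    - name: controller            # target deployment name
      resources:
        - container: controller   # target container name
          requests:
            cpu: 300m
            memory: 100Mi
          limits:
            cpu: "1"
            memory: 250Mi
```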
### Thank you, contributors
Release leads: [@dprotaso](https://github.com/dprotaso) and [@psschwei](https://github.com/psschwei)
- [@benmoss](https://github.com/benmoss)
- [@ChunyiLyu](https://github.com/ChunyiLyu)
- [@devguyio](https://github.com/devguyio)
- [@dprotaso](https://github.com/dprotaso)
- [@gabo1208](https://github.com/gabo1208)
- [@gvmw](https://github.com/gvmw)
- [@houshengbo](https://github.com/houshengbo)
- [@ikvmw](https://github.com/ikvmw)
- [@jhill072](https://github.com/jhill072)
- [@julz](https://github.com/julz)
- [@lionelvillard](https://github.com/lionelvillard)
- [@odacremolbap](https://github.com/odacremolbap)
- [@pierDipi](https://github.com/pierDipi)
- [@psschwei](https://github.com/psschwei)
- [@skonto](https://github.com/skonto)
- [@steven0711dong](https://github.com/steven0711dong)
- [@vyasgun](https://github.com/vyasgun)
### Learn more
Knative is an open source project that anyone in the [community](https://knative.dev/docs/community/) can use, improve, and enjoy. We'd love you to join us!
- [Welcome to Knative](https://knative.dev/docs)
- [Getting started documentation](https://knative.dev/docs/getting-started)
- [Samples](https://knative.dev/docs/samples)
- [Knative working groups](https://github.com/knative/community/blob/main/working-groups/WORKING-GROUPS.md)
- [Knative User Mailing List](https://groups.google.com/forum/#!forum/knative-users)
- [Knative Development Mailing List](https://groups.google.com/forum/#!forum/knative-dev)
- Knative on Twitter [@KnativeProject](https://twitter.com/KnativeProject)
- Knative on [StackOverflow](https://stackoverflow.com/questions/tagged/knative)
- Knative [Slack](https://slack.knative.dev)
- Knative on [YouTube](https://www.youtube.com/channel/UCq7cipu-A1UHOkZ9fls1N8A)

View File

@ -32,7 +32,8 @@ markdown_extensions:
- attr_list
- meta
- pymdownx.superfences
- pymdownx.tabbed
- pymdownx.tabbed:
alternate_style: true
- pymdownx.details
- pymdownx.snippets:
base_path: docs/snippets

View File

@ -1,4 +1,4 @@
{
{
"$schema": "http://json.schemastore.org/launchsettings.json",
"iisSettings": {
"windowsAuthentication": false,

View File

@ -112,10 +112,9 @@
"dev": true
},
"ajv": {
"version": "6.12.2",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz",
"integrity": "sha512-k+V+hzjm5q/Mr8ef/1Y9goCmlsK4I6Sm74teeyGvFk1XrOsbsKLjEdrvny42CZ+a8sXbk8KWpY/bDwS+FLL2UQ==",
"dev": true,
"version": "6.12.6",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"requires": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
@ -421,19 +420,6 @@
"requires": {
"ajv": "~6.12.3",
"uuid": "~8.3.0"
},
"dependencies": {
"ajv": {
"version": "6.12.6",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz",
"integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==",
"requires": {
"fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
}
}
}
},
"color-convert": {
@ -1216,9 +1202,9 @@
"dev": true
},
"follow-redirects": {
"version": "1.14.7",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz",
"integrity": "sha512-+hbxoLbFMbRKDwohX8GkTataGqO6Jb7jGwpAlwgy2bIz25XtRm7KEzJM76R1WiNT5SwZkX4Y75SwBolkpmE7iQ=="
"version": "1.14.8",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.8.tgz",
"integrity": "sha512-1x0S9UVJHsQprFcEC/qnNzBLcIxsjAV905f/UkQxbclCsoTWlacCNOpQa/anodLl2uaEKFhfWOvM2Qg77+15zA=="
},
"forwarded": {
"version": "0.1.2",

View File

@ -1,4 +1,4 @@
var builder = WebApplication.CreateBuilder(args);
var builder = WebApplication.CreateBuilder(args);
var port = Environment.GetEnvironmentVariable("PORT") ?? "8080";
var url = $"http://0.0.0.0:{port}";

View File

@ -18,8 +18,8 @@ nav:
- Using Knative Eventing:
- Introducing Knative Eventing: getting-started/getting-started-eventing.md
- Sources, Brokers, Triggers, Sinks: getting-started/first-broker.md
- Introducing the CloudEvents Player: getting-started/first-source.md
- Creating your first Trigger: getting-started/first-trigger.md
- Using a Knative Service as a source: getting-started/first-source.md
- Using Triggers and sinks: getting-started/first-trigger.md
- What's Next?: getting-started/next-steps.md
- Clean Up: getting-started/clean-up.md
###############################################################################
@ -27,18 +27,16 @@ nav:
###############################################################################
- Installing:
- About installing Knative: install/README.md
# Serving Installation
- Install Knative Serving:
- Install Serving with YAML: install/serving/install-serving-with-yaml.md
- Knative Serving installation files: install/serving/serving-installation-files.md
# Istio Installation
- Installing Istio for Knative: install/serving/installing-istio.md
# Cert-manager Installation
- Installing cert-manager: install/serving/installing-cert-manager.md
# Eventing Installation
- Install Knative Eventing:
- Install Eventing with YAML: install/eventing/install-eventing-with-yaml.md
- Knative Eventing installation files: install/eventing/eventing-installation-files.md
- Install Knative using quickstart: install/quickstart-install.md
# YAML Installation
- Install Knative with YAML:
- About YAML-based installation: install/yaml-install/README.md
- Install Knative Serving:
- Install Serving with YAML: install/yaml-install/serving/install-serving-with-yaml.md
- Knative Serving installation files: install/yaml-install/serving/serving-installation-files.md
- Install Knative Eventing:
- Install Eventing with YAML: install/yaml-install/eventing/install-eventing-with-yaml.md
- Knative Eventing installation files: install/yaml-install/eventing/eventing-installation-files.md
# Operator Installation
- Install with Knative Operator:
- Installing using the Operator: install/operator/knative-with-operators.md
@ -51,6 +49,12 @@ nav:
- Installing kn: install/client/install-kn.md
- Customizing kn: install/client/configure-kn.md
- kn plugins: install/client/kn-plugins.md
# Advanced options for Serving
- Install advanced options:
# Istio Installation
- Install Istio for Knative: install/installing-istio.md
# Cert-manager Installation
- Install cert-manager: install/installing-cert-manager.md
# Vendor docs
- Using a Knative-based offering: install/knative-offerings.md
# Upgrading Knative
@ -191,7 +195,7 @@ nav:
- Creating a Broker: eventing/broker/create-mtbroker.md
- Triggers: eventing/broker/triggers/README.md
- Broker configuration example: eventing/broker/example-mtbroker.md
- Apache Kafka Broker: eventing/broker/kafka-broker/README.md
- Knative Kafka Broker: eventing/broker/kafka-broker/README.md
- RabbitMQ Broker: eventing/broker/rabbitmq-broker/README.md
- Accessing CloudEvent traces: eventing/accessing-traces.md
# Eventing - admin docs

View File

@ -2,26 +2,25 @@ plugins:
redirects:
redirect_maps:
admin/collecting-logs/README.md: serving/observability/logging/collecting-logs.md
admin/collecting-metrics/eventing-metrics/metrics.md: eventing/observability/metrics/eventing-metrics.md
admin/collecting-metrics/README.md: serving/observability/metrics/collecting-metrics.md
admin/collecting-metrics/eventing-metrics/metrics.md: eventing/observability/metrics/eventing-metrics.md
admin/collecting-metrics/serving-metrics/metrics.md: serving/observability/metrics/serving-metrics.md
admin/eventing/broker-configuration.md: eventing/configuration/broker-configuration.md
admin/eventing/channel-configuration.md: eventing/configuration/channel-configuration.md
admin/eventing/kafka-channel-configuration.md: eventing/configuration/kafka-channel-configuration.md
admin/eventing/sources-configuration.md: eventing/configuration/sources-configuration.md
admin/install/eventing/eventing-installation-files.md: install/eventing/eventing-installation-files.md
admin/install/eventing/install-eventing-with-yaml.md: install/eventing/install-eventing-with-yaml.md
admin/install/install-eventing-with-yaml.md: install/eventing/install-eventing-with-yaml.md
admin/install/install-serving-with-yaml.md: install/serving/install-serving-with-yaml.md
admin/install/installing-istio.md: install/serving/installing-istio.md
admin/install/installing-istio.md: install/serving/installing-istio.md
admin/install/README.md: install/README.md
admin/install/eventing/eventing-installation-files.md: install/yaml-install/eventing/eventing-installation-files.md
admin/install/eventing/install-eventing-with-yaml.md: install/yaml-install/eventing/install-eventing-with-yaml.md
admin/install/install-eventing-with-yaml.md: install/yaml-install/eventing/install-eventing-with-yaml.md
admin/install/install-serving-with-yaml.md: install/yaml-install/serving/install-serving-with-yaml.md
admin/install/installing-istio.md: install/installing-istio.md
admin/install/knative-offerings.md: install/knative-offerings.md
admin/install/knative-with-operators.md: install/operator/knative-with-operators.md
admin/install/operator/configuring-eventing-cr.md: install/operator/configuring-eventing-cr.md
admin/install/operator/configuring-serving-cr.md: install/operator/configuring-serving-cr.md
admin/install/README.md: install/README.md
admin/install/serving/install-serving-with-yaml.md: install/serving/install-serving-with-yaml.md
admin/install/serving/serving-installation-files.md: install/serving/serving-installation-files.md
admin/install/serving/install-serving-with-yaml.md: install/yaml-install/serving/install-serving-with-yaml.md
admin/install/serving/serving-installation-files.md: install/yaml-install/serving/serving-installation-files.md
admin/install/uninstall.md: install/uninstall.md
admin/serving/config-defaults.md: serving/configuration/config-defaults.md
admin/serving/deployment.md: serving/configuration/deployment.md
@ -32,45 +31,46 @@ plugins:
admin/upgrade/upgrade-installation-with-operator.md: install/upgrade/upgrade-installation-with-operator.md
admin/upgrade/upgrade-installation.md: install/upgrade/upgrade-installation.md
check-install-version.md: install/upgrade/check-install-version.md
client/README.md: install/client/README.md
client/configure-kn.md: install/client/configure-kn.md
client/connecting-kn-to-your-cluster/index.md: install/client/README.md
client/install-kn.md: install/client/install-kn.md
client/kn-plugins.md: install/client/kn-plugins.md
client/README.md: install/client/README.md
community/annual_reports.md: https://github.com/knative/community/tree/main/annual_reports
community/calendar.md: https://github.com/knative/community/blob/main/CALENDAR.MD
community/contributing/code-of-conduct.md: https://github.com/knative/community/blob/main/CODE-OF-CONDUCT.md
community/contributing/contributing.md: https://github.com/knative/community/blob/main/CONTRIBUTING.md
community/contributing/governance.md: https://github.com/knative/community/blob/main/GOVERNANCE.md
community/contributing/mechanics.md: https://github.com/knative/community/tree/main/mechanics
community/contributing/mechanics/creating-a-sandbox-repo.md: https://github.com/knative/community/blob/main/mechanics/CREATING-A-SANDBOX-REPO.md
community/contributing/mechanics/sc.md: https://github.com/knative/community/blob/main/mechanics/SC.md
community/contributing/mechanics/toc.md: https://github.com/knative/community/blob/main/mechanics/TOC.md
community/contributing/mechanics/working-group-processes.md: https://github.com/knative/community/blob/main/mechanics/WORKING-GROUP-PROCESSES.md
community/contributing/mechanics/feature-tracks.md: https://github.com/knative/community/blob/main/mechanics/FEATURE-TRACKS.md
community/contributing/mechanics/golang-policy.md: https://github.com/knative/community/blob/main/mechanics/GOLANG-POLICY.md
community/contributing/mechanics/release-versioning-principles.md: https://github.com/knative/community/blob/main/mechanics/RELEASE-VERSIONING-PRINCIPLES.md
community/contributing/mechanics/release-schedule.md: https://github.com/knative/community/blob/main/mechanics/RELEASE-SCHEDULE.md
community/contributing/mechanics/release-versioning-principles.md: https://github.com/knative/community/blob/main/mechanics/RELEASE-VERSIONING-PRINCIPLES.md
community/contributing/mechanics/sc.md: https://github.com/knative/community/blob/main/mechanics/SC.md
community/contributing/mechanics/sunsetting-features.md: https://github.com/knative/community/blob/main/mechanics/SUNSETTING-FEATURES.md
community/contributing/working-groups/working-groups.md: https://github.com/knative/community/blob/main/working-groups
community/contributing/mechanics/toc.md: https://github.com/knative/community/blob/main/mechanics/TOC.md
community/contributing/mechanics/working-group-processes.md: https://github.com/knative/community/blob/main/mechanics/WORKING-GROUP-PROCESSES.md
community/contributing/repository-guidelines.md: https://github.com/knative/community/blob/main/REPOSITORY-GUIDELINES.md
community/contributing/tech-oversight-committee.md: https://github.com/knative/community/blob/main/TECH-OVERSIGHT-COMMITTEE.md
community/contributing/steering-committee.md: https://github.com/knative/community/blob/main/STEERING-COMMITTEE.md
community/contributing/slack-guidelines.md: https://github.com/knative/community/blob/main/SLACK-GUIDELINES.md
community/contributing/values.md: https://github.com/knative/community/blob/main/VALUES.md
community/contributing/trademark-committee.md: https://github.com/knative/community/blob/main/TRADEMARK-COMMITTEE.md
community/contributing/roles.md: https://github.com/knative/community/blob/main/ROLES.md
community/contributing/reviewing.md: https://github.com/knative/community/blob/main/REVIEWING.md
community/calendar.md: https://github.com/knative/community/blob/main/CALENDAR.MD
community/contributing/roles.md: https://github.com/knative/community/blob/main/ROLES.md
community/contributing/slack-guidelines.md: https://github.com/knative/community/blob/main/SLACK-GUIDELINES.md
community/contributing/steering-committee.md: https://github.com/knative/community/blob/main/STEERING-COMMITTEE.md
community/contributing/tech-oversight-committee.md: https://github.com/knative/community/blob/main/TECH-OVERSIGHT-COMMITTEE.md
community/contributing/trademark-committee.md: https://github.com/knative/community/blob/main/TRADEMARK-COMMITTEE.md
community/contributing/values.md: https://github.com/knative/community/blob/main/VALUES.md
community/contributing/working-groups/working-groups.md: https://github.com/knative/community/blob/main/working-groups
community/meetup.md: community/contributing.md
community/annual_reports.md: https://github.com/knative/community/tree/main/annual_reports
community/samples.md: https://github.com/knative/docs/tree/main/code-samples/community
concepts/overview.md: index.md
developer/concepts/duck-typing.md: reference/concepts/duck-typing.md
developer/eventing/event-delivery.md: eventing/event-delivery.md
developer/eventing/sinks/kafka-sink.md: eventing/sinks/kafka-sink.md
developer/eventing/sinks/README.md: eventing/sinks/README.md
developer/eventing/sinks/kafka-sink.md: eventing/sinks/kafka-sink.md
developer/eventing/sources/README.md: eventing/sources/README.md
developer/eventing/sources/apache-camel-source/README.md: eventing/sources/apache-camel-source/README.md
developer/eventing/sources/apiserversource/getting-started.md: eventing/sources/apiserversource/getting-started.md
developer/eventing/sources/apiserversource/README.md: eventing/sources/apiserversource/README.md
developer/eventing/sources/apiserversource/getting-started.md: eventing/sources/apiserversource/getting-started.md
developer/eventing/sources/apiserversource/reference.md: eventing/sources/apiserversource/reference.md
developer/eventing/sources/containersource/README.md: eventing/custom-event-source/containersource/README.md
developer/eventing/sources/containersource/reference.md: eventing/custom-event-source/containersource/reference.md
@ -86,12 +86,12 @@ plugins:
developer/eventing/sources/kafka-source/README.md: eventing/sources/kafka-source/README.md
developer/eventing/sources/ping-source/README.md: eventing/sources/ping-source/README.md
developer/eventing/sources/ping-source/reference.md: eventing/sources/ping-source/reference.md
developer/eventing/sources/README.md: eventing/sources/README.md
developer/eventing/sources/sinkbinding/getting-started.md: eventing/custom-event-source/sinkbinding/create-a-sinkbinding.md
developer/eventing/sources/sinkbinding/README.md: eventing/custom-event-source/sinkbinding/README.md
developer/eventing/sources/sinkbinding/getting-started.md: eventing/custom-event-source/sinkbinding/create-a-sinkbinding.md
developer/eventing/sources/sinkbinding/reference.md: eventing/custom-event-source/sinkbinding/reference.md
developer/serving/deploying-from-private-registry.md: serving/deploying-from-private-registry.md
developer/serving/rolling-out-latest-revision.md: serving/rolling-out-latest-revision.md
developer/serving/services/README.md: serving/services/README.md
developer/serving/services/byo-certificate.md: serving/services/byo-certificate.md
developer/serving/services/certificate-class.md: serving/services/certificate-class.md
developer/serving/services/configure-requests-limits-services.md: serving/services/configure-requests-limits-services.md
@ -100,7 +100,6 @@ plugins:
developer/serving/services/http-option.md: serving/services/http-protocol.md
developer/serving/services/ingress-class.md: serving/services/ingress-class.md
developer/serving/services/private-services.md: serving/services/private-services.md
developer/serving/services/README.md: serving/services/README.md
developer/serving/services/service-metrics.md: serving/services/service-metrics.md
developer/serving/tag-resolution.md: serving/tag-resolution.md
developer/serving/traffic-management.md: serving/traffic-management.md
@ -113,6 +112,7 @@ plugins:
eventing/debugging/README.md: eventing/troubleshooting/README.md
eventing/metrics.md: eventing/observability/metrics/eventing-metrics.md
eventing/parallel.md: eventing/flows/parallel.md
eventing/samples/README.md: samples/eventing.md
eventing/samples/apache-camel-source/index.md: eventing/sources/apache-camel-source/README.md
eventing/samples/cloud-audit-logs-source/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-audit-logs-source
eventing/samples/cloud-pubsub-source/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-pubsub-source
@ -121,18 +121,17 @@ plugins:
eventing/samples/gcp-pubsub-source/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-pubsub-source
eventing/samples/github-source/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/github-source
eventing/samples/gitlab-source/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/gitlab-source
eventing/samples/helloworld/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/helloworld
eventing/samples/helloworld/helloworld-go/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/helloworld/helloworld-go
eventing/samples/helloworld/helloworld-python/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/helloworld/helloworld-python
eventing/samples/helloworld/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/helloworld
eventing/samples/kafka/binding/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/kafka/binding
eventing/samples/kafka/channel/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/kafka/channel
eventing/samples/kafka/resetoffset/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/kafka/resetoffset
eventing/samples/kubernetes-event-source/index.md: eventing/sources/apiserversource/README.md
eventing/samples/parallel/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/parallel
eventing/samples/parallel/multiple-branches/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/parallel/multiple-branches
eventing/samples/parallel/mutual-exclusivity/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/parallel/mutual-exclusivity
eventing/samples/parallel/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/parallel
eventing/samples/ping-source/index.md: eventing/sources/ping-source/README.md
eventing/samples/README.md: samples/eventing.md
eventing/samples/sequence/index.md: eventing/flows/sequence/README.md
eventing/samples/sequence/sequence-replay-to-event-display/index.md: eventing/flows/sequence/sequence-reply-to-event-display/README.md
eventing/samples/sequence/sequence-reply-to-sequence/index.md: eventing/flows/sequence/sequence-reply-to-sequence/README.md
@ -141,18 +140,23 @@ plugins:
eventing/samples/sinkbinding/README.md: eventing/custom-event-source/sinkbinding/README.md
eventing/samples/writing-event-source-easy-way/README.md: https://github.com/knative/docs/tree/main/code-samples/eventing/writing-event-source-easy-way
eventing/sequence.md: eventing/flows/sequence/README.md
eventing/sink/kafka-sink.md: eventing/sinks/kafka-sink.md
eventing/sink/README.md: eventing/sinks/README.md
eventing/sink/kafka-sink.md: eventing/sinks/kafka-sink.md
eventing/sources/containersource.md: eventing/custom-event-source/containersource/README.md
eventing/sources/pingsource/index.md: eventing/sources/ping-source/README.md
eventing/triggers/index.md: eventing/broker/triggers/README.md
install/collecting-logs/index.md: serving/observability/logging/collecting-logs.md
install/collecting-metrics/index.md: serving/observability/metrics/collecting-metrics.md
install/eventing/eventing-installation-files.md: install/yaml-install/eventing/eventing-installation-files.md
install/eventing/install-eventing-with-yaml.md: install/yaml-install/eventing/install-eventing-with-yaml.md
install/getting-started-knative-app/index.md: getting-started/README.md
install/install-extensions.md: install/README.md
install/installation-files.md: install/README.md
install/prerequisites.md: install/README.md
install/knative-with-operators.md: install/operator/knative-with-operators.md
install/serving/install-serving-with-yaml.md: install/yaml-install/serving/install-serving-with-yaml.md
install/serving/installing-cert-manager.md: install/installing-cert-manager.md
install/serving/installing-istio.md: install/installing-istio.md
install/serving/serving-installation-files.md: install/yaml-install/serving/serving-installation-files.md
operator/configuring-eventing-cr/index.md: install/operator/configuring-eventing-cr.md
operator/configuring-serving-cr/index.md: install/operator/configuring-serving-cr.md
reference/resources/index.md: install/client/README.md
@ -163,8 +167,9 @@ plugins:
serving/debugging-application-issues.md: serving/troubleshooting/debugging-application-issues.md
serving/feature-flags.md: serving/configuration/feature-flags.md
serving/getting-started-knative-app.md: getting-started/README.md
serving/installing-cert-manager.md: install/serving/installing-cert-manager.md
serving/installing-cert-manager.md: install/installing-cert-manager.md
serving/metrics.md: serving/observability/metrics/serving-metrics.md
serving/samples/README.md: samples/serving.md
serving/samples/autoscale-go/index.md: serving/autoscaling/autoscale-go/README.md
serving/samples/blue-green-deployment.md: serving/traffic-management.md
serving/samples/cloudevents/cloudevents-dotnet/README.md: https://github.com/knative/docs/tree/main/code-samples/serving/cloudevents/cloudevents-dotnet
@ -189,7 +194,6 @@ plugins:
serving/samples/knative-routing-go/README.md: https://github.com/knative/docs/tree/main/code-samples/serving/knative-routing-go
serving/samples/kong-routing-go/README.md: https://github.com/knative/docs/tree/main/code-samples/serving/kong-routing-go
serving/samples/multi-container/README.md: https://github.com/knative/docs/tree/main/code-samples/serving/multicontainer
serving/samples/README.md: samples/serving.md
serving/samples/secrets-go/README.md: https://github.com/knative/docs/tree/main/code-samples/serving/secrets-go
serving/samples/tag-header-based-routing/README.md: https://github.com/knative/docs/tree/main/code-samples/serving/tag-header-based-routing
serving/samples/traffic-splitting/README.md: serving/traffic-management.md

View File

@ -39,3 +39,9 @@ If you notice gaps in the style guide or have queries, please post in [the Docs
- [Word and phrase list](style-guide/word-and-phrase-list.md)
- [Content re-use](style-guide/content-reuse.md)
- Using shortcodes (TBD)
## Maintainer guides
How-to guides for maintainers of the Knative docs repo and website:
- [Releasing a new version of the Knative documentation](docs-release-process.md)

View File

@ -0,0 +1,58 @@
# Releasing a new version of the Knative documentation
This document describes how to perform a docs release. In general, this should
be done by one of the release managers in the list at
https://github.com/knative/release.
To release a new version of the docs you must:
1. [Check dependencies](#check-dependencies)
1. [Create a release branch](#create-a-release-branch)
1. [Generate the new docs version](#generate-the-new-docs-version)
## Check dependencies
You cannot release a new version of the docs until the Knative components have
built their new release.
This is because the website references these releases in various locations.
Check the following components for the new release:
* [client](https://github.com/knative/client/releases/)
* [eventing](https://github.com/knative/eventing/releases/)
* [operator](https://github.com/knative/operator/releases/)
* [serving](https://github.com/knative/serving/releases/)
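The check above is manual. As a hypothetical convenience (not part of the official process), the latest tag for each component can be read out of the JSON that GitHub's releases API returns:

```python
# Hypothetical helper, not part of the official release process: extract the
# latest release tag from the JSON returned by GitHub's
# /repos/knative/<component>/releases/latest endpoint.
import json


def latest_tag(release_json: str) -> str:
    """Return the tag_name field of a GitHub release API response."""
    return json.loads(release_json)["tag_name"]


# For example, given a trimmed-down API response:
print(latest_tag('{"tag_name": "knative-v1.2.0"}'))  # → knative-v1.2.0
```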
## Create a release branch
1. Check on the `#docs` Slack channel to make sure the release is ready.
_In the future, we should automate this so the check isn't needed._
1. Using the GitHub UI, create a `release-X.Y` branch based on `main`.
![branch](https://user-images.githubusercontent.com/35748459/87461583-804c4c80-c5c3-11ea-8105-f9b34988c9af.png)
## Generate the new docs version
To generate the new version of the docs, you must update the [`hack/build.sh`](../hack/build.sh)
script in the main branch to reference the new release.
We keep the last 4 releases available per [our support window](https://github.com/knative/community/blob/main/mechanics/RELEASE-VERSIONING-PRINCIPLES.md#knative-community-support-window-principle).
To generate the new docs version:
1. In `hack/build.sh` on the main branch, update `VERSIONS` and `RELEASE_BRANCHES`
to include the new version and remove the oldest. Order matters: most recent first.
For example:
```
VERSIONS=("1.2" "1.1" "1.0" "0.26")
RELEASE_BRANCHES=("knative-v1.2.0" "knative-v1.1.0" "knative-v1.0.0" "v0.26.0")
```
1. PR the result to main.
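The two arrays are positional pairs: `VERSIONS[i]` is built from `RELEASE_BRANCHES[i]`. A minimal sketch of that pairing (illustration only, not the actual `hack/build.sh` logic):

```shell
# Sketch only: shows how the two arrays line up, index by index.
VERSIONS=("1.2" "1.1" "1.0" "0.26")
RELEASE_BRANCHES=("knative-v1.2.0" "knative-v1.1.0" "knative-v1.0.0" "v0.26.0")

for i in "${!VERSIONS[@]}"; do
  echo "docs version ${VERSIONS[$i]} is built from branch ${RELEASE_BRANCHES[$i]}"
done
```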
## How GitHub and Netlify are hooked up
TODO: add information about how the docs are built and served using Netlify

View File

@ -2,9 +2,48 @@
The Knative website uses [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/)
to render documentation.
You can choose to install MkDocs locally or use a Docker image.
## Install Material for MkDocs locally
If you don't want to install any tools locally, you can use [GitPod](https://gitpod.io/#https://github.com/knative/docs),
which allows you to edit the files in a web IDE with a live preview.
If you choose to run the site locally, we strongly recommend using a container.
Regardless of the method used, when you submit a PR, a live preview link will be available in a comment on the PR.
## (Option 1): Use the Docker container
You can use [Docker Desktop](https://www.docker.com/products/docker-desktop) or any Docker engine for your operating system that is compatible with the `docker` CLI, for example [colima](https://github.com/abiosoft/colima).
### Live preview
To start the live preview, run the following script from the root directory of your local Knative docs repo:
```
./hack/docker/run.sh
```
Then open a web browser at http://localhost:8000
You can edit any file under `./docs` and the live preview autoreloads.
When you're done with your changes, you can stop the container using `Ctrl+C`.
### Full site build (optional)
To run a complete build of the website with all versions, run the following script from the root directory of your local Knative docs repo:
```
./hack/docker/test.sh
```
The build output is the entire static site located in `./site`.
You can preview the website locally by running a web server against this directory, for example `npx http-server site -p 8000` if you have Node.js, or `python3 -m http.server 8000 --directory site` if you have Python 3.
## (Option 2) Using the native Python mkdocs CLI
The website is built using [material-mkdocs](https://squidfunk.github.io/mkdocs-material/), which is a Python tool based
on the [mkdocs](https://www.mkdocs.org/) project.
### Install Material for MkDocs locally
Material for MkDocs is Python based and uses pip to install most of its required
packages, as well as the optional add-ons we use.
@ -14,42 +53,18 @@ from the [Python website](https://www.python.org).
For some systems (for example, RHEL), you might have to use `pip3`.
### Install using pip
Install Material for MkDocs and dependencies by running:
1. Install Material for MkDocs by running:
```
pip install -r requirements.txt
```
```
pip install mkdocs-material
```
For more detailed instructions, see [Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/getting-started/#installation)
For more detailed instructions, see [Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/getting-started/#installation)
1. Install the extensions to MkDocs needed for Knative by running:
If you have `pip3`, you can use the above commands, replacing `pip` with `pip3`.
```
pip install mkdocs-material-extensions mkdocs-macros-plugin mkdocs-exclude mkdocs-awesome-pages-plugin mkdocs-redirects
```
### Install using pip3
1. Install Material for MkDocs by running:
```
pip3 install mkdocs-material
```
For more detailed instructions, see the [Material for MkDocs documentation](https://squidfunk.github.io/mkdocs-material/getting-started/#installation)
1. Install the extensions to MkDocs needed for Knative by running:
```
pip3 install mkdocs-material-extensions mkdocs-macros-plugin mkdocs-exclude mkdocs-awesome-pages-plugin mkdocs-redirects
```
## Use the Docker container
//TODO DOCKER CONTAINER EXTENSIONS
## Setting up local preview
### Setting up local preview
When using the local preview, anytime you change any file in your local copy of
the `/docs` directory and hit save, the site automatically rebuilds to reflect your changes!
@ -70,7 +85,7 @@ and clone the repo.
```
- **Local Preview with Dirty Reload**
If youre only changing a single page in the `/docs/` folder that is not the homepage or `nav.yml`, adding the flag `--dirtyreload` makes the site rebuild super crazy insta-fast.
If youre only changing a single page in the `/docs/` folder that is not the homepage or `nav.yml`, adding the flag `--dirtyreload` makes the site rebuild faster.
```
mkdocs serve --dirtyreload

View File

@ -1,8 +1,7 @@
# Apache Kafka Broker
# Knative Kafka Broker
The Apache Kafka Broker is a native Broker implementation, that reduces
network hops, supports any Kafka version, and has a better integration
with Apache Kafka for the Knative Broker and Trigger model.
The Knative Kafka Broker is an Apache Kafka native implementation of the Knative Broker API that reduces
network hops, supports any Kafka version, and has a better integration with Kafka for the Broker and Trigger model.
Notable features are:
@ -14,7 +13,7 @@ Notable features are:
## Prerequisites
1. [Installing Eventing using YAML files](../../../install/eventing/install-eventing-with-yaml.md).
1. [Installing Eventing using YAML files](../../../install/yaml-install/eventing/install-eventing-with-yaml.md).
2. An Apache Kafka cluster (if you're just getting started you can follow [Strimzi Quickstart page](https://strimzi.io/quickstarts/)).
## Installation

View File

@ -6,7 +6,7 @@ This topic describes how to create a RabbitMQ Broker.
To use the RabbitMQ Broker, you must have the following installed:
1. [Knative Eventing](../../../install/eventing/install-eventing-with-yaml.md)
1. [Knative Eventing](../../../install/yaml-install/eventing/install-eventing-with-yaml.md)
1. [RabbitMQ Cluster Operator](https://github.com/rabbitmq/cluster-operator) - our recommendation is [latest release](https://github.com/rabbitmq/cluster-operator/releases/latest)
1. [CertManager v1.5.4](https://github.com/jetstack/cert-manager/releases/tag/v1.5.4) - easiest integration with RabbitMQ Messaging Topology Operator
1. [RabbitMQ Messaging Topology Operator](https://github.com/rabbitmq/messaging-topology-operator) - our recommendation is [latest release](https://github.com/rabbitmq/messaging-topology-operator/releases/latest) with CertManager
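Once the cluster operator is installed, a RabbitMQ cluster for the Broker to use can be declared with its `RabbitmqCluster` custom resource. A minimal sketch (the name `rabbitmq` and the `default` namespace are assumptions, not requirements of the Broker):

```yaml
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq       # assumed name for this sketch
  namespace: default   # assumed namespace for this sketch
```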

View File

@ -11,7 +11,7 @@ container image, and a ContainerSource that uses your image URI.
## Before you begin
Before you can create a ContainerSource object, you must have [Knative Eventing](../../../install/eventing/install-eventing-with-yaml.md) installed on your cluster.
Before you can create a ContainerSource object, you must have [Knative Eventing](../../../install/yaml-install/eventing/install-eventing-with-yaml.md) installed on your cluster.
## Develop, build and publish a container image

View File

@ -4,7 +4,7 @@ This page shows how to install and configure an Apache KafkaSink.
## Prerequisites
You must have access to a Kubernetes cluster with [Knative Eventing installed](../../install/eventing/install-eventing-with-yaml.md).
You must have access to a Kubernetes cluster with [Knative Eventing installed](../../install/yaml-install/eventing/install-eventing-with-yaml.md).
## Installation

View File

@ -8,7 +8,7 @@ This topic describes how to create an ApiServerSource object.
Before you can create an ApiServerSource object:
- You must have [Knative Eventing](../../../install/eventing/install-eventing-with-yaml.md)
- You must have [Knative Eventing](../../../install/yaml-install/eventing/install-eventing-with-yaml.md)
installed on your cluster.
- You must install the [`kubectl` CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
tool.

View File

@ -1,4 +1,4 @@
# Apache Kafka source example
# Apache Kafka Source
Tutorial on how to build and deploy a `KafkaSource` event source.
@ -6,19 +6,42 @@ Tutorial on how to build and deploy a `KafkaSource` event source.
The `KafkaSource` reads all the messages, from all partitions, and sends those messages as CloudEvents through HTTP to its configured `sink`. The `KafkaSource` supports an ordered consumer delivery guarantee, which is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the next message of the partition.
!!! note
If you need a more sophisticated Kafka Consumer, with direct access to specific partitions or offsets, you can implement a Kafka Consumer by using one of the available Apache Kafka SDKs.
<!--TODO: Check if this note is out of scope; should we not mention anything beyond the direct Knative/Kafka integration we provide?-->
## Install the Kafka event source CRD
## Installing Kafka source
- Set up a Kubernetes cluster with the Kafka event source installed.
You can install the Kafka event source by using
[YAML](../../../install/eventing/install-eventing-with-yaml.md#install-optional-eventing-extensions)
or the [Knative Operator](../../../install/operator/knative-with-operators.md#installing-with-different-eventing-sources).
1. Install the Kafka controller by entering the following command:
## Optional: Create a Kafka topic
```bash
kubectl apply --filename {{ artifact(org="knative-sandbox", repo="eventing-kafka-broker", file="eventing-kafka-controller.yaml") }}
```
1. Install the Kafka Source data plane by entering the following command:
```bash
kubectl apply --filename {{ artifact(org="knative-sandbox", repo="eventing-kafka-broker", file="eventing-kafka-source.yaml") }}
```
1. Verify that `kafka-controller` and `kafka-source-dispatcher` are running,
by entering the following command:
```bash
kubectl get deployments.apps -n knative-eventing
```
Example output:
```{ .bash .no-copy }
NAME READY UP-TO-DATE AVAILABLE AGE
kafka-controller 1/1 1 1 3s
kafka-source-dispatcher 1/1 1 1 4s
```
## Create a Kafka topic
!!! note
This section assumes that you are using Strimzi to operate Apache Kafka;
however, equivalent operations can be performed using the Apache Kafka CLI or any
other tool.
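With Strimzi, for example, the topic can be declared as a `KafkaTopic` resource. A minimal sketch, assuming the `my-cluster` Kafka cluster from the Strimzi quickstart running in the `kafka` namespace:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: knative-demo-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster  # must match your Strimzi Kafka cluster name
spec:
  partitions: 3
  replicas: 1
```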
If you are using Strimzi:
@ -150,28 +173,16 @@ If you are using Strimzi:
kafkasource.sources.knative.dev/kafka-source created
```
1. Verify that the event source Pod is running:
1. Verify that the KafkaSource is ready:
```bash
kubectl get pods
```
The Pod name is prefixed with `kafka-source`:
```{ .bash .no-copy }
NAME READY STATUS RESTARTS AGE
kafka-source-xlnhq-5544766765-dnl5s 1/1 Running 0 40m
```
1. Ensure that the Kafka event source started with the necessary
configuration:
```bash
kubectl logs --selector='knative-eventing-source-name=kafka-source'
kubectl get kafkasource kafka-source
```
Example output:
```{ .bash .no-copy }
{"level":"info","ts":"2020-05-28T10:39:42.104Z","caller":"adapter/adapter.go:81","msg":"Starting with config: ","Topics":".","ConsumerGroup":"...","SinkURI":"...","Name":".","Namespace":"."}
NAME TOPICS BOOTSTRAPSERVERS READY REASON AGE
kafka-source ["knative-demo-topic"] ["my-cluster-kafka-bootstrap.kafka:9092"] True 26h
```
### Verify
@ -185,48 +196,6 @@ If you are using Strimzi:
!!! tip
If you don't see a command prompt, try pressing **Enter**.
1. Verify that the Kafka event source consumed the message and sent it to
its Sink properly. Because these logs are captured in debug level, edit the key `level` of `config-logging` ConfigMap in `knative-sources` namespace to look like this:
```bash
data:
loglevel.controller: info
loglevel.webhook: info
zap-logger-config: |
{
"level": "debug",
"development": false,
"outputPaths": ["stdout"],
"errorOutputPaths": ["stderr"],
"encoding": "json",
"encoderConfig": {
"timeKey": "ts",
"levelKey": "level",
"nameKey": "logger",
"callerKey": "caller",
"messageKey": "msg",
"stacktraceKey": "stacktrace",
"lineEnding": "",
"levelEncoder": "",
"timeEncoder": "iso8601",
"durationEncoder": "",
"callerEncoder": ""
}
}
```
1. Manually delete the Kafka source deployment and allow the `kafka-controller-manager` deployment running in the `knative-sources` namespace to redeploy it. Debug level logs should be visible now.
```bash
kubectl logs --selector='knative-eventing-source-name=kafka-source'
```
Example output:
```{ .bash .no-copy }
{"level":"debug","ts":"2020-05-28T10:40:29.400Z","caller":"kafka/consumer_handler.go:77","msg":"Message claimed","topic":".","value":"."}
{"level":"debug","ts":"2020-05-28T10:40:31.722Z","caller":"kafka/consumer_handler.go:89","msg":"Message marked","topic":".","value":"."}
```
1. Verify that the Service received the message from the event source:
```bash
@ -277,23 +246,6 @@ If you are using Strimzi:
"event-display" deleted
```
3. Remove the Kafka event controller:
```bash
kubectl delete -f https://storage.googleapis.com/knative-releases/eventing-contrib/latest/kafka-source.yaml
```
Example output:
```{ .bash .no-copy }
serviceaccount "kafka-controller-manager" deleted
clusterrole.rbac.authorization.k8s.io "eventing-sources-kafka-controller"
deleted clusterrolebinding.rbac.authorization.k8s.io
"eventing-sources-kafka-controller" deleted
customresourcedefinition.apiextensions.k8s.io "kafkasources.sources.knative.dev"
deleted service "kafka-controller" deleted statefulset.apps
"kafka-controller-manager" deleted
```
4. Optional: Remove the Apache Kafka Topic
```bash
@ -349,7 +301,7 @@ to consume from the earliest offset, set the initialOffset field to `earliest`,
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
name: kafka-source
name: kafka-source
spec:
consumerGroup: knative-group
initialOffset: earliest
@ -366,7 +318,7 @@ sink:
!!! note
The valid values for `initialOffset` are `earliest` and `latest`. Any other value results in a
validation error. This field is honored only if there are no prior committed offsets for that
validation error. This field is honored only if there are no committed offsets for that
consumer group.
## Connecting to a TLS-enabled Kafka Broker

View File

@ -14,7 +14,7 @@ If you have an existing sink, you can replace the examples with your own values.
To create a PingSource:
- You must install [Knative Eventing](../../../install/eventing/install-eventing-with-yaml.md).
- You must install [Knative Eventing](../../../install/yaml-install/eventing/install-eventing-with-yaml.md).
The PingSource event source type is enabled by default when you install Knative Eventing.
- You can use either `kubectl` or [`kn`](../../../install/client/install-kn.md) commands
to create components such as a sink and PingSource.

View File

@ -1,5 +1,8 @@
# Scaling to Zero
**Remember those super powers :rocket: we talked about?** One of Knative Serving's powers is built-in automatic scaling (autoscaling). This means your Knative Service only spins up your application to perform its job -- in this case, saying "Hello world!" -- if it is needed; otherwise, it will "scale to zero" by spinning down and waiting for a new request to come in.
**Remember those super powers :rocket: we talked about?** One of Knative Serving's powers is built-in automatic scaling, also known as **autoscaling**.
This means your Knative Service only spins up your application to perform its job (in this case, saying "Hello world!") if it is needed.
Otherwise, it will **scale to zero** by spinning down and waiting for a new request to come in.
??? question "What about scaling up to meet increased demand?"
Knative Autoscaling also allows you to easily configure your service to scale up
@ -12,7 +15,8 @@
[Pod](https://kubernetes.io/docs/concepts/workloads/pods/){target=blank_} in Kubernetes where our
Knative Service is running to watch our "Hello world!" Service scale up and down.
### Run your Knative Service
## Watch your Knative Service scale to zero
Let's run our "Hello world!" Service just one more time. This time, try the Knative Service `URL` in
your browser
[http://hello.default.127.0.0.1.sslip.io](http://hello.default.127.0.0.1.sslip.io){target=_blank}, or you
@ -21,7 +25,7 @@ can use your terminal with `curl`.
curl http://hello.default.127.0.0.1.sslip.io
```
You can watch the pods and see how they scale to zero after traffic stops going to the URL.
Now watch the pods and see how they scale to zero after traffic stops going to the URL.
```bash
kubectl get pod -l serving.knative.dev/service=hello -w
```
@ -30,26 +34,27 @@ kubectl get pod -l serving.knative.dev/service=hello -w
It may take up to 2 minutes for your Pods to scale down. Pinging your service again will reset this timer.
==**Expected output:**==
```{ .bash .no-copy }
NAME READY STATUS
hello-world 2/2 Running
hello-world 2/2 Terminating
hello-world 1/2 Terminating
hello-world 0/2 Terminating
```
!!! Success "Expected output"
```{ .bash .no-copy }
NAME READY STATUS
hello-world 2/2 Running
hello-world 2/2 Terminating
hello-world 1/2 Terminating
hello-world 0/2 Terminating
```
## Scale up your Knative Service
### Scale up your Knative Service
Rerun the Knative Service in your browser [http://hello.default.127.0.0.1.sslip.io](http://hello.default.127.0.0.1.sslip.io){target=_blank}, and you will see a new pod running again.
==**Expected output:**==
```{ .bash .no-copy }
NAME READY STATUS
hello-world 0/2 Pending
hello-world 0/2 ContainerCreating
hello-world 1/2 Running
hello-world 2/2 Running
```
!!! Success "Expected output"
```{ .bash .no-copy }
NAME READY STATUS
hello-world 0/2 Pending
hello-world 0/2 ContainerCreating
hello-world 1/2 Running
hello-world 2/2 Running
```
Exit the watch command with `Ctrl+c`.
Some people call this **Serverless** :tada: :taco: :fire: Up next, traffic splitting!

View File

@ -1,6 +1,6 @@
# Sources, Brokers, Triggers, Sinks, oh my!
For the purposes of this tutorial, let's keep it simple. You will focus on four powerful Eventing components: **Source, Trigger, Broker, and Sink**.
For the purposes of this tutorial, let's keep it simple. You will focus on four powerful Eventing components: **Source**, **Trigger**, **Broker**, and **Sink**.
Let's take a look at how these components interact:
@ -35,17 +35,17 @@ information back and forth between your Services and these components.
## Examining the Broker
As part of the `kn quickstart` install, an In-Memory Broker should have already be installed in your Cluster. Check to see that it is installed by running the command:
As part of the `kn quickstart` install, an In-Memory Broker should already be installed in your Cluster. Check to see that it is installed by running the command:
```bash
kn broker list
```
==**Expected Output**==
```{ .bash .no-copy }
NAME URL AGE CONDITIONS READY REASON
example-broker http://broker-ingress.knative-eventing.svc.cluster.local/default/example-broker 5m 5 OK / 5 True
```
!!! Success "Expected output"
```{ .bash .no-copy }
NAME URL AGE CONDITIONS READY REASON
example-broker http://broker-ingress.knative-eventing.svc.cluster.local/default/example-broker 5m 5 OK / 5 True
```
!!! warning
In-Memory Brokers are for development use only and must not be used in a production deployment.
@ -55,4 +55,4 @@ example-broker http://broker-ingress.knative-eventing.svc.cluster.local/defaul
If you want to find out more about the different components of Knative Eventing, such as Channels, Sequences and Parallel flows, check out the [Knative Eventing documentation](../eventing/README.md){target=_blank}.
**Next, you'll take a look at a simple implementation** of Sources, Brokers, Triggers and Sinks using an app called the Cloud Events Player.
**Next, you'll take a look at a simple implementation** of Sources, Brokers, Triggers and Sinks using an app called the CloudEvents Player.

View File

@ -2,13 +2,17 @@
**In this tutorial, you will deploy a "Hello world" service.**
This service will accept an environment variable, `TARGET`, and print "`Hello ${TARGET}!`."
Since our "Hello world" Service is being deployed as a Knative Service, not a Kubernetes Service, it gets some **super powers out of the box** :rocket:.
## Knative Service: "Hello world!"
First, deploy the Knative Service. This service accepts the environment variable,
`TARGET`, and prints `Hello ${TARGET}!`.
=== "kn"
Deploy the Service by running the command:
``` bash
kn service create hello \
--image gcr.io/knative-samples/helloworld-go \
@ -20,61 +24,69 @@ Since our "Hello world" Service is being deployed as a Knative Service, not a Ku
??? question "Why did I pass in `revision-name`?"
Note the name "world" which you passed in as "revision-name". Naming your `Revisions` will help you to more easily identify them, but don't worry, you'll learn more about `Revisions` later.
==**Expected output:**==
```{ .bash .no-copy }
Service hello created to latest revision 'hello-world' is available at URL:
http://hello.default.127.0.0.1.sslip.io
```
!!! Success "Expected output"
```{ .bash .no-copy }
Service hello created to latest revision 'hello-world' is available at URL:
http://hello.default.127.0.0.1.sslip.io
```
=== "YAML"
1. Copy the following YAML into a file named `hello.yaml`:
``` bash
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello
spec:
template:
``` yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
# This is the name of our new "Revision," it must follow the convention {service-name}-{revision-name}
name: hello-world
name: hello
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go
ports:
- containerPort: 8080
env:
- name: TARGET
value: "World"
```
Once you've created your YAML file (named something like "hello.yaml"):
``` bash
kubectl apply -f hello.yaml
```
??? question "Why did I pass in the second name, `hello-world`?"
Note the name "hello-world" which you passed in under "metadata" in your YAML file. Naming your `Revisions` will help you to more easily identify them, but don't worry if this is a bit confusing now; you'll learn more about `Revisions` later.
template:
metadata:
# This is the name of our new "Revision," it must follow the convention {service-name}-{revision-name}
name: hello-world
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go
ports:
- containerPort: 8080
env:
- name: TARGET
value: "World"
```
1. Deploy the Knative Service by running the command:
``` bash
kubectl apply -f hello.yaml
```
??? question "Why did I pass in the second name, `hello-world`?"
Note the name `hello-world` which you passed in under `metadata` in your YAML file. Naming your `Revisions` will help you to more easily identify them, but don't worry if this is a bit confusing now; you'll learn more about `Revisions` later.
!!! Success "Expected output"
```{ .bash .no-copy }
service.serving.knative.dev/hello created
```
1. To see the URL where your Knative Service is hosted, leverage the `kn` CLI:
```bash
kn service list
```
!!! Success "Expected output"
```bash
NAME URL LATEST AGE CONDITIONS READY REASON
hello http://hello.default.127.0.0.1.sslip.io hello-world 13s 3 OK / 3 True
```
==**Expected output:**==
```{ .bash .no-copy }
service.serving.knative.dev/hello created
```
To see the URL where your Knative Service is hosted, leverage the `kn` CLI:
```bash
kn service list
```
## Ping your Knative Service
Ping your Knative Service by opening [http://hello.default.127.0.0.1.sslip.io](http://hello.default.127.0.0.1.sslip.io){target=_blank} in your browser of choice or by running the command:
```bash
curl http://hello.default.127.0.0.1.sslip.io
```
==**Expected output:**==
```{ .bash .no-copy }
Hello World!
```
!!! Success "Expected output"
```{ .bash .no-copy }
Hello World!
```
??? question "Are you seeing `curl: (6) Could not resolve host: hello.default.127.0.0.1.sslip.io`?"
@ -1,60 +1,64 @@
In this tutorial, you use the [CloudEvents Player](https://github.com/ruromero/cloudevents-player){target=blank} to showcase the core concepts of Knative Eventing. By the end of this tutorial, you should have an architecture that looks like this:
# Using a Knative Service as a source
![The CloudEvents Player acts as both a Source and a Sink for CloudEvents](images/event_diagram.png)
In this tutorial, you will use the [CloudEvents Player](https://github.com/ruromero/cloudevents-player){target=blank} app to showcase the core concepts of Knative Eventing. By the end of this tutorial, you should have an architecture that looks like this:
![The CloudEvents Player acts as both a source and a sink for CloudEvents](images/event_diagram.png)
The above image is Figure 6.6 from [Knative in Action](https://www.manning.com/books/knative-in-action){target=_blank}.
## Creating your first Source
## Creating your first source
The CloudEvents Player acts as a source for CloudEvents by taking the URL of the Broker as an environment variable, `BROKER_URL`. You will send CloudEvents to the Broker through the CloudEvents Player application.
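This tutorial assumes the `example-broker` Broker already exists in the `default` namespace. If you need to recreate it, a minimal manifest looks roughly like the following sketch (the name `example-broker` matches the `BROKER_URL` used below):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-broker
  namespace: default
```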
Create the CloudEvents Player Service:
=== "kn"
Run the command:
```bash
kn service create cloudevents-player \
--image ruromero/cloudevents-player:latest \
--env BROKER_URL=http://broker-ingress.knative-eventing.svc.cluster.local/default/example-broker
```
==**Expected Output**==
```{ .bash .no-copy }
Service 'cloudevents-player' created to latest revision 'cloudevents-player-vwybw-1' is available at URL:
http://cloudevents-player.default.127.0.0.1.sslip.io
```
!!! Success "Expected output"
```{ .bash .no-copy }
Service 'cloudevents-player' created to latest revision 'cloudevents-player-vwybw-1' is available at URL:
http://cloudevents-player.default.127.0.0.1.sslip.io
```
??? question "Why is my Revision named something different!"
Because we didn't assign a `revision-name`, Knative Serving automatically created one for us. It's okay if your Revision is named something different.
=== "YAML"
```bash
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: cloudevents-player
spec:
template:
1. Copy the following YAML into a file named `cloudevents-player.yaml`:
```bash
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
annotations:
autoscaling.knative.dev/min-scale: "1"
name: cloudevents-player
spec:
containers:
- image: ruromero/cloudevents-player:latest
env:
- name: BROKER_URL
value: http://broker-ingress.knative-eventing.svc.cluster.local/default/example-broker
```
template:
metadata:
annotations:
autoscaling.knative.dev/min-scale: "1"
spec:
containers:
- image: ruromero/cloudevents-player:latest
env:
- name: BROKER_URL
value: http://broker-ingress.knative-eventing.svc.cluster.local/default/example-broker
```
Once you've created your YAML file, named something like `cloudevents-player.yaml`, apply it by running the command:
``` bash
kubectl apply -f cloudevents-player.yaml
```
1. Apply the YAML file by running the command:
``` bash
kubectl apply -f cloudevents-player.yaml
```
==**Expected Output**==
```{ .bash .no-copy }
service.serving.knative.dev/cloudevents-player created
```
!!! Success "Expected output"
```{ .bash .no-copy }
service.serving.knative.dev/cloudevents-player created
```
## Examining the CloudEvents Player
**You can use the CloudEvents Player to send and receive CloudEvents.** If you open the [Service URL](http://cloudevents-player.default.127.0.0.1.sslip.io){target=_blank} in your browser, the **Create Event** form appears:
![The user interface for the CloudEvents Player](images/event_form.png)
@ -70,17 +74,21 @@ Create the CloudEvents Player Service:
For more information on the CloudEvents Specification, check out the [CloudEvents Spec](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md){target=_blank}.
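As a concrete illustration, a minimal CloudEvent in the JSON format looks like the following sketch; the attribute values here are arbitrary examples, not required values:

```json
{
  "specversion": "1.0",
  "type": "some-type",
  "source": "my-source",
  "id": "abc-123",
  "datacontenttype": "application/json",
  "data": { "message": "Hello CloudEvents!" }
}
```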
1. Fill in the form with whatever data you want.
1. Ensure your Event Source does not contain any spaces.
1. Click **SEND EVENT**.
### Sending an event
Try sending an event using the CloudEvents Player interface:
1. Fill in the form with whatever data you want.
1. Ensure your Event Source does not contain any spaces.
1. Click **SEND EVENT**.
![CloudEvents Player Send](images/event_sent.png)
??? tip "Clicking the :fontawesome-solid-envelope: shows you the CloudEvent as the Broker sees it."
![Event Details](images/event_details.png){:width="500px"}
??? question "Want to send events via the command line instead?"
As an alternative to the Web form, events can also be sent/viewed via the command line.
??? question "Want to send events using the command line instead?"
As an alternative to the Web form, events can also be sent/viewed using the command line.
To post an event:
```bash
@ -100,4 +108,4 @@ Create the CloudEvents Player Service:
The :material-send: icon in the "Status" column implies that the event has been sent to our Broker... but where has the event gone? **Well, right now, nowhere!**
A Broker is simply a receptacle for events. In order for your events to be sent anywhere, you must create a Trigger which listens for your events and places them somewhere. And, you're in luck: you'll create your first Trigger on the next page!
A Broker is simply a receptacle for events. In order for your events to be sent anywhere, you must create a Trigger which listens for your events and places them somewhere. And, you're in luck; you'll create your first Trigger on the next page!
@ -10,16 +10,21 @@ The last super power :rocket: of Knative Serving we'll go over in this tutorial
## Creating a new Revision
You may have noticed that when you created your Knative Service you assigned it a `revision-name`, "world". If you used `kn`, when your Service was created Knative returned both a URL and a "latest revision" for your Knative Service. **But what happens if you make a change to your Service?**
??? question "What exactly is a Revision?""
You may have noticed that when you created your Knative Service you assigned it a `revision-name`, `world`. If you used `kn`, Knative returned both a URL and a "latest revision" for your Knative Service when it was created. **But what happens if you make a change to your Service?**
??? question "What exactly is a Revision?"
You can think of a [Revision](../serving/README.md#serving-resources){target=_blank} as a stateless, autoscaling, snapshot-in-time of application code and configuration.
A new Revision will get created each and every time you make changes to your Knative Service, whether you assign it a name or not. When splitting traffic, Knative splits traffic between different Revisions of your Knative Service.
Instead of `TARGET`="World" update the environment variable `TARGET` on your Knative Service `hello` to greet "Knative" instead. Name this new revision `hello-knative`
### Create the Revision: hello-knative
Instead of `TARGET=World`, update the environment variable `TARGET` on your Knative Service `hello` to greet "Knative" instead. Name this new Revision `hello-knative`.
=== "kn"
Deploy the updated version of your Knative Service by running the command:
``` bash
kn service update hello \
--env TARGET=Knative \
@ -27,91 +32,96 @@ Instead of `TARGET`="World" update the environment variable `TARGET` on your Kna
```
As before, `kn` prints out some helpful information to the CLI.
==**Expected output:**==
```{ .bash .no-copy }
Service hello created to latest revision 'hello-knative' is available at URL:
http://hello.default.127.0.0.1.sslip.io
```
!!! Success "Expected output"
```{ .bash .no-copy }
Service hello created to latest revision 'hello-knative' is available at URL:
http://hello.default.127.0.0.1.sslip.io
```
=== "YAML"
``` bash
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello
spec:
template:
1. Edit your existing `hello.yaml` file to contain the following:
``` bash
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello-knative
name: hello
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go
ports:
- containerPort: 8080
env:
- name: TARGET
value: "Knative"
```
Once you've edited your existing YAML file:
``` bash
kubectl apply -f hello.yaml
```
template:
metadata:
name: hello-knative
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go
ports:
- containerPort: 8080
env:
- name: TARGET
value: "Knative"
```
1. Deploy the updated version of your Knative Service by running the command:
``` bash
kubectl apply -f hello.yaml
```
==**Expected output:**==
```{ .bash .no-copy }
service.serving.knative.dev/hello configured
```
!!! Success "Expected output"
```{ .bash .no-copy }
service.serving.knative.dev/hello configured
```
Note that because we are updating the existing Knative Service `hello`, the URL doesn't change, but our new Revision has the new name `hello-knative`.
Let's access our Knative Service again on your browser [http://hello.default.127.0.0.1.sslip.io](http://hello.default.127.0.0.1.sslip.io){target=_blank} to see the change, or use `curl` in your terminal:
### View the new Revision
To see the change, access the Knative Service again on your browser [http://hello.default.127.0.0.1.sslip.io](http://hello.default.127.0.0.1.sslip.io){target=_blank}, or use `curl` in your terminal:
```bash
curl http://hello.default.127.0.0.1.sslip.io
```
==**Expected output:**==
```{ .bash .no-copy }
Hello Knative!
```
!!! Success "Expected output"
```{ .bash .no-copy }
Hello Knative!
```
## Splitting Traffic
You may at this point be wondering, "where did 'Hello World!' go?" Remember, Revisions are a stateless snapshot-in-time of application code and configuration, so your "hello-world" `Revision` is still available to you.
We can easily see a list of our existing revisions with the `kn` CLI:
### List your Revisions
We can easily see a list of our existing revisions with the `kn` CLI.
=== "kn"
View a list of revisions by running the command:
```bash
kn revisions list
```
!!! Success "Expected output"
```{ .bash .no-copy }
NAME SERVICE TRAFFIC TAGS GENERATION AGE CONDITIONS READY REASON
hello-knative hello 100% 2 30s 3 OK / 4 True
hello-world hello 1 5m 3 OK / 4 True
```
=== "kubectl"
Though the following example doesn't cover it, you can peek under the hood at Kubernetes to see the Revisions as Kubernetes sees them.
Though the following example doesn't cover it, you can peek under the hood at Kubernetes to see the Revisions as Kubernetes sees them by running the command:
```bash
kubectl get revisions
```
==**Expected output:**==
```{ .bash .no-copy }
NAME SERVICE TRAFFIC TAGS GENERATION AGE CONDITIONS READY REASON
hello-knative hello 100% 2 30s 3 OK / 4 True
hello-world hello 1 5m 3 OK / 4 True
```
The column most relevant for our purposes is `TRAFFIC`. It looks like 100% of traffic is going to our latest `Revision` ("hello-knative") and 0% of traffic is going to the Revision we configured earlier ("hello-world").
When running the `kn` command, the column most relevant for our purposes is `TRAFFIC`. It looks like 100% of traffic is going to our latest `Revision` ("hello-knative")
and 0% of traffic is going to the Revision we configured earlier ("hello-world").
When you create a new Revision of a Knative Service, Knative defaults to directing 100% of traffic to this latest Revision. **We can change this default behavior by specifying how much traffic we want each of our Revisions to receive.**
### Split traffic between Revisions
Lets split traffic between our two Revisions:
!!! info inline end
`@latest` will always point to our "latest" `Revision` which, at the moment, is `hello-knative`.
=== "kn"
Run the command:
```bash
kn service update hello \
--traffic hello-world=50 \
@ -119,54 +129,60 @@ Lets split traffic between our two Revisions:
```
=== "YAML"
Add the following to the bottom of your existing YAML file:
``` bash
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello
spec:
template:
1. Add the `traffic` section to the bottom of your existing `hello.yaml` file:
``` bash
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello-knative
name: hello
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go
ports:
- containerPort: 8080
env:
- name: TARGET
value: "Knative"
traffic:
- latestRevision: true
percent: 50
- revisionName: hello-world
percent: 50
```
Once you've edited your existing YAML file:
``` bash
kubectl apply -f hello.yaml
```
template:
metadata:
name: hello-knative
spec:
containers:
- image: gcr.io/knative-samples/helloworld-go
ports:
- containerPort: 8080
env:
- name: TARGET
value: "Knative"
traffic:
- latestRevision: true
percent: 50
- revisionName: hello-world
percent: 50
```
1. Apply the YAML by running the command:
``` bash
kubectl apply -f hello.yaml
```
!!! info
`@latest` will always point to our "latest" `Revision` which, at the moment, is `hello-knative`.
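Beyond percentage splits, each traffic entry can also carry a `tag`, which gives that Revision its own dedicated URL. The following is a sketch only; the tag names `current` and `previous` are arbitrary choices, not required values:

```yaml
traffic:
- latestRevision: true
  percent: 50
  tag: current     # tagged Revisions get their own prefixed URL (sketch)
- revisionName: hello-world
  percent: 50
  tag: previous
```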
### Verify the traffic split
Verify traffic split has configured correctly by listing the revisions again.
Verify that the traffic split has been configured correctly by listing the Revisions again.
=== "kn"
Run the command:
```bash
kn revisions list
```
!!! Success "Expected output"
```{ .bash .no-copy }
NAME SERVICE TRAFFIC TAGS GENERATION AGE CONDITIONS READY REASON
hello-knative hello 50% 2 10m 3 OK / 4 True
hello-world hello 50% 1 36m 3 OK / 4 True
```
=== "kubectl"
Though the following example doesn't cover it, you can peek under the hood at Kubernetes to see the Revisions as Kubernetes sees them.
Though the following example doesn't cover it, you can peek under the hood at Kubernetes to see the Revisions as Kubernetes sees them by running the command:
```bash
kubectl get revisions
```
==**Expected output:**==
```{ .bash .no-copy }
NAME SERVICE TRAFFIC TAGS GENERATION AGE CONDITIONS READY REASON
hello-knative hello 50% 2 10m 3 OK / 4 True
hello-world hello 50% 1 36m 3 OK / 4 True
```
Access your Knative Service in your browser again [http://hello.default.127.0.0.1.sslip.io](http://hello.default.127.0.0.1.sslip.io){target=_blank}, and refresh multiple times to see the different output being served by each Revision.
@ -175,13 +191,13 @@ Similarly, you can `curl` the Service URL multiple times to see the traffic bein
curl http://hello.default.127.0.0.1.sslip.io
```
==**Expected output:**==
```{ .bash .no-copy }
curl http://hello.default.127.0.0.1.sslip.io
Hello Knative!
!!! Success "Expected output"
```{ .bash .no-copy }
curl http://hello.default.127.0.0.1.sslip.io
Hello Knative!
curl http://hello.default.127.0.0.1.sslip.io
Hello World!
```
curl http://hello.default.127.0.0.1.sslip.io
Hello World!
```
Congratulations, :tada: you've successfully split traffic between 2 different Revisions of a Knative Service. Up next, Knative Eventing!
Congratulations, :tada: you've successfully split traffic between two different Revisions of a Knative Service. Up next, Knative Eventing!
@ -1,47 +1,62 @@
# Creating your first Trigger
# Using Triggers and sinks
In the last topic we used the CloudEvents Player as an event source to send events to the Broker.
We now want the event to go from the Broker to an event sink.
In this topic, we will use the CloudEvents Player as the sink as well as a source.
This means we will be using the CloudEvents Player to both send and receive events.
We will use a Trigger to listen for events in the Broker to send to the sink.
## Creating your first Trigger
Create a Trigger that listens for CloudEvents from the event source and places them into the sink, which is also the CloudEvents Player app.
=== "kn"
To create the Trigger, run the command:
```bash
kn trigger create cloudevents-trigger --sink cloudevents-player --broker example-broker
```
!!! Success "Expected output"
```{ .bash .no-copy }
Trigger 'cloudevents-trigger' successfully created in namespace 'default'.
```
```{ .bash .no-copy }
Trigger 'cloudevents-trigger' successfully created in namespace 'default'.
```
=== "YAML"
```bash
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: cloudevents-trigger
annotations:
knative-eventing-injection: enabled
spec:
broker: example-broker
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: cloudevents-player
```
After you've created your YAML file, named something like `ce-trigger.yaml`, apply it by running the command:
```bash
kubectl apply -f ce-trigger.yaml
```
1. Copy the following YAML into a file named `ce-trigger.yaml`:
```bash
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: cloudevents-trigger
annotations:
knative-eventing-injection: enabled
spec:
broker: example-broker
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: cloudevents-player
```
==**Expected Output**==
```{ .bash .no-copy }
trigger.eventing.knative.dev/cloudevents-trigger created
```
1. Create the Trigger by running the command:
```bash
kubectl apply -f ce-trigger.yaml
```
!!! Success "Expected output"
```{ .bash .no-copy }
trigger.eventing.knative.dev/cloudevents-trigger created
```
trigger.eventing.knative.dev/cloudevents-player created
??? question "What CloudEvents is my Trigger listening for?"
Because we didn't specify a `--filter` in our `kn` command, the Trigger is listening for any CloudEvents coming into the Broker.
The following example shows how to use Filters.
Expand the next note to see how to use filters.
Now, when we go back to the CloudEvents Player and send an Event, we see that CloudEvents are both sent and received by the CloudEvents Player:
Now, when we go back to the CloudEvents Player and send an event, we see that CloudEvents are both sent and received by the CloudEvents Player:
![CloudEvents Player user interface](images/event_received.png){draggable=false}
@ -57,7 +72,7 @@ You may need to refresh the page to see your changes.
kn trigger create cloudevents-player-filter --sink cloudevents-player --broker example-broker --filter type=some-type
```
If you send a CloudEvent with type "some-type," it is reflected in the CloudEvents Player UI. The Trigger ignores any other types.
If you send a CloudEvent with type `some-type`, it is reflected in the CloudEvents Player UI. The Trigger ignores any other types.
You can filter on any aspect of the CloudEvent you would like to.
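The YAML equivalent of the filtered `kn` command above is a sketch like the following; the `type: some-type` attribute must match the type of the CloudEvents you send:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: cloudevents-player-filter
spec:
  broker: example-broker
  filter:
    attributes:
      type: some-type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudevents-player
```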
@ -1,14 +1,15 @@
# Introducing the Knative Eventing
## Background
With Knative Serving, we have a powerful tool which can take our containerized code and deploy it with relative ease. **With Knative Eventing, you gain a few new super powers :rocket:** that allow you to build Event-Driven Applications.
With Knative Serving, we have a powerful tool which can take our containerized code and deploy it with relative ease. **With Knative Eventing, you gain a few new super powers :rocket:** that allow you to build **Event-Driven Applications**.
??? question "What are Event Driven Applications?"
Event-driven applications are designed to detect events as they occur, and then deal with them using some event-handling procedure. Producing and consuming events with an "event-handling procedure" is precisely what Knative Eventing enables.
Want to find out more about Event-Driven Architecture and Knative Eventing? Check out this CNCF Session aptly named ["Event-driven architecture with Knative events"](https://www.cncf.io/online-programs/event-driven-architecture-with-knative-events/){target=blank}
==**Knative Eventing acts as the "glue" between the disparate parts of your architecture**== and allows you to easily communicate between those parts in a fault-tolerant way. Some examples include:
## Knative Eventing examples
**Knative Eventing acts as the "glue" between the disparate parts of your architecture** and allows you to easily communicate between those parts in a fault-tolerant way. Some examples include:
:material-file-document: [Creating and responding to Kubernetes API events](../eventing/sources/apiserversource/README.md){target=blank}
@ -18,4 +19,4 @@ With Knative Serving, we have a powerful tool which can take our containerized c
--8<-- "YouTube_icon.svg"
[Facilitating AI workloads at the edge in large-scale, drone-powered sustainable agriculture projects](https://www.youtube.com/watch?v=lVfJ5WEQ5_s){target=blank}
As you can see by the mentioned examples, Knative Eventing implementations can range from simplistic to extremely complex. For now, you'll start with simplistic and learn about the most basic components of Knative Eventing: Sources, Brokers, Triggers and Sinks.
As you can see by the mentioned examples, Knative Eventing implementations can range from simplistic to extremely complex. For now, you'll start with simplistic and learn about the most basic components of Knative Eventing: **Sources**, **Brokers**, **Triggers**, and **Sinks**.
@ -1,119 +1,4 @@
# Install Knative using quickstart
This topic describes how to install a local deployment of Knative Serving and Eventing using
the Knative `quickstart` plugin.
The plugin installs a preconfigured Knative deployment on a local Kubernetes cluster.
!!! warning
Knative `quickstart` environments are for experimentation use only.
For a production ready installation, see [Installing Knative](../install/README.md).
## Before you begin
Before you can get started with a Knative `quickstart` deployment you must install:
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start){target=_blank} (Kubernetes in Docker) or [minikube](https://minikube.sigs.k8s.io/docs/start/){target=_blank} to enable you to run a local Kubernetes cluster with Docker container nodes.
- The [Kubernetes CLI (`kubectl`)](https://kubernetes.io/docs/tasks/tools/install-kubectl){target=_blank} to run commands against Kubernetes clusters. You can use `kubectl` to deploy applications, inspect and manage cluster resources, and view logs.
- The Knative CLI (`kn`) v0.25 or later. For instructions, see the next section.
### Install the Knative CLI
--8<-- "install-kn.md"
## Install the Knative quickstart plugin
To get started, install the Knative `quickstart` plugin:
=== "Using Homebrew"
- Install the `quickstart` plugin by using [Homebrew](https://brew.sh){target=_blank}:
```bash
brew install knative-sandbox/kn-plugins/quickstart
```
- Upgrade an existing install to the latest version by running the command:
```bash
brew upgrade knative-sandbox/kn-plugins/quickstart
```
=== "Using a binary"
1. Download the executable binary for your system from the [`quickstart` release page](https://github.com/knative-sandbox/kn-plugin-quickstart/releases){target=_blank}.
1. Move the executable binary file to a directory on your `PATH`, for example, in `/usr/local/bin`.
1. Verify that the plugin is working, for example:
```bash
kn quickstart --help
```
=== "Using Go"
1. Check out the `kn-plugin-quickstart` repository:
```bash
git clone https://github.com/knative-sandbox/kn-plugin-quickstart.git
cd kn-plugin-quickstart/
```
1. Build an executable binary:
```bash
hack/build.sh
```
1. Move the executable binary file to a directory on your `PATH`:
```bash
mv kn-quickstart /usr/local/bin
```
1. Verify that the plugin is working, for example:
```bash
kn quickstart --help
```
## Run the Knative quickstart plugin
The `quickstart` plugin completes the following functions:
1. **Checks if you have the selected Kubernetes instance installed**
1. **Creates a cluster called `knative`**
1. **Installs Knative Serving** with Kourier as the default networking layer, and sslip.io as the DNS
1. **Installs Knative Eventing** and creates an in-memory Broker and Channel implementation
To get a local deployment of Knative, run the `quickstart` plugin:
=== "Using kind"
1. Install Knative and Kubernetes on a local Docker daemon by running:
```bash
kn quickstart kind
```
1. After the plugin is finished, verify you have a cluster called `knative`:
```bash
kind get clusters
```
=== "Using minikube"
1. Install Knative and Kubernetes in a minikube instance by running:
```bash
kn quickstart minikube
```
1. After the plugin is finished, verify you have a cluster called `knative`:
```bash
minikube profile list
```
--8<-- "quickstart-install.md"
## Next steps
@ -1,16 +1,19 @@
# Installing Knative
!!! tip
You can install a local distribution of Knative for development purposes
using the [Knative `quickstart` plugin](../getting-started/quickstart-install.md).
You can install the Serving component, Eventing component, or both on your
cluster by using one of the following deployment options:
You can install the Serving component, Eventing component, or both on your cluster by using one of the following deployment options:
- Use the [Knative Quickstart plugin](quickstart-install.md) to install a
preconfigured, local distribution of Knative for development purposes.
- Using a YAML-based installation:
- [Install Knative Serving by using YAML](serving/install-serving-with-yaml.md)
- [Install Knative Eventing by using YAML](eventing/install-eventing-with-yaml.md)
- Using the [Knative Operator](operator/knative-with-operators.md).
- Following the documentation for vendor managed [Knative offerings](knative-offerings.md).
- Use a YAML-based installation to install a production ready deployment:
- [Install Knative Serving by using YAML](yaml-install/serving/install-serving-with-yaml.md)
- [Install Knative Eventing by using YAML](yaml-install/eventing/install-eventing-with-yaml.md)
- Use the [Knative Operator](operator/knative-with-operators.md) to install and
configure a production ready deployment.
- Follow the documentation for vendor managed [Knative offerings](knative-offerings.md).
You can also [upgrade an existing Knative installation](upgrade/README.md).
@ -0,0 +1,43 @@
# Installing cert-manager for TLS certificates
Install the [Cert-Manager](https://github.com/jetstack/cert-manager) tool to
obtain TLS certificates that you can use for secure HTTPS connections in
Knative. For more information about enabling HTTPS connections in Knative, see
[Configuring HTTPS with TLS certificates](../serving/using-a-tls-cert.md).
You can use cert-manager to either manually obtain certificates, or to enable
Knative for automatic certificate provisioning. Complete instructions about
automatic certificate provisioning are provided in
[Enabling automatic TLS cert provisioning](../serving/using-auto-tls.md).
Regardless of whether you want to manually obtain certificates or configure Knative for automatic provisioning, you can use the following steps to install cert-manager.
## Before you begin
You must meet the following requirements to install cert-manager for Knative:
- Knative Serving must be installed. For details about installing the Serving
component, see the [Knative installation guide](yaml-install/serving/install-serving-with-yaml.md).
- You must configure your Knative cluster to use a
[custom domain](../serving/using-a-custom-domain.md).
- Knative currently supports cert-manager version `1.0.0` and higher.
## Downloading and installing cert-manager
To download and install cert-manager, follow the [Installation steps](https://cert-manager.io/docs/installation/kubernetes/) from the official `cert-manager` website.
## Completing the Knative configuration for TLS support
Before you can use a TLS certificate for secure connections, you must finish
configuring Knative:
- **Manual**: If you installed cert-manager to manually obtain certificates,
continue to the following topic for instructions about creating a Kubernetes
secret:
[Manually adding a TLS certificate](../serving/using-a-tls-cert.md#manually-adding-a-tls-certificate)
- **Automatic**: If you installed cert-manager to use for automatic certificate
provisioning, continue to the following topic to enable that feature:
[Enabling automatic TLS certificate provisioning in Knative](../serving/using-auto-tls.md)
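For reference, a manually configured ACME `ClusterIssuer` for cert-manager looks roughly like the following sketch. The issuer name, email address, and solver ingress class are placeholders you must adapt to your environment; they are illustrative, not values prescribed by Knative:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-http01-issuer   # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder: your contact email
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: istio              # assumption: Istio is the ingress in use
```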
@ -0,0 +1,225 @@
# Installing Istio for Knative
This guide walks you through manually installing and customizing Istio for use
with Knative.
If your cloud platform offers a managed Istio installation, we recommend
installing Istio that way, unless you need to customize your
installation.
## Before you begin
You need:
- A Kubernetes cluster created.
- [`istioctl`](https://istio.io/docs/setup/install/istioctl/) installed.
## Supported Istio versions
The current known-to-be-stable version of Istio tested in conjunction with Knative is **v1.12**.
Versions in the 1.12 line are generally fine too.
## Installing Istio
When you install Istio, there are a few options depending on your goals. For a
basic Istio installation suitable for most Knative use cases, follow the
[Installing Istio without sidecar injection](#installing-istio-without-sidecar-injection)
instructions. If you're familiar with Istio and know what kind of installation
you want, read through the options and choose the installation that suits your
needs.
You can easily customize your Istio installation with `istioctl`. The following sections
cover a few useful Istio configurations and their benefits.
### Choosing an Istio installation
You can install Istio with or without a service mesh:
- [Installing Istio without sidecar injection](#installing-istio-without-sidecar-injection) (recommended default installation)
- [Installing Istio with sidecar injection](#installing-istio-with-sidecar-injection)
If you want to get up and running with Knative quickly, we recommend installing
Istio without automatic sidecar injection. This install is also recommended for
users who don't need the Istio service mesh, or who want to enable the service
mesh by [manually injecting the Istio sidecars][1].
#### Installing Istio without sidecar injection
Enter the following command to install Istio:
To install Istio without sidecar injection:
```sh
istioctl install -y
```
#### Installing Istio with sidecar injection
If you want to enable the Istio service mesh, you must enable [automatic sidecar
injection][2]. The Istio service mesh provides a few benefits:
- Allows you to turn on [mutual TLS][3], which secures service-to-service
traffic within the cluster.
- Allows you to use the [Istio authorization policy][4], controlling the access
to each Knative service based on Istio service roles.
For automatic sidecar injection, set `autoInject: enabled` in the Istio operator configuration:
```yaml
global:
proxy:
autoInject: enabled
```
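For reference, a minimal `IstioOperator` manifest carrying this setting might look like the following sketch, which you could pass to `istioctl install -f <filename>`; the exact schema can vary between Istio versions, so treat this as illustrative:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        # Inject the Istio sidecar into pods automatically.
        autoInject: enabled
```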
#### Using the Istio mTLS feature
Because there is network communication between the `knative-serving` namespace
and the namespaces where your services run, you need additional preparation
for an mTLS-enabled environment.
1. Enable sidecar injection for the `knative-serving` system namespace:
```bash
kubectl label namespace knative-serving istio-injection=enabled
```
1. Set `PeerAuthentication` to `PERMISSIVE` on the `knative-serving` system namespace
by creating a YAML file using the following template:
```yaml
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "knative-serving"
spec:
mtls:
mode: PERMISSIVE
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
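The steps above can also be scripted. The following sketch writes the `PeerAuthentication` manifest to a file and sanity-checks it; the `kubectl` commands are shown as comments because they require a running cluster:

```shell
# Write the PeerAuthentication manifest from the previous step to a file.
cat > peer-auth.yaml <<'EOF'
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "knative-serving"
spec:
  mtls:
    mode: PERMISSIVE
EOF

# Sanity-check the mTLS mode before applying.
grep "mode:" peer-auth.yaml
# Then, against your cluster:
#   kubectl label namespace knative-serving istio-injection=enabled
#   kubectl apply -f peer-auth.yaml
```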
After you install the cluster local gateway, the service and deployment for the local gateway are named `knative-local-gateway`.
### Updating the `config-istio` configmap to use a non-default local gateway
If you create a custom service and deployment for the local gateway with a name other than `knative-local-gateway`, you
must update the `config-istio` ConfigMap in the `knative-serving` namespace.
1. Edit the `config-istio` configmap:
```bash
kubectl edit configmap config-istio -n knative-serving
```
2. Replace the `local-gateway.knative-serving.knative-local-gateway` field with the custom service. For example, if you name both
the service and deployment `custom-local-gateway` in the `istio-system` namespace, the value should be updated to:
```
custom-local-gateway.istio-system.svc.cluster.local
```
Similarly, if the custom service and deployment are labeled with `custom: custom-local-gateway` instead of the default
`istio: knative-local-gateway`, you must update the Gateway instance `knative-local-gateway` in the `knative-serving` namespace:
```bash
kubectl edit gateway knative-local-gateway -n knative-serving
```
Replace the label selector with the label of your service:
```
istio: knative-local-gateway
```
For the service mentioned earlier, it should be updated to:
```
custom: custom-local-gateway
```
If the service ports differ from those of `knative-local-gateway`, update the port information in the Gateway accordingly.
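Putting these pieces together, the edited `knative-local-gateway` Gateway might look like the following sketch; the selector and port values are illustrative and must match your custom deployment:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: knative-local-gateway
  namespace: knative-serving
spec:
  # Selector matching the labels on the custom local-gateway pods.
  selector:
    custom: custom-local-gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
```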
### Verifying your Istio install
View the status of your Istio installation to make sure the install was
successful. It might take a few seconds, so rerun the following command until
all of the pods show a `STATUS` of `Running` or `Completed`:
```bash
kubectl get pods --namespace istio-system
```
> Tip: You can append the `--watch` flag to `kubectl get` commands to view
> the pod status in real time. Use `CTRL + C` to exit watch mode.
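This check can also be scripted. The sketch below scans a captured pod listing and counts pods that are neither `Running` nor `Completed`; in practice you would pipe in the live output of `kubectl get pods --namespace istio-system --no-headers`:

```shell
# Sample listing standing in for live `kubectl get pods` output.
SAMPLE='istiod-5d7c8b8b6d-abcde              1/1   Running   0   2m
istio-ingressgateway-6f9d8-xyz12     1/1   Running   0   2m'

# Column 3 is STATUS in the default kubectl output.
NOT_READY=$(printf '%s\n' "$SAMPLE" |
  awk '$3 != "Running" && $3 != "Completed" { n++ } END { print n + 0 }')
echo "pods not ready: $NOT_READY"
```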
### Configuring DNS
Knative dispatches to different services based on their hostname, so it is recommended to have DNS properly configured.
To do this, begin by looking up the external IP address that Istio received:
```bash
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.0.2.24 34.83.80.117 15020:32206/TCP,80:30742/TCP,443:30996/TCP 2m14s
istio-pilot ClusterIP 10.0.3.27 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 2m14s
```
This external IP can be used with your DNS provider with a wildcard `A` record. However, for a basic non-production setup,
this external IP address can be used with `sslip.io` in the `config-domain` ConfigMap in `knative-serving`.
You can edit this ConfigMap by running the following command:
```bash
kubectl edit cm config-domain --namespace knative-serving
```
Given this external IP, change the content to:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-domain
namespace: knative-serving
data:
# sslip.io is a "magic" DNS provider, which resolves all DNS lookups for:
# *.{ip}.sslip.io to {ip}.
34.83.80.117.sslip.io: ""
```
## Istio resources
- For the official Istio installation guide, see the
[Istio Kubernetes Getting Started Guide](https://istio.io/docs/setup/kubernetes/).
- For the full list of available configs when installing Istio with `istioctl`, see
the
[Istio Installation Options reference](https://istio.io/docs/setup/install/istioctl/).
## Clean up Istio
See [Uninstall Istio](https://istio.io/docs/setup/install/istioctl/#uninstall-istio) in the Istio documentation.
## What's next
- View the [Knative Serving documentation](../serving/README.md).
- Try some Knative Serving [code samples](../samples/README.md).
[1]:
https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#manual-sidecar-injection
[2]:
https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/#automatic-sidecar-injection
[3]: https://istio.io/docs/concepts/security/#mutual-tls-authentication
[4]: https://istio.io/docs/tasks/security/authz-http/


@ -123,7 +123,7 @@ spec:
controller: docker.io/knative-images-repo3/controller:v0.13.0
webhook: docker.io/knative-images-repo4/webhook:v0.13.0
autoscaler-hpa: docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0
net-istio-controller: docker.io/knative-images-repo6/prefix-net-istio-controller:v0.13.0
net-istio-controller/controller: docker.io/knative-images-repo6/prefix-net-istio-controller:v0.13.0
net-istio-webhook/webhook: docker.io/knative-images-repo6/net-istio-webhook:v0.13.0
queue-proxy: docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0
```
@ -273,7 +273,7 @@ Update `spec.ingress.istio.knative-local-gateway` to select the labels of the ne
### Default local gateway name:
Go through the [installing Istio](../serving/installing-istio.md#installing-istio-without-sidecar-injection) guide to use local cluster gateway,
Go through the [installing Istio](../installing-istio.md#installing-istio-without-sidecar-injection) guide to use local cluster gateway,
if you use the default gateway called `knative-local-gateway`.
### Non-default local gateway name:
@ -343,7 +343,6 @@ Requests and limits can be configured for the following containers: `activator`,
!!! info
If multiple deployments share the same container name, the configuration in `spec.resources` for that certain container will apply to all the deployments.
Visit [Override System Resources based on the deployment](#override-the-resources) to specify the resources for a container within a specific deployment.
To override resource settings for a specific container, create an entry in the `spec.resources` list with the container name and the [Kubernetes resource settings](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container).
@ -398,35 +397,7 @@ spec:
## Override system deployments
If you would like to override some configurations for a specific deployment, you can override the configuration by using `spec.deployments` in CR.
Currently `resources`, `replicas`, `labels`, `annotations` and `nodeSelector` are supported.
### Override the resources
The KnativeServing custom resource is able to configure system resources for the Knative system containers based on the deployment.
Requests and limits can be configured for all the available containers within the deployment, like `activator`, `autoscaler`,
`controller`, `webhook`, `autoscaler-hpa`, `net-istio-controller`, etc.
For example, the following KnativeServing resource configures the container `activator` in the deployment `activator` to request
0.3 CPU and 100MB of RAM, and sets hard limits of 1 CPU and 250MB RAM:
```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
deployments:
- name: activator
resources:
- container: activator
requests:
cpu: 300m
memory: 100Mi
limits:
cpu: 1000m
memory: 250Mi
```
Currently `replicas`, `labels`, `annotations` and `nodeSelector` are supported.
### Override replicas, labels and annotations


@ -264,7 +264,7 @@ Knative Serving with different ingresses:
The following steps install Istio to enable its Knative integration:
1. [Install Istio](../serving/installing-istio.md).
1. [Install Istio](../installing-istio.md).
1. If you installed Istio under a namespace other than the default `istio-system`:
1. Add `spec.config.istio` to your Serving CR YAML file as follows:


@ -0,0 +1,7 @@
--8<-- "quickstart-install.md"
## Next steps
- Learn how to deploy your first Service in the [Knative tutorial](../getting-started/first-service.md).
- Try out Knative [code samples](../samples/README.md).
- See the [Knative Serving](../serving/README.md) and [Knative Eventing](../eventing/README.md) guides.


@ -0,0 +1,9 @@
# About YAML-based installation
You can install the Serving component, Eventing component, or both on your cluster
by applying YAML files.
<!--TODO: Add reason to choose this install method -->
- [Install Knative Serving with YAML](serving/install-serving-with-yaml.md)
- [Install Knative Eventing with YAML](eventing/install-eventing-with-yaml.md)


@ -0,0 +1,21 @@
# Knative Eventing installation files
This guide provides reference information about the core Knative Eventing YAML files, including:
- The custom resource definitions (CRDs) and core components required to install Knative Eventing.
- Optional components that you can apply to customize your installation.
For information about installing these files, see
[Installing Knative Eventing using YAML files](install-eventing-with-yaml.md).
The following table describes the installation files included in Knative Eventing:
| File name | Description | Dependencies|
| --- | --- | --- |
| eventing-core.yaml | Required: Knative Eventing core components. | eventing-crds.yaml |
| eventing-crds.yaml | Required: Knative Eventing core CRDs. | none |
| eventing-post-install.yaml | Jobs required for upgrading to a new minor version. | eventing-core.yaml, eventing-crds.yaml |
| eventing-sugar-controller.yaml | Reconciler that watches for labels and annotations on certain resources to inject eventing components. | eventing-core.yaml |
| eventing.yaml | Combines `eventing-core.yaml`, `mt-channel-broker.yaml`, and `in-memory-channel.yaml`. | none |
| in-memory-channel.yaml | Components to configure In-Memory Channels. | eventing-core.yaml |
| mt-channel-broker.yaml | Components to configure Multi-Tenant (MT) Channel Broker. | eventing-core.yaml |


@ -0,0 +1,313 @@
# Installing Knative Eventing using YAML files
This topic describes how to install Knative Eventing by applying YAML files using the `kubectl` CLI.
--8<-- "prerequisites.md"
## Install Knative Eventing
To install Knative Eventing:
1. Install the required custom resource definitions (CRDs) by running the command:
```bash
kubectl apply -f {{ artifact(repo="eventing",file="eventing-crds.yaml")}}
```
1. Install the core components of Eventing by running the command:
```bash
kubectl apply -f {{ artifact(repo="eventing",file="eventing-core.yaml")}}
```
!!! info
For information about the YAML files in Knative Eventing, see [Description Tables for YAML Files](eventing-installation-files.md).
## Verify the installation
!!! success
Monitor the Knative components until all of the components show a `STATUS` of `Running` or `Completed`.
You can do this by running the following command and inspecting the output:
```bash
kubectl get pods -n knative-eventing
```
Example output:
```{ .bash .no-copy }
NAME READY STATUS RESTARTS AGE
eventing-controller-7995d654c7-qg895 1/1 Running 0 2m18s
eventing-webhook-fff97b47c-8hmt8 1/1 Running 0 2m17s
```
## Optional: Install a default Channel (messaging) layer
The following tabs expand to show instructions for installing a default Channel layer.
Follow the procedure for the Channel of your choice:
<!-- This indentation is important for things to render properly. -->
=== "Apache Kafka Channel"
1. Install [Strimzi](https://strimzi.io/quickstarts/).
1. Install the Apache Kafka Channel for Knative from the [`knative-sandbox` repository](https://github.com/knative-sandbox/eventing-kafka).
=== "Google Cloud Pub/Sub Channel"
* Install the Google Cloud Pub/Sub Channel by running the command:
```bash
kubectl apply -f {{ artifact(org="google",repo="knative-gcp",file="cloud-run-events.yaml")}}
```
This command installs both the Channel and the GCP Sources.
!!! tip
To learn more, try the [Google Cloud Pub/Sub channel sample](https://github.com/google/knative-gcp/blob/master/docs/examples/channel/README.md).
=== "In-Memory (standalone)"
!!! warning
This simple standalone implementation runs in-memory and is not suitable for production use cases.
* Install an in-memory implementation of Channel by running the command:
```bash
kubectl apply -f {{ artifact(repo="eventing",file="in-memory-channel.yaml")}}
```
=== "NATS Channel"
1. [Install NATS Streaming for Kubernetes](https://github.com/knative-sandbox/eventing-natss/tree/main/config).
1. Install the NATS Streaming Channel by running the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-natss",file="eventing-natss.yaml")}}
```
<!-- TODO(https://github.com/knative/docs/issues/2153): Add more Channels here -->
You can change the default channel implementation by following the instructions described in the [Configure Channel defaults](../../../eventing/configuration/channel-configuration.md) section.
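As a sketch of what that configuration looks like, the `default-ch-webhook` ConfigMap in `knative-eventing` selects the cluster-wide default Channel implementation; the values here are illustrative, and the linked section has the authoritative format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
data:
  default-ch-config: |
    # Cluster-wide default channel implementation.
    clusterDefault:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
```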
## Optional: Install a Broker layer
The following tabs expand to show instructions for installing the Broker layer.
Follow the procedure for the Broker of your choice:
<!-- This indentation is important for things to render properly. -->
=== "Apache Kafka Broker"
The following commands install the Apache Kafka Broker and run event routing in a system
namespace. The `knative-eventing` namespace is used by default.
1. Install the Kafka controller by running the following command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-kafka-broker",file="eventing-kafka-controller.yaml")}}
```
1. Install the Kafka Broker data plane by running the following command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-kafka-broker",file="eventing-kafka-broker.yaml")}}
```
For more information, see the [Kafka Broker](../../../eventing/broker/kafka-broker/README.md) documentation.
=== "MT-Channel-based"
This implementation of Broker uses Channels and runs event routing components in a system
namespace, providing a smaller and simpler installation.
* Install this implementation of Broker by running the command:
```bash
kubectl apply -f {{ artifact(repo="eventing",file="mt-channel-broker.yaml")}}
```
To customize which Broker Channel implementation is used, update the following ConfigMap to
specify which configurations are used for which namespaces:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-br-defaults
namespace: knative-eventing
data:
default-br-config: |
# This is the cluster-wide default broker channel.
clusterDefault:
brokerClass: MTChannelBasedBroker
apiVersion: v1
kind: ConfigMap
name: imc-channel
namespace: knative-eventing
# This allows you to specify different defaults per-namespace,
# in this case the "some-namespace" namespace will use the Kafka
# channel ConfigMap by default (only for example, you will need
# to install kafka also to make use of this).
namespaceDefaults:
some-namespace:
brokerClass: MTChannelBasedBroker
apiVersion: v1
kind: ConfigMap
name: kafka-channel
namespace: knative-eventing
```
The referenced `imc-channel` and `kafka-channel` example ConfigMaps would look like:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: imc-channel
namespace: knative-eventing
data:
channel-template-spec: |
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-channel
namespace: knative-eventing
data:
channel-template-spec: |
apiVersion: messaging.knative.dev/v1alpha1
kind: KafkaChannel
spec:
numPartitions: 3
replicationFactor: 1
```
!!! warning
In order to use the KafkaChannel, ensure that it is installed on your cluster, as mentioned previously in this topic.
=== "RabbitMQ Broker"
* Install the RabbitMQ Broker by following the instructions in the
[RabbitMQ Knative Eventing Broker README](https://github.com/knative-sandbox/eventing-rabbitmq/tree/main/broker).
For more information, see the [RabbitMQ Broker](https://github.com/knative-sandbox/eventing-rabbitmq) in GitHub.
## Install optional Eventing extensions
The following tabs expand to show instructions for installing each Eventing extension.
<!-- This indentation is important for things to render properly. -->
=== "Apache Kafka Sink"
1. Install the Kafka controller by running the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-kafka-broker",file="eventing-kafka-controller.yaml")}}
```
1. Install the Kafka Sink data plane by running the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-kafka-broker",file="eventing-kafka-sink.yaml")}}
```
For more information, see the [Kafka Sink](../../../eventing/sinks/kafka-sink.md) documentation.
=== "Sugar Controller"
<!-- Unclear when this feature came in -->
1. Install the Eventing Sugar Controller by running the command:
```bash
kubectl apply -f {{ artifact(repo="eventing",file="eventing-sugar-controller.yaml")}}
```
The Knative Eventing Sugar Controller reacts to special labels and
annotations and produces Eventing resources. For example:
- When a namespace is labeled with `eventing.knative.dev/injection=enabled`, the
controller creates a default Broker in that namespace.
- When a Trigger is annotated with `eventing.knative.dev/injection=enabled`, the
controller creates a Broker named by that Trigger in the Trigger's namespace.
1. Enable the default Broker in a namespace by running the command:
```bash
kubectl label namespace <namespace-name> eventing.knative.dev/injection=enabled
```
Where `<namespace-name>` is the name of the namespace.
=== "GitHub Source"
A single-tenant GitHub source creates one Knative service per GitHub source.
A multi-tenant GitHub source only creates one Knative Service, which handles all GitHub sources in the
cluster. This source does not support logging or tracing configuration.
* To install a single-tenant GitHub source run the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-github",file="github.yaml")}}
```
* To install a multi-tenant GitHub source run the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-github",file="mt-github.yaml")}}
```
To learn more, try the [GitHub source sample](https://github.com/knative/docs/tree/main/code-samples/eventing/github-source).
=== "Apache Kafka Source"
* Install the Apache Kafka Source by running the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-kafka",file="source.yaml")}}
```
To learn more, try the [Apache Kafka source sample](../../../eventing/sources/kafka-source/README.md).
=== "GCP Sources"
* Install the GCP Sources by running the command:
```bash
kubectl apply -f {{ artifact(org="google",repo="knative-gcp",file="cloud-run-events.yaml")}}
```
This command installs both the Sources and the Channel.
To learn more, try the following samples:
- [Cloud Pub/Sub source sample](https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-pubsub-source)
- [Cloud Storage source sample](https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-storage-source)
- [Cloud Scheduler source sample](https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-scheduler-source)
- [Cloud Audit Logs source sample](https://github.com/knative/docs/tree/main/code-samples/eventing/cloud-audit-logs-source)
=== "Apache CouchDB Source"
* Install the Apache CouchDB Source by running the command:
```bash
kubectl apply -f {{ artifact(org="knative-sandbox",repo="eventing-couchdb",file="couchdb.yaml")}}
```
To learn more, read the [Apache CouchDB source](https://github.com/knative-sandbox/eventing-couchdb/blob/main/source/README.md) documentation.
=== "VMware Sources and Bindings"
* Install VMware Sources and Bindings by running the command:
```bash
kubectl apply -f {{ artifact(org="vmware-tanzu",repo="sources-for-knative",file="release.yaml")}}
```
To learn more, try the [VMware sources and bindings samples](https://github.com/vmware-tanzu/sources-for-knative/tree/master/samples/README.md).


@ -0,0 +1,212 @@
# Installing Knative Serving using YAML files
This topic describes how to install Knative Serving by applying YAML files using the `kubectl` CLI.
--8<-- "prerequisites.md"
## Install the Knative Serving component
To install the Knative Serving component:
1. Install the required custom resources by running the command:
```bash
kubectl apply -f {{ artifact(repo="serving",file="serving-crds.yaml")}}
```
1. Install the core components of Knative Serving by running the command:
```bash
kubectl apply -f {{ artifact(repo="serving",file="serving-core.yaml")}}
```
!!! info
For information about the YAML files in Knative Serving, see [Knative Serving installation files](serving-installation-files.md).
## Install a networking layer
The following tabs expand to show instructions for installing a networking layer.
Follow the procedure for the networking layer of your choice:
<!-- TODO: Link to document/diagram describing what is a networking layer. -->
<!-- This indentation is important for things to render properly. -->
=== "Kourier (Choose this if you are not sure)"
The following commands install Kourier and enable its Knative integration.
1. Install the Knative Kourier controller by running the command:
```bash
kubectl apply -f {{ artifact(repo="net-kourier",file="kourier.yaml")}}
```
1. Configure Knative Serving to use Kourier by default by running the command:
```bash
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
```
1. Fetch the External IP address or CNAME by running the command:
```bash
kubectl --namespace kourier-system get service kourier
```
!!! tip
Save this to use in the following [Configure DNS](#configure-dns) section.
=== "Istio"
The following commands install Istio and enable its Knative integration.
1. Install a properly configured Istio by following the
[Advanced Istio installation](../../installing-istio.md) instructions or by running the command:
```bash
kubectl apply -l knative.dev/crd-install=true -f {{ artifact(repo="net-istio",file="istio.yaml")}}
kubectl apply -f {{ artifact(repo="net-istio",file="istio.yaml")}}
```
1. Install the Knative Istio controller by running the command:
```bash
kubectl apply -f {{ artifact(repo="net-istio",file="net-istio.yaml")}}
```
1. Fetch the External IP address or CNAME by running the command:
```bash
kubectl --namespace istio-system get service istio-ingressgateway
```
!!! tip
Save this to use in the following [Configure DNS](#configure-dns) section.
=== "Contour"
The following commands install Contour and enable its Knative integration.
1. Install a properly configured Contour by running the command:
```bash
kubectl apply -f {{ artifact(repo="net-contour",file="contour.yaml")}}
```
<!-- TODO(https://github.com/knative-sandbox/net-contour/issues/11): We need a guide on how to use/modify a pre-existing install. -->
1. Install the Knative Contour controller by running the command:
```bash
kubectl apply -f {{ artifact(repo="net-contour",file="net-contour.yaml")}}
```
1. Configure Knative Serving to use Contour by default by running the command:
```bash
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress-class":"contour.ingress.networking.knative.dev"}}'
```
1. Fetch the External IP address or CNAME by running the command:
```bash
kubectl --namespace contour-external get service envoy
```
!!! tip
Save this to use in the following [Configure DNS](#configure-dns) section.
## Verify the installation
!!! success
Monitor the Knative components until all of the components show a `STATUS` of `Running` or `Completed`.
You can do this by running the following command and inspecting the output:
```bash
kubectl get pods -n knative-serving
```
Example output:
```{ .bash .no-copy }
NAME READY STATUS RESTARTS AGE
3scale-kourier-control-54cc54cc58-mmdgq 1/1 Running 0 81s
activator-67656dcbbb-8mftq 1/1 Running 0 97s
autoscaler-df6856b64-5h4lc 1/1 Running 0 97s
controller-788796f49d-4x6pm 1/1 Running 0 97s
domain-mapping-65f58c79dc-9cw6d 1/1 Running 0 97s
domainmapping-webhook-cc646465c-jnwbz 1/1 Running 0 97s
webhook-859796bc7-8n5g2 1/1 Running 0 96s
```
<!-- These are snippets from the docs/snippets directory -->
{% include "dns.md" %}
{% include "real-dns-yaml.md" %}
{% include "temporary-dns.md" %}
## Install optional Serving extensions
The following tabs expand to show instructions for installing each Serving extension.
=== "HPA autoscaling"
Knative also supports the use of the Kubernetes Horizontal Pod Autoscaler (HPA)
for driving autoscaling decisions.
* Install the components needed to support HPA-class autoscaling by running the command:
```bash
kubectl apply -f {{ artifact(repo="serving",file="serving-hpa.yaml")}}
```
<!-- TODO(https://github.com/knative/docs/issues/2152): Link to a more in-depth guide on HPA-class autoscaling -->
=== "TLS with cert-manager"
Knative supports automatically provisioning TLS certificates through
[cert-manager](https://cert-manager.io/docs/). The following commands
install the components needed to support the provisioning of TLS certificates
through cert-manager.
1. Install [cert-manager version v1.0.0 or later](../../installing-cert-manager.md).
1. Install the component that integrates Knative with `cert-manager` by running the command:
```bash
kubectl apply -f {{ artifact(repo="net-certmanager",file="release.yaml")}}
```
1. Configure Knative to automatically configure TLS certificates by following the steps in
[Enabling automatic TLS certificate provisioning](../../../serving/using-auto-tls.md).
=== "TLS with HTTP01"
Knative supports automatically provisioning TLS certificates using Let's Encrypt HTTP01 challenges. The following commands install the components needed to support TLS.
1. Install the net-http01 controller by running the command:
```bash
kubectl apply -f {{ artifact(repo="net-http01",file="release.yaml")}}
```
2. Configure the `certificate-class` to use this certificate type by running the command:
```bash
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"certificate-class":"net-http01.certificate.networking.knative.dev"}}'
```
3. Enable autoTLS by running the command:
```bash
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"auto-tls":"Enabled"}}'
```


@ -0,0 +1,19 @@
# Knative Serving installation files
This guide provides reference information about the core Knative Serving YAML files, including:
- The custom resource definitions (CRDs) and core components required to install Knative Serving.
- Optional components that you can apply to customize your installation.
For information about installing these files, see [Installing Knative Serving using YAML files](install-serving-with-yaml.md).
The following table describes the installation files included in Knative Serving:
| File name | Description | Dependencies|
| --- | --- | --- |
| serving-core.yaml | Required: Knative Serving core components. | serving-crds.yaml |
| serving-crds.yaml | Required: Knative Serving core CRDs. | none |
| serving-default-domain.yaml | Configures Knative Serving to use [http://sslip.io](http://sslip.io) as the default DNS suffix. | serving-core.yaml |
| serving-hpa.yaml | Components to autoscale Knative revisions through the Kubernetes Horizontal Pod Autoscaler. | serving-core.yaml |
| serving-post-install-jobs.yaml | Additional jobs after installing `serving-core.yaml`. Currently it is the same as `serving-storage-version-migration.yaml`. | serving-core.yaml |
| serving-storage-version-migration.yaml | Migrates the storage version of Knative resources, including Service, Route, Revision, and Configuration, from `v1alpha1` and `v1beta1` to `v1`. Required by upgrade from version 0.18 to 0.19. | serving-core.yaml |


@ -27,6 +27,7 @@ A list of links to Knative code samples located outside of Knative repos:
- [Knative Eventing (Cloud Events) example using spring-boot and spring-cloud-streams + Kafka](https://salaboy.com/2020/02/20/getting-started-with-knative-2020/)
- [Image processing pipeline using Knative Eventing on GKE, Google Cloud Vision API and ImageSharp library](https://github.com/meteatamel/knative-tutorial/blob/master/docs/image-processing-pipeline.md)
- [BigQuery processing pipeline using Knative Eventing on GKE, Cloud Scheduler, BigQuery, mathplotlib and SendGrid](https://github.com/meteatamel/knative-tutorial/blob/master/docs/bigquery-processing-pipeline.md)
- [Load testing with SLO validation for Knative HTTP services](https://iter8.tools/0.8/tutorials/load-test/community/knative/loadtest/)
- [Performance (load) testing with SLO validation for Knative HTTP services](https://iter8.tools/0.8/tutorials/load-test-http/community/knative/loadtest/)
- [Performance (load) testing with SLO validation for Knative gRPC services](https://iter8.tools/0.8/tutorials/load-test-grpc/community/knative/loadtest/)
_Please add links to your externally hosted Knative code sample._


@ -13,6 +13,6 @@ To use autoscaling for your application if it is enabled on your cluster, you mu
<!--TODO: Move KPA details, metrics to admin / advanced section; too in depth for intro)-->
* Try out the [Go Autoscale Sample App](autoscale-go/README.md).
* Configure your Knative deployment to use the Kubernetes Horizontal Pod Autoscaler (HPA) instead of the default KPA. For how to install HPA, see [Install optional Serving extensions](../../install/serving/install-serving-with-yaml.md#install-optional-serving-extensions).
* Configure your Knative deployment to use the Kubernetes Horizontal Pod Autoscaler (HPA) instead of the default KPA. For how to install HPA, see [Install optional Serving extensions](../../install/yaml-install/serving/install-serving-with-yaml.md#install-optional-serving-extensions).
* Configure the [types of metrics](autoscaling-metrics.md) that the Autoscaler consumes.
* Configure your Knative Service to use [container-freezer](container-freezer.md), which freezes the running process when the pod's traffic drops to zero. The most valuable benefit is reducing the cold-start time within this configuration.


@ -4,7 +4,7 @@ A demonstration of the autoscaling capabilities of a Knative Serving Revision.
## Prerequisites
1. A Kubernetes cluster with [Knative Serving](../../../install/serving/install-serving-with-yaml.md))
1. A Kubernetes cluster with [Knative Serving](../../../install/yaml-install/serving/install-serving-with-yaml.md))
installed.
1. The `hey` load generator installed (`go get -u github.com/rakyll/hey`).
1. Clone this repository, and move into the sample directory:


@ -4,7 +4,7 @@ Knative Serving supports the implementation of Knative Pod Autoscaler (KPA) and
!!! important
If you want to use Kubernetes Horizontal Pod Autoscaler (HPA), you must install it after you install Knative Serving.
For how to install HPA, see [Install optional Serving extensions](../../install/serving/install-serving-with-yaml.md#install-optional-serving-extensions).
For how to install HPA, see [Install optional Serving extensions](../../install/yaml-install/serving/install-serving-with-yaml.md#install-optional-serving-extensions).
## Knative Pod Autoscaler (KPA)


@ -78,9 +78,9 @@ Beta stage
GA stage
: The feature is allowed by default.
# Available Flags
## Available Flags
## Multiple containers
### Multiple containers
* **Type**: Feature
* **ConfigMap key:** `multi-container`
@ -105,7 +105,7 @@ spec:
image: gcr.io/knative-samples/helloworld-java
```
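With the `multi-container` flag enabled, a Service can declare a serving container plus sidecars, where exactly one container specifies a port. A sketch with illustrative names and images:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: multi-container-sample  # illustrative name
spec:
  template:
    spec:
      containers:
        # Exactly one container may declare ports; it receives the traffic
        - name: serving-container
          image: gcr.io/knative-samples/helloworld-go  # illustrative image
          ports:
            - containerPort: 8080
        # Sidecar containers must not declare ports
        - name: sidecar-container
          image: gcr.io/knative-samples/helloworld-java
```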
## Kubernetes EmptyDir Volume
### Kubernetes EmptyDir Volume
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-volumes-emptydir`
@ -129,12 +129,12 @@ spec:
emptyDir: {}
```
## Kubernetes PersistentVolumeClaim
### Kubernetes PersistentVolumeClaim (PVC)
* **Type**: Extension
* **ConfigMap keys:** `kubernetes.podspec-persistent-volume-claim` <br/> `kubernetes.podspec-persistent-volume-write`
This extension controls whether [`PersistentVolumeClaim`](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) can be specified
This extension controls whether [`PersistentVolumeClaim (PVC)`](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) can be specified
and whether write access is allowed for the corresponding volume.
```yaml
@ -157,7 +157,7 @@ spec:
readOnly: true
```
## Kubernetes node affinity
### Kubernetes node affinity
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-affinity`
@ -183,7 +183,7 @@ spec:
- e2e-az2
```
## Kubernetes host aliases
### Kubernetes host aliases
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-hostaliases`
@ -204,7 +204,7 @@ spec:
- "bar.local"
```
## Kubernetes node selector
### Kubernetes node selector
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-nodeselector`
@ -222,7 +222,7 @@ spec:
labelName: labelValue
```
## Kubernetes toleration
### Kubernetes toleration
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-tolerations`
@ -242,7 +242,7 @@ spec:
effect: "NoSchedule"
```
## Kubernetes Downward API
### Kubernetes Downward API
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-fieldref`
@ -266,7 +266,7 @@ spec:
fieldPath: spec.nodeName
```
## Kubernetes priority class name
### Kubernetes priority class name
- **Type**: extension
- **ConfigMap key:** `kubernetes.podspec-priorityclassname`
@ -284,7 +284,7 @@ spec:
...
```
## Kubernetes dry run
### Kubernetes dry run
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-dryrun`
@ -305,7 +305,7 @@ metadata:
...
```
## Kubernetes runtime class
### Kubernetes runtime class
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-runtimeclass`
@ -323,7 +323,7 @@ spec:
...
```
## Kubernetes security context
### Kubernetes security context
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-securitycontext`
@ -359,7 +359,7 @@ spec:
...
```
## Kubernetes security context capabilities
### Kubernetes security context capabilities
* **Type**: Extension
* **ConfigMap key**: `kubernetes.containerspec-addcapabilities`
@ -387,14 +387,14 @@ spec:
- NET_BIND_SERVICE
```
## Tag header based routing
### Tag header based routing
* **Type**: Extension
* **ConfigMap key:** `tag-header-based-routing`
This flag controls whether [tag header based routing](https://github.com/knative/docs/tree/main/code-samples/serving/tag-header-based-routing) is enabled.
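With the flag enabled, requests that carry the `Knative-Serving-Tag` header are routed to the Revision behind that tag. A sketch of the ConfigMap fragment, assuming the default `knative-serving` installation namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  tag-header-based-routing: "enabled"
```

A tagged Revision could then be reached with, for example, `curl -H "Knative-Serving-Tag: candidate" http://<service-domain>`, where `candidate` is an illustrative tag name.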
## Kubernetes init containers
### Kubernetes init containers
* **Type**: Extension
* **ConfigMap key:** `kubernetes.podspec-init-containers`

View File

@ -8,7 +8,7 @@ If you have configured additional security features, such as Istio's authorizati
You must meet the following prerequisites to use Istio AuthorizationPolicy:
- Istio must be used for your Knative Ingress.
See [Install a networking layer](../install/serving/install-serving-with-yaml.md#install-a-networking-layer).
See [Install a networking layer](../install/yaml-install/serving/install-serving-with-yaml.md#install-a-networking-layer).
- Istio sidecar injection must be enabled.
See the [Istio Documentation](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/).
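Sidecar injection is typically enabled per namespace with a label, for example (a sketch; the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default  # illustrative namespace
  labels:
    # Tell Istio to inject the sidecar proxy into Pods in this namespace
    istio-injection: enabled
```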

View File

@ -6,7 +6,7 @@ that are active when running Knative Serving.
## Before You Begin
1. This guide assumes that you have [installed Knative Serving](../install/serving/install-serving-with-yaml.md).
1. This guide assumes that you have [installed Knative Serving](../install/yaml-install/serving/install-serving-with-yaml.md).
2. Verify that you have the proper components in your cluster. To view the
services installed in your cluster, use the command:

View File

@ -6,7 +6,7 @@ You can create a Knative service by applying a YAML file or using the `kn servic
To create a Knative service, you will need:
* A Kubernetes cluster with Knative Serving installed. For more information, see [Installing Knative Serving](../../install/serving/install-serving-with-yaml.md).
* A Kubernetes cluster with Knative Serving installed. For more information, see [Installing Knative Serving](../../install/yaml-install/serving/install-serving-with-yaml.md).
* Optional: To use the `kn service create` command, you must [install the `kn` CLI](../../install/client/configure-kn.md).
## Procedure

View File

@ -20,7 +20,7 @@ serve a Knative Service at this domain.
## Prerequisites
- You must have access to a Kubernetes cluster, with Knative Serving and an Ingress implementation installed. For more information, see the [Serving Installation documentation](../../install/serving/install-serving-with-yaml.md).
- You must have access to a Kubernetes cluster, with Knative Serving and an Ingress implementation installed. For more information, see the [Serving Installation documentation](../../install/yaml-install/serving/install-serving-with-yaml.md).
- You must have the domain mapping feature enabled on your cluster.
- You must have access to [a Knative service](creating-services.md) that you can map a domain to.
- You must own or have access to a domain name to map, and be able to change the domain DNS to point to your Knative cluster by using the tools provided by your domain registrar.

View File

@ -7,7 +7,7 @@ Knative provides two ways to enable private services which are only available
inside the cluster:
1. To make all Knative Services private, change the default domain to
`svc.cluster.local` by [editing the `config-domain` ConfigMap](../using-a-custom-domain.md). This changes all Services deployed through Knative to only be published to the cluster.
`svc.cluster.local` by [editing the `config-domain` ConfigMap](https://github.com/knative/serving/blob/main/config/core/configmaps/domain.yaml). This changes all Services deployed through Knative to only be published to the cluster.
1. To make an individual Service private, the Service or Route can be
labelled with `networking.knative.dev/visibility=cluster-local` so that it is not published to the external gateway.
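The per-Service label can be set directly in the Service metadata. A sketch with an illustrative name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: private-sample  # illustrative name
  labels:
    # Keep this Service off the external gateway; it is reachable
    # only inside the cluster
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # illustrative image
```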

View File

@ -50,7 +50,7 @@ use and configure your certificate issuer to use the
You must meet the following requirements to enable secure HTTPS connections:
- Knative Serving must be installed. For details about installing the Serving
component, see the [Knative installation guides](../install/serving/install-serving-with-yaml.md).
component, see the [Knative installation guides](../install/yaml-install/serving/install-serving-with-yaml.md).
- You must configure your Knative cluster to use a
[custom domain](using-a-custom-domain.md).
@ -122,7 +122,7 @@ provisioning:
To use cert-manager to manually obtain certificates:
1. [Install and configure cert-manager](../install/serving/installing-cert-manager.md).
1. [Install and configure cert-manager](../install/installing-cert-manager.md).
1. Continue to the steps about
[manually adding a TLS certificate](#manually-adding-a-tls-certificate) by

View File

@ -9,10 +9,11 @@ Services. To learn more about using secure connections in Knative, see
The following must be installed on your Knative cluster:
- [Knative Serving](../install/serving/install-serving-with-yaml.md).
- A Networking layer such as Kourier, Istio with SDS v1.3 or higher, or Contour v1.1 or higher. See [Install a networking layer](../install/serving/install-serving-with-yaml.md#install-a-networking-layer) or [Istio with SDS, version 1.3 or higher](../install/serving/installing-istio.md#installing-istio-with-SDS-to-secure-the-ingress-gateway).
- [Knative Serving](../install/yaml-install/serving/install-serving-with-yaml.md).
- A Networking layer such as Kourier, Istio with SDS v1.3 or higher, or Contour v1.1 or higher. See [Install a networking layer](../install/yaml-install/serving/install-serving-with-yaml.md#install-a-networking-layer) or [Istio with SDS, version 1.3 or higher](../install/installing-istio.md#installing-istio-with-SDS-to-secure-the-ingress-gateway).
- [`cert-manager` version `1.0.0` or higher](../install/serving/installing-cert-manager.md).
- [`cert-manager` version `1.0.0` or higher](../install/installing-cert-manager.md).
- Your Knative cluster must be configured to use a [custom domain](using-a-custom-domain.md).
- Your DNS provider must be set up and configured for your domain.
- If you want to use HTTP-01 challenge, you need to configure your custom

View File

@ -9,6 +9,9 @@ Knative supports different popular tools for collecting metrics:
You can also set up the OpenTelemetry Collector to receive metrics from Knative components and distribute them to other metrics providers that support OpenTelemetry.
!!! warning
You can't use the OpenTelemetry Collector and Prometheus at the same time. The default metrics backend is Prometheus. To enable Prometheus metrics, you must remove the `metrics.backend-destination` and `metrics.request-metrics-backend-destination` keys from the `config-observability` ConfigMap.
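For reference, these are the two keys involved; both are deleted to fall back to Prometheus. A sketch showing only the relevant data, assuming the default `knative-serving` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-observability
  namespace: knative-serving
data:
  # Both keys point metrics at the OpenTelemetry Collector.
  # Delete both keys entirely to use the default Prometheus backend.
  metrics.backend-destination: opencensus
  metrics.request-metrics-backend-destination: opencensus
```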
## About Prometheus
[Prometheus](https://prometheus.io/) is an open-source tool for collecting,
@ -46,7 +49,7 @@ aggregating timeseries metrics and alerting. It can also be used to scrape the O
1. Grafana dashboards can be imported from the [`knative-sandbox` repository](https://github.com/knative-sandbox/monitoring/tree/main/grafana).
1. If you are using the Grafana Helm Chart with the Dashboard Sidecar configured, you can load the dashboards by applying the following configmap.
1. If you are using the Grafana Helm Chart with the Dashboard Sidecar enabled, you can load the dashboards by applying the following configmaps.
```bash
kubectl apply -f https://raw.githubusercontent.com/knative-sandbox/monitoring/main/grafana/dashboards.yaml

View File

@ -4,27 +4,29 @@ The `kn` CLI also simplifies completion of otherwise complex procedures such as
=== "Using Homebrew"
- Install `kn` by using [Homebrew](https://brew.sh){target=_blank}:
Do one of the following:
- To install `kn` by using [Homebrew](https://brew.sh){target=_blank}, run the command:
```bash
brew install kn
```
- Upgrade an existing install to the latest version by running the command:
- To upgrade an existing `kn` install to the latest version, run the command:
```bash
brew upgrade kn
```
??? bug "Having issues upgrading `kn` using Homebrew?"
??? bug "Having issues upgrading `kn` using Homebrew?"
If you are having issues upgrading using Homebrew, it might be due to a change to a CLI repository where the `master` branch was renamed to `main`. Resolve this issue by running the command:
If you are having issues upgrading using Homebrew, it might be due to a change to a CLI repository where the `master` branch was renamed to `main`. Resolve this issue by running the command:
```bash
brew tap --repair
brew update
brew upgrade kn
```
```bash
brew tap --repair
brew update
brew upgrade kn
```
=== "Using a binary"

View File

@ -6,7 +6,7 @@ Before installing Knative, you must meet the following prerequisites:
!!! tip
You can install a local distribution of Knative for development purposes
using the [Knative Quickstart plugin](/docs/getting-started/quickstart-install.md)
using the [Knative Quickstart plugin](/docs/getting-started/quickstart-install/)
- **For production purposes**, it is recommended that:

View File

@ -1,69 +1,132 @@
!!! todo "Installing the `quickstart` plugin"
=== "Using Homebrew"
For macOS, you can install the `quickstart` plugin by using [Homebrew](https://brew.sh){target=_blank}.
```
brew install knative-sandbox/kn-plugins/quickstart
```
# Install Knative using quickstart
=== "Using a binary"
You can install the `quickstart` plugin by downloading the executable binary for your system and placing it on your `PATH` (for example, in `/usr/local/bin`).
This topic describes how to install a local deployment of Knative Serving and
Eventing using the Knative `quickstart` plugin.
A link to the latest stable binary release is available on the [`quickstart` release page](https://github.com/knative-sandbox/kn-plugin-quickstart/releases){target=_blank}.
The plugin installs a preconfigured Knative deployment on a local Kubernetes cluster.
=== "Using Go"
1. Check out the `kn-plugin-quickstart` repository:
!!! warning
Knative `quickstart` environments are for experimentation use only.
For a production ready installation, see the [YAML-based installation](/docs/install/yaml-install/)
or the [Knative Operator installation](/docs/install/operator/knative-with-operators/).
```
git clone https://github.com/knative-sandbox/kn-plugin-quickstart.git
cd kn-plugin-quickstart/
```
## Before you begin
1. Build an executable binary:
Before you can get started with a Knative `quickstart` deployment, you must install:
```
hack/build.sh
```
- [kind](https://kind.sigs.k8s.io/docs/user/quick-start){target=_blank} (Kubernetes in Docker)
or [minikube](https://minikube.sigs.k8s.io/docs/start/){target=_blank} to enable
you to run a local Kubernetes cluster with Docker container nodes.
- The [Kubernetes CLI (`kubectl`)](https://kubernetes.io/docs/tasks/tools/install-kubectl){target=_blank}
to run commands against Kubernetes clusters.
You can use `kubectl` to deploy applications, inspect and manage cluster resources, and view logs.
- The Knative CLI (`kn`) v0.25 or later. For instructions, see the next section.
1. Move the executable binary file to a directory on your `PATH`:
### Install the Knative CLI
```
mv kn-quickstart /usr/local/bin
```
--8<-- "install-kn.md"
1. Verify that the plugin is working, for example:
## Install the Knative quickstart plugin
```
kn quickstart --help
```
To get started, install the Knative `quickstart` plugin:
=== "Using Homebrew"
Do one of the following:
- To install the `quickstart` plugin by using [Homebrew](https://brew.sh){target=_blank}, run the command:
```bash
brew install knative-sandbox/kn-plugins/quickstart
```
- To upgrade an existing `quickstart` install to the latest version, run the command:
```bash
brew upgrade knative-sandbox/kn-plugins/quickstart
```
=== "Using a binary"
1. Download the executable binary for your system from the [`quickstart` release page](https://github.com/knative-sandbox/kn-plugin-quickstart/releases){target=_blank}.
1. Move the executable binary file to a directory on your `PATH`, for example, in `/usr/local/bin`.
1. Verify that the plugin is working, for example:
```bash
kn quickstart --help
```
=== "Using Go"
1. Check out the `kn-plugin-quickstart` repository:
```bash
git clone https://github.com/knative-sandbox/kn-plugin-quickstart.git
cd kn-plugin-quickstart/
```
1. Build an executable binary:
```bash
hack/build.sh
```
1. Move the executable binary file to a directory on your `PATH`:
```bash
mv kn-quickstart /usr/local/bin
```
1. Verify that the plugin is working, for example:
```bash
kn quickstart --help
```
## Run the Knative quickstart plugin
The `quickstart` plugin performs the following functions:
1. **Checks if you have the selected Kubernetes instance installed,** and creates a cluster called `knative`.
2. **Installs Knative Serving with Kourier** as the default networking layer, and sslip.io as the DNS.
3. **Installs Knative Eventing** and creates an in-memory Broker and Channel implementation.
1. **Checks if you have the selected Kubernetes instance installed**
1. **Creates a cluster called `knative`**
1. **Installs Knative Serving** with Kourier as the default networking layer, and sslip.io as the DNS
1. **Installs Knative Eventing** and creates an in-memory Broker and Channel implementation
!!! todo "Install Knative and Kubernetes locally"
=== "Using kind"
To get a local deployment of Knative, run the `quickstart` plugin:
=== "Using kind"
1. Install Knative and Kubernetes on a local Docker daemon by running:
Install Knative and Kubernetes on a local Docker daemon by running:
```bash
kn quickstart kind
```
After the plugin is finished, verify you have a cluster called `knative`:
1. After the plugin is finished, verify you have a cluster called `knative`:
```bash
kind get clusters
```
=== "Using minikube"
=== "Using minikube"
1. Install Knative and Kubernetes in a minikube instance by running:
Install Knative and Kubernetes in a minikube instance by running:
```bash
kn quickstart minikube
```
After the plugin is finished, verify you have a cluster called `knative`:
1. After the plugin is finished, verify you have a cluster called `knative`:
```bash
minikube profile list
```
1. To finish setting up networking for minikube, start the `minikube tunnel` process in a separate terminal window:
```bash
minikube tunnel --profile knative
```
The tunnel must continue to run in a terminal window while you are using your Knative `quickstart` environment.
!!! note
To terminate the process and clean up network routes, enter `Ctrl-C`.
For more information about the `minikube tunnel` command, see the [minikube documentation](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
