Updates for admin guide, misc (#3601)

* Updates

* Create admin guide: mkdocs

* removed duplicate, update yml

* fix buttons, yml

* new naming, other updates

* buttons

* fixed nav

* change index to readme

* improve upgrade docs

* mkdocs-awesome-pages-plugin>=2.5 for build

* add redirects to yml

* redirects, moved metrics

* update yml, clean up metrics docs

* cleanup

* redirects plugin

* min version plugin

* Fixing formatting on version check

* Adding content blocks

Co-authored-by: Omer B <obensaadon@vmware.com>
Ashleigh Brennan 2021-05-19 17:18:16 -05:00 committed by GitHub
parent e2749dac0f
commit d88e2fa0e5
50 changed files with 510 additions and 771 deletions

docs/admin/README.md

@ -0,0 +1,3 @@
# Administration guide
The following topics provide information for cluster administrators about how to complete administration tasks and use administration tools for a Knative cluster.


@ -0,0 +1,176 @@
# Logging
You can use [Fluent Bit](https://docs.fluentbit.io/), a log processor and forwarder, to collect Kubernetes logs in a central directory. This is not required to run Knative, but can be helpful with [Knative Serving](../../serving), which automatically deletes pods and their associated logs when they are no longer needed.
Fluent Bit supports exporting to a number of other log providers. If you already have an existing log provider, for example, Splunk, Datadog, ElasticSearch, or Stackdriver, you can follow the [FluentBit documentation](https://docs.fluentbit.io/manual/pipeline/outputs) to configure log forwarders.
## Setting up logging components
Setting up log collection requires two steps:
1. Running a log forwarding DaemonSet on each node.
2. Running a collector somewhere in the cluster.
!!! tip
In the following example, a StatefulSet is used, which stores logs on a Kubernetes PersistentVolumeClaim, but you can also use a HostPath.
### Setting up the collector
The `fluent-bit-collector.yaml` file defines a StatefulSet, as well as a Kubernetes Service that allows you to access and read the logs from within the cluster. The supplied configuration creates the monitoring configuration in a namespace called `logging`.
!!! important
Set up the collector before the forwarders. You will need the address of the collector when configuring the forwarders, and the forwarders may queue logs until the collector is ready.
![System diagram: forwarders and co-located collector and nginx](system.svg)
<!-- yuml.me UML rendering of:
[Forwarder1]logs->[Collector]
[Forwarder2]logs->[Collector]
// Add notes
[Collector]->[shared volume]
[nginx]-[shared volume]
-->
#### Procedure
1. Apply the configuration by entering the command:
```shell
kubectl apply -f https://github.com/knative/docs/raw/main/docs/admin/install/collecting-logs/fluent-bit-collector.yaml
```
The default configuration classifies logs into:
- Knative services.
- Apps, meaning pods that have an `app` label but are not part of Knative.
!!! note
All other logs default to being grouped by the pod name. You can change this classification by updating the `log-collector-config` ConfigMap before or after installation.
!!! warning
After the ConfigMap is updated, you must restart Fluent Bit. You can do this by deleting the pod and letting the StatefulSet recreate it.
2. To access the logs through your web browser, enter the command:
```shell
kubectl port-forward --namespace logging service/log-collector 8080:80
```
3. Navigate to `http://localhost:8080/`.
4. Optional: You can open a shell in the `nginx` pod and search the logs using Unix tools, by entering the command:
```shell
kubectl exec --namespace logging --stdin --tty --container nginx log-collector-0 -- /bin/sh
```
### Setting up the forwarders
See the [Fluent Bit](https://docs.fluentbit.io/manual/installation/kubernetes) documentation to set up a Fluent Bit DaemonSet that forwards logs to ElasticSearch by default.
When you create a ConfigMap during the installation steps, you must:
- Replace the ElasticSearch configuration with the [`fluent-bit-configmap.yaml`](./fluent-bit-configmap.yaml), or
- Add the following block to the ConfigMap, and update the
`@INCLUDE output-elasticsearch.conf` to be `@INCLUDE output-forward.conf`:
```yaml
output-forward.conf: |
[OUTPUT]
Name forward
Host log-collector.logging
Port 24224
Require_ack_response True
```
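For orientation, after this change the forwarder ConfigMap contains a block similar to the following sketch. The `@INCLUDE` file names follow the upstream Fluent Bit Kubernetes example and are assumptions, not part of this guide:

```yaml
# Sketch of the relevant ConfigMap entries after the edit (section names
# assumed to match the upstream Fluent Bit Kubernetes example).
fluent-bit.conf: |
  [SERVICE]
      Flush         1
      Log_Level     info
  @INCLUDE input-kubernetes.conf
  @INCLUDE filter-kubernetes.conf
  @INCLUDE output-forward.conf

output-forward.conf: |
  [OUTPUT]
      Name                  forward
      Host                  log-collector.logging
      Port                  24224
      Require_ack_response  True
```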
### Setting up a local collector
!!! warning
This procedure describes a development environment setup and is not suitable for production use.
If you are using a local Kubernetes cluster for development, you can create a `hostPath` PersistentVolume to store the logs on your desktop operating system. This allows you to use your usual desktop tools on the files without needing Kubernetes-specific tools.
The `PersistentVolume` will look similar to the following:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-logs
  labels:
    app: logs-collector
spec:
  accessModes:
    - "ReadWriteOnce"
  storageClassName: manual
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: logs-log-collector-0
    namespace: logging
  capacity:
    storage: 5Gi
  hostPath:
    path: <see below>
```
!!! note
The `hostPath` will vary based on your Kubernetes software and host operating system.
You must update the StatefulSet `volumeClaimTemplates` to reference the `shared-logs` volume, as shown in the following example:
```yaml
volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeName: shared-logs
```
### Kind
When creating your cluster, you must use a `kind-config.yaml` and specify
`extraMounts` for each node, as shown in the following example:
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: ./logs
        containerPath: /shared/logs
  - role: worker
    extraMounts:
      - hostPath: ./logs
        containerPath: /shared/logs
```
You can then use `/shared/logs` as the `spec.hostPath.path` in your
PersistentVolume. Note that the directory path `./logs` is relative to the
directory that the Kind cluster was created in.
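Putting these pieces together, the `shared-logs` PersistentVolume from the earlier example would reference the shared mount as shown in the following excerpt:

```yaml
# Excerpt: hostPath for the shared-logs PersistentVolume on a Kind cluster
spec:
  hostPath:
    path: /shared/logs
```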
### Docker Desktop
Docker Desktop automatically creates some shared mounts between the host and the
guest operating systems, so you only need to know the path to your home
directory. The following are some examples for different operating systems:
| Host OS | `hostPath` |
| ------- | ---------------------------------------- |
| Mac OS | `/Users/${USER}` |
| Windows | `/run/desktop/mnt/host/c/Users/${USER}/` |
| Linux | `/home/${USER}` |
### Minikube
Minikube requires an explicit command to [mount a directory](https://minikube.sigs.k8s.io/docs/handbook/mount/) into the virtual machine (VM) running Kubernetes.
The following command mounts the `logs` directory inside the current directory onto `/mnt/logs` in the VM:
```shell
minikube mount ./logs:/mnt/logs
```
You must also reference `/mnt/logs` as the `hostPath.path` in the PersistentVolume.
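Correspondingly, the PersistentVolume for a Minikube setup would use the mount point inside the VM, as shown in the following excerpt:

```yaml
# Excerpt: hostPath for the shared-logs PersistentVolume on Minikube
spec:
  hostPath:
    path: /mnt/logs
```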


@ -0,0 +1,108 @@
# Collecting Metrics with OpenTelemetry
You can set up the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) to receive metrics from Knative components and distribute them to Prometheus.
## About OpenTelemetry
OpenTelemetry is a CNCF observability framework for cloud-native software, which provides a collection of tools, APIs, and SDKs.
You can use OpenTelemetry to instrument, generate, collect, and export telemetry data. This data includes metrics, logs, and traces, which you can analyze to understand the performance and behavior of Knative components.
OpenTelemetry allows you to easily export metrics to multiple monitoring services without needing to rebuild or reconfigure the Knative binaries.
## Understanding the collector
The collector provides a location where various Knative components can push metrics to be retained and collected by a monitoring service.
In the following example, you can configure a single collector instance using a ConfigMap and a Deployment.
!!! tip
For more complex deployments, you can automate some of these steps by using the [OpenTelemetry Operator](https://github.com/open-telemetry/opentelemetry-operator).
![Diagram of components reporting to collector, which is scraped by Prometheus](./system-diagram.svg)
<!-- yuml.me UML rendering of:
[queue-proxy1]->[Collector]
[queue-proxy2]->[Collector]
[autoscaler]->[Collector]
[controller]->[Collector]
[Collector]<-scrape[Prometheus]
-->
## Set up the collector
1. Create a namespace for the collector to run in, by entering the following command:
```shell
kubectl create namespace <namespace>
```
Where `<namespace>` is the name of the namespace that you want to create for the collector. The remaining steps assume that this namespace is named `metrics`.
1. Create a Deployment, Service, and ConfigMap for the collector by entering the following command:
```shell
kubectl apply -f https://raw.githubusercontent.com/knative/docs/master/docs/install/collecting-metrics/collector.yaml
```
1. Update the `config-observability` ConfigMaps in the Knative Serving and
Eventing namespaces, by entering the following command:
```shell
kubectl patch --namespace knative-serving configmap/config-observability \
--type merge \
--patch '{"data":{"metrics.backend-destination":"opencensus","request-metrics-backend-destination":"opencensus","metrics.opencensus-address":"otel-collector.metrics:55678"}}'
kubectl patch --namespace knative-eventing configmap/config-observability \
--type merge \
--patch '{"data":{"metrics.backend-destination":"opencensus","metrics.opencensus-address":"otel-collector.metrics:55678"}}'
```
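After the patch, the `data` section of the ConfigMap in the `knative-serving` namespace contains entries similar to the following sketch, derived directly from the patch payload above:

```yaml
# Sketch: config-observability data in knative-serving after the patch
data:
  metrics.backend-destination: opencensus
  request-metrics-backend-destination: opencensus
  metrics.opencensus-address: otel-collector.metrics:55678
```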
## Verify the collector setup
1. Check that metrics are being forwarded by exposing the Prometheus export port on the collector. Enter the following command:
```shell
kubectl port-forward --namespace metrics deployment/otel-collector 8889
```
1. Fetch `http://localhost:8889/metrics` to see the exported metrics.
## About Prometheus
[Prometheus](https://prometheus.io/) is an open-source tool for collecting and
aggregating time series metrics. It can be used to scrape the OpenTelemetry collector that you created in the previous step.
## Setting up Prometheus
1. Install the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) by entering the following command:
```shell
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml
```
!!! caution
The manifest provided installs the Prometheus Operator into the `default` namespace. If you want to install the Operator in a different namespace, you must download the [YAML manifest](https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml) and update any namespace references to your target namespace.
1. Create a `ServiceMonitor` object to track the OpenTelemetry collector.
1. Create a `ServiceAccount` object with the ability to read Kubernetes services and pods, so that Prometheus can track the resource endpoints.
1. Apply the `prometheus.yaml` file to create a Prometheus instance, by entering the following command:
```shell
kubectl apply -f prometheus.yaml
```
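The `prometheus.yaml` file referenced above is not reproduced here. As a rough illustration, a `ServiceMonitor` for the collector could look similar to the following sketch; the label selector and port name are assumptions and must match your collector Service:

```yaml
# Hypothetical ServiceMonitor targeting the OpenTelemetry collector Service.
# The matchLabels value and port name must match your collector Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: otel-collector
  namespace: metrics
spec:
  selector:
    matchLabels:
      app: otel-collector
  endpoints:
    - port: prometheus
```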
<!--TODO: Add links / commands for the two steps above?-->
### Make the Prometheus instance public
By default, the Prometheus instance is only exposed on a private service named `prometheus-operated`.
To access the console in your web browser:
1. Enter the command:
```shell
kubectl port-forward --namespace metrics service/prometheus-operated 9090
```
1. Access the console in your browser at `http://localhost:9090`.


@ -24,7 +24,7 @@ showlandingtoc: "false"
# Installing Knative
!!! tip
You can install a local distribution of Knative for development use by following the [Getting started guide](../getting-started/){_blank}.
You can install the Serving component, Eventing component, or both on your cluster by using one of the following deployment options:
@ -36,7 +36,3 @@ You can also [upgrade an existing Knative installation](./upgrade-installation).
**NOTE:** Knative installation instructions assume you are running Mac or Linux with a bash shell.
<!-- TODO: Link to provisioning guide for advanced installation -->
## Next steps
- Install the [Knative CLI](../client/install-kn) to use `kn` commands.


@ -0,0 +1,18 @@
# Prerequisites
Before installing Knative, you must meet the following prerequisites:
- **For prototyping purposes**, Knative will work on most local deployments of Kubernetes. For example, you can use a local, one-node cluster that has 2 CPUs and 4 GB of memory.
!!! tip
You can install a local distribution of Knative for development use by following the [Getting started guide](../../../getting-started/){_blank}.
- **For production purposes**, the following system requirements are recommended:
- If you have only one node in your cluster, you will need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage.
- If you have multiple nodes in your cluster, for each node you will need at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage.
- You have a cluster that uses Kubernetes v1.18 or newer.
- You have installed the [`kubectl` CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
- Your Kubernetes cluster must have access to the internet, since Kubernetes needs to be able to fetch images. To pull from a private registry, see [Deploying images from a private container registry](../../../serving/deploying-from-private-registry).
!!! caution
The system requirements provided are recommendations only. The requirements for your installation may vary, depending on whether you use optional components, such as a networking layer.


@ -1,16 +1,11 @@
# Upgrading Knative
Knative supports upgrading by a single [minor](https://semver.org/) version number. For example, if you have v0.21.0 installed, you must upgrade to v0.22.0 before attempting to upgrade to v0.23.0.
To verify the version of your current Knative installation:
=== "Knative Serving"
Check the installed **Knative Serving** version by entering the following command:
```bash
kubectl get KnativeServing knative-serving --namespace knative-serving
@ -18,12 +13,13 @@ To verify the version of your current Knative installation:
Example output:
```{ .bash .no-copy }
NAME VERSION READY REASON
knative-serving 0.23.0 True
```
=== "Knative Eventing"
Check the installed **Knative Eventing** version by entering the following command:
```bash
kubectl get KnativeEventing knative-eventing --namespace knative-eventing
@ -31,7 +27,7 @@ To verify the version of your current Knative installation:
Example output:
```{ .bash .no-copy }
NAME VERSION READY REASON
knative-eventing 0.23.0 True
```


@ -0,0 +1,139 @@
# Upgrading using the Knative Operator
The attribute `spec.version` is the only field you need to change in the
Serving or Eventing custom resource to perform an upgrade. You do not need to specify the `patch` number, because the Knative Operator matches the latest available `patch` number as long as you specify `major.minor` for the version. For example, to upgrade to the 0.23 release, you only need to specify `"0.23"`; you do not need to specify the exact `patch` number.
The Knative Operator supports the current release and the three previous minor releases. For example, if the current version of the Operator is 0.23, it bundles and supports the installation of Knative versions 0.20, 0.21, 0.22, and 0.23.
!!! note
In the following examples, Knative Serving custom resources are installed in the `knative-serving` namespace, and Knative Eventing custom resources are installed in the `knative-eventing` namespace.
## Performing the upgrade
To upgrade, apply the Operator custom resources, adding the `spec.version` for the Knative version that you want to upgrade to:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: "0.23"
EOF
```
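To upgrade Knative Eventing in the same way, apply the corresponding `KnativeEventing` resource:

```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  version: "0.23"
EOF
```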
## Verifying an upgrade by viewing pods
You can confirm that your Knative components have upgraded successfully by viewing the status of the pods for the components in the relevant namespace.
!!! note
All pods will restart during the upgrade and their age will reset.
=== "Knative Serving"
Enter the following command to view information about pods in the `knative-serving` namespace:
```bash
kubectl get pods --namespace knative-serving
```
The command returns an output similar to the following:
```{ .bash .no-copy }
NAME READY STATUS RESTARTS AGE
activator-6875896748-gdjgs 1/1 Running 0 58s
autoscaler-6bbc885cfd-vkrgg 1/1 Running 0 57s
autoscaler-hpa-5cdd7c6b69-hxzv4 1/1 Running 0 55s
controller-64dd4bd56-wzb2k 1/1 Running 0 57s
istio-webhook-75cc84fbd4-dkcgt 1/1 Running 0 50s
networking-istio-6dcbd4b5f4-mxm8q 1/1 Running 0 51s
storage-version-migration-serving-serving-0.20.0-82hjt 0/1 Completed 0 50s
webhook-75f5d4845d-zkrdt 1/1 Running 0 56s
```
=== "Knative Eventing"
Enter the following command to view information about pods in the `knative-eventing` namespace:
```bash
kubectl get pods --namespace knative-eventing
```
The command returns an output similar to the following:
```{ .bash .no-copy }
NAME READY STATUS RESTARTS AGE
eventing-controller-6bc59c9fd7-6svbm 1/1 Running 0 38s
eventing-webhook-85cd479f87-4dwxh 1/1 Running 0 38s
imc-controller-97c4fd87c-t9mnm 1/1 Running 0 33s
imc-dispatcher-c6db95ffd-ln4mc 1/1 Running 0 33s
mt-broker-controller-5f87fbd5d9-m69cd 1/1 Running 0 32s
mt-broker-filter-5b9c64cbd5-d27p4 1/1 Running 0 32s
mt-broker-ingress-55c66fdfdf-gn56g 1/1 Running 0 32s
storage-version-migration-eventing-0.20.0-fvgqf 0/1 Completed 0 31s
sugar-controller-684d5cfdbb-67vsv 1/1 Running 0 31s
```
<!-- TODO: Make this a snippet for verifying all installations-->
## Verifying an upgrade by viewing custom resources
You can verify the status of a Knative component by checking that the custom resource `READY` status is `True`.
=== "Knative Serving"
```bash
kubectl get KnativeServing knative-serving --namespace knative-serving
```
This command returns an output similar to the following:
```{ .bash .no-copy }
NAME VERSION READY REASON
knative-serving 0.20.0 True
```
=== "Knative Eventing"
```bash
kubectl get KnativeEventing knative-eventing --namespace knative-eventing
```
This command returns an output similar to the following:
```{ .bash .no-copy }
NAME VERSION READY REASON
knative-eventing 0.20.0 True
```
<!--- END snippet-->
## Rollback to an earlier version
If the upgrade fails, you can roll back to restore Knative to the previous version. For example, if something goes wrong with an upgrade to 0.23 and your previous version is 0.22, you can apply the following custom resources to restore Knative Serving and Knative Eventing to version 0.22.
=== "Knative Serving"
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  version: "0.22"
EOF
```
=== "Knative Eventing"
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  version: "0.22"
EOF
```


@ -1,11 +1,3 @@
# Upgrading Knative
You can use the `kubectl apply` command to upgrade your Knative components and plugins.
@ -64,13 +56,11 @@ For the various subprojects there is a K8s job to help operators perform this mi
## Performing the upgrade
To upgrade, apply the YAML files for the subsequent minor versions of all your installed Knative components and features, remembering to only upgrade by one minor version at a time. For a cluster running version 0.22 of the Knative Serving and Knative Eventing components, the following command upgrades the installation to v0.23.0:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/v0.23.0/serving-core.yaml \
-f https://github.com/knative/eventing/releases/download/v0.23.0/eventing.yaml
```
### Run post-install tools after the upgrade


@ -12,11 +12,11 @@ To obtain the version of the Knative component that you have running on your clu
=== "Knative Serving"
```
kubectl get namespace knative-serving -o 'go-template={{index .metadata.labels "serving.knative.dev/release"}}'
```
=== "Knative Eventing"
```
kubectl get namespace knative-eventing -o 'go-template={{index .metadata.labels "eventing.knative.dev/release"}}'
```


@ -28,12 +28,12 @@ Knative uses a number of extensions to MkDocs which can also be installed using
=== "pip"
```
pip install mkdocs-material-extensions mkdocs-macros-plugin mkdocs-exclude mkdocs-awesome-pages-plugin mkdocs-redirects
```
=== "pip3"
```
pip3 install mkdocs-material-extensions mkdocs-macros-plugin mkdocs-exclude mkdocs-awesome-pages-plugin mkdocs-redirects
```
## Use the Docker Container


@ -1,88 +0,0 @@
This document describes how to set up [Fluent Bit](https://docs.fluentbit.io/),
a log processor and forwarder, to collect your kubernetes logs in a central
directory. This is not required for running Knative, but can be helpful with
[Knative Serving](../serving), which will automatically delete pods (and their
associated logs) when they are no longer needed. Note that Fluent Bit supports
exporting to a number of other log providers; if you already have an existing
log provider (for example, Splunk, Datadog, ElasticSearch, or Stackdriver), then
you may only need
[the second part of setting up and configuring log forwarders](#setting-up-the-forwarders).
Setting up log collection consists of two pieces: running a log forwarding
DaemonSet on each node, and running a collector somewhere in the cluster (in our
example, we use a StatefulSet which stores logs on a Kubernetes
PersistentVolumeClaim, but you could also use a HostPath).
## Setting up the collector
It's useful to set up the collector before the forwarders, because you'll need
the address of the collector when configuring the forwarders, and the forwarders
may queue logs until the collector is ready.
![System diagram: forwarders and co-located collector and nginx](system.svg)
<!-- yuml.me UML rendering of:
[Forwarder1]logs->[Collector]
[Forwarder2]logs->[Collector]
// Add notes
[Collector]->[shared volume]
[nginx]-[shared volume]
-->
The [`fluent-bit-collector.yaml`](./fluent-bit-collector.yaml) defines a
StatefulSet as well as a Kubernetes Service which allows accessing and reading
the logs from within the cluster. The supplied configuration will create the
monitoring configuration in a namespace called `logging`. You can apply the
configuration with:
```bash
kubectl apply --filename https://github.com/knative/docs/raw/main/docs/install/collecting-logs/fluent-bit-collector.yaml
```
The default configuration will classify logs into Knative, apps (pods with an
`app=` label which aren't Knative), and the default to logging with the pod
name; this can be changed by updating the `log-collector-config` ConfigMap
before or after installation. Once the ConfigMap is updated, you'll need to
restart Fluent Bit (for example, by deleting the pod and letting the StatefulSet
recreate it).
To access the logs through your web browser:
```shell
kubectl port-forward --namespace logging service/log-collector 8080:80
```
And then visit http://localhost:8080/.
You can also open a shell in the `nginx` pod and search the logs using unix
tools:
```
kubectl exec --namespace logging --stdin --tty --container nginx log-collector-0
```
## Setting up the forwarders
For the most part, you can follow the
[Fluent Bit directions for installing on Kubernetes](https://docs.fluentbit.io/manual/installation/kubernetes).
Those directions will set up a Fluent Bit DaemonSet which forwards logs to
ElasticSearch by default; when the directions call for creating the ConfigMap,
you'll want to either replace the elasticsearch configuration with
[this `fluent-bit-configmap.yaml`](./fluent-bit-configmap.yaml) or add the
following block to the ConfigMap and update the
`@INCLUDE output-elasticsearch.conf` to be `@INCLUDE output-forward.conf`.
```yaml
output-forward.conf: |
[OUTPUT]
Name forward
Host log-collector.logging
Port 24224
Require_ack_response True
```
If you are using a different log collection infrastructure (Splunk, for
example),
[follow the directions in the FluentBit documentation](https://docs.fluentbit.io/manual/pipeline/outputs)
on how to configure your forwarders.


@ -1,109 +0,0 @@
---
title: "Collecting Metrics with OpenTelemetry"
linkTitle: "Collecting metrics"
weight: 50
type: "docs"
---
# Collecting Metrics with OpenTelemetry
This document describes how to set up the
[OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) to receive
metrics from the Knative infrastructure components and distribute them to
Prometheus. [OpenTelemetry](https://opentelemetry.io/) is a CNCF
observability framework for cloud-native software. The project provides a
collection of tools, APIs, and SDKs. You use it to instrument, generate,
collect, and export telemetry data (metrics, logs, and traces) for analysis in
order to understand your software's performance and behavior. OpenTelemetry
allows Knative to build provider-agnostic instrumentation into the platform, so
that it's easy to export metrics to multiple monitoring services without
needing to rebuild or reconfigure the Knative binaries.
## Setting up the collector
The collector provides a long-lived location where various Knative components
can push metrics (and eventually traces) to be retained and collected by a
monitoring service. For this example, we'll configure a single collector
instance using a ConfigMap and a Deployment. For more complex deployments, some
of this can be automated using the
[opentelemetry-operator](https://github.com/open-telemetry/opentelemetry-operator),
but it's also easy to manage this service directly. Note that you can attach
other components (node agents, other services); this is just a simple sample.
![Diagram of components reporting to collector, which is scraped by Prometheus](./system-diagram.svg)
<!-- yuml.me UML rendering of:
[queue-proxy1]->[Collector]
[queue-proxy2]->[Collector]
[autoscaler]->[Collector]
[controller]->[Collector]
[Collector]<-scrape[Prometheus]
-->
1. First, create a namespace for the collector to run in:
```shell
kubectl create namespace metrics
```
1. And then create a Deployment, Service, and ConfigMap for the collector:
```shell
kubectl apply --filename https://raw.githubusercontent.com/knative/docs/master/docs/install/collecting-metrics/collector.yaml
```
1. Finally, update the `config-observability` ConfigMap in Knative Serving and
Eventing
```shell
kubectl patch --namespace knative-serving configmap/config-observability \
--type merge \
--patch '{"data":{"metrics.backend-destination":"opencensus","request-metrics-backend-destination":"opencensus","metrics.opencensus-address":"otel-collector.metrics:55678"}}'
kubectl patch --namespace knative-eventing configmap/config-observability \
--type merge \
--patch '{"data":{"metrics.backend-destination":"opencensus","metrics.opencensus-address":"otel-collector.metrics:55678"}}'
```
You can check that metrics are being forwarded by loading the Prometheus export
port on the collector:
```shell
kubectl port-forward --namespace metrics deployment/otel-collector 8889
```
And then fetch http://localhost:8889/metrics to see the exported metrics.
## Setting up Prometheus
[Prometheus](https://prometheus.io/) is an open-source tool for collecting and
aggregating timeseries metrics. Full configuration of Prometheus can be found at
the website, but this document will provide a simple setup for scraping the
OpenTelemetry Collector we set up in the previous section.
1. Install the
[Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator).
Note that the provided manifest installs the operator into the `default`
namespace. If you want to install into another namespace, you'll need to
download the YAML manifest and update all the namespace references to your
target namespace.
```shell
kubectl apply --filename https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/master/bundle.yaml
```
1. You'll then need to set up a ServiceMonitor object to track the OpenTelemetry
Collector, as well as a ServiceAccount with the ability to read Kubernetes
services and pods (so that Prometheus can track the resource endpoints) and
finally a Prometheus object to instantiate the actual Prometheus instance.
```shell
kubectl apply --filename prometheus.yaml
```
By default, the Prometheus instance will only be exposed on a private service
named `prometheus-operated`; to access the console in your web browser, run:
```shell
kubectl port-forward --namespace metrics service/prometheus-operated 9090
```
And then access the console in your browser via http://localhost:9090.


@ -1,42 +0,0 @@
---
title: "Prerequisites"
weight: 01
type: "docs"
showlandingtoc: "false"
---
# Prerequisites
!!! tip
If you're installing Knative for the first time, a better place to start may be [Getting Started](../getting-started/getting-started.md).
Before installing Knative, you must meet the following prerequisites:
## System requirements
**For prototyping purposes**, Knative will work on most local deployments of Kubernetes.
For example, you can use a local, one-node cluster that has 2 CPU and 4GB of memory.
**For production purposes**, it is recommended that:
- If you have only one node in your cluster, you will need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage.
- If you have multiple nodes in your cluster, for each node you will need at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage.
<!--TODO: Verify these requirements-->
**NOTE:** The system requirements provided are recommendations only.
The requirements for your installation may vary, depending on whether you use optional components, such as a networking layer.
## Prerequisites
Before installation, you must meet the following prerequisites:
- You have a cluster that uses Kubernetes v1.18 or newer.
- You have installed the [`kubectl` CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
- Your Kubernetes cluster must have access to the internet, since Kubernetes needs to be able to fetch images. (To pull from a private registry, see [Deploying images from a private container registry](https://knative.dev/docs/serving/deploying/private-registry/))
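As a quick sanity check of the cluster version requirement, you can compare the server version reported by `kubectl` against the v1.18 minimum. The following is only a sketch; the sample `SERVER_VERSION` value is an assumption, and in practice you would populate it from `kubectl version`:

```shell
# Obtain the cluster version, for example with:
#   SERVER_VERSION=$(kubectl version -o json | jq -r .serverVersion.gitVersion)
# A sample value is used here so the check can be read on its own.
SERVER_VERSION="${SERVER_VERSION:-v1.20.7}"

# Split "v1.20.7" into its major and minor components.
MAJOR=$(echo "$SERVER_VERSION" | sed 's/^v//' | cut -d. -f1)
MINOR=$(echo "$SERVER_VERSION" | sed 's/^v//' | cut -d. -f2)

# Compare against the v1.18 minimum.
if [ "$MAJOR" -gt 1 ] || { [ "$MAJOR" -eq 1 ] && [ "$MINOR" -ge 18 ]; }; then
  echo "OK: $SERVER_VERSION meets the v1.18 minimum"
else
  echo "Too old: $SERVER_VERSION is below the v1.18 minimum"
fi
```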
## Next Steps: Install Knative Serving and Eventing
You can install the Serving component, Eventing component, or both on your cluster. If you're planning on installing both, **we recommend starting with Knative Serving.**
- [Installing Knative Serving using YAML files](./install-serving-with-yaml.md)
- [Installing Knative Eventing using YAML files](./install-eventing-with-yaml.md)
---
title: "Upgrading your installation with Knative operator"
weight: 21
type: "docs"
---
# Upgrading your installation with Knative operator
The Knative operator supports a straightforward upgrade process, and allows upgrading the Knative components
by a single [minor](https://semver.org/) version number at a time. For example, if you have v0.17 installed, you must upgrade to
v0.18 before attempting to upgrade to v0.19. The attribute `spec.version` is the only field you need to change in the
Serving or Eventing CR to perform an upgrade. You do not need to specify the `patch` number; as long as you specify
`major.minor` for the version, the Knative Operator matches the latest available `patch` release. For example, you only
need to specify `0.19` to upgrade to the latest v0.19 release.
The Knative Operator follows a minus 3 principle for supported Knative versions: the current version of the Operator
supports Knative versions up to three `minor` versions back. For example, if the current version of the Operator is
0.19.x, it bundles and supports the installation of Knative versions 0.16.x, 0.17.x, 0.18.x, and 0.19.x.
## Before you begin
The Knative Operator automates most of the upgrade process; all you need to know is the current version of your
Knative installation, the target version, and the namespaces used for the installation. In the
following instructions, Knative Serving and the Serving custom resource are installed in the `knative-serving` namespace,
and Knative Eventing and the Eventing custom resource are installed in the `knative-eventing` namespace.
### Check the current version of the installed Knative
To check the version of the installed Knative Serving, run the following command:
```bash
kubectl get KnativeServing knative-serving --namespace knative-serving
```
If your current version of Knative Serving is 0.19.x, the output is similar to the following:
```
NAME              VERSION   READY   REASON
knative-serving   0.19.0    True
```
Because Knative only supports upgrading by a single `minor` version, the target version for Knative Serving is 0.20.
The status `True` means the Serving CR and Knative Serving are in a good state.
To check the version of the installed Knative Eventing, run the following command:
```bash
kubectl get KnativeEventing knative-eventing --namespace knative-eventing
```
If your current version of Knative Eventing is 0.19.x, the output is similar to the following:
```
NAME               VERSION   READY   REASON
knative-eventing   0.19.0    True
```
Because Knative only supports upgrading by a single `minor` version, the target version for Knative Eventing is 0.20.
The status `True` means the Eventing CR and Knative Eventing are in a good state.
## Performing the upgrade
To upgrade, apply the Operator CRs with the same spec, but a different target version for the attribute `spec.version`.
For example, if your existing Serving CR is as follows:
```
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.19"
```
then apply the following CR to upgrade to 0.20:
```
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.20"
```
If your existing Eventing CR is as follows:
```
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
name: knative-eventing
namespace: knative-eventing
spec:
version: "0.19"
```
then apply the following CR to upgrade to 0.20:
```
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
name: knative-eventing
namespace: knative-eventing
spec:
version: "0.20"
```
## Verifying the upgrade
To confirm that your Knative components have successfully upgraded, view the status of their pods in the relevant namespaces.
All pods will restart during the upgrade and their age will reset.
If you upgraded Knative Serving and Eventing, enter the following commands to get information about the pods for each namespace:
```bash
kubectl get pods --namespace knative-serving
```
```bash
kubectl get pods --namespace knative-eventing
```
These commands return something similar to:
```bash
NAME                                                     READY   STATUS      RESTARTS   AGE
activator-6875896748-gdjgs                               1/1     Running     0          58s
autoscaler-6bbc885cfd-vkrgg                              1/1     Running     0          57s
autoscaler-hpa-5cdd7c6b69-hxzv4                          1/1     Running     0          55s
controller-64dd4bd56-wzb2k                               1/1     Running     0          57s
istio-webhook-75cc84fbd4-dkcgt                           1/1     Running     0          50s
networking-istio-6dcbd4b5f4-mxm8q                        1/1     Running     0          51s
storage-version-migration-serving-serving-0.20.0-82hjt   0/1     Completed   0          50s
webhook-75f5d4845d-zkrdt                                 1/1     Running     0          56s
```
```bash
NAME                                              READY   STATUS      RESTARTS   AGE
eventing-controller-6bc59c9fd7-6svbm              1/1     Running     0          38s
eventing-webhook-85cd479f87-4dwxh                 1/1     Running     0          38s
imc-controller-97c4fd87c-t9mnm                    1/1     Running     0          33s
imc-dispatcher-c6db95ffd-ln4mc                    1/1     Running     0          33s
mt-broker-controller-5f87fbd5d9-m69cd             1/1     Running     0          32s
mt-broker-filter-5b9c64cbd5-d27p4                 1/1     Running     0          32s
mt-broker-ingress-55c66fdfdf-gn56g                1/1     Running     0          32s
storage-version-migration-eventing-0.20.0-fvgqf   0/1     Completed   0          31s
sugar-controller-684d5cfdbb-67vsv                 1/1     Running     0          31s
```
You can also verify the status of Knative by checking the CRs:
```bash
kubectl get KnativeServing knative-serving --namespace knative-serving
```
```bash
kubectl get KnativeEventing knative-eventing --namespace knative-eventing
```
These commands return something similar to:
```
NAME              VERSION   READY   REASON
knative-serving   0.20.0    True
```
```
NAME               VERSION   READY   REASON
knative-eventing   0.20.0    True
```
## Rollback
If the upgrade fails, you can roll back to restore Knative to the previous version. For example, if your
previous version is 0.19, you can apply the following CRs to restore Knative Serving and Eventing.
For Knative Serving:
```
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.19"
```
For Knative Eventing:
```
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
name: knative-eventing
namespace: knative-eventing
spec:
version: "0.19"
```
---
title: "Upgrading your installation"
weight: 21
type: "docs"
---
# Upgrading your installation
To upgrade your Knative components and plugins, run the `kubectl apply` command
to install the subsequent release. We support upgrading by a single
[minor](https://semver.org/) version number. For example, if you have v0.14.0 installed,
you must upgrade to v0.15.0 before attempting to upgrade to v0.16.0. To verify the version
number you currently have installed, see
[Checking your installation version](./check-install-version.md).
If you installed Knative using the [operator](https://github.com/knative/operator), the upgrade process will differ. See the [operator upgrade guide](./upgrade-installation-with-operator.md) to learn how to upgrade an installation managed by the operators.
## Before you begin
Before upgrading, there are a few steps you must take to ensure a successful
upgrade process.
### Identify breaking changes
You should be aware of any breaking changes between your current and desired
versions of Knative. Breaking changes between Knative versions are documented in
the Knative release notes. Before upgrading, review the release notes for the
target version to learn about any changes you might need to make to your Knative
applications:
- [Serving](https://github.com/knative/serving/releases)
- [Eventing](https://github.com/knative/eventing/releases)
- [Eventing-Contrib](https://github.com/knative/eventing-contrib/releases)
Release notes are published with each version on the "Releases" page of their
respective repositories in GitHub.
### View current pod status
Before upgrading, view the status of the pods for the namespaces you plan on
upgrading. This allows you to compare the before and after state of your
namespace. For example, if you are upgrading Knative Serving and Eventing, enter the following commands to see the current state of
each namespace:
```bash
kubectl get pods --namespace knative-serving
```
```bash
kubectl get pods --namespace knative-eventing
```
### Upgrading plug-ins
If you have a plug-in installed, make sure to upgrade it at the same time as
you upgrade your Knative components.
### Run pre-install tools before upgrade
In some upgrades there are steps that must happen before the actual
upgrade; these are identified in the release notes. For example, when upgrading
Eventing from v0.15.0 to v0.16.0, you must run:
```bash
kubectl apply --filename {{ artifact(repo="eventing",file="eventing-pre-install-jobs.yaml")}}
```
### Upgrade existing resources to the latest stored version
Knative custom resources are stored within Kubernetes at a particular version.
As newer versions are introduced and older versions are removed, you must migrate your resources
to the designated stored version. This ensures that removing older versions
will succeed when upgrading.
For the various subprojects, a Kubernetes Job is provided to help operators perform this migration.
The release notes for each release state explicitly whether a migration is required.
For example:
```bash
kubectl create --filename {{ artifact( repo="serving", file="serving-post-install-jobs.yaml" )}}
```
## Performing the upgrade
To upgrade, apply the `.yaml` files for the subsequent minor versions of all
your installed Knative components and features, remembering to only
upgrade by one minor version at a time. For a cluster running v0.15.2 of the
Knative Serving and Eventing components, the
following command upgrades the installation to v0.16.0:
```bash
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.16.0/serving-core.yaml \
--filename https://github.com/knative/eventing/releases/download/v0.16.0/eventing.yaml
```
### Run post-install tools after the upgrade
In some upgrades there are steps that must happen after the actual
upgrade; these are identified in the release notes. For example, after upgrading
Eventing from v0.15.0 to v0.16.0, you should run:
```bash
kubectl apply --filename {{ artifact(repo="eventing",file="eventing-post-install-jobs.yaml")}}
```
## Verifying the upgrade
To confirm that your components and plugins have successfully upgraded, view the status of their pods in the relevant namespaces.
All pods will restart during the upgrade and their age will reset.
If you upgraded Knative Serving and Eventing, enter the following commands to get information about the pods for each namespace:
```bash
kubectl get pods --namespace knative-serving
```
```bash
kubectl get pods --namespace knative-eventing
```
These commands return something similar to:
```bash
NAME                                READY   STATUS    RESTARTS   AGE
activator-79f674fb7b-dgvss          2/2     Running   0          43s
autoscaler-96dc49858-b24bm          2/2     Running   1          43s
autoscaler-hpa-d887d4895-njtrb      1/1     Running   0          43s
controller-6bcdd87fd6-zz9fx         1/1     Running   0          41s
networking-istio-7fcd97cbf7-z2xmr   1/1     Running   0          40s
webhook-747b799559-4sj6q            1/1     Running   0          41s
```
```bash
NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-69ffcc6f7d-5l7th   1/1     Running   0          83s
eventing-webhook-6c56fcd86c-42dr8      1/1     Running   0          81s
imc-controller-6bcf5957b5-6ccp2        1/1     Running   0          80s
imc-dispatcher-f59b7c57-q9xcl          1/1     Running   0          80s
sources-controller-8596684d7b-jxkmd    1/1     Running   0          83s
```
If the age of all your pods has been reset and all pods are up and running, the upgrade was completed successfully.
You might notice a status of `Terminating` for the old pods as they are cleaned up.
If necessary, repeat the upgrade process until you reach your desired minor version number.
---
title: "Upgrading using the Knative Operator"
weight: 21
type: "docs"
aliases:
- /docs/install/upgrade-installation-with-operator
---
# Upgrading using the Knative Operator
The attribute `spec.version` is the only field you need to change in the
Serving or Eventing custom resource to perform an upgrade. You do not need to specify the version for the `patch` number, because the Knative Operator matches the latest available `patch` number, as long as you specify `major.minor` for the version. For example, you only need to specify `"0.22"` to upgrade to the 0.22 release, you do not need to specify the exact `patch` number.
The Knative Operator supports the current minor release and the three previous minor releases. For example, if the current version of the Operator is 0.22, it bundles and supports the installation of Knative versions 0.19, 0.20, 0.21, and 0.22.
**NOTE:** In the following examples, Knative Serving custom resources are installed in the `knative-serving` namespace, and Knative Eventing custom resources are installed in the `knative-eventing` namespace.
## Performing the upgrade
To upgrade, apply the Operator custom resources, adding the `spec.version` for the Knative version that you want to upgrade to:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.22"
EOF
```
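Knative Eventing is upgraded in the same way. Apply the analogous Eventing custom resource with the target `spec.version`:

```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  version: "0.22"
EOF
```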
## Verifying the upgrade
To confirm that your Knative components have successfully upgraded, view the status of their pods in the relevant namespaces.
All pods will restart during the upgrade and their age will reset.
If you upgraded Knative Serving and Eventing, enter the following commands to get information about the pods for each namespace:
```bash
kubectl get pods --namespace knative-serving
```
```bash
kubectl get pods --namespace knative-eventing
```
These commands return something similar to:
```bash
NAME                                                     READY   STATUS      RESTARTS   AGE
activator-6875896748-gdjgs                               1/1     Running     0          58s
autoscaler-6bbc885cfd-vkrgg                              1/1     Running     0          57s
autoscaler-hpa-5cdd7c6b69-hxzv4                          1/1     Running     0          55s
controller-64dd4bd56-wzb2k                               1/1     Running     0          57s
istio-webhook-75cc84fbd4-dkcgt                           1/1     Running     0          50s
networking-istio-6dcbd4b5f4-mxm8q                        1/1     Running     0          51s
storage-version-migration-serving-serving-0.20.0-82hjt   0/1     Completed   0          50s
webhook-75f5d4845d-zkrdt                                 1/1     Running     0          56s
```
```bash
NAME                                              READY   STATUS      RESTARTS   AGE
eventing-controller-6bc59c9fd7-6svbm              1/1     Running     0          38s
eventing-webhook-85cd479f87-4dwxh                 1/1     Running     0          38s
imc-controller-97c4fd87c-t9mnm                    1/1     Running     0          33s
imc-dispatcher-c6db95ffd-ln4mc                    1/1     Running     0          33s
mt-broker-controller-5f87fbd5d9-m69cd             1/1     Running     0          32s
mt-broker-filter-5b9c64cbd5-d27p4                 1/1     Running     0          32s
mt-broker-ingress-55c66fdfdf-gn56g                1/1     Running     0          32s
storage-version-migration-eventing-0.20.0-fvgqf   0/1     Completed   0          31s
sugar-controller-684d5cfdbb-67vsv                 1/1     Running     0          31s
```
You can also verify the status of Knative by checking the custom resources:
```bash
kubectl get KnativeServing knative-serving --namespace knative-serving
```
```bash
kubectl get KnativeEventing knative-eventing --namespace knative-eventing
```
These commands return something similar to:
```bash
NAME              VERSION   READY   REASON
knative-serving   0.20.0    True
```
```bash
NAME               VERSION   READY   REASON
knative-eventing   0.20.0    True
```
## Rollback
If the upgrade fails, you can roll back to restore Knative to the previous version. For example, if something goes wrong with an upgrade to 0.22, and your previous version is 0.21, you can apply the following custom resources to restore Knative Serving and Eventing to version 0.21.
For Knative Serving:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.21"
EOF
```
For Knative Eventing:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
name: knative-eventing
namespace: knative-eventing
spec:
version: "0.21"
EOF
```
nav:
- Introducing the CloudEvents Player: getting-started/first-source.md
- Creating your first Trigger: getting-started/first-trigger.md
- What's Next?: getting-started/next-steps.md
- Installing Knative:
- Overview: install/README.md
- Installing using YAML:
- Prerequisites: install/prerequisites.md
- Install Serving with YAML: install/install-serving-with-yaml.md
- Install Eventing with YAML: install/install-eventing-with-yaml.md
- Install optional extensions: install/install-extensions.md
- Description Tables for YAML Files: install/installation-files.md
- Knative Operator:
- Installing with the Operator: install/knative-with-operators.md
- Configuring Knative Eventing CRDs: install/operator/configuring-eventing-cr.md
- Configureing Knative Serving CRDs: install/operator/configuring-serving-cr.md
- Installing Istio for Knative: install/installing-istio.md
- Upgrading your installation:
- Checking your Knative version: install/check-install-version.md
- Upgrading with the Knative Operator: install/upgrade-installation-with-operator.md
- Upgrading with kubectl: install/upgrade-installation.md
- Collecting Logs with Fluentbit: install/collecting-logs/_index.md
- Collecting Metrics with OpenTelemetry: install/collecting-metrics/_index.md
- Using a Knative-based Offering: install/knative-offerings.md
- ... | install/*
- Administration guide:
- Overview: admin/README.md
- Installing Knative:
- Overview: admin/install/README.md
- Prerequisites: admin/install/prerequisites.md
- Installing using YAML:
- Install Serving with YAML: admin/install/install-serving-with-yaml.md
- Install Eventing with YAML: admin/install/install-eventing-with-yaml.md
- Install optional extensions: admin/install/install-extensions.md
- Description Tables for YAML Files: admin/install/installation-files.md
- Knative Operator:
- Installing with the Operator: admin/install/knative-with-operators.md
- Configuring Knative Eventing CRDs: admin/install/operator/configuring-eventing-cr.md
- Configuring Knative Serving CRDs: admin/install/operator/configuring-serving-cr.md
- Installing Istio for Knative: admin/install/installing-istio.md
- Using a Knative-based Offering: admin/install/knative-offerings.md
- ... | install/*
- Checking your Knative version: check-install-version.md
- Upgrading your installation:
- Overview: admin/upgrade/README.md
- Upgrading with the Knative Operator: admin/upgrade/upgrade-installation-with-operator.md
- Upgrading with kubectl: admin/upgrade/upgrade-installation.md
- Logging: admin/collecting-logs/README.md
- Metrics: admin/collecting-metrics/README.md
- Uninstalling Knative: admin/install/uninstall.md
- Serving Component:
- Overview: serving/README.md
- Developer Topics:
plugins:
filename: .index
collapse_single_pages: true
strict: false
- redirects:
redirect_maps:
'install/collecting-logs/index.md': 'admin/collecting-logs/README.md'
'install/README.md': 'admin/install/README.md'
'install/collecting-metrics/index.md': 'admin/collecting-metrics/README.md'
'install/install-eventing-with-yaml.md': 'admin/install/install-eventing-with-yaml.md'
'install/install-extensions.md': 'admin/install/install-extensions.md'
'install/install-serving-with-yaml.md': 'admin/install/install-serving-with-yaml.md'
'install/installation-files.md': 'admin/install/installation-files.md'
'install/installing-istio.md': 'admin/install/installing-istio.md'
'install/knative-offerings.md': 'admin/install/knative-offerings.md'
'install/knative-with-operators.md': 'admin/install/knative-with-operators.md'
'install/operator/configuring-eventing-cr.md': 'admin/install/operator/configuring-eventing-cr.md'
'install/operator/configuring-serving-cr.md': 'admin/install/operator/configuring-serving-cr.md'
'install/prerequisites.md': 'admin/install/prerequisites.md'
'uninstall.md': 'admin/install/uninstall.md'
'upgrade/index.md': 'admin/upgrade/README.md'
'upgrade/upgrade-installation-with-operator.md': 'admin/upgrade/upgrade-installation-with-operator.md'
'upgrade/upgrade-installation.md': 'admin/upgrade/upgrade-installation.md'
'install/check-install-version.md': 'check-install-version.md'
copyright: "Copyright © 2021 The Knative Authors"
<h1>Enterprise-grade Serverless on your own terms.</h1>
<h2>Kubernetes-based platform to deploy and manage modern serverless workloads.</h2>
<p style="display: block">
<a href="{{ 'getting-started/getting-started/' | url }}" class="md-button md-button--primary">
Get Started
<a href="{{ 'getting-started/' | url }}" class="md-button md-button--primary">
Developer? Get Started Here!
</a>
<a href="{{ 'install/prerequisites/' | url }}" class="md-button"> Custom YAML-Based Install </a>
<a href="{{ 'admin/install/' | url }}" class="md-button"> Cluster Administrator? Get Started Here!</a>
<a
href="https://katacoda.com/knative-tutorials/scenarios/2-serving-intro-yaml"
class="md-button"
@ -2,4 +2,4 @@ mkdocs-material>=7.1
mkdocs-exclude>=1.0
mkdocs-macros-plugin>=0.5
mkdocs-awesome-pages-plugin>=2.5
mkdocs-redirects>=1.0.3