mirror of https://github.com/knative/docs.git

Split YAML installing topic (#3399)

* Split installing with YAML
* Update links to installing topics
* Add prerequisites to optional extension topic
* Fix links
* Remove old installing topic
* Add updated info about system reqs
* Update docs/install/install-eventing-with-yaml.md
* Update docs/eventing/broker/kafka-broker.md
* Update docs/serving/using-auto-tls.md
* Update docs/install/installation-files.md
* Apply suggestions from code review
* Fix capitalisation issues
* Apply feedback

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>
This commit is contained in:

parent 1f2339d9f5
commit 05b65aff16
				|  | @ -18,7 +18,7 @@ Notable features are: | |||
| 
 | ||||
| ## Prerequisites | ||||
| 
 | ||||
| 1. [Knative Eventing installation](./../../install/any-kubernetes-cluster.md#installing-the-eventing-component). | ||||
| 1. [Installing Eventing using YAML files](./../../install/install-eventing-with-yaml.md). | ||||
| 2. An Apache Kafka cluster (if you're just getting started, you can follow the [Strimzi Quickstart page](https://strimzi.io/quickstarts/)). | ||||
| 
 | ||||
| ## Installation | ||||
|  | @ -263,7 +263,7 @@ data: | |||
|     </configuration> | ||||
| ``` | ||||
| 
 | ||||
| To change the logging level to `DEBUG`, you need to:  | ||||
| To change the logging level to `DEBUG`, you must: | ||||
| 
 | ||||
| 1. Apply the following `kafka-config-logging` `ConfigMap`, or replace `level="INFO"` with `level="DEBUG"` in the existing | ||||
| `kafka-config-logging` `ConfigMap`: | ||||
|  |  | |||
|  | @ -246,8 +246,10 @@ folder) you're ready to build and deploy the sample app. | |||
|       kubectl get broker --namespace knative-samples | ||||
|       ``` | ||||
| 
 | ||||
|       _Note_: you can also use injection based on labels with the | ||||
|       [Eventing Sugar Controller](../../../../install/any-kubernetes-cluster.md). | ||||
|       **Note:** you can also use injection based on labels with the | ||||
|       Eventing sugar controller. | ||||
|       For how to install the Eventing sugar controller, see | ||||
|       [Install optional Eventing extensions](../../../../install/install-extensions.md#install-optional-eventing-extensions). | ||||
| 
 | ||||
|    1. It deployed the helloworld-go app as a K8s Deployment and created a K8s | ||||
|       service named helloworld-go. Verify this using the following command. | ||||
|  |  | |||
|  | @ -8,7 +8,7 @@ This page shows how to install and configure Apache Kafka Sink. | |||
| 
 | ||||
| ## Prerequisites | ||||
| 
 | ||||
| [Knative Eventing installation](./../../install/any-kubernetes-cluster.md#installing-the-eventing-component). | ||||
| [Installing Eventing using YAML files](./../../install/install-eventing-with-yaml.md). | ||||
| 
 | ||||
| ## Installation | ||||
| 
 | ||||
|  |  | |||
|  | @ -2,12 +2,30 @@ | |||
| title: "Installing Knative" | ||||
| weight: 05 | ||||
| type: "docs" | ||||
| aliases: | ||||
|   - /docs/install/knative-with-any-k8s | ||||
|   - /docs/install/knative-with-aks | ||||
|   - /docs/install/knative-with-ambassador | ||||
|   - /docs/install/knative-with-contour | ||||
|   - /docs/install/knative-with-docker-for-mac | ||||
|   - /docs/install/knative-with-gke | ||||
|   - /docs/install/knative-with-gardener | ||||
|   - /docs/install/knative-with-gloo | ||||
|   - /docs/install/knative-with-icp | ||||
|   - /docs/install/knative-with-iks | ||||
|   - /docs/install/knative-with-microk8s | ||||
|   - /docs/install/knative-with-minikube | ||||
|   - /docs/install/knative-with-minishift | ||||
|   - /docs/install/knative-with-pks | ||||
|   - /docs/install/any-kubernetes-cluster | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
| 
 | ||||
| You can install the Serving component, Eventing component, or both on your cluster by using one of the following deployment options: | ||||
| 
 | ||||
| - Using a [YAML-based installation](./any-kubernetes-cluster). | ||||
| - Using a YAML-based installation: | ||||
|   - [Installing Serving using YAML files](./install-serving-with-yaml) | ||||
|   - [Installing Eventing using YAML files](./install-eventing-with-yaml) | ||||
| - Using the [Knative Operator](./knative-with-operators). | ||||
| - Following the documentation for vendor managed [Knative offerings](../knative-offerings). | ||||
| 
 | ||||
|  |  | |||
|  | @ -1,849 +0,0 @@ | |||
| --- | ||||
| title: "YAML-based installation" | ||||
| weight: 01 | ||||
| type: "docs" | ||||
| aliases: | ||||
|   - /docs/install/knative-with-any-k8s | ||||
|   - /docs/install/knative-with-aks | ||||
|   - /docs/install/knative-with-ambassador | ||||
|   - /docs/install/knative-with-contour | ||||
|   - /docs/install/knative-with-docker-for-mac | ||||
|   - /docs/install/knative-with-gke | ||||
|   - /docs/install/knative-with-gardener | ||||
|   - /docs/install/knative-with-gloo | ||||
|   - /docs/install/knative-with-icp | ||||
|   - /docs/install/knative-with-iks | ||||
|   - /docs/install/knative-with-microk8s | ||||
|   - /docs/install/knative-with-minikube | ||||
|   - /docs/install/knative-with-minishift | ||||
|   - /docs/install/knative-with-pks | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
| 
 | ||||
| You can install Knative by applying YAML files using the `kubectl` CLI. | ||||
| You can install the Serving component, Eventing component, or both on your cluster. | ||||
| 
 | ||||
| ## System requirements | ||||
| For prototyping purposes, Knative works on most local deployments of Kubernetes. For example, you can use a local one-node cluster that has 2 CPUs and 4 GB of memory. | ||||
| 
 | ||||
| For production purposes, the following requirements are recommended: | ||||
| - If you have only one node in your cluster, you need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage. | ||||
| - If you have multiple nodes in your cluster, each node needs at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage. | ||||
| <!--TODO: Verify these requirements--> | ||||
| 
 | ||||
| **NOTE:** The system requirements provided are recommendations only. The requirements for your installation may vary, depending on whether you use optional components, such as a networking layer. | ||||
| 
 | ||||
| ## Prerequisites | ||||
| 
 | ||||
| Before installation, you must meet the following prerequisites: | ||||
| 
 | ||||
| - You have a cluster that uses Kubernetes v1.18 or newer. | ||||
| - You have installed the [`kubectl` CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/). | ||||
| - Your Kubernetes cluster must have access to the internet, since Kubernetes needs to be able to fetch images. | ||||
| 
 | ||||
| ## Installing the Serving component | ||||
| 
 | ||||
| To install the Serving component: | ||||
| 
 | ||||
| 1. Install the required custom resources: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="serving" file="serving-crds.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. Install the core components of Serving (see below for optional extensions): | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="serving" file="serving-core.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| ### Installing a networking layer | ||||
| 
 | ||||
| Follow the procedure for the networking layer of your choice: | ||||
| 
 | ||||
| <!-- TODO: Link to document/diagram describing what is a networking layer.  --> | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
|    {{< tabs name="serving_networking" default="Kourier" >}} | ||||
|    {{% tab name="Ambassador" %}} | ||||
| 
 | ||||
| The following commands install Ambassador and enable its Knative integration. | ||||
| 
 | ||||
| 1. Create a namespace to install Ambassador in: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl create namespace ambassador | ||||
|    ``` | ||||
| 
 | ||||
| 1. Install Ambassador: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply --namespace ambassador \ | ||||
|      -f https://getambassador.io/yaml/ambassador/ambassador-crds.yaml \ | ||||
|      -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml \ | ||||
|      -f https://getambassador.io/yaml/ambassador/ambassador-service.yaml | ||||
|    ``` | ||||
| 
 | ||||
| 1. Give Ambassador the required permissions: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch clusterrolebinding ambassador -p '{"subjects":[{"kind": "ServiceAccount", "name": "ambassador", "namespace": "ambassador"}]}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Enable Knative support in Ambassador: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl set env --namespace ambassador  deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Ambassador by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"ambassador.ingress.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace ambassador get service ambassador | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Contour" %}} | ||||
| 
 | ||||
| The following commands install Contour and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install a properly configured Contour: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-contour" file="contour.yaml" >}} | ||||
|    ``` | ||||
| <!-- TODO(https://github.com/knative-sandbox/net-contour/issues/11): We need a guide on how to use/modify a pre-existing install. --> | ||||
| 
 | ||||
| 1. Install the Knative Contour controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-contour" file="net-contour.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Contour by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace contour-external get service envoy | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Gloo" %}} | ||||
| 
 | ||||
| _For a detailed guide on Gloo integration, see | ||||
| [Installing Gloo for Knative](https://docs.solo.io/gloo/latest/installation/knative/) | ||||
| in the Gloo documentation._ | ||||
| 
 | ||||
| The following commands install Gloo and enable its Knative integration. | ||||
| 
 | ||||
| 1. Make sure `glooctl` is installed (version 1.3.x or higher recommended): | ||||
| 
 | ||||
|    ```bash | ||||
|    glooctl version | ||||
|    ``` | ||||
| 
 | ||||
|    If it is not installed, you can install the latest version using: | ||||
| 
 | ||||
|    ```bash | ||||
|    curl -sL https://run.solo.io/gloo/install | sh | ||||
|    export PATH=$HOME/.gloo/bin:$PATH | ||||
|    ``` | ||||
| 
 | ||||
|    Or follow the | ||||
|    [Gloo CLI install instructions](https://docs.solo.io/gloo/latest/installation/knative/#install-command-line-tool-cli). | ||||
| 
 | ||||
| 1. Install Gloo and the Knative integration: | ||||
| 
 | ||||
|    ```bash | ||||
|    glooctl install knative --install-knative=false | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    glooctl proxy url --name knative-external-proxy | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Istio" %}} | ||||
| 
 | ||||
| The following commands install Istio and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install a properly configured Istio ([Advanced installation](./installing-istio.md)): | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-istio" file="istio.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 
 | ||||
| 1. Install the Knative Istio controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-istio" file="net-istio.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace istio-system get service istio-ingressgateway | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Kong" %}} | ||||
| 
 | ||||
| The following commands install Kong and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install Kong Ingress Controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/0.9.x/deploy/single/all-in-one-dbless.yaml | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Kong by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"kong"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace kong get service kong-proxy | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Kourier" %}} | ||||
| 
 | ||||
| The following commands install Kourier and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install the Knative Kourier controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-kourier" file="kourier.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Kourier by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace kourier-system get service kourier | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
| ### Verify the installation | ||||
| 
 | ||||
| Monitor the Knative components until all of the components show a `STATUS` of `Running` or `Completed`: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl get pods --namespace knative-serving | ||||
| ``` | ||||
| 
 | ||||
| ### Optional: Configuring DNS | ||||
| 
 | ||||
| You can configure DNS to avoid having to run `curl` commands with a host header. | ||||
| To configure DNS, follow the procedure for your DNS method of choice: | ||||
| 
 | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
|    {{< tabs name="serving_dns" default="Magic DNS (xip.io)" >}} | ||||
|    {{% tab name="Magic DNS (xip.io)" %}} | ||||
| 
 | ||||
| Knative provides a simple Kubernetes Job called "default domain" that | ||||
| configures Knative Serving to use <a href="http://xip.io">xip.io</a> as the | ||||
| default DNS suffix (see the caveat below). | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-default-domain.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| **Caveat**: This will only work if the cluster LoadBalancer service exposes an | ||||
| IPv4 address or hostname, so it will not work with IPv6 clusters or local setups | ||||
| like Minikube. For these, see "Real DNS" or "Temporary DNS". | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Real DNS" %}} | ||||
| 
 | ||||
| To configure DNS for Knative, take the External IP | ||||
| or CNAME from setting up networking, and configure it with your DNS provider as | ||||
| follows: | ||||
| 
 | ||||
| - If the networking layer produced an External IP address, then configure a | ||||
|   wildcard `A` record for the domain: | ||||
| 
 | ||||
|   ``` | ||||
|   # Here knative.example.com is the domain suffix for your cluster | ||||
|   *.knative.example.com == A 35.233.41.212 | ||||
|   ``` | ||||
| 
 | ||||
| - If the networking layer produced a CNAME, then configure a CNAME record for | ||||
|   the domain: | ||||
| 
 | ||||
|   ``` | ||||
|   # Here knative.example.com is the domain suffix for your cluster | ||||
|   *.knative.example.com == CNAME a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com | ||||
|   ``` | ||||
| 
 | ||||
| Once your DNS provider has been configured, direct Knative to use that domain: | ||||
| 
 | ||||
| ```bash | ||||
| # Replace knative.example.com with your domain suffix | ||||
| kubectl patch configmap/config-domain \ | ||||
|   --namespace knative-serving \ | ||||
|   --type merge \ | ||||
|   --patch '{"data":{"knative.example.com":""}}' | ||||
| ``` | ||||
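| 
 | ||||
| The patch above results in a `config-domain` ConfigMap equivalent to the | ||||
| following sketch (the domain key is illustrative; replace it with a domain | ||||
| you own): | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: config-domain | ||||
|   namespace: knative-serving | ||||
| data: | ||||
|   # Illustrative domain suffix; an empty value makes it the default | ||||
|   # domain for all Knative routes. | ||||
|   knative.example.com: "" | ||||
| ``` | ||||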
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
|     {{% tab name="Temporary DNS" %}} | ||||
| 
 | ||||
| If you are using `curl` to access the sample | ||||
| applications, or your own Knative app, and are unable to use the "Magic DNS | ||||
| (xip.io)" or "Real DNS" methods, you can use a temporary approach. This is | ||||
| useful if you want to evaluate Knative without altering your DNS configuration, | ||||
| or if you cannot use the "Magic DNS" method because, for example, you are | ||||
| running Minikube locally or using an IPv6 cluster. | ||||
| 
 | ||||
| To access your application with `curl` using this method: | ||||
| 
 | ||||
| 1. After starting your application, get the URL of your application: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl get ksvc | ||||
|    ``` | ||||
| 
 | ||||
|    The output should be similar to: | ||||
| 
 | ||||
|    ```bash | ||||
|    NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON | ||||
|    helloworld-go   http://helloworld-go.default.example.com   helloworld-go-vqjlf   helloworld-go-vqjlf   True | ||||
|    ``` | ||||
| 
 | ||||
| 1. Instruct `curl` to connect to the External IP or CNAME defined by the | ||||
|    networking layer you installed above, and use the `-H "Host:"` command-line | ||||
|    option to specify the Knative application's host name. For example, if the | ||||
|    networking layer defines your External IP and port to be | ||||
|    `http://192.168.39.228:32198` and you want to access the | ||||
|    `helloworld-go` application above, use: | ||||
| 
 | ||||
|    ```bash | ||||
|    curl -H "Host: helloworld-go.default.example.com" http://192.168.39.228:32198 | ||||
|    ``` | ||||
| 
 | ||||
|    With the default configuration, the provided `helloworld-go` sample | ||||
|    application outputs: | ||||
| 
 | ||||
|    ``` | ||||
|    Hello Go Sample v1! | ||||
|    ``` | ||||
| 
 | ||||
| Refer to the "Real DNS" method for a permanent solution. | ||||
| 
 | ||||
|     {{< /tab >}} {{< /tabs >}} | ||||
| 
 | ||||
| ### Optional: Install Serving extensions | ||||
| 
 | ||||
| To add extra features to your Knative Serving installation, you can install extensions | ||||
| by applying YAML files using the `kubectl` CLI. | ||||
| 
 | ||||
| For information about the YAML files in the Knative Serving release, see | ||||
| [Installation files](./installation-files#knative-serving-installation-files). | ||||
| 
 | ||||
| Follow the steps for any Serving extensions you want to install: | ||||
| 
 | ||||
| {{< tabs name="serving_extensions" >}} | ||||
| 
 | ||||
| {{% tab name="HPA autoscaling" %}} | ||||
| 
 | ||||
| Knative also supports the use of the Kubernetes Horizontal Pod Autoscaler (HPA) | ||||
| for driving autoscaling decisions. The following command installs the | ||||
| components needed to support HPA-class autoscaling: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-hpa.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| <!-- TODO(https://github.com/knative/docs/issues/2152): Link to a more in-depth guide on HPA-class autoscaling --> | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="TLS with cert-manager" %}} | ||||
| 
 | ||||
| Knative supports automatically provisioning TLS certificates via | ||||
| [cert-manager](https://cert-manager.io/docs/). The following commands | ||||
| install the components needed for this integration. | ||||
| 
 | ||||
| 1. First, install | ||||
|    [cert-manager version `0.12.0` or higher](../serving/installing-cert-manager.md) | ||||
| 
 | ||||
| 2. Next, install the component that integrates Knative with cert-manager: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-certmanager" file="release.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 3. Now configure Knative to | ||||
|    [automatically configure TLS certificates](../serving/using-auto-tls.md). | ||||
|    {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="TLS via HTTP01" %}} | ||||
| 
 | ||||
| Knative supports automatically provisioning TLS certificates using Let's Encrypt | ||||
| HTTP01 challenges. The following commands install the components needed to | ||||
| support this. | ||||
| 
 | ||||
| 1. First, install the `net-http01` controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-http01" file="release.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 2. Next, configure the `certificate.class` to use this certificate type. | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"certificate.class":"net-http01.certificate.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 3. Lastly, enable auto-TLS. | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"autoTLS":"Enabled"}}' | ||||
|    ``` | ||||
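| 
 | ||||
| After both patches, the relevant `config-network` entries are equivalent to | ||||
| the following sketch: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: config-network | ||||
|   namespace: knative-serving | ||||
| data: | ||||
|   # Use the net-http01 controller to provision certificates. | ||||
|   certificate.class: net-http01.certificate.networking.knative.dev | ||||
|   # Turn on automatic TLS provisioning for Knative Services. | ||||
|   autoTLS: Enabled | ||||
| ``` | ||||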
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="TLS wildcard support" %}} | ||||
| 
 | ||||
| If you are using a Certificate implementation that supports provisioning | ||||
| wildcard certificates (for example, cert-manager with a DNS01 issuer), the most | ||||
| efficient way to provision certificates is with the namespace wildcard | ||||
| certificate controller. The following command installs the components needed | ||||
| to provision wildcard certificates in each namespace: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-nscert.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| > Note: this does not work with HTTP01 challenges, whether issued via | ||||
| > cert-manager or the net-http01 option. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="DomainMapping CRD" %}} | ||||
| 
 | ||||
| The `DomainMapping` CRD allows you to map a domain name that you own to a | ||||
| specific Knative Service. | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-domainmapping-crds.yaml" >}} | ||||
| kubectl wait --for=condition=Established --all crd | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-domainmapping.yaml" >}} | ||||
| ``` | ||||
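| 
 | ||||
| Once the CRD is installed, a `DomainMapping` resource maps a domain to a | ||||
| Knative Service. The following is a sketch only; the domain and Service | ||||
| names are illustrative: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: serving.knative.dev/v1alpha1 | ||||
| kind: DomainMapping | ||||
| metadata: | ||||
|   # The domain name to map (illustrative; use a domain you own). | ||||
|   name: hello.example.org | ||||
|   namespace: default | ||||
| spec: | ||||
|   ref: | ||||
|     # An existing Knative Service in the same namespace (illustrative). | ||||
|     name: hello | ||||
|     kind: Service | ||||
|     apiVersion: serving.knative.dev/v1 | ||||
| ``` | ||||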
| 
 | ||||
| {{< /tab >}} {{< /tabs >}} | ||||
| 
 | ||||
| ## Installing the Eventing component | ||||
| 
 | ||||
| To install the Eventing component: | ||||
| 
 | ||||
| 1. Install the required custom resources: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="eventing" file="eventing-crds.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. Install the core components of Eventing (see below for optional extensions): | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="eventing" file="eventing-core.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| ### Verify the installation | ||||
| 
 | ||||
| Monitor the Knative components until all of the components show a `STATUS` of `Running`: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl get pods --namespace knative-eventing | ||||
| ``` | ||||
| 
 | ||||
| ### Optional: Installing a default Channel (messaging) layer | ||||
| 
 | ||||
| To install a default Channel (messaging) layer: | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
|    {{< tabs name="eventing_channels" default="In-Memory (standalone)" >}} | ||||
|    {{% tab name="Apache Kafka Channel" %}} | ||||
| 
 | ||||
| 1. First, | ||||
|    [install Apache Kafka for Kubernetes](../eventing/samples/kafka/README.md). | ||||
| 
 | ||||
| 1. Then install the Apache Kafka Channel: | ||||
| 
 | ||||
|    ```bash | ||||
|    curl -L "{{< artifact org="knative-sandbox" repo="eventing-kafka" file="channel-consolidated.yaml" >}}" \ | ||||
|     | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \ | ||||
|     | kubectl apply -f - | ||||
|    ``` | ||||
| 
 | ||||
| To learn more about the Apache Kafka channel, try | ||||
| [our sample](../eventing/samples/kafka/channel/README.md) | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Google Cloud Pub/Sub Channel" %}} | ||||
| 
 | ||||
| 1. Install the Google Cloud Pub/Sub Channel: | ||||
| 
 | ||||
|    ```bash | ||||
|    # This installs both the Channel and the GCP Sources. | ||||
|    kubectl apply -f {{< artifact org="google" repo="knative-gcp" file="cloud-run-events.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| To learn more about the Google Cloud Pub/Sub Channel, try | ||||
| [our sample](https://github.com/google/knative-gcp/blob/master/docs/examples/channel/README.md) | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="In-Memory (standalone)" %}} | ||||
| 
 | ||||
| The following command installs an implementation of Channel that runs in-memory. | ||||
| Because it is simple and standalone, this implementation is useful for getting | ||||
| started, but it is unsuitable for production use cases. | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="eventing" file="in-memory-channel.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="NATS Channel" %}} | ||||
| 
 | ||||
| 1. First, [install NATS Streaming for | ||||
|    Kubernetes](https://github.com/knative-sandbox/eventing-natss/tree/main/config). | ||||
| 
 | ||||
| 1. Then install the NATS Streaming Channel: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-natss" file="300-natss-channel.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| <!-- TODO(https://github.com/knative/docs/issues/2153): Add more Channels here --> | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| ### Optional: Installing a Broker (Eventing) layer | ||||
| 
 | ||||
| To install a Broker (Eventing) layer: | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
|    {{< tabs name="eventing_brokers" default="MT-Channel-based" >}} | ||||
|    {{% tab name="Apache Kafka Broker" %}} | ||||
| 
 | ||||
| The following commands install the Apache Kafka Broker, which by default runs | ||||
| event routing in the `knative-eventing` system namespace. | ||||
| 
 | ||||
| 1. Install the Kafka controller by entering the following command: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-controller.yaml" >}} | ||||
|     ``` | ||||
| 
 | ||||
| 1. Install the Kafka Broker data plane by entering the following command: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-broker.yaml" >}} | ||||
|     ``` | ||||
| 
 | ||||
| For more information, see the [Kafka Broker](./../eventing/broker/kafka-broker.md) documentation. | ||||
| {{< /tab >}} | ||||
| 
 | ||||
|    {{% tab name="MT-Channel-based" %}} | ||||
| 
 | ||||
| The following command installs an implementation of Broker that uses | ||||
| Channels and runs event routing components in a system namespace, providing a | ||||
| smaller and simpler installation: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="eventing" file="mt-channel-broker.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To customize which broker channel implementation is used, update the following | ||||
| ConfigMap to specify which configurations are used for which namespaces: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: config-br-defaults | ||||
|   namespace: knative-eventing | ||||
| data: | ||||
|   default-br-config: | | ||||
|     # This is the cluster-wide default broker channel. | ||||
|     clusterDefault: | ||||
|       brokerClass: MTChannelBasedBroker | ||||
|       apiVersion: v1 | ||||
|       kind: ConfigMap | ||||
|       name: imc-channel | ||||
|       namespace: knative-eventing | ||||
|     # This allows you to specify different defaults per namespace. | ||||
|     # In this example, the "some-namespace" namespace uses the Kafka | ||||
|     # channel ConfigMap by default (example only; you must also | ||||
|     # install Kafka to make use of this). | ||||
|     namespaceDefaults: | ||||
|       some-namespace: | ||||
|         brokerClass: MTChannelBasedBroker | ||||
|         apiVersion: v1 | ||||
|         kind: ConfigMap | ||||
|         name: kafka-channel | ||||
|         namespace: knative-eventing | ||||
| ``` | ||||
| 
 | ||||
| The referenced `imc-channel` and `kafka-channel` example ConfigMaps would look | ||||
| like: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: imc-channel | ||||
|   namespace: knative-eventing | ||||
| data: | ||||
|   channelTemplateSpec: | | ||||
|     apiVersion: messaging.knative.dev/v1 | ||||
|     kind: InMemoryChannel | ||||
| --- | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: kafka-channel | ||||
|   namespace: knative-eventing | ||||
| data: | ||||
|   channelTemplateSpec: | | ||||
|     apiVersion: messaging.knative.dev/v1alpha1 | ||||
|     kind: KafkaChannel | ||||
|     spec: | ||||
|       numPartitions: 3 | ||||
|       replicationFactor: 1 | ||||
| ``` | ||||
| 
 | ||||
| **NOTE:** To use the KafkaChannel, ensure that it is installed on the cluster, as described above. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| ### Optional: Install Eventing extensions | ||||
| 
 | ||||
| To add extra features to your Knative Eventing installation, you can install extensions | ||||
| by applying YAML files using the `kubectl` CLI. | ||||
| 
 | ||||
| For information about the YAML files in the Knative Eventing release, see | ||||
[Installation files](./installation-files.md#knative-eventing-installation-files).
| 
 | ||||
| Follow the steps for any Eventing extensions you want to install: | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
| {{< tabs name="eventing_extensions" >}} | ||||
| 
 | ||||
| {{% tab name="Apache Kafka Sink" %}} | ||||
| 
 | ||||
| 1. Install the Kafka controller: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-controller.yaml" >}} | ||||
|     ``` | ||||
| 
 | ||||
| 1. Install the Kafka Sink data plane: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-sink.yaml" >}} | ||||
|     ``` | ||||
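With the data plane installed, events can be routed to a Kafka topic by declaring a `KafkaSink` resource. The following is a sketch only; the topic and bootstrap server address are placeholders, and the full specification is in the Kafka Sink documentation linked below:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-sink
  namespace: default
spec:
  # Kafka topic that receives the events (placeholder name).
  topic: mytopic
  # Address of the Kafka cluster (placeholder; matches a Strimzi-style install).
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
```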
| 
 | ||||
| For more information, see the [Kafka Sink](./../eventing/sink/kafka-sink.md) documentation. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Sugar Controller" %}} | ||||
| 
 | ||||
| <!-- Unclear when this feature came in --> | ||||
| 
 | ||||
| The following command installs the Eventing Sugar Controller: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="eventing" file="eventing-sugar-controller.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
The Knative Eventing Sugar Controller reacts to special labels and
annotations and produces Eventing resources. For example:

- When a Namespace is labeled with `eventing.knative.dev/injection=enabled`, the
  controller creates a default Broker in that namespace.
- When a Trigger is annotated with `eventing.knative.dev/injection=enabled`, the
  controller creates a Broker, named as specified by that Trigger, in the
  Trigger's namespace.
| 
 | ||||
| The following command enables the default Broker on a namespace (here | ||||
| `default`): | ||||
| 
 | ||||
| ```bash | ||||
| kubectl label namespace default eventing.knative.dev/injection=enabled | ||||
| ``` | ||||
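The Trigger annotation case can be sketched as follows. This example is illustrative only; the Trigger, Broker, and subscriber Service names are placeholders:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  namespace: default
  annotations:
    # Asks the sugar controller to create the referenced Broker.
    eventing.knative.dev/injection: "enabled"
spec:
  # The Broker the controller creates in this namespace.
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```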
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
{{% tab name="GitHub Source" %}}
| 
 | ||||
The following command installs the single-tenant GitHub source:
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-github" file="github.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| The single-tenant GitHub source creates one Knative service per GitHub source. | ||||
| 
 | ||||
| The following command installs the multi-tenant GitHub source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-github" file="mt-github.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| The multi-tenant GitHub source creates only one Knative service handling all | ||||
| GitHub sources in the cluster. This source does not support logging or tracing | ||||
| configuration yet. | ||||
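As an illustration of what either installation enables, a GitHub source resource might look like the following. The repository, secret names, and sink are placeholders based on the GitHub source sample linked below:

```yaml
apiVersion: sources.knative.dev/v1alpha1
kind: GitHubSource
metadata:
  name: githubsourcesample
spec:
  # GitHub events to subscribe to (placeholder selection).
  eventTypes:
    - pull_request
  # "owner/repository" of the GitHub repository to watch (placeholder).
  ownerAndRepository: my-user/my-repo
  # Secrets holding the GitHub access and webhook tokens (placeholder names).
  accessToken:
    secretKeyRef:
      name: githubsecret
      key: accessToken
  secretToken:
    secretKeyRef:
      name: githubsecret
      key: secretToken
  # Destination for the events (placeholder Service).
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: github-message-dumper
```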
| 
 | ||||
To learn more about the GitHub source, try
[our sample](../eventing/samples/github-source/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Apache Camel-K Source" %}} | ||||
| 
 | ||||
| The following command installs the Apache Camel-K Source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-camel" file="camel.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the Apache Camel-K source, try | ||||
[our sample](../eventing/samples/apache-camel-source/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Apache Kafka Source" %}} | ||||
| 
 | ||||
| The following command installs the Apache Kafka Source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka" file="source.yaml" >}} | ||||
| ``` | ||||
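Once installed, a `KafkaSource` resource pulls events from Kafka topics and delivers them to a sink. The following is a sketch; the bootstrap server address, topic, and sink Service are placeholders based on the Kafka source sample linked below:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  # Kafka consumer group used by this source (placeholder).
  consumerGroup: knative-group
  # Address of the Kafka cluster (placeholder; matches a Strimzi-style install).
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  # Topics to consume from (placeholder).
  topics:
    - knative-demo-topic
  # Destination for the events (placeholder Service).
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```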
| 
 | ||||
| To learn more about the Apache Kafka source, try | ||||
[our sample](../eventing/samples/kafka/source/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="GCP Sources" %}} | ||||
| 
 | ||||
| The following command installs the GCP Sources: | ||||
| 
 | ||||
| ```bash | ||||
| # This installs both the Sources and the Channel. | ||||
| kubectl apply -f {{< artifact org="google" repo="knative-gcp" file="cloud-run-events.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the Cloud Pub/Sub source, try | ||||
| [our sample](../eventing/samples/cloud-pubsub-source/README.md). | ||||
| 
 | ||||
| To learn more about the Cloud Storage source, try | ||||
| [our sample](../eventing/samples/cloud-storage-source/README.md). | ||||
| 
 | ||||
| To learn more about the Cloud Scheduler source, try | ||||
| [our sample](../eventing/samples/cloud-scheduler-source/README.md). | ||||
| 
 | ||||
| To learn more about the Cloud Audit Logs source, try | ||||
| [our sample](../eventing/samples/cloud-audit-logs-source/README.md). | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Apache CouchDB Source" %}} | ||||
| 
 | ||||
| The following command installs the Apache CouchDB Source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-couchdb" file="couchdb.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the Apache CouchDB source, read the [documentation](https://github.com/knative-sandbox/eventing-couchdb/blob/main/source/README.md). | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="VMware Sources and Bindings" %}} | ||||
| 
 | ||||
| The following command installs the VMware Sources and Bindings: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="vmware-tanzu" repo="sources-for-knative" file="release.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the VMware sources and bindings, try | ||||
| [our samples](https://github.com/vmware-tanzu/sources-for-knative/tree/master/samples/README.md). | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
|  | @ -0,0 +1,223 @@ | |||
| --- | ||||
| title: "Installing Knative Eventing using YAML files" | ||||
| linkTitle: "Install Eventing using YAML" | ||||
| weight: 03 | ||||
| type: "docs" | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
| 
 | ||||
| This topic describes how to install Knative Eventing by applying YAML files using the `kubectl` CLI. | ||||
| 
 | ||||
| 
 | ||||
| ## Prerequisites | ||||
| 
 | ||||
| Before installation, you must meet the prerequisites. | ||||
| See [Knative Prerequisites](./prerequisites.md). | ||||
| 
 | ||||
| 
 | ||||
| ## Install the Eventing component | ||||
| 
 | ||||
| To install the Eventing component: | ||||
| 
 | ||||
| 1. Install the required custom resource definitions (CRDs): | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="eventing" file="eventing-crds.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. Install the core components of Eventing: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="eventing" file="eventing-core.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 
 | ||||
| ## Verify the installation | ||||
| 
 | ||||
| Monitor the Knative components until all of the components show a `STATUS` of `Running`: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl get pods --namespace knative-eventing | ||||
| ``` | ||||
| 
 | ||||
| 
 | ||||
| ## Optional: Install a default channel (messaging) layer | ||||
| 
 | ||||
| The tabs below expand to show instructions for installing a default channel layer. | ||||
| Follow the procedure for the channel of your choice: | ||||
| 
 | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
|    {{< tabs name="eventing_channels" default="In-Memory (standalone)" >}} | ||||
|    {{% tab name="Apache Kafka Channel" %}} | ||||
| 
 | ||||
| 1. First, | ||||
|    [Install Apache Kafka for Kubernetes](../eventing/samples/kafka/README.md) | ||||
| 
 | ||||
| 1. Then install the Apache Kafka channel: | ||||
| 
 | ||||
|    ```bash | ||||
|    curl -L "{{< artifact org="knative-sandbox" repo="eventing-kafka" file="channel-consolidated.yaml" >}}" \ | ||||
|     | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \ | ||||
|     | kubectl apply -f - | ||||
|    ``` | ||||
| 
 | ||||
| To learn more about the Apache Kafka channel, try | ||||
[our sample](../eventing/samples/kafka/channel/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Google Cloud Pub/Sub Channel" %}} | ||||
| 
 | ||||
| 1. Install the Google Cloud Pub/Sub channel: | ||||
| 
 | ||||
|    ```bash | ||||
|    # This installs both the Channel and the GCP Sources. | ||||
|    kubectl apply -f {{< artifact org="google" repo="knative-gcp" file="cloud-run-events.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| To learn more about the Google Cloud Pub/Sub channel, try | ||||
[our sample](https://github.com/google/knative-gcp/blob/master/docs/examples/channel/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="In-Memory (standalone)" %}} | ||||
| 
 | ||||
The following command installs an in-memory implementation of channel. This
implementation is simple and standalone, but it is unsuitable for production
use cases.
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="eventing" file="in-memory-channel.yaml" >}} | ||||
| ``` | ||||
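After the in-memory channel layer is installed, a channel can be created directly. A minimal sketch (the channel name and namespace are placeholders):

```yaml
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  # Placeholder name; any valid resource name works.
  name: my-channel
  namespace: default
```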
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="NATS Channel" %}} | ||||
| 
 | ||||
| 1. First, [Install NATS Streaming for | ||||
|    Kubernetes](https://github.com/knative-sandbox/eventing-natss/tree/main/config) | ||||
| 
 | ||||
| 1. Then install the NATS Streaming channel: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-natss" file="300-natss-channel.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| <!-- TODO(https://github.com/knative/docs/issues/2153): Add more Channels here --> | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
## Optional: Install a broker layer
| 
 | ||||
| The tabs below expand to show instructions for installing the broker layer. | ||||
| Follow the procedure for the broker of your choice: | ||||
| 
 | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
|    {{< tabs name="eventing_brokers" default="MT-Channel-based" >}} | ||||
|    {{% tab name="Apache Kafka Broker" %}} | ||||
| 
 | ||||
The following commands install the Apache Kafka broker and run its event
routing in a system namespace, `knative-eventing`, by default.
| 
 | ||||
| 1. Install the Kafka controller by entering the following command: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-controller.yaml" >}} | ||||
|     ``` | ||||
| 
 | ||||
| 1. Install the Kafka broker data plane by entering the following command: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-broker.yaml" >}} | ||||
|     ``` | ||||
| 
 | ||||
| For more information, see the [Kafka broker](./../eventing/broker/kafka-broker.md) documentation. | ||||
| {{< /tab >}} | ||||
| 
 | ||||
|    {{% tab name="MT-Channel-based" %}} | ||||
| 
 | ||||
The following command installs a channel-based implementation of broker that
runs its event routing components in a system namespace, providing a smaller
and simpler installation.
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="eventing" file="mt-channel-broker.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To customize which broker channel implementation is used, update the following | ||||
| ConfigMap to specify which configurations are used for which namespaces: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: config-br-defaults | ||||
|   namespace: knative-eventing | ||||
| data: | ||||
|   default-br-config: | | ||||
|     # This is the cluster-wide default broker channel. | ||||
|     clusterDefault: | ||||
|       brokerClass: MTChannelBasedBroker | ||||
|       apiVersion: v1 | ||||
|       kind: ConfigMap | ||||
|       name: imc-channel | ||||
|       namespace: knative-eventing | ||||
    # This allows you to specify different defaults per namespace;
    # in this case the "some-namespace" namespace uses the Kafka
    # channel ConfigMap by default. (This is only an example; you must
    # also install the Kafka channel to make use of it.)
|     namespaceDefaults: | ||||
|       some-namespace: | ||||
|         brokerClass: MTChannelBasedBroker | ||||
|         apiVersion: v1 | ||||
|         kind: ConfigMap | ||||
|         name: kafka-channel | ||||
|         namespace: knative-eventing | ||||
| ``` | ||||
| 
 | ||||
| The referenced `imc-channel` and `kafka-channel` example ConfigMaps would look | ||||
| like: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: imc-channel | ||||
|   namespace: knative-eventing | ||||
| data: | ||||
|   channelTemplateSpec: | | ||||
|     apiVersion: messaging.knative.dev/v1 | ||||
|     kind: InMemoryChannel | ||||
| --- | ||||
| apiVersion: v1 | ||||
| kind: ConfigMap | ||||
| metadata: | ||||
|   name: kafka-channel | ||||
|   namespace: knative-eventing | ||||
| data: | ||||
|   channelTemplateSpec: | | ||||
|     apiVersion: messaging.knative.dev/v1alpha1 | ||||
|     kind: KafkaChannel | ||||
|     spec: | ||||
|       numPartitions: 3 | ||||
|       replicationFactor: 1 | ||||
| ``` | ||||
| 
 | ||||
**NOTE:** To use the KafkaChannel, ensure that it is installed on the cluster, as described in the optional channel installation section of this topic.
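For example, with the `namespaceDefaults` configuration shown above, a Broker created in `some-namespace` picks up the Kafka channel ConfigMap without any per-Broker configuration. A minimal sketch (the namespace name follows the example above):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  # No channel configuration is specified here; the defaults
  # for "some-namespace" from config-br-defaults apply.
  name: default
  namespace: some-namespace
```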
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
| ## Next steps | ||||
| 
 | ||||
| After installing Knative Eventing: | ||||
| 
 | ||||
| - If you want to add extra features to your installation, see [Installing optional extensions](./install-extensions.md). | ||||
- If you want to install the Knative Serving component, see [Installing Serving using YAML files](./install-serving-with-yaml.md).
| - Install the [Knative CLI](./install-kn) to use `kn` commands. | ||||
|  | @ -0,0 +1,278 @@ | |||
| --- | ||||
| title: "Installing optional extensions" | ||||
| linkTitle: "Install optional extensions" | ||||
| weight: 04 | ||||
| type: "docs" | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
| 
 | ||||
| To add extra features to your Knative Serving or Eventing installation, you can install extensions | ||||
| by applying YAML files using the `kubectl` CLI. | ||||
| 
 | ||||
| For information about the YAML files in the Knative Serving and Eventing releases, see | ||||
| [Installation files](./installation-files.md). | ||||
| 
 | ||||
| 
 | ||||
## Prerequisites
| 
 | ||||
| Before you install any optional extensions, you must install Knative Serving or Eventing. | ||||
| See [Installing Serving using YAML files](./install-serving-with-yaml.md) | ||||
and [Installing Eventing using YAML files](./install-eventing-with-yaml.md).
| 
 | ||||
| 
 | ||||
| ## Install optional Serving extensions | ||||
| 
 | ||||
| The tabs below expand to show instructions for installing each Serving extension. | ||||
| 
 | ||||
| {{< tabs name="serving_extensions" >}} | ||||
| 
 | ||||
| {{% tab name="HPA autoscaling" %}} | ||||
| 
 | ||||
Knative also supports the use of the Kubernetes Horizontal Pod Autoscaler (HPA)
for driving autoscaling decisions. The following command installs the
components needed to support HPA-class autoscaling:
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-hpa.yaml" >}} | ||||
| ``` | ||||
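After installation, a Knative Service can opt into HPA-class autoscaling through annotations on its revision template. The following is a sketch; the Service name, image, and CPU target are placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Use the HPA autoscaler class instead of the default KPA class.
        autoscaling.knative.dev/class: "hpa.autoscaling.knative.dev"
        # Scale on CPU; target is a utilization percentage for the cpu metric.
        autoscaling.knative.dev/metric: "cpu"
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```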
| 
 | ||||
| <!-- TODO(https://github.com/knative/docs/issues/2152): Link to a more in-depth guide on HPA-class autoscaling --> | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="TLS with cert-manager" %}} | ||||
| 
 | ||||
Knative supports automatically provisioning TLS certificates through
[cert-manager](https://cert-manager.io/docs/). The following commands install
the components needed to provision TLS certificates with cert-manager.
| 
 | ||||
| 1. First, install | ||||
|    [cert-manager version `0.12.0` or higher](../serving/installing-cert-manager.md) | ||||
| 
 | ||||
| 2. Next, install the component that integrates Knative with cert-manager: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-certmanager" file="release.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
3. Now configure Knative to
   [automatically configure TLS certificates](../serving/using-auto-tls.md).

{{< /tab >}}
| 
 | ||||
| {{% tab name="TLS via HTTP01" %}} | ||||
| 
 | ||||
Knative supports automatically provisioning TLS certificates using Let's Encrypt
HTTP01 challenges. The following commands install the components needed to
support HTTP01 challenges.
| 
 | ||||
| 1. First, install the `net-http01` controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-http01" file="release.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
2. Next, configure the `certificate.class` property to use this certificate type:
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"certificate.class":"net-http01.certificate.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
3. Lastly, enable auto-TLS:
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"autoTLS":"Enabled"}}' | ||||
|    ``` | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="TLS wildcard support" %}} | ||||
| 
 | ||||
If you are using a certificate implementation that supports provisioning
wildcard certificates (for example, cert-manager with a DNS01 issuer), the most
efficient way to provision certificates is with the namespace wildcard
certificate controller. The following command installs the components needed
to provision wildcard certificates in each namespace:
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-nscert.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
> Note: this does not work with HTTP01 challenges, whether provisioned through
> cert-manager or the net-http01 option.
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="DomainMapping CRD" %}} | ||||
| 
 | ||||
The `DomainMapping` CRD allows users to map a domain name that they own to a
specific Knative Service. The following commands install the CRD and the
DomainMapping controller:
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-domainmapping-crds.yaml" >}} | ||||
| kubectl wait --for=condition=Established --all crd | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-domainmapping.yaml" >}} | ||||
| ``` | ||||
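Once the CRD and controller are ready, a mapping can be declared. The domain and Service names below are placeholders, and the `v1alpha1` API version reflects the feature's alpha status:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: DomainMapping
metadata:
  # The name of the DomainMapping is the domain name to map (placeholder).
  name: api.example.org
  namespace: default
spec:
  # The Knative Service (in the same namespace) to map the domain to.
  ref:
    name: my-service
    kind: Service
    apiVersion: serving.knative.dev/v1
```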
| 
 | ||||
{{< /tab >}}

{{< /tabs >}}
| 
 | ||||
| 
 | ||||
| ## Install optional Eventing extensions | ||||
| 
 | ||||
| The tabs below expand to show instructions for installing each Eventing extension. | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
| {{< tabs name="eventing_extensions" >}} | ||||
| 
 | ||||
| {{% tab name="Apache Kafka Sink" %}} | ||||
| 
 | ||||
| 1. Install the Kafka controller: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-controller.yaml" >}} | ||||
|     ``` | ||||
| 
 | ||||
| 1. Install the Kafka Sink data plane: | ||||
| 
 | ||||
|     ```bash | ||||
|     kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-sink.yaml" >}} | ||||
|     ``` | ||||
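With the data plane installed, events can be routed to a Kafka topic by declaring a `KafkaSink` resource. The following is a sketch only; the topic and bootstrap server address are placeholders, and the full specification is in the Kafka Sink documentation linked below:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-sink
  namespace: default
spec:
  # Kafka topic that receives the events (placeholder name).
  topic: mytopic
  # Address of the Kafka cluster (placeholder; matches a Strimzi-style install).
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
```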
| 
 | ||||
| For more information, see the [Kafka Sink](./../eventing/sink/kafka-sink.md) documentation. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Sugar Controller" %}} | ||||
| 
 | ||||
| <!-- Unclear when this feature came in --> | ||||
| 
 | ||||
| The following command installs the Eventing Sugar Controller: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="eventing" file="eventing-sugar-controller.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
The Knative Eventing Sugar Controller reacts to special labels and
annotations and produces Eventing resources. For example:

- When a Namespace is labeled with `eventing.knative.dev/injection=enabled`, the
  controller creates a default Broker in that namespace.
- When a Trigger is annotated with `eventing.knative.dev/injection=enabled`, the
  controller creates a Broker, named as specified by that Trigger, in the
  Trigger's namespace.
| 
 | ||||
| The following command enables the default Broker on a namespace (here | ||||
| `default`): | ||||
| 
 | ||||
| ```bash | ||||
| kubectl label namespace default eventing.knative.dev/injection=enabled | ||||
| ``` | ||||
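The Trigger annotation case can be sketched as follows. This example is illustrative only; the Trigger, Broker, and subscriber Service names are placeholders:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  namespace: default
  annotations:
    # Asks the sugar controller to create the referenced Broker.
    eventing.knative.dev/injection: "enabled"
spec:
  # The Broker the controller creates in this namespace.
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```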
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
{{% tab name="GitHub Source" %}}
| 
 | ||||
The following command installs the single-tenant GitHub source:
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-github" file="github.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| The single-tenant GitHub source creates one Knative service per GitHub source. | ||||
| 
 | ||||
| The following command installs the multi-tenant GitHub source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-github" file="mt-github.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| The multi-tenant GitHub source creates only one Knative service handling all | ||||
| GitHub sources in the cluster. This source does not support logging or tracing | ||||
| configuration yet. | ||||
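As an illustration of what either installation enables, a GitHub source resource might look like the following. The repository, secret names, and sink are placeholders based on the GitHub source sample linked below:

```yaml
apiVersion: sources.knative.dev/v1alpha1
kind: GitHubSource
metadata:
  name: githubsourcesample
spec:
  # GitHub events to subscribe to (placeholder selection).
  eventTypes:
    - pull_request
  # "owner/repository" of the GitHub repository to watch (placeholder).
  ownerAndRepository: my-user/my-repo
  # Secrets holding the GitHub access and webhook tokens (placeholder names).
  accessToken:
    secretKeyRef:
      name: githubsecret
      key: accessToken
  secretToken:
    secretKeyRef:
      name: githubsecret
      key: secretToken
  # Destination for the events (placeholder Service).
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: github-message-dumper
```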
| 
 | ||||
To learn more about the GitHub source, try
[our sample](../eventing/samples/github-source/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Apache Camel-K Source" %}} | ||||
| 
 | ||||
| The following command installs the Apache Camel-K Source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-camel" file="camel.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the Apache Camel-K source, try | ||||
[our sample](../eventing/samples/apache-camel-source/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Apache Kafka Source" %}} | ||||
| 
 | ||||
| The following command installs the Apache Kafka Source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-kafka" file="source.yaml" >}} | ||||
| ``` | ||||
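Once installed, a `KafkaSource` resource pulls events from Kafka topics and delivers them to a sink. The following is a sketch; the bootstrap server address, topic, and sink Service are placeholders based on the Kafka source sample linked below:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  # Kafka consumer group used by this source (placeholder).
  consumerGroup: knative-group
  # Address of the Kafka cluster (placeholder; matches a Strimzi-style install).
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  # Topics to consume from (placeholder).
  topics:
    - knative-demo-topic
  # Destination for the events (placeholder Service).
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```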
| 
 | ||||
| To learn more about the Apache Kafka source, try | ||||
[our sample](../eventing/samples/kafka/source/README.md).
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="GCP Sources" %}} | ||||
| 
 | ||||
| The following command installs the GCP Sources: | ||||
| 
 | ||||
| ```bash | ||||
| # This installs both the Sources and the Channel. | ||||
| kubectl apply -f {{< artifact org="google" repo="knative-gcp" file="cloud-run-events.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the Cloud Pub/Sub source, try | ||||
| [our sample](../eventing/samples/cloud-pubsub-source/README.md). | ||||
| 
 | ||||
| To learn more about the Cloud Storage source, try | ||||
| [our sample](../eventing/samples/cloud-storage-source/README.md). | ||||
| 
 | ||||
| To learn more about the Cloud Scheduler source, try | ||||
| [our sample](../eventing/samples/cloud-scheduler-source/README.md). | ||||
| 
 | ||||
| To learn more about the Cloud Audit Logs source, try | ||||
| [our sample](../eventing/samples/cloud-audit-logs-source/README.md). | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Apache CouchDB Source" %}} | ||||
| 
 | ||||
| The following command installs the Apache CouchDB Source: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="knative-sandbox" repo="eventing-couchdb" file="couchdb.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the Apache CouchDB source, read the [documentation](https://github.com/knative-sandbox/eventing-couchdb/blob/main/source/README.md). | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="VMware Sources and Bindings" %}} | ||||
| 
 | ||||
| The following command installs the VMware Sources and Bindings: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact org="vmware-tanzu" repo="sources-for-knative" file="release.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| To learn more about the VMware sources and bindings, try | ||||
| [our samples](https://github.com/vmware-tanzu/sources-for-knative/tree/master/samples/README.md). | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
|  | @ -0,0 +1,378 @@ | |||
| --- | ||||
| title: "Installing Knative Serving using YAML files" | ||||
| linkTitle: "Install Serving using YAML" | ||||
| weight: 02 | ||||
| type: "docs" | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
| 
 | ||||
| This topic describes how to install Knative Serving by applying YAML files using the `kubectl` CLI. | ||||
| 
 | ||||
| 
 | ||||
| ## Prerequisites | ||||
| 
 | ||||
| Before installation, you must meet the prerequisites. | ||||
| See [Knative Prerequisites](./prerequisites.md). | ||||
| 
 | ||||
| 
 | ||||
| ## Install the Serving component | ||||
| 
 | ||||
To install the Serving component:
| 
 | ||||
1. Install the required custom resource definitions (CRDs):
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="serving" file="serving-crds.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. Install the core components of Serving: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="serving" file="serving-core.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 
 | ||||
| ## Install a networking layer | ||||
| 
 | ||||
| The tabs below expand to show instructions for installing a networking layer. | ||||
| Follow the procedure for the networking layer of your choice: | ||||
| 
 | ||||
| <!-- TODO: Link to document/diagram describing what is a networking layer.  --> | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
|    {{< tabs name="serving_networking" default="Kourier" >}} | ||||
|    {{% tab name="Ambassador" %}} | ||||
| 
 | ||||
| The following commands install Ambassador and enable its Knative integration. | ||||
| 
 | ||||
| 1. Create a namespace to install Ambassador in: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl create namespace ambassador | ||||
|    ``` | ||||
| 
 | ||||
| 1. Install Ambassador: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply --namespace ambassador \ | ||||
|      -f https://getambassador.io/yaml/ambassador/ambassador-crds.yaml \ | ||||
|      -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml \ | ||||
|      -f https://getambassador.io/yaml/ambassador/ambassador-service.yaml | ||||
|    ``` | ||||
| 
 | ||||
| 1. Give Ambassador the required permissions: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch clusterrolebinding ambassador -p '{"subjects":[{"kind": "ServiceAccount", "name": "ambassador", "namespace": "ambassador"}]}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Enable Knative support in Ambassador: | ||||
| 
 | ||||
|    ```bash | ||||
   kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Ambassador by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"ambassador.ingress.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace ambassador get service ambassador | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Contour" %}} | ||||
| 
 | ||||
| The following commands install Contour and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install a properly configured Contour: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-contour" file="contour.yaml" >}} | ||||
|    ``` | ||||
| <!-- TODO(https://github.com/knative-sandbox/net-contour/issues/11): We need a guide on how to use/modify a pre-existing install. --> | ||||
| 
 | ||||
| 1. Install the Knative Contour controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-contour" file="net-contour.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Contour by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace contour-external get service envoy | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Gloo" %}} | ||||
| 
 | ||||
| _For a detailed guide on Gloo integration, see | ||||
| [Installing Gloo for Knative](https://docs.solo.io/gloo/latest/installation/knative/) | ||||
| in the Gloo documentation._ | ||||
| 
 | ||||
| The following commands install Gloo and enable its Knative integration. | ||||
| 
 | ||||
1. Make sure `glooctl` is installed (version 1.3.x or higher recommended):
| 
 | ||||
|    ```bash | ||||
|    glooctl version | ||||
|    ``` | ||||
| 
 | ||||
|    If it is not installed, you can install the latest version using: | ||||
| 
 | ||||
|    ```bash | ||||
|    curl -sL https://run.solo.io/gloo/install | sh | ||||
|    export PATH=$HOME/.gloo/bin:$PATH | ||||
|    ``` | ||||
| 
 | ||||
   Alternatively, follow the
   [Gloo CLI install instructions](https://docs.solo.io/gloo/latest/installation/knative/#install-command-line-tool-cli).
| 
 | ||||
| 1. Install Gloo and the Knative integration: | ||||
| 
 | ||||
|    ```bash | ||||
|    glooctl install knative --install-knative=false | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    glooctl proxy url --name knative-external-proxy | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Istio" %}} | ||||
| 
 | ||||
| The following commands install Istio and enable its Knative integration. | ||||
| 
 | ||||
1. Install a properly configured Istio ([Advanced installation](./installing-istio.md)):
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-istio" file="istio.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 
 | ||||
| 1. Install the Knative Istio controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-istio" file="net-istio.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace istio-system get service istio-ingressgateway | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Kong" %}} | ||||
| 
 | ||||
| The following commands install Kong and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install Kong Ingress Controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/0.9.x/deploy/single/all-in-one-dbless.yaml | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Kong by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"kong"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace kong get service kong-proxy | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Kourier" %}} | ||||
| 
 | ||||
| The following commands install Kourier and enable its Knative integration. | ||||
| 
 | ||||
| 1. Install the Knative Kourier controller: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl apply -f {{< artifact repo="net-kourier" file="kourier.yaml" >}} | ||||
|    ``` | ||||
| 
 | ||||
| 1. To configure Knative Serving to use Kourier by default: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl patch configmap/config-network \ | ||||
|      --namespace knative-serving \ | ||||
|      --type merge \ | ||||
|      --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}' | ||||
|    ``` | ||||
| 
 | ||||
| 1. Fetch the External IP or CNAME: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl --namespace kourier-system get service kourier | ||||
|    ``` | ||||
| 
 | ||||
|    Save this for configuring DNS below. | ||||
| 
 | ||||
| {{< /tab >}} {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
| ## Verify the installation | ||||
| 
 | ||||
| Monitor the Knative components until all of the components show a `STATUS` of `Running` or `Completed`: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl get pods --namespace knative-serving | ||||
| ``` | ||||
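If you want to script this check, the pod list can be filtered with standard shell tools. The following is only a sketch: the pod names are hypothetical sample output, and in practice you would pipe in `kubectl get pods --namespace knative-serving --no-headers` instead of the here-document:

```bash
#!/bin/sh
# Filter out pods whose STATUS column is already Running or Completed.
# The here-document stands in for real `kubectl get pods` output.
not_ready=$(awk '$3 != "Running" && $3 != "Completed"' <<'EOF'
activator-68cf6b7c7b-abcde    1/1   Running     0   2m
autoscaler-5c648f7465-fghij   1/1   Running     0   2m
controller-57c545cbfb-klmno   1/1   Running     0   2m
webhook-6fddd86c9d-pqrst      1/1   Running     0   2m
EOF
)
if [ -z "$not_ready" ]; then
  echo "All Knative Serving pods are ready."
else
  printf 'Still waiting on:\n%s\n' "$not_ready"
fi
```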
| 
 | ||||
| 
 | ||||
| ## Configure DNS | ||||
| 
 | ||||
| You can configure DNS so that you do not need to run curl commands with a host header. | ||||
| 
 | ||||
| The tabs below expand to show instructions for configuring DNS. | ||||
| Follow the procedure for the DNS of your choice: | ||||
| 
 | ||||
| <!-- This indentation is important for things to render properly. --> | ||||
| 
 | ||||
|    {{< tabs name="serving_dns" default="Magic DNS (xip.io)" >}} | ||||
|    {{% tab name="Magic DNS (xip.io)" %}} | ||||
| 
 | ||||
| Knative provides a simple Kubernetes Job called "default domain" that | ||||
| configures Knative Serving to use <a href="http://xip.io">xip.io</a> as the | ||||
| default DNS suffix. See the caveat below before using this method. | ||||
| 
 | ||||
| ```bash | ||||
| kubectl apply -f {{< artifact repo="serving" file="serving-default-domain.yaml" >}} | ||||
| ``` | ||||
| 
 | ||||
| **Caveat**: This will only work if the cluster LoadBalancer service exposes an | ||||
| IPv4 address or hostname, so it will not work with IPv6 clusters or local setups | ||||
| like Minikube. For these, see "Real DNS" or "Temporary DNS". | ||||
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
| {{% tab name="Real DNS" %}} | ||||
| 
 | ||||
| To configure DNS for Knative, take the External IP | ||||
| or CNAME from setting up networking, and configure it with your DNS provider as | ||||
| follows: | ||||
| 
 | ||||
| - If the networking layer produced an External IP address, then configure a | ||||
|   wildcard `A` record for the domain: | ||||
| 
 | ||||
|   ``` | ||||
|   # Here knative.example.com is the domain suffix for your cluster | ||||
|   *.knative.example.com == A 35.233.41.212 | ||||
|   ``` | ||||
| 
 | ||||
| - If the networking layer produced a CNAME, then configure a CNAME record for | ||||
|   the domain: | ||||
| 
 | ||||
|   ``` | ||||
|   # Here knative.example.com is the domain suffix for your cluster | ||||
|   *.knative.example.com == CNAME a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com | ||||
|   ``` | ||||
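Expressed as a BIND-style zone file fragment, the records look like the following. The domain suffix, TTL, and addresses are placeholders taken from the examples above; substitute your own values, and use either the `A` or the `CNAME` form, not both:

```
; Wildcard A record for an External IP address
*.knative.example.com.   300   IN   A      35.233.41.212

; Or, for a networking layer that produced a CNAME:
; *.knative.example.com. 300   IN   CNAME  a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com.
```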
| 
 | ||||
| Once your DNS provider has been configured, direct Knative to use that domain: | ||||
| 
 | ||||
| ```bash | ||||
| # Replace knative.example.com with your domain suffix | ||||
| kubectl patch configmap/config-domain \ | ||||
|   --namespace knative-serving \ | ||||
|   --type merge \ | ||||
|   --patch '{"data":{"knative.example.com":""}}' | ||||
| ``` | ||||
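If you apply this change to several clusters, it can help to build the patch payload from a variable instead of editing the literal each time. This is only a sketch: `DOMAIN` is a placeholder for your real domain suffix, and the `kubectl` invocation is left commented out:

```bash
#!/bin/sh
# Build the config-domain patch for an arbitrary domain suffix.
DOMAIN="knative.example.com"   # placeholder: replace with your suffix
PATCH=$(printf '{"data":{"%s":""}}' "$DOMAIN")
echo "$PATCH"
# Then apply it with:
# kubectl patch configmap/config-domain \
#   --namespace knative-serving \
#   --type merge \
#   --patch "$PATCH"
```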
| 
 | ||||
| {{< /tab >}} | ||||
| 
 | ||||
|    {{% tab name="Temporary DNS" %}} | ||||
| 
 | ||||
| If you are using `curl` to access the sample applications, or your own | ||||
| Knative app, and cannot use the "Magic DNS (xip.io)" or "Real DNS" methods, | ||||
| you can use a temporary approach. This is useful if you want to evaluate | ||||
| Knative without altering your DNS configuration as the "Real DNS" method | ||||
| requires, or if you cannot use the "Magic DNS" method because, for example, | ||||
| you are running Minikube locally or an IPv6 cluster. | ||||
| 
 | ||||
| To access your application using `curl` using this method: | ||||
| 
 | ||||
| 1. After starting your application, get the URL of your application: | ||||
| 
 | ||||
|    ```bash | ||||
|    kubectl get ksvc | ||||
|    ``` | ||||
| 
 | ||||
|    The output should be similar to: | ||||
| 
 | ||||
|    ```bash | ||||
|    NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON | ||||
|    helloworld-go   http://helloworld-go.default.example.com   helloworld-go-vqjlf   helloworld-go-vqjlf   True | ||||
|    ``` | ||||
| 
 | ||||
| 1. Instruct `curl` to connect to the External IP or CNAME defined by the | ||||
|    networking layer in section 3 above, and use the `-H "Host:"` command-line | ||||
|    option to specify the Knative application's host name. For example, if the | ||||
|    networking layer defines your External IP and port to be | ||||
|    `http://192.168.39.228:32198` and you wish to access the above | ||||
|    `helloworld-go` application, use: | ||||
| 
 | ||||
|    ```bash | ||||
|    curl -H "Host: helloworld-go.default.example.com" http://192.168.39.228:32198 | ||||
|    ``` | ||||
| 
 | ||||
|    In the case of the provided `helloworld-go` sample application, the output | ||||
|    should, using the default configuration, be: | ||||
| 
 | ||||
|    ``` | ||||
|    Hello Go Sample v1! | ||||
|    ``` | ||||
| 
 | ||||
| Refer to the "Real DNS" method for a permanent solution. | ||||
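The Host header can also be derived from the Service URL rather than typed by hand. The sketch below uses a hard-coded sample URL and a placeholder gateway address; in a real cluster you would obtain the URL with `kubectl get ksvc helloworld-go --output jsonpath='{.status.url}'`:

```bash
#!/bin/sh
# Derive the Host header value from a Knative Service URL.
url="http://helloworld-go.default.example.com"   # sample; fetch via kubectl in practice
host="${url#http://}"                            # strip the scheme prefix
gateway="http://192.168.39.228:32198"            # placeholder External IP and port
echo "curl -H \"Host: ${host}\" ${gateway}"
```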
| 
 | ||||
|    {{< /tab >}} {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
| ## Next steps | ||||
| 
 | ||||
| After installing Knative Serving: | ||||
| 
 | ||||
| - If you want to add extra features to your installation, see [Installing optional extensions](./install-extensions.md). | ||||
| - If you want to install the Knative Eventing component, see [Installing Eventing using YAML files](./install-eventing-with-yaml.md). | ||||
| - Install the [Knative CLI](./install-kn) to use `kn` commands. | ||||
|  | @ -12,7 +12,8 @@ The YAML files in the releases include: | |||
| - The custom resource definitions (CRDs) and core components required to install Knative. | ||||
| - Optional components that you can apply to customize your installation. | ||||
| 
 | ||||
| For information about installing these files, see [YAML-based installation](../any-kubernetes-cluster.md). | ||||
| For information about installing these files, see [Installing Serving using YAML files](./install-serving-with-yaml) | ||||
| and [Installing Eventing using YAML files](./install-eventing-with-yaml). | ||||
| 
 | ||||
| ## Knative Serving installation files | ||||
| 
 | ||||
|  |  | |||
|  | @ -1,6 +1,6 @@ | |||
| --- | ||||
| title: "Knative Operator installation" | ||||
| weight: 02 | ||||
| weight: 05 | ||||
| type: "docs" | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
|  |  | |||
|  | @ -0,0 +1,29 @@ | |||
| --- | ||||
| title: "Prerequisites" | ||||
| weight: 01 | ||||
| type: "docs" | ||||
| showlandingtoc: "false" | ||||
| --- | ||||
| 
 | ||||
| Before installing Knative, you must meet the following prerequisites: | ||||
| 
 | ||||
| ## System requirements | ||||
| 
 | ||||
| For prototyping purposes, Knative will work on most local deployments of Kubernetes. | ||||
| For example, you can use a local, one-node cluster that has 2 CPU and 4GB of memory. | ||||
| 
 | ||||
| For production purposes, the following requirements are recommended: | ||||
| - If you have only one node in your cluster, you need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage. | ||||
| - If you have multiple nodes in your cluster, each node needs at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage. | ||||
| <!--TODO: Verify these requirements--> | ||||
| 
 | ||||
| **NOTE:** The system requirements provided are recommendations only. | ||||
| The requirements for your installation may vary, depending on whether you use optional components, such as a networking layer. | ||||
| 
 | ||||
| ## Prerequisites | ||||
| 
 | ||||
| Before installation, you must meet the following prerequisites: | ||||
| 
 | ||||
| - You have a cluster that uses Kubernetes v1.18 or newer. | ||||
| - You have installed the [`kubectl` CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/). | ||||
| - Your Kubernetes cluster must have access to the internet, since Kubernetes needs to be able to fetch images. | ||||
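A quick way to confirm the version requirement is to compare the server's major and minor version numbers. The parsing below runs on a hard-coded sample string; in practice you would obtain the string from `kubectl version` output:

```bash
#!/bin/sh
# Check a Kubernetes server version string against the v1.18 minimum.
ver="v1.19.3"   # sample; in practice, parse serverVersion from `kubectl version`
major=$(echo "$ver" | cut -d. -f1 | tr -d 'v')
minor=$(echo "$ver" | cut -d. -f2)
if [ "$major" -gt 1 ] || [ "$minor" -ge 18 ]; then
  echo "Kubernetes $ver meets the v1.18+ requirement."
else
  echo "Kubernetes $ver is too old; v1.18 or newer is required."
fi
```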
|  | @ -11,7 +11,9 @@ For more information about which metrics can be used to control the Autoscaler, | |||
| 
 | ||||
| ## Optional autoscaling configuration tasks | ||||
| 
 | ||||
| * Configure your Knative deployment to use the Kubernetes [Horizontal Pod Autoscaler (HPA)](../../install/any-kubernetes-cluster.md#optional-serving-extensions) instead of the default KPA. | ||||
| * Configure your Knative deployment to use the Kubernetes Horizontal Pod Autoscaler (HPA) | ||||
| instead of the default KPA. | ||||
| For how to install HPA, see [Install optional Serving extensions](../../install/install-extensions.md#install-optional-serving-extensions). | ||||
| * Disable scale to zero functionality for your cluster ([global configuration only](./scale-to-zero.md)). | ||||
| * Configure the [type of metrics](./autoscaling-metrics.md) your Autoscaler consumes. | ||||
| * Configure [concurrency limits](./concurrency.md) for applications. | ||||
|  |  | |||
|  | @ -11,7 +11,9 @@ This section covers conceptual information about which Autoscaler types are supp | |||
| 
 | ||||
| Knative Serving supports the implementation of Knative Pod Autoscaler (KPA) and Kubernetes' Horizontal Pod Autoscaler (HPA). The features and limitations of each of these Autoscalers are listed below. | ||||
| 
 | ||||
| **IMPORTANT:** If you want to use Kubernetes Horizontal Pod Autoscaler (HPA), you must install it after you install [Knative Serving](../../install/any-kubernetes-cluster.md#optional-serving-extensions). | ||||
| **IMPORTANT:** If you want to use Kubernetes Horizontal Pod Autoscaler (HPA), | ||||
| you must install it after you install Knative Serving. | ||||
| For how to install HPA, see [Install optional Serving extensions](../../install/install-extensions.md#install-optional-serving-extensions). | ||||
| 
 | ||||
| ### Knative Pod Autoscaler (KPA) | ||||
| 
 | ||||
|  | @ -21,7 +23,7 @@ Knative Serving supports the implementation of Knative Pod Autoscaler (KPA) and | |||
| 
 | ||||
| ### Horizontal Pod Autoscaler (HPA) | ||||
| 
 | ||||
| * Not part of the Knative Serving core, and must be enabled after [Knative Serving installation](../../install/any-kubernetes-cluster.md#optional-serving-extensions). | ||||
| * Not part of the Knative Serving core, and you must install Knative Serving first. | ||||
| * Does not support scale to zero functionality. | ||||
| * Supports CPU-based autoscaling. | ||||
| 
 | ||||
|  |  | |||
|  | @ -17,7 +17,8 @@ have this domain be served by a Knative Service. | |||
| ## Before you begin | ||||
| 
 | ||||
| 1. You need to enable the DomainMapping feature (and a supported Knative | ||||
|    Ingress implementation) to use it. See [the Install instructions](../install/any-kubernetes-cluster/#optional-serving-extensions). | ||||
|    Ingress implementation) to use it. | ||||
|    See [Install optional Serving extensions](../install/install-extensions.md#install-optional-serving-extensions). | ||||
| 1. To map a custom domain to a Knative Service, you must first [create a Knative | ||||
| Service](../serving/services/creating-services). | ||||
| 1. You will need a Domain Name to map, and the ability to change its DNS to | ||||
|  |  | |||
|  | @ -11,8 +11,10 @@ If you have configured additional security features, such as Istio's authorizati | |||
| 
 | ||||
| You must meet the following prerequisites to use Istio AuthorizationPolicy: | ||||
| 
 | ||||
| - [Istio must be used for your Knative Ingress](https://knative.dev/docs/install/any-kubernetes-cluster/#installing-the-serving-component). | ||||
| - [Istio sidecar injection must be enabled](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/). | ||||
| - Istio must be used for your Knative Ingress. | ||||
| See [Install a networking layer](../install/install-serving-with-yaml.md#install-a-networking-layer). | ||||
| - Istio sidecar injection must be enabled. | ||||
| See the [Istio Documentation](https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/). | ||||
| 
 | ||||
| ## Mutual TLS in Knative | ||||
| 
 | ||||
|  |  | |||
|  | @ -6,7 +6,7 @@ type: "docs" | |||
| --- | ||||
| 
 | ||||
| If you install and configure cert-manager, you can configure Knative to | ||||
| automatically obtain new TLS certificates and renew existing ones for Knative  | ||||
| automatically obtain new TLS certificates and renew existing ones for Knative | ||||
| Services. | ||||
| To learn more about using secure connections in Knative, see | ||||
| [Configuring HTTPS with TLS certificates](./using-a-tls-cert.md). | ||||
|  | @ -29,26 +29,25 @@ Knative supports the following Auto TLS modes: | |||
| 
 | ||||
| 1.  Using HTTP-01 challenge | ||||
| 
 | ||||
|     - In this type, your cluster does not need to be able to talk to your DNS server. You just  | ||||
|     need to map your domain to the IP of the cluser ingress. | ||||
|     - In this type, your cluster does not need to be able to talk to your DNS server. You must map your domain to the IP of the cluster ingress. | ||||
|     - When using HTTP-01 challenge, **a certificate will be provisioned per Knative Service.** | ||||
|     - **HTTP-01 does not support provisioning a certificate per namespace.** | ||||
| 
 | ||||
| ## Before you begin | ||||
| 
 | ||||
| You must meet the following prerequisites to enable auto TLS: | ||||
| You must meet the following prerequisites to enable Auto TLS: | ||||
| 
 | ||||
| - The following must be installed on your Knative cluster: | ||||
|   - [Knative Serving](../install/). | ||||
|   - A Networking layer such as [Kourier](../install/any-kubernetes-cluster.md#installing-the-serving-component), [Istio with SDS, version 1.3 or higher](../install/installing-istio.md#installing-istio-with-SDS-to-secure-the-ingress-gateway), | ||||
|     [Contour, version 1.1 or higher](../install/any-kubernetes-cluster.md#installing-the-serving-component), | ||||
|     or [Gloo, version 0.18.16 or higher](https://docs.solo.io/gloo/latest/installation/knative/). | ||||
|     Note: Currently, [Ambassador](https://github.com/datawire/ambassador) is unsupported. | ||||
|   - A Networking layer such as Kourier, Istio with SDS v1.3 or higher, Contour v1.1 or higher, or Gloo v0.18.16 or higher. | ||||
|   See [Install a networking layer](../install/install-serving-with-yaml.md#install-a-networking-layer) or | ||||
|   [Istio with SDS, version 1.3 or higher](../install/installing-istio.md#installing-istio-with-SDS-to-secure-the-ingress-gateway).<br> | ||||
|     **Note:** Currently, [Ambassador](https://github.com/datawire/ambassador) is unsupported for use with Auto TLS. | ||||
| - [cert-manager version `1.0.0` and higher](./installing-cert-manager.md). | ||||
| - Your Knative cluster must be configured to use a | ||||
|   [custom domain](./using-a-custom-domain.md). | ||||
| - Your DNS provider must be setup and configured to your domain. | ||||
| - If you want to use HTTP-01 challenge, you need to configure your custom  | ||||
| - If you want to use HTTP-01 challenge, you need to configure your custom | ||||
| domain to map to the IP of the ingress. You can achieve this by adding a DNS A record that maps the domain to the IP, according to the instructions of your DNS provider. | ||||
| 
 | ||||
| ## Enabling Auto TLS | ||||
|  | @ -152,7 +151,7 @@ See how the Google Cloud DNS is defined as the provider: | |||
| 
 | ||||
| ### Install networking-certmanager deployment | ||||
| 
 | ||||
| 1.  Determine if `networking-certmanager` is already installed by running the  | ||||
| 1.  Determine if `networking-certmanager` is already installed by running the | ||||
|     following command: | ||||
| 
 | ||||
|     ```shell | ||||
|  | @ -172,7 +171,7 @@ If you choose to use the mode of provisioning certificate per namespace, you nee | |||
| **IMPORTANT:** Provisioning a certificate per namespace only works with DNS-01 | ||||
|  challenge. This component cannot be used with HTTP-01 challenge. | ||||
| 
 | ||||
| 1. Determine if `networking-ns-cert` deployment is already installed by  | ||||
| 1. Determine if `networking-ns-cert` deployment is already installed by | ||||
| running the following command: | ||||
| 
 | ||||
|     ```shell | ||||
|  | @ -223,7 +222,7 @@ in the `knative-serving` namespace to reference your new `ClusterIssuer`. | |||
|         name: letsencrypt-http01-issuer | ||||
|     ``` | ||||
| 
 | ||||
|     `issueRef` defines which `ClusterIssuer` will be used by Knative to issue  | ||||
|     `issueRef` defines which `ClusterIssuer` will be used by Knative to issue | ||||
|     certificates. | ||||
| 
 | ||||
| 1.  Ensure that the file was updated successfully: | ||||
|  | @ -329,7 +328,7 @@ be able to handle HTTPS traffic. | |||
|     kubectl apply -f https://raw.githubusercontent.com/knative/docs/main/docs/serving/autoscaling/autoscale-go/service.yaml | ||||
|     ``` | ||||
| 
 | ||||
| 1.  When the certificate is provisioned (which could take up to several minutes depending on  | ||||
| 1.  When the certificate is provisioned (which could take up to several minutes depending on | ||||
|     the challenge type), you should see something like: | ||||
|     ``` | ||||
|     NAME               URL                                           LATESTCREATED            LATESTREADY              READY   REASON | ||||
|  |  | |||