diff --git a/docs/admin/eventing/broker-configuration.md b/docs/admin/eventing/broker-configuration.md index a760a3a3f..034dad3c6 100644 --- a/docs/admin/eventing/broker-configuration.md +++ b/docs/admin/eventing/broker-configuration.md @@ -112,15 +112,23 @@ When a Broker is created without a specified `BrokerClass` annotation, the defau The following example creates a Broker called `default` in the default namespace, and uses `MTChannelBasedBroker` as the implementation: -```bash -kubectl create -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. + ### Configuring the Broker class diff --git a/docs/admin/eventing/kafka-channel-configuration.md b/docs/admin/eventing/kafka-channel-configuration.md index aaf9b8380..de51f5c7d 100644 --- a/docs/admin/eventing/kafka-channel-configuration.md +++ b/docs/admin/eventing/kafka-channel-configuration.md @@ -10,10 +10,9 @@ To use Kafka Channels, you must: ## Create a `kafka-channel` ConfigMap -1. Create a `kafka-channel` ConfigMap by running the command: +1. Create a YAML file for the `kafka-channel` ConfigMap using the template below: ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. + + +1. Optional. To create a Broker that uses Kafka Channels, specify the `kafka-channel` ConfigMap in the Broker spec. You can do this by creating a YAML file using the template below: ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. diff --git a/docs/admin/install/installing-istio.md b/docs/admin/install/installing-istio.md index 9f0a7fbdb..75efca62b 100644 --- a/docs/admin/install/installing-istio.md +++ b/docs/admin/install/installing-istio.md @@ -55,32 +55,38 @@ mesh by [manually injecting the Istio sidecars][1].
Enter the following command to install Istio: -```bash -cat << EOF > ./istio-minimal-operator.yaml -apiVersion: install.istio.io/v1alpha1 -kind: IstioOperator -spec: - values: - global: - proxy: - autoInject: disabled - useMCP: false - # The third-party-jwt is not enabled on all k8s. - # See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens - jwtPolicy: first-party-jwt +To install Istio without sidecar injection: - addonComponents: - pilot: - enabled: true +1. Create an `istio-minimal-operator.yaml` file using the template below: - components: - ingressGateways: - - name: istio-ingressgateway - enabled: true -EOF + ```yaml + apiVersion: install.istio.io/v1alpha1 + kind: IstioOperator + spec: + values: + global: + proxy: + autoInject: disabled + useMCP: false + # The third-party-jwt is not enabled on all k8s. + # See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens + jwtPolicy: first-party-jwt -istioctl install -f istio-minimal-operator.yaml -``` + addonComponents: + pilot: + enabled: true + + components: + ingressGateways: + - name: istio-ingressgateway + enabled: true + ``` + +1. Apply the YAML file by running the command: + + ```bash + istioctl install -f istio-minimal-operator.yaml + ``` #### Installing Istio with sidecar injection @@ -108,26 +114,32 @@ Since there are some networking communications between knative-serving namespace and the namespace where your services running on, you need additional preparations for mTLS enabled environment. -- Enable sidecar container on `knative-serving` system namespace. +1. Enable sidecar container on `knative-serving` system namespace. -```bash -kubectl label namespace knative-serving istio-injection=enabled -``` + ```bash + kubectl label namespace knative-serving istio-injection=enabled + ``` -- Set `PeerAuthentication` to `PERMISSIVE` on knative-serving system namespace. +1. 
Set `PeerAuthentication` to `PERMISSIVE` on knative-serving system namespace by creating a YAML file using the template below: -```bash -cat <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. After you install the cluster local gateway, your service and deployment for the local gateway is named `knative-local-gateway`. @@ -138,16 +150,16 @@ need to update gateway configmap `config-istio` under the `knative-serving` name 1. Edit the `config-istio` configmap: -```bash -kubectl edit configmap config-istio -n knative-serving -``` + ```bash + kubectl edit configmap config-istio -n knative-serving + ``` 2. Replace the `local-gateway.knative-serving.knative-local-gateway` field with the custom service. As an example, if you name both the service and deployment `custom-local-gateway` under the namespace `istio-system`, it should be updated to: -``` -custom-local-gateway.istio-system.svc.cluster.local -``` + ``` + custom-local-gateway.istio-system.svc.cluster.local + ``` As an example, if both the custom service and deployment are labeled with `custom: custom-local-gateway`, not the default `istio: knative-local-gateway`, you must update gateway instance `knative-local-gateway` in the `knative-serving` namespace: diff --git a/docs/admin/install/knative-with-operators.md b/docs/admin/install/knative-with-operators.md index 7b87d94e1..bf3c8459f 100644 --- a/docs/admin/install/knative-with-operators.md +++ b/docs/admin/install/knative-with-operators.md @@ -190,7 +190,7 @@ NAME VERSION READY REASON knative-serving True ``` -### Installing with Different Networking Layers +### Installing with different networking layers ??? "Installing the Knative Serving component with different network layers" @@ -225,9 +225,8 @@ knative-serving True kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true ``` - 1. 
To configure Knative Serving to use Ambassador, apply the content of the Serving CR as below: - ```bash - cat <<-EOF | kubectl apply -f - + 1. To configure Knative Serving to use Ambassador, copy the YAML below into a file: + ```yaml apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: @@ -237,9 +236,15 @@ knative-serving True config: network: ingress.class: "ambassador.ingress.networking.knative.dev" - EOF ``` + 1. Apply the YAML file by running the command: + + ```bash + kubectl apply -f <filename>.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. + 1. Fetch the External IP or CNAME: ```bash kubectl --namespace ambassador get service ambassador ``` @@ -256,9 +261,8 @@ knative-serving True kubectl apply --filename {{artifact(repo="net-contour",file="contour.yaml")}} ``` - 1. To configure Knative Serving to use Contour, apply the content of the Serving CR as below: - ```bash - cat <<-EOF | kubectl apply -f - + 1. To configure Knative Serving to use Contour, copy the YAML below into a file: + ```yaml apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: @@ -271,8 +275,13 @@ knative-serving True config: network: ingress.class: "contour.ingress.networking.knative.dev" - EOF ``` + 1. Apply the YAML file by running the command: + + ```bash + kubectl apply -f <filename>.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. 1. Fetch the External IP or CNAME: ```bash @@ -285,9 +294,8 @@ knative-serving True The following commands install Kourier and enable its Knative integration. - 1. To configure Knative Serving to use Kourier, apply the content of the Serving CR as below: - ```bash - cat <<-EOF | kubectl apply -f - + 1. 
To configure Knative Serving to use Kourier, copy the YAML below into a file: + ```yaml apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: @@ -300,9 +308,15 @@ knative-serving True config: network: ingress.class: "kourier.ingress.networking.knative.dev" - EOF ``` + 1. Apply the YAML file by running the command: + + ```bash + kubectl apply -f <filename>.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. + 1. Fetch the External IP or CNAME: ```bash kubectl --namespace knative-serving get service kourier ``` diff --git a/docs/admin/upgrade/upgrade-installation-with-operator.md b/docs/admin/upgrade/upgrade-installation-with-operator.md index 712fa977e..81d051db9 100644 --- a/docs/admin/upgrade/upgrade-installation-with-operator.md +++ b/docs/admin/upgrade/upgrade-installation-with-operator.md @@ -12,17 +12,24 @@ The Knative Operator supports up to the last three major releases. For example, To upgrade, apply the Operator custom resources, adding the `spec.version` for the Knative version that you want to upgrade to: -```yaml -kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. ## Verifying an upgrade by viewing pods @@ -112,8 +119,9 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v === "Knative Serving" +1. Copy the YAML below into a file: + ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. + === "Knative Eventing" +1. Copy the YAML below into a file: + ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. 
diff --git a/docs/serving/istio-authorization.md b/docs/serving/istio-authorization.md index 352cb9e24..241a8e892 100644 --- a/docs/serving/istio-authorization.md +++ b/docs/serving/istio-authorization.md @@ -102,22 +102,29 @@ Knative system pods access your application using the following paths: The `/metrics` path allows the autoscaler pod to collect metrics. The `/healthz` path allows system pods to probe the service. -You can add the `/metrics` and `/healthz` paths to the AuthorizationPolicy as shown in the example: +To add the `/metrics` and `/healthz` paths to the AuthorizationPolicy: -```yaml -kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. diff --git a/docs/serving/using-a-custom-domain.md b/docs/serving/using-a-custom-domain.md index 31f03d9e6..b584367e4 100644 --- a/docs/serving/using-a-custom-domain.md +++ b/docs/serving/using-a-custom-domain.md @@ -45,10 +45,9 @@ To change the {default-domain} value there are a few steps involved: You can also apply an updated domain configuration: -1. Replace the `example.org` and `example.com` values with the new domain you want to use and run the command: +1. Create a YAML file using the template below: ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. ## Deploy an application diff --git a/docs/serving/using-a-tls-cert.md b/docs/serving/using-a-tls-cert.md index 8a5bc3d1f..d34382141 100644 --- a/docs/serving/using-a-tls-cert.md +++ b/docs/serving/using-a-tls-cert.md @@ -156,10 +156,10 @@ continue below for instructions about manually adding a certificate. === "Contour" To manually add a TLS certificate to your Knative cluster, you must create a - Kubernetes secret and then configure the Knative Contour plugin + Kubernetes secret and then configure the Knative Contour plugin. 1. 
Create a Kubernetes secret to hold your TLS certificate, `cert.pem`, and the - private key, `key.pem`, by entering the following command: + private key, `key.pem`, by running the command: ```bash kubectl create -n contour-external secret tls default-cert \ @@ -167,13 +167,13 @@ continue below for instructions about manually adding a certificate. --key key.pem \ --cert cert.pem ``` - !!! warning + !!! note Take note of the namespace and secret name. You will need these in future steps. - 1. Contour requires you to create a delegation to use this certificate and private key in different namespaces. You can create this resource by running the command: + 1. To use this certificate and private key in different namespaces, you must + create a delegation. To do so, create a YAML file using the template below: ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. + + 1. Update the Knative Contour plugin to use the certificate as a fallback + when auto-TLS is disabled by running the command: ```bash kubectl patch configmap config-contour -n knative-serving \ diff --git a/docs/serving/using-auto-tls.md b/docs/serving/using-auto-tls.md index ca96b23e3..70751b26d 100644 --- a/docs/serving/using-auto-tls.md +++ b/docs/serving/using-auto-tls.md @@ -45,72 +45,75 @@ Knative supports the following Auto TLS modes: ## Enabling Auto TLS -1. Create and add the `ClusterIssuer` configuration file to your Knative cluster to define who issues the TLS certificates, how requests are validated, +1. Create and add the `ClusterIssuer` configuration file to your Knative cluster +to define who issues the TLS certificates, how requests are validated, and which DNS provider validates those requests. - ### ClusterIssuer for DNS-01 challenge + - **ClusterIssuer for DNS-01 challenge:** use the cert-manager reference to determine how to configure your `ClusterIssuer` file. 
- Use the cert-manager reference to determine how to configure your - `ClusterIssuer` file: - - See the generic - [`ClusterIssuer` example](https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer) - - Also see the - [`DNS01` example](https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html) + - See the generic [`ClusterIssuer` example](https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer) + - Also see the + [`DNS01` example](https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html) - **Example**: Cloud DNS `ClusterIssuer` configuration file: + For example, the following `ClusterIssuer` file named `letsencrypt-dns-issuer` is + configured for the Let's Encrypt CA and Google Cloud DNS. + The Let's Encrypt account info, required `DNS-01` challenge type, and + Cloud DNS provider info are defined under `spec`. - The following `letsencrypt-issuer` named `ClusterIssuer` file is - configured for the Let's Encrypt CA and Google Cloud DNS. Under `spec`, - the Let's Encrypt account info, required `DNS-01` challenge type, and - Cloud DNS provider info defined. + ```yaml + apiVersion: cert-manager.io/v1 + kind: ClusterIssuer + metadata: + name: letsencrypt-dns-issuer + spec: + acme: + server: https://acme-v02.api.letsencrypt.org/directory + # This will register an issuer with LetsEncrypt. Replace + # with your admin email address. + email: myemail@gmail.com + privateKeySecretRef: + # Set privateKeySecretRef to any unused secret name. + name: letsencrypt-dns-issuer + solvers: + - dns01: + clouddns: + # Set this to your GCP project-id + project: $PROJECT_ID + # Set this to the secret in which you published your service account key + # in the previous step. 
+ serviceAccountSecretRef: + name: cloud-dns-key + key: key.json + ``` - ```bash - apiVersion: cert-manager.io/v1 - kind: ClusterIssuer - metadata: - name: letsencrypt-dns-issuer - spec: - acme: - server: https://acme-v02.api.letsencrypt.org/directory - # This will register an issuer with LetsEncrypt. Replace - # with your admin email address. - email: myemail@gmail.com - privateKeySecretRef: - # Set privateKeySecretRef to any unused secret name. - name: letsencrypt-dns-issuer - solvers: - - dns01: - clouddns: - # Set this to your GCP project-id - project: $PROJECT_ID - # Set this to the secret that we publish our service account key - # in the previous step. - serviceAccountSecretRef: - name: cloud-dns-key - key: key.json - ``` + - **ClusterIssuer for HTTP-01 challenge** - ### ClusterIssuer for HTTP-01 challenge + To apply the ClusterIssuer for HTTP-01 challenge: - Run the following command to apply the ClusterIssuer for HTT01 challenge: + 1. Create a YAML file using the template below: - ```yaml - kubectl apply -f - <.yaml + ``` + Where `<filename>` is the name of the file you created in the previous step. 1. Ensure that the ClusterIssuer is created successfully:
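The HTTP-01 `ClusterIssuer` template that the final hunk refers to did not survive in this copy of the diff. As a rough sketch only, a minimal cert-manager `ClusterIssuer` for the HTTP-01 challenge looks like the following; the issuer name `letsencrypt-http01-issuer`, the email address, and the `istio` ingress class are assumptions, not values taken from the commit:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  # Assumed name; use whatever the surrounding docs reference.
  name: letsencrypt-http01-issuer
spec:
  acme:
    # Let's Encrypt production endpoint.
    server: https://acme-v02.api.letsencrypt.org/directory
    # Replace with your admin email address.
    email: myemail@example.com
    privateKeySecretRef:
      # Any unused secret name.
      name: letsencrypt-http01-issuer
    solvers:
      - http01:
          ingress:
            # Assumed ingress class; match your networking layer.
            class: istio
```

After applying it with `kubectl apply -f <filename>.yaml`, `kubectl get clusterissuer` reports whether the issuer is ready, which matches the "Ensure that the ClusterIssuer is created successfully" step above.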