Update command style in serving and admin directories (#3904)

* Update command style in serving and admin directories

* Heading --> sentence case

* Update docs/serving/using-auto-tls.md

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>

* Update docs/admin/eventing/broker-configuration.md

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>

* change --filename to -f

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>
Samia Nneji 2021-07-29 20:27:56 +01:00 committed by GitHub
parent 676ff681a9
commit 5df72e09b6
9 changed files with 261 additions and 175 deletions


@@ -112,15 +112,23 @@ When a Broker is created without a specified `BrokerClass` annotation, the defau

 The following example creates a Broker called `default` in the default namespace, and uses `MTChannelBasedBroker` as the implementation:

-```bash
-kubectl create -f - <<EOF
+1. Create a YAML file for your Broker using the example below:
+
+    ```yaml
     apiVersion: eventing.knative.dev/v1
     kind: Broker
     metadata:
       name: default
       namespace: default
-EOF
-```
+    ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.

 ### Configuring the Broker class
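
This hunk shows the pattern the commit applies throughout: an inline `kubectl ... <<EOF` heredoc becomes a saved manifest plus a separate apply step. A minimal before/after sketch of the two equivalent styles, using `broker.yaml` as an illustrative filename that is not part of the commit:

```bash
# Old style: pipe the manifest to kubectl over stdin via a heredoc.
kubectl create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
EOF

# New style: save the same manifest to a file (here, hypothetically, broker.yaml),
# then apply it.
kubectl apply -f broker.yaml
```

The file-based style leaves behind a manifest that can be kept in version control and re-applied later.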


@@ -10,10 +10,9 @@ To use Kafka Channels, you must:

 ## Create a `kafka-channel` ConfigMap

-1. Create a `kafka-channel` ConfigMap by running the command:
+1. Create a YAML file for the `kafka-channel` ConfigMap using the template below:

     ```yaml
-    kubectl apply -f - <<EOF
     apiVersion: v1
     kind: ConfigMap
     metadata:
@@ -26,16 +25,22 @@ To use Kafka Channels, you must:
     spec:
       numPartitions: 3
       replicationFactor: 1
-    EOF
     ```

     !!! note
         This example specifies two extra parameters that are specific to Kafka Channels; `numPartitions` and `replicationFactor`.

-1. Optional. To create a Broker that uses Kafka Channels, specify the `kafka-channel` ConfigMap in the Broker spec. You can do this by running the command:
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.
+
+1. Optional. To create a Broker that uses Kafka Channels, specify the `kafka-channel` ConfigMap in the Broker spec. You can do this by creating a YAML file using the template below:

     ```yaml
-    kubectl apply -f - <<EOF
     apiVersion: eventing.knative.dev/v1
     kind: Broker
     metadata:
@@ -49,5 +54,11 @@ To use Kafka Channels, you must:
         kind: ConfigMap
         name: kafka-channel
         namespace: knative-eventing
-    EOF
     ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.
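
To confirm the resulting ConfigMap, a quick sketch using the names shown in the hunks above:

```bash
# Read back the ConfigMap created above, including the numPartitions and
# replicationFactor settings specific to Kafka Channels.
kubectl get configmap kafka-channel -n knative-eventing -o yaml
```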


@@ -55,32 +55,38 @@ mesh by [manually injecting the Istio sidecars][1].

-Enter the following command to install Istio:
+To install Istio without sidecar injection:

-```bash
-cat << EOF > ./istio-minimal-operator.yaml
+1. Create a `istio-minimal-operator.yaml` file using the template below:
+
+    ```yaml
     apiVersion: install.istio.io/v1alpha1
     kind: IstioOperator
     spec:
       values:
         global:
           proxy:
             autoInject: disabled
           useMCP: false
           # The third-party-jwt is not enabled on all k8s.
           # See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens
           jwtPolicy: first-party-jwt
       addonComponents:
         pilot:
           enabled: true
       components:
         ingressGateways:
           - name: istio-ingressgateway
             enabled: true
-EOF
-
-istioctl install -f istio-minimal-operator.yaml
-```
+    ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    istioctl install -f istio-minimal-operator.yaml
+    ```

 #### Installing Istio with sidecar injection
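
istioctl can also check the finished installation against the same manifest; a hedged follow-up that is not part of this commit:

```bash
# Verify the installed Istio components against the operator manifest
# applied in the previous step.
istioctl verify-install -f istio-minimal-operator.yaml
```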
@@ -108,26 +114,32 @@ Since there are some networking communications between knative-serving namespace
 and the namespace where your services running on, you need additional
 preparations for mTLS enabled environment.

-- Enable sidecar container on `knative-serving` system namespace.
+1. Enable sidecar container on `knative-serving` system namespace.

     ```bash
     kubectl label namespace knative-serving istio-injection=enabled
     ```

-- Set `PeerAuthentication` to `PERMISSIVE` on knative-serving system namespace.
+1. Set `PeerAuthentication` to `PERMISSIVE` on knative-serving system namespace
+by creating a YAML file using the template below:

     ```bash
-    cat <<EOF | kubectl apply -f -
     apiVersion: "security.istio.io/v1beta1"
     kind: "PeerAuthentication"
     metadata:
       name: "default"
       namespace: "knative-serving"
     spec:
       mtls:
         mode: PERMISSIVE
-    EOF
     ```

+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.

 After you install the cluster local gateway, your service and deployment for the local gateway is named `knative-local-gateway`.
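
A quick sketch for verifying both preparation steps, using the resource names from the hunk above:

```bash
# Confirm the injection label on the knative-serving namespace.
kubectl get namespace knative-serving --show-labels

# Confirm the PeerAuthentication created above reports PERMISSIVE.
kubectl get peerauthentication default -n knative-serving -o yaml
```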
@@ -138,16 +150,16 @@ need to update gateway configmap `config-istio` under the `knative-serving` name

 1. Edit the `config-istio` configmap:

    ```bash
    kubectl edit configmap config-istio -n knative-serving
    ```

 2. Replace the `local-gateway.knative-serving.knative-local-gateway` field with the custom service. As an example, if you name both
 the service and deployment `custom-local-gateway` under the namespace `istio-system`, it should be updated to:

    ```
    custom-local-gateway.istio-system.svc.cluster.local
    ```

 As an example, if both the custom service and deployment are labeled with `custom: custom-local-gateway`, not the default
 `istio: knative-local-gateway`, you must update gateway instance `knative-local-gateway` in the `knative-serving` namespace:


@@ -190,7 +190,7 @@ NAME VERSION READY REASON
 knative-serving <version number> True
 ```

-### Installing with Different Networking Layers
+### Installing with different networking layers

 ??? "Installing the Knative Serving component with different network layers"
@@ -225,9 +225,8 @@ knative-serving <version number> True
    kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true
    ```

-1. To configure Knative Serving to use Ambassador, apply the content of the Serving CR as below:
-   ```bash
-   cat <<-EOF | kubectl apply -f -
+1. To configure Knative Serving to use Ambassador, copy the YAML below into a file:
+   ```yaml
    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
@@ -237,9 +236,15 @@ knative-serving <version number> True
      config:
        network:
          ingress.class: "ambassador.ingress.networking.knative.dev"
-   EOF
    ```
+
+1. Apply the YAML file by running the command:
+   ```bash
+   kubectl apply -f <filename>.yaml
+   ```
+   Where `<filename>` is the name of the file you created in the previous step.

 1. Fetch the External IP or CNAME:
    ```bash
    kubectl --namespace ambassador get service ambassador
@@ -256,9 +261,8 @@ knative-serving <version number> True
    kubectl apply --filename {{artifact(repo="net-contour",file="contour.yaml")}}
    ```

-1. To configure Knative Serving to use Contour, apply the content of the Serving CR as below:
-   ```bash
-   cat <<-EOF | kubectl apply -f -
+1. To configure Knative Serving to use Contour, copy the YAML below into a file:
+   ```yaml
    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
@@ -271,8 +275,13 @@ knative-serving <version number> True
      config:
        network:
          ingress.class: "contour.ingress.networking.knative.dev"
-   EOF
    ```
+
+1. Apply the YAML file by running the command:
+   ```bash
+   kubectl apply -f <filename>.yaml
+   ```
+   Where `<filename>` is the name of the file you created in the previous step.

 1. Fetch the External IP or CNAME:
    ```bash
@@ -285,9 +294,8 @@ knative-serving <version number> True
 The following commands install Kourier and enable its Knative integration.

-1. To configure Knative Serving to use Kourier, apply the content of the Serving CR as below:
-   ```bash
-   cat <<-EOF | kubectl apply -f -
+1. To configure Knative Serving to use Kourier, copy the YAML below into a file:
+   ```yaml
    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
@@ -300,9 +308,15 @@ knative-serving <version number> True
      config:
        network:
          ingress.class: "kourier.ingress.networking.knative.dev"
-   EOF
    ```
+
+1. Apply the YAML file by running the command:
+   ```bash
+   kubectl apply -f <filename>.yaml
+   ```
+   Where `<filename>` is the name of the file you created in the previous step.

 1. Fetch the External IP or CNAME:
    ```bash
    kubectl --namespace knative-serving get service kourier
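
All three variants (Ambassador, Contour, Kourier) apply the same `KnativeServing` CR shape, differing only in the `ingress.class` value. Whichever layer you choose, the Operator's progress can be watched with the status command whose output appears earlier in this file:

```bash
# READY should become True once the chosen networking layer is configured.
kubectl get KnativeServing knative-serving -n knative-serving
```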


@@ -12,17 +12,24 @@ The Knative Operator supports up to the last three major releases. For example,

 To upgrade, apply the Operator custom resources, adding the `spec.version` for the Knative version that you want to upgrade to:

-```yaml
-kubectl apply -f - <<EOF
+1. Copy the YAML below into a file:
+
+    ```yaml
     apiVersion: operator.knative.dev/v1alpha1
     kind: KnativeServing
     metadata:
       name: knative-serving
       namespace: knative-serving
     spec:
       version: "0.23"
-EOF
-```
+    ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.

 ## Verifying an upgrade by viewing pods
@@ -112,8 +119,9 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v

 === "Knative Serving"

+    1. Copy the YAML below into a file:
+
     ```yaml
-    kubectl apply -f - <<EOF
     apiVersion: operator.knative.dev/v1alpha1
     kind: KnativeServing
     metadata:
@@ -121,13 +129,20 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v
       namespace: knative-serving
     spec:
       version: "0.22"
-    EOF
     ```
+
+    1. Apply the YAML file by running the command:
+
+        ```bash
+        kubectl apply -f <filename>.yaml
+        ```
+        Where `<filename>` is the name of the file you created in the previous step.

 === "Knative Eventing"

+    1. Copy the YAML below into a file:
+
     ```yaml
-    kubectl apply -f - <<EOF
     apiVersion: operator.knative.dev/v1alpha1
     kind: KnativeEventing
     metadata:
@@ -135,5 +150,10 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v
       namespace: knative-eventing
     spec:
       version: "0.22"
-    EOF
     ```
+
+    1. Apply the YAML file by running the command:
+
+        ```bash
+        kubectl apply -f <filename>.yaml
+        ```
+        Where `<filename>` is the name of the file you created in the previous step.
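
The section that follows in the source doc verifies the result by viewing pods; a minimal sketch of that check:

```bash
# All pods should be Running (or Completed) once the version change settles.
kubectl get pods -n knative-serving
kubectl get pods -n knative-eventing
```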


@@ -102,22 +102,29 @@ Knative system pods access your application using the following paths:

 The `/metrics` path allows the autoscaler pod to collect metrics.
 The `/healthz` path allows system pods to probe the service.

-You can add the `/metrics` and `/healthz` paths to the AuthorizationPolicy as shown in the example:
+To add the `/metrics` and `/healthz` paths to the AuthorizationPolicy:

-```yaml
-kubectl apply -f - <<EOF
+1. Create a YAML file for your AuthorizationPolicy using the example below:
+
+    ```yaml
     apiVersion: security.istio.io/v1beta1
     kind: AuthorizationPolicy
     metadata:
       name: allowlist-by-paths
       namespace: serving-tests
     spec:
       action: ALLOW
       rules:
       - to:
         - operation:
             paths:
             - /metrics  # The path to collect metrics by system pod.
             - /healthz  # The path to probe by system pod.
-EOF
-```
+    ```
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.
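
With this policy applied, a probe from inside the mesh should succeed on the two allowed paths and be denied elsewhere. This is an illustrative check rather than part of the commit; `<service-url>` is a placeholder for your route:

```bash
# Expect successful responses on the allowed paths...
curl -i <service-url>/healthz
curl -i <service-url>/metrics
# ...and an RBAC denial (403) on a path not in the allowlist.
curl -i <service-url>/not-allowed
```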


@@ -45,10 +45,9 @@ To change the {default-domain} value there are a few steps involved:

 You can also apply an updated domain configuration:

-1. Replace the `example.org` and `example.com` values with the new domain you want to use and run the command:
+1. Create a YAML file using the template below:

     ```yaml
-    kubectl apply -f - <<EOF
     apiVersion: v1
     kind: ConfigMap
     metadata:
@@ -64,8 +63,15 @@ You can also apply an updated domain configuration:
       # Although it will match all routes, it is the least-specific rule so it
       # will only be used if no other domain matches.
       example.com: ""
-    EOF
     ```
+    Replace `example.org` and `example.com` with the new domain you want to use.
+
+1. Apply the YAML file by running the command:
+
+    ```bash
+    kubectl apply -f <filename>.yaml
+    ```
+    Where `<filename>` is the name of the file you created in the previous step.

 ## Deploy an application
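
To confirm the change, the ConfigMap can be read back. Its name is truncated out of this hunk; `config-domain` in `knative-serving` is assumed here, matching the Knative Serving defaults:

```bash
# The data section should now list your domain instead of the default.
kubectl get configmap config-domain -n knative-serving -o yaml
```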


@@ -156,10 +156,10 @@ continue below for instructions about manually adding a certificate.

 === "Contour"

     To manually add a TLS certificate to your Knative cluster, you must create a
-    Kubernetes secret and then configure the Knative Contour plugin
+    Kubernetes secret and then configure the Knative Contour plugin.

     1. Create a Kubernetes secret to hold your TLS certificate, `cert.pem`, and the
-    private key, `key.pem`, by entering the following command:
+    private key, `key.pem`, by running the command:

        ```bash
        kubectl create -n contour-external secret tls default-cert \
@@ -167,13 +167,13 @@ continue below for instructions about manually adding a certificate.
        --cert cert.pem
        ```

-    !!! warning
+    !!! note
         Take note of the namespace and secret name. You will need these in future steps.

-    1. Contour requires you to create a delegation to use this certificate and private key in different namespaces. You can create this resource by running the command:
+    1. To use this certificate and private key in different namespaces, you must
+    create a delegation. To do so, create a YAML file using the template below:

        ```yaml
-       kubectl apply -f - <<EOF
        apiVersion: projectcontour.io/v1
        kind: TLSCertificateDelegation
        metadata:
@@ -184,11 +184,16 @@ continue below for instructions about manually adding a certificate.
          - secretName: default-cert
            targetNamespaces:
            - "*"
-       EOF
        ```

+    1. Apply the YAML file by running the command:
+
+       ```bash
+       kubectl apply -f <filename>.yaml
+       ```
+       Where `<filename>` is the name of the file you created in the previous step.

-    1. Update the Knative Contour plugin to start using the certificate as a fallback
-    when auto-TLS is disabled. This can be done with the following patch:
+    1. Update the Knative Contour plugin to use the certificate as a fallback
+    when auto-TLS is disabled by running the command:

        ```bash
        kubectl patch configmap config-contour -n knative-serving \
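
Both resources from the preceding steps can be read back to confirm they exist, using the names shown in the hunks:

```bash
# The TLS secret created in the first step.
kubectl get secret default-cert -n contour-external

# The TLSCertificateDelegation that shares it across namespaces.
kubectl get tlscertificatedelegation -n contour-external
```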


@@ -45,72 +45,75 @@ Knative supports the following Auto TLS modes:

 ## Enabling Auto TLS

-1. Create and add the `ClusterIssuer` configuration file to your Knative cluster to define who issues the TLS certificates, how requests are validated,
+1. Create and add the `ClusterIssuer` configuration file to your Knative cluster
+to define who issues the TLS certificates, how requests are validated,
 and which DNS provider validates those requests.

-    ### ClusterIssuer for DNS-01 challenge
-
-    Use the cert-manager reference to determine how to configure your
-    `ClusterIssuer` file:
-
-    - See the generic
-      [`ClusterIssuer` example](https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer)
-    - Also see the
-      [`DNS01` example](https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html)
-
-    **Example**: Cloud DNS `ClusterIssuer` configuration file:
-
-    The following `letsencrypt-issuer` named `ClusterIssuer` file is
-    configured for the Let's Encrypt CA and Google Cloud DNS. Under `spec`,
-    the Let's Encrypt account info, required `DNS-01` challenge type, and
-    Cloud DNS provider info defined.
-
-    ```bash
+    - **ClusterIssuer for DNS-01 challenge:** use the cert-manager reference to determine how to configure your `ClusterIssuer` file.
+        - See the generic [`ClusterIssuer` example](https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer)
+        - Also see the [`DNS01` example](https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html)
+
+        For example, the following `ClusterIssuer` file named `letsencrypt-issuer` is
+        configured for the Let's Encrypt CA and Google Cloud DNS.
+        The Let's Encrypt account info, required `DNS-01` challenge type, and
+        Cloud DNS provider info is defined under `spec`.
+
+        ```yaml
         apiVersion: cert-manager.io/v1
         kind: ClusterIssuer
         metadata:
           name: letsencrypt-dns-issuer
         spec:
           acme:
             server: https://acme-v02.api.letsencrypt.org/directory
             # This will register an issuer with LetsEncrypt. Replace
             # with your admin email address.
             email: myemail@gmail.com
             privateKeySecretRef:
               # Set privateKeySecretRef to any unused secret name.
               name: letsencrypt-dns-issuer
             solvers:
             - dns01:
                 clouddns:
                   # Set this to your GCP project-id
                   project: $PROJECT_ID
                   # Set this to the secret that we publish our service account key
                   # in the previous step.
                   serviceAccountSecretRef:
                     name: cloud-dns-key
                     key: key.json
         ```

-    ### ClusterIssuer for HTTP-01 challenge
-
-    Run the following command to apply the ClusterIssuer for HTT01 challenge:
+    - **ClusterIssuer for HTTP-01 challenge**
+
+        To apply the ClusterIssuer for HTTP01 challenge:
+
+        1. Create a YAML file using the template below:

             ```yaml
-            kubectl apply -f - <<EOF
             apiVersion: cert-manager.io/v1
             kind: ClusterIssuer
             metadata:
               name: letsencrypt-http01-issuer
             spec:
               acme:
                 privateKeySecretRef:
                   name: letsencrypt
                 server: https://acme-v02.api.letsencrypt.org/directory
                 solvers:
                 - http01:
                     ingress:
                       class: istio
-            EOF
             ```

+        1. Apply the YAML file by running the command:
+
+            ```bash
+            kubectl apply -f <filename>.yaml
+            ```
+            Where `<filename>` is the name of the file you created in the previous step.

 1. Ensure that the ClusterIssuer is created successfully:
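
The page cuts off mid-step here. The verification that follows in the source doc reads the issuer back; a sketch, with the exact command assumed rather than shown in this hunk:

```bash
# The status conditions should report Ready once the ACME account is registered.
kubectl get clusterissuer letsencrypt-http01-issuer -o yaml
```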