Update command style in serving and admin directories (#3904)

* Update command style in serving and admin directories

* Heading --> sentence case

* Update docs/serving/using-auto-tls.md

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>

* Update docs/admin/eventing/broker-configuration.md

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>

* change --filename to -f

Co-authored-by: Ashleigh Brennan <abrennan@redhat.com>
This commit is contained in:
Samia Nneji 2021-07-29 20:27:56 +01:00 committed by GitHub
parent 676ff681a9
commit 5df72e09b6
9 changed files with 261 additions and 175 deletions


@ -112,15 +112,23 @@ When a Broker is created without a specified `BrokerClass` annotation, the defau
The following example creates a Broker called `default` in the default namespace, and uses `MTChannelBasedBroker` as the implementation:
```bash
kubectl create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: default
EOF
```
1. Create a YAML file for your Broker using the example below:
```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: default
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
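The two steps above can also be combined in a single shell snippet; `broker.yaml` is an assumed filename here, not one mandated by the docs:

```shell
# Write the Broker manifest from step 1 to a file.
# The filename "broker.yaml" is an arbitrary choice.
cat > broker.yaml <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
EOF

# The file is now ready to apply with: kubectl apply -f broker.yaml
cat broker.yaml
```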
### Configuring the Broker class


@ -10,10 +10,9 @@ To use Kafka Channels, you must:
## Create a `kafka-channel` ConfigMap
1. Create a `kafka-channel` ConfigMap by running the command:
1. Create a YAML file for the `kafka-channel` ConfigMap using the template below:
```yaml
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
@ -26,16 +25,22 @@ To use Kafka Channels, you must:
spec:
numPartitions: 3
replicationFactor: 1
EOF
```
!!! note
This example specifies two extra parameters that are specific to Kafka Channels: `numPartitions` and `replicationFactor`.
1. Optional. To create a Broker that uses Kafka Channels, specify the `kafka-channel` ConfigMap in the Broker spec. You can do this by running the command:
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
1. Optional. To create a Broker that uses Kafka Channels, specify the `kafka-channel` ConfigMap in the Broker spec. You can do this by creating a YAML file using the template below:
```yaml
kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
@ -49,5 +54,11 @@ To use Kafka Channels, you must:
kind: ConfigMap
name: kafka-channel
namespace: knative-eventing
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
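The diff elides the middle of the `kafka-channel` ConfigMap, so as a sketch only: the full manifest typically wraps the channel defaults shown above in a `channel-template-spec` entry, along these lines (the `channel-template-spec` wrapper, the `messaging.knative.dev/v1beta1` API version, and the filename are assumptions, not part of this change):

```shell
# Sketch of the full kafka-channel ConfigMap; the channel-template-spec
# wrapper and KafkaChannel API version are assumed, not shown in this diff.
cat > kafka-channel-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel
  namespace: knative-eventing
data:
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1
EOF

# The file is now ready to apply with: kubectl apply -f kafka-channel-configmap.yaml
cat kafka-channel-configmap.yaml
```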


@ -55,32 +55,38 @@ mesh by [manually injecting the Istio sidecars][1].
Enter the following command to install Istio:
```bash
cat << EOF > ./istio-minimal-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        autoInject: disabled
      useMCP: false
      # The third-party-jwt is not enabled on all k8s.
      # See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens
      jwtPolicy: first-party-jwt
  addonComponents:
    pilot:
      enabled: true
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
EOF

istioctl install -f istio-minimal-operator.yaml
```
To install Istio without sidecar injection:
1. Create an `istio-minimal-operator.yaml` file using the template below:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        autoInject: disabled
      useMCP: false
      # The third-party-jwt is not enabled on all k8s.
      # See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens
      jwtPolicy: first-party-jwt
  addonComponents:
    pilot:
      enabled: true
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
```
1. Apply the YAML file by running the command:
```bash
istioctl install -f istio-minimal-operator.yaml
```
#### Installing Istio with sidecar injection
@ -108,26 +114,32 @@ Since there is network communication between the knative-serving namespace
and the namespaces where your services are running, you need additional
preparation for an mTLS-enabled environment.
- Enable sidecar container on `knative-serving` system namespace.
1. Enable the sidecar container in the `knative-serving` system namespace.
```bash
kubectl label namespace knative-serving istio-injection=enabled
```
```bash
kubectl label namespace knative-serving istio-injection=enabled
```
- Set `PeerAuthentication` to `PERMISSIVE` on knative-serving system namespace.
1. Set `PeerAuthentication` to `PERMISSIVE` on the `knative-serving` system namespace
by creating a YAML file using the template below:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "knative-serving"
spec:
mtls:
mode: PERMISSIVE
EOF
```
```yaml
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "knative-serving"
spec:
mtls:
mode: PERMISSIVE
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
After you install the cluster local gateway, the service and deployment for the local gateway are named `knative-local-gateway`.
@ -138,16 +150,16 @@ need to update gateway configmap `config-istio` under the `knative-serving` name
1. Edit the `config-istio` configmap:
```bash
kubectl edit configmap config-istio -n knative-serving
```
```bash
kubectl edit configmap config-istio -n knative-serving
```
2. Replace the `local-gateway.knative-serving.knative-local-gateway` field with the custom service. As an example, if you name both
the service and deployment `custom-local-gateway` under the namespace `istio-system`, it should be updated to:
```
custom-local-gateway.istio-system.svc.cluster.local
```
```
custom-local-gateway.istio-system.svc.cluster.local
```
As an example, if both the custom service and deployment are labeled with `custom: custom-local-gateway` rather than the default
`istio: knative-local-gateway`, you must update the gateway instance `knative-local-gateway` in the `knative-serving` namespace:
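Putting the two steps together, the edited `config-istio` ConfigMap would contain a `data` entry along these lines (a sketch based on the `custom-local-gateway` example above; only the gateway key and service name come from this page):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  # Key format: local-gateway.<gateway-namespace>.<gateway-name>
  local-gateway.knative-serving.knative-local-gateway: "custom-local-gateway.istio-system.svc.cluster.local"
```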


@ -190,7 +190,7 @@ NAME VERSION READY REASON
knative-serving <version number> True
```
### Installing with Different Networking Layers
### Installing with different networking layers
??? "Installing the Knative Serving component with different network layers"
@ -225,9 +225,8 @@ knative-serving <version number> True
kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true
```
1. To configure Knative Serving to use Ambassador, apply the content of the Serving CR as below:
```bash
cat <<-EOF | kubectl apply -f -
1. To configure Knative Serving to use Ambassador, copy the YAML below into a file:
```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
@ -237,9 +236,15 @@ knative-serving <version number> True
config:
network:
ingress.class: "ambassador.ingress.networking.knative.dev"
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
1. Fetch the External IP or CNAME:
```bash
kubectl --namespace ambassador get service ambassador
@ -256,9 +261,8 @@ knative-serving <version number> True
kubectl apply --filename {{artifact(repo="net-contour",file="contour.yaml")}}
```
1. To configure Knative Serving to use Contour, apply the content of the Serving CR as below:
```bash
cat <<-EOF | kubectl apply -f -
1. To configure Knative Serving to use Contour, copy the YAML below into a file:
```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
@ -271,8 +275,13 @@ knative-serving <version number> True
config:
network:
ingress.class: "contour.ingress.networking.knative.dev"
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
1. Fetch the External IP or CNAME:
```bash
@ -285,9 +294,8 @@ knative-serving <version number> True
The following commands install Kourier and enable its Knative integration.
1. To configure Knative Serving to use Kourier, apply the content of the Serving CR as below:
```bash
cat <<-EOF | kubectl apply -f -
1. To configure Knative Serving to use Kourier, copy the YAML below into a file:
```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
@ -300,9 +308,15 @@ knative-serving <version number> True
config:
network:
ingress.class: "kourier.ingress.networking.knative.dev"
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
1. Fetch the External IP or CNAME:
```bash
kubectl --namespace knative-serving get service kourier


@ -12,17 +12,24 @@ The Knative Operator supports up to the last three major releases. For example,
To upgrade, apply the Operator custom resources, adding the `spec.version` for the Knative version that you want to upgrade to:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.23"
EOF
```
1. Copy the YAML below into a file:
```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec:
version: "0.23"
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
## Verifying an upgrade by viewing pods
@ -112,8 +119,9 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v
=== "Knative Serving"
1. Copy the YAML below into a file:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
@ -121,13 +129,20 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v
namespace: knative-serving
spec:
version: "0.22"
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
=== "Knative Eventing"
1. Copy the YAML below into a file:
```yaml
kubectl apply -f - <<EOF
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
@ -135,5 +150,10 @@ If the upgrade fails, you can rollback to restore your Knative to the previous v
namespace: knative-eventing
spec:
version: "0.22"
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.


@ -102,22 +102,29 @@ Knative system pods access your application using the following paths:
The `/metrics` path allows the autoscaler pod to collect metrics.
The `/healthz` path allows system pods to probe the service.
You can add the `/metrics` and `/healthz` paths to the AuthorizationPolicy as shown in the example:
To add the `/metrics` and `/healthz` paths to the AuthorizationPolicy:
```yaml
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: allowlist-by-paths
namespace: serving-tests
spec:
action: ALLOW
rules:
- to:
- operation:
paths:
- /metrics # The path to collect metrics by system pod.
- /healthz # The path to probe by system pod.
EOF
```
1. Create a YAML file for your AuthorizationPolicy using the example below:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: allowlist-by-paths
namespace: serving-tests
spec:
action: ALLOW
rules:
- to:
- operation:
paths:
- /metrics # The path to collect metrics by system pod.
- /healthz # The path to probe by system pod.
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
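As a single snippet, with `allowlist-by-paths.yaml` as an assumed filename:

```shell
# Write the AuthorizationPolicy from step 1 to a file.
cat > allowlist-by-paths.yaml <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allowlist-by-paths
  namespace: serving-tests
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        paths:
        - /metrics   # Collected by the autoscaler pod.
        - /healthz   # Probed by system pods.
EOF

# The file is now ready to apply with: kubectl apply -f allowlist-by-paths.yaml
cat allowlist-by-paths.yaml
```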


@ -45,10 +45,9 @@ To change the {default-domain} value there are a few steps involved:
You can also apply an updated domain configuration:
1. Replace the `example.org` and `example.com` values with the new domain you want to use and run the command:
1. Create a YAML file using the template below:
```yaml
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
@ -64,8 +63,15 @@ You can also apply an updated domain configuration:
# Although it will match all routes, it is the least-specific rule so it
# will only be used if no other domain matches.
example.com: ""
EOF
```
Replace `example.org` and `example.com` with the new domain you want to use.
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
## Deploy an application


@ -156,10 +156,10 @@ continue below for instructions about manually adding a certificate.
=== "Contour"
To manually add a TLS certificate to your Knative cluster, you must create a
Kubernetes secret and then configure the Knative Contour plugin
Kubernetes secret and then configure the Knative Contour plugin.
1. Create a Kubernetes secret to hold your TLS certificate, `cert.pem`, and the
private key, `key.pem`, by entering the following command:
private key, `key.pem`, by running the command:
```bash
kubectl create -n contour-external secret tls default-cert \
@ -167,13 +167,13 @@ continue below for instructions about manually adding a certificate.
--cert cert.pem
```
!!! warning
!!! note
Take note of the namespace and secret name. You will need these in future steps.
1. Contour requires you to create a delegation to use this certificate and private key in different namespaces. You can create this resource by running the command:
1. To use this certificate and private key in different namespaces, you must
create a delegation. To do so, create a YAML file using the template below:
```yaml
kubectl apply -f - <<EOF
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
@ -184,11 +184,16 @@ continue below for instructions about manually adding a certificate.
- secretName: default-cert
targetNamespaces:
- "*"
EOF
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
1. Update the Knative Contour plugin to start using the certificate as a fallback
when auto-TLS is disabled. This can be done with the following patch:
1. Update the Knative Contour plugin to use the certificate as a fallback
when auto-TLS is disabled by running the command:
```bash
kubectl patch configmap config-contour -n knative-serving \


@ -45,72 +45,75 @@ Knative supports the following Auto TLS modes:
## Enabling Auto TLS
1. Create and add the `ClusterIssuer` configuration file to your Knative cluster to define who issues the TLS certificates, how requests are validated,
1. Create and add the `ClusterIssuer` configuration file to your Knative cluster
to define who issues the TLS certificates, how requests are validated,
and which DNS provider validates those requests.
### ClusterIssuer for DNS-01 challenge
- **ClusterIssuer for DNS-01 challenge:** use the cert-manager reference to determine how to configure your `ClusterIssuer` file.
Use the cert-manager reference to determine how to configure your
`ClusterIssuer` file:
- See the generic
[`ClusterIssuer` example](https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer)
- Also see the
[`DNS01` example](https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html)
- See the generic [`ClusterIssuer` example](https://cert-manager.io/docs/configuration/acme/#creating-a-basic-acme-issuer)
- Also see the
[`DNS01` example](https://docs.cert-manager.io/en/latest/tasks/acme/configuring-dns01/index.html)
**Example**: Cloud DNS `ClusterIssuer` configuration file:
For example, the following `ClusterIssuer` file named `letsencrypt-issuer` is
configured for the Let's Encrypt CA and Google Cloud DNS.
The Let's Encrypt account info, required `DNS-01` challenge type, and
Cloud DNS provider info is defined under `spec`.
The following `ClusterIssuer` file, named `letsencrypt-issuer`, is
configured for the Let's Encrypt CA and Google Cloud DNS. Under `spec`,
the Let's Encrypt account info, required `DNS-01` challenge type, and
Cloud DNS provider info are defined.
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-dns-issuer
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
# This will register an issuer with LetsEncrypt. Replace
# with your admin email address.
email: myemail@gmail.com
privateKeySecretRef:
# Set privateKeySecretRef to any unused secret name.
name: letsencrypt-dns-issuer
solvers:
- dns01:
clouddns:
# Set this to your GCP project-id
project: $PROJECT_ID
# Set this to the secret that we publish our service account key
# in the previous step.
serviceAccountSecretRef:
name: cloud-dns-key
key: key.json
```
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-dns-issuer
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
# This will register an issuer with LetsEncrypt. Replace
# with your admin email address.
email: myemail@gmail.com
privateKeySecretRef:
# Set privateKeySecretRef to any unused secret name.
name: letsencrypt-dns-issuer
solvers:
- dns01:
clouddns:
# Set this to your GCP project-id
project: $PROJECT_ID
# Set this to the secret that we publish our service account key
# in the previous step.
serviceAccountSecretRef:
name: cloud-dns-key
key: key.json
```
- **ClusterIssuer for HTTP-01 challenge**
### ClusterIssuer for HTTP-01 challenge
To apply the ClusterIssuer for HTTP01 challenge:
Run the following command to apply the ClusterIssuer for HTTP-01 challenge:
1. Create a YAML file using the template below:
```yaml
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-http01-issuer
spec:
acme:
privateKeySecretRef:
name: letsencrypt
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress:
class: istio
EOF
```
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-http01-issuer
spec:
acme:
privateKeySecretRef:
name: letsencrypt
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress:
class: istio
```
1. Apply the YAML file by running the command:
```bash
kubectl apply -f <filename>.yaml
```
Where `<filename>` is the name of the file you created in the previous step.
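The two steps can be combined as follows; `cluster-issuer.yaml` is an assumed filename:

```shell
# Write the HTTP-01 ClusterIssuer template to a file.
cat > cluster-issuer.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-http01-issuer
spec:
  acme:
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: istio
EOF

# The file is now ready to apply with: kubectl apply -f cluster-issuer.yaml
cat cluster-issuer.yaml
```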
1. Ensure that the ClusterIssuer is created successfully: