docs updates (#3845)

* include a section about disabling autotls for a namespace

* handle rename of namespace certificate controller

* handle rename of other net-* controllers
Dave Protasowski 2021-06-24 16:37:21 -04:00 committed by GitHub
parent 8b4a316347
commit 2e13894eeb
6 changed files with 52 additions and 38 deletions


@@ -171,14 +171,14 @@ kubectl get deployment -n knative-serving
If Knative Serving has been successfully deployed, all deployments of the Knative Serving will show `READY` status. Here
is a sample output:
```
-NAME READY UP-TO-DATE AVAILABLE AGE
-activator 1/1 1 1 18s
-autoscaler 1/1 1 1 18s
-autoscaler-hpa 1/1 1 1 14s
-controller 1/1 1 1 18s
-istio-webhook 1/1 1 1 12s
-networking-istio 1/1 1 1 12s
-webhook 1/1 1 1 17s
+NAME READY UP-TO-DATE AVAILABLE AGE
+activator 1/1 1 1 18s
+autoscaler 1/1 1 1 18s
+autoscaler-hpa 1/1 1 1 14s
+controller 1/1 1 1 18s
+net-istio-webhook 1/1 1 1 12s
+net-istio-controller 1/1 1 1 12s
+webhook 1/1 1 1 17s
```
1. Check the status of Knative Serving Custom Resource:


@@ -116,7 +116,7 @@ You can use the `spec.registry` section of the operator CR to change the image r
- `default`: this field defines a image reference template for all Knative images. The format
is `example-registry.io/custom/path/${NAME}:{CUSTOM-TAG}`. If you use the same tag for all your images, the only difference is the image name. `${NAME}` is
a pre-defined variable in the operator corresponding to the container name. If you name the images in your private repo to align with the container names (
-`activator`, `autoscaler`, `controller`, `webhook`, `autoscaler-hpa`, `networking-istio`, and `queue-proxy`), the `default` argument should be sufficient.
+`activator`, `autoscaler`, `controller`, `webhook`, `autoscaler-hpa`, `net-istio-controller`, and `queue-proxy`), the `default` argument should be sufficient.
- `override`: a map from container name to the full registry
location. This section is only needed when the registry images do not match the common naming format. For containers whose name matches a key, the value is used in preference to the image name calculated by `default`. If a container's name does not match a key in `override`, the template in `default` is used.
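The template format described above can be sketched as follows (the registry path and tag are illustrative placeholders, not values from this repo):

```yaml
# Hypothetical example: a single `default` template covers every container
# whose image name in the private registry matches its container name.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  registry:
    # ${NAME} is substituted with each container's name at deploy time
    default: example-registry.io/custom/path/${NAME}:v0.13.0
```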
@@ -146,7 +146,7 @@ First, you need to make sure your images are pushed to the following image tags:
| `controller` | `docker.io/knative-images/controller:v0.13.0` |
| `webhook` | `docker.io/knative-images/webhook:v0.13.0` |
| `autoscaler-hpa` | `docker.io/knative-images/autoscaler-hpa:v0.13.0` |
-| `networking-istio` | `docker.io/knative-images/networking-istio:v0.13.0` |
+| `net-istio-controller` | `docker.io/knative-images/net-istio-controller:v0.13.0` |
| `queue-proxy` | `docker.io/knative-images/queue-proxy:v0.13.0` |
Then, you need to define your operator CR with the following content:
@@ -177,8 +177,8 @@ For example, given the following images:
| `controller` | `docker.io/knative-images-repo3/controller:v0.13.0` |
| `webhook` | `docker.io/knative-images-repo4/webhook:v0.13.0` |
| `autoscaler-hpa` | `docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0` |
-| `networking-istio` | `docker.io/knative-images-repo6/prefix-networking-istio:v0.13.0` |
-| `(net-istio) webhook` | `docker.io/knative-images-repo6/networking-istio-webhook:v0.13.0` |
+| `net-istio-controller` | `docker.io/knative-images-repo6/prefix-net-istio-controller:v0.13.0` |
+| `net-istio-webhook` | `docker.io/knative-images-repo6/net-istio-webhook:v0.13.0` |
| `queue-proxy` | `docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0` |
The operator CR should be modified to include the full list:
@@ -197,8 +197,8 @@ spec:
controller: docker.io/knative-images-repo3/controller:v0.13.0
webhook: docker.io/knative-images-repo4/webhook:v0.13.0
autoscaler-hpa: docker.io/knative-images-repo5/autoscaler-hpa:v0.13.0
-networking-istio: docker.io/knative-images-repo6/prefix-networking-istio:v0.13.0
-istio-webhook/webhook: docker.io/knative-images-repo6/networking-istio-webhook:v0.13.0
+net-istio-controller: docker.io/knative-images-repo6/prefix-net-istio-controller:v0.13.0
+net-istio-webhook/webhook: docker.io/knative-images-repo6/net-istio-webhook:v0.13.0
queue-proxy: docker.io/knative-images-repo7/queue-proxy-suffix:v0.13.0
```
@@ -359,7 +359,7 @@ spec:
## High availability
-By default, Knative Serving runs a single instance of each controller. The `spec.high-availability` field allows you to configure the number of replicas for the following leader-elected controllers: `controller`, `autoscaler-hpa`, `networking-istio`. This field also configures the `HorizontalPodAutoscaler` resources for the data plane (`activator`):
+By default, Knative Serving runs a single instance of each controller. The `spec.high-availability` field allows you to configure the number of replicas for the following leader-elected controllers: `controller`, `autoscaler-hpa`, `net-istio-controller`. This field also configures the `HorizontalPodAutoscaler` resources for the data plane (`activator`):
The following configuration specifies a replica count of 3 for the controllers and a minimum of 3 activators (which may scale higher if needed):
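A minimal sketch of such a configuration, assuming the field layout described above (the CR name and replica count are illustrative):

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  # Run 3 replicas of each leader-elected controller; the activator's
  # HorizontalPodAutoscaler gets a matching minimum replica count.
  high-availability:
    replicas: 3
```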
@@ -378,7 +378,7 @@ spec:
The operator custom resource allows you to configure system resources for the Knative system containers.
Requests and limits can be configured for the following containers: `activator`, `autoscaler`, `controller`, `webhook`, `autoscaler-hpa`,
-`networking-istio` and `queue-proxy`.
+`net-istio-controller` and `queue-proxy`.
To override resource settings for a specific container, create an entry in the `spec.resources` list with the container name and the [Kubernetes resource settings](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container).
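As a sketch, an override for the `activator` container might look like the following (the resource values here are illustrative, not recommendations):

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  resources:
  # One list entry per container whose defaults you want to override
  - container: activator
    requests:
      cpu: 300m
      memory: 100Mi
    limits:
      cpu: "1"
      memory: 250Mi
```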


@@ -46,8 +46,8 @@ You can confirm that your Knative components have upgraded successfully, by view
autoscaler-6bbc885cfd-vkrgg 1/1 Running 0 57s
autoscaler-hpa-5cdd7c6b69-hxzv4 1/1 Running 0 55s
controller-64dd4bd56-wzb2k 1/1 Running 0 57s
-istio-webhook-75cc84fbd4-dkcgt 1/1 Running 0 50s
-networking-istio-6dcbd4b5f4-mxm8q 1/1 Running 0 51s
+net-istio-webhook-75cc84fbd4-dkcgt 1/1 Running 0 50s
+net-istio-controller-6dcbd4b5f4-mxm8q 1/1 Running 0 51s
storage-version-migration-serving-serving-0.20.0-82hjt 0/1 Completed 0 50s
webhook-75f5d4845d-zkrdt 1/1 Running 0 56s
```


@@ -90,7 +90,7 @@ activator-79f674fb7b-dgvss 2/2 Running 0 43s
autoscaler-96dc49858-b24bm 2/2 Running 1 43s
autoscaler-hpa-d887d4895-njtrb 1/1 Running 0 43s
controller-6bcdd87fd6-zz9fx 1/1 Running 0 41s
-networking-istio-7fcd97cbf7-z2xmr 1/1 Running 0 40s
+net-istio-controller-7fcd97cbf7-z2xmr 1/1 Running 0 40s
webhook-747b799559-4sj6q 1/1 Running 0 41s
```


@@ -33,13 +33,13 @@ that are active when running Knative Serving.
This returns an output similar to the following:
```{ .bash .no-copy }
-NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
-activator 1 1 1 1 1h
-autoscaler 1 1 1 1 1h
-controller 1 1 1 1 1h
-networking-certmanager 1 1 1 1 1h
-networking-istio 1 1 1 1 1h
-webhook 1 1 1 1 1h
+NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+activator 1 1 1 1 1h
+autoscaler 1 1 1 1 1h
+controller 1 1 1 1 1h
+net-certmanager-controller 1 1 1 1 1h
+net-istio-controller 1 1 1 1 1h
+webhook 1 1 1 1 1h
```
These services and deployments are installed by the `serving.yaml` file during
@@ -71,13 +71,13 @@ The webhook intercepts all Kubernetes API calls as well as all CRD insertions
and updates. It sets default values, rejects inconsistent and invalid objects,
and validates and mutates Kubernetes API calls.
-### Deployment: networking-certmanager
+### Deployment: net-certmanager-controller
The net-certmanager-controller deployment reconciles cluster ingresses into cert-manager objects.
-### Deployment: networking-istio
+### Deployment: net-istio-controller
-The networking-istio deployment reconciles a cluster's ingress into an
+The net-istio-controller deployment reconciles a cluster's ingress into an
[Istio virtual service](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/).
## What's Next


@@ -130,36 +130,36 @@ providers, are provided in
[DNS01 challenge providers and configuration instructions](https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers).
-### Install networking-certmanager deployment
+### Install net-certmanager-controller deployment
-1. Determine if `networking-certmanager` is already installed by running the
+1. Determine if `net-certmanager-controller` is already installed by running the
following command:
```bash
-kubectl get deployment networking-certmanager -n knative-serving
+kubectl get deployment net-certmanager-controller -n knative-serving
```
-1. If `networking-certmanager` is not found, run the following command:
+1. If `net-certmanager-controller` is not found, run the following command:
```bash
kubectl apply --filename {{ artifact( repo="net-certmanager", file="release.yaml") }}
```
-### Install networking-ns-cert component
+### Install net-nscert-controller component
-If you choose to use the mode of provisioning certificate per namespace, you need to install `networking-ns-cert` components.
+If you choose to use the mode of provisioning a certificate per namespace, you need to install the `net-nscert-controller` components.
**IMPORTANT:** Provisioning a certificate per namespace only works with DNS-01
challenge. This component cannot be used with HTTP-01 challenge.
-1. Determine if `networking-ns-cert` deployment is already installed by
+1. Determine if `net-nscert-controller` deployment is already installed by
running the following command:
```bash
-kubectl get deployment networking-ns-cert -n knative-serving
+kubectl get deployment net-nscert-controller -n knative-serving
```
-1. If `networking-ns-cert` deployment is not found, run the following command:
+1. If `net-nscert-controller` deployment is not found, run the following command:
```bash
kubectl apply --filename {{ artifact( repo="serving", file="serving-nscert.yaml") }}
@@ -330,3 +330,17 @@ Using the previous `autoscale-go` example:
NAME URL LATEST AGE CONDITIONS READY REASON
autoscale-go http://autoscale-go.default.1.arenault.dev autoscale-go-dd42t 8m17s 3 OK / 3 True
```
+### Disable Auto TLS per namespace
+If you have Auto TLS enabled to provision a certificate per namespace, you can disable it for an individual namespace by adding the annotation `networking.knative.dev/disableWildcardCert: "true"`.
+1. Edit your namespace with `kubectl edit namespace default` and add the annotation:
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  annotations:
+    ...
+    networking.knative.dev/disableWildcardCert: "true"
+    ...
+```
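If you prefer not to open an editor, the same annotation can be applied with a one-liner (assuming the `default` namespace, as in the step above):

```bash
kubectl annotate namespace default networking.knative.dev/disableWildcardCert=true
```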