Remove more references to legacy addons

Peter Rifel 2025-03-27 21:40:23 -05:00
parent f44547b3f6
commit 49f9941407
No known key found for this signature in database
4 changed files with 4 additions and 11 deletions


@@ -4,8 +4,7 @@ kOps incorporates management of some addons; we _have_ to manage some addons whi
 the kubernetes API is functional.
 In addition, kOps offers end-user management of addons via the `channels` tool (which is still experimental,
-but we are working on making it a recommended part of kubernetes addon management). We ship some
-curated addons in the [addons directory](https://github.com/kubernetes/kops/tree/master/addons), more information in the [addons document](/addons.md).
+but we are working on making it a recommended part of kubernetes addon management). More information in the [addons document](/addons.md).
 kOps uses the `channels` tool for system addon management also. Because kOps uses the same tool
@@ -34,9 +33,7 @@ If you want to update the bootstrap addons, you can run the following command to
 The channels tool adds a manifest-of-manifests file, of `Kind: Addons`, which allows for a description
 of the various manifest versions that are available. In this way kOps can manage updates
-as new versions of the addon are released. For example,
-the [dashboard addon](https://github.com/kubernetes/kops/blob/master/addons/kubernetes-dashboard/addon.yaml)
-lists multiple versions.
+as new versions of the addon are released.
 For example, a typical addons declaration might looks like this:
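
The `Kind: Addons` manifest-of-manifests mentioned in the hunk above is what the `channels` tool consumes. A minimal sketch of such a declaration, with illustrative addon names and versions (not taken from this commit), might look like:

```yaml
kind: Addons
metadata:
  name: example
spec:
  addons:
  # Each entry points at a concrete manifest version; the channels tool
  # compares versions to decide whether an update needs to be applied.
  - name: example.addons.k8s.io
    version: 1.0.0
    selector:
      k8s-addon: example.addons.k8s.io
    manifest: v1.0.0.yaml
```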


@@ -275,7 +275,7 @@ curl http://34.200.247.63
 ```
-**NOTE:** If you are replicating this exercise in a production environment, use a "real" load balancer in order to expose your replicated services. We are here just testing things so we really don't care right now about that, but, if you are doing this for a "real" production environment, either use an AWS ELB service, or an nginx ingress controller as described in our documentation: [NGINX Based ingress controller](https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx).
+**NOTE:** If you are replicating this exercise in a production environment, use a "real" load balancer in order to expose your replicated services. We are here just testing things so we really don't care right now about that, but, if you are doing this for a "real" production environment, either use an AWS ELB service, or an nginx ingress controller.
 Now, let's delete our recently-created deployment:


@@ -38,8 +38,7 @@ Specifically:
 ### Support For Multiple Metrics
-To enable the resource metrics API for scaling on CPU and memory, install metrics-server
-([installation instruction here][k8s-metrics-server]). The
+To enable the resource metrics API for scaling on CPU and memory, enable metrics-server by setting `spec.metricsServer.enabled=true` in the Cluster spec. The
 compatibility matrix is as follows:
 Metrics Server | Metrics API group/version | Supported Kubernetes version
@@ -54,5 +53,4 @@ Prometheus, checkout the [custom metrics adapter for Prometheus][k8s-prometheus-
 [k8s-aggregation-layer]: https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/
 [k8s-extend-api]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
 [k8s-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
-[k8s-metrics-server]: https://github.com/kubernetes/kops/blob/master/addons/metrics-server/README.md
 [k8s-prometheus-custom-metrics-adapter]: https://github.com/DirectXMan12/k8s-prometheus-adapter
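
The replacement wording above moves metrics-server from a standalone addon manifest to the managed `spec.metricsServer` field of the Cluster spec. A sketch of the relevant Cluster spec fragment — the cluster name is illustrative, only the `metricsServer` stanza comes from the commit text:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local  # illustrative cluster name
spec:
  # Enables the kOps-managed metrics-server addon, replacing the
  # legacy manifest previously linked from the addons directory.
  metricsServer:
    enabled: true
```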


@@ -1,5 +1,3 @@
-./addons/cluster-autoscaler/cluster-autoscaler.sh
-./addons/prometheus-operator/sync-repo.sh
 ./hack/dev-build.sh
 ./hooks/nvidia-bootstrap/image/run.sh
 ./hooks/nvidia-device-plugin/image/files/01-aws-nvidia-driver.sh