From e6276724bbaa4e7ff42504dc6fdfb2457f7bfa4f Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Sat, 30 Apr 2022 09:21:16 +0800
Subject: [PATCH] [en] modify link about debug

Signed-off-by: xin.li
---
 .../2018-08-03-make-kubernetes-production-grade-anywhere.md | 2 +-
 .../concepts/configuration/manage-resources-containers.md | 4 ++--
 content/en/docs/concepts/overview/components.md | 2 +-
 content/en/docs/setup/production-environment/_index.md | 2 +-
 .../tools/kubeadm/troubleshooting-kubeadm.md | 2 +-
 .../windows/intro-windows-in-kubernetes.md | 4 ++--
 .../tasks/run-application/force-delete-stateful-set-pod.md | 2 +-
 .../en/docs/tasks/run-application/horizontal-pod-autoscale.md | 2 +-
 .../run-application/run-replicated-stateful-application.md | 2 +-
 content/en/docs/tutorials/stateful-application/cassandra.md | 2 +-
 10 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
index 329b2c4de7..00416256e0 100644
--- a/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
+++ b/content/en/blog/_posts/2018-08-03-make-kubernetes-production-grade-anywhere.md
@@ -174,7 +174,7 @@ Cluster-distributed stateful services (e.g., Cassandra) can benefit from splitti
 
 ## Other considerations
 
-[Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
+[Logs](/docs/concepts/cluster-administration/logging/) and [metrics](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/) (if collected and persistently retained) are valuable to diagnose outages, but given the variety of technologies available it will not be addressed in this blog. If Internet connectivity is available, it may be desirable to retain logs and metrics externally at a central location.
 
 Your production deployment should utilize an automated installation, configuration and update tool (e.g., [Ansible](https://github.com/kubernetes-incubator/kubespray), [BOSH](https://github.com/cloudfoundry-incubator/kubo-deployment), [Chef](https://github.com/chef-cookbooks/kubernetes), [Juju](/docs/getting-started-guides/ubuntu/installation/), [kubeadm](/docs/reference/setup-tools/kubeadm/), [Puppet](https://forge.puppet.com/puppetlabs/kubernetes), etc.). A manual process will have repeatability issues, be labor intensive, error prone, and difficult to scale. [Certified distributions](https://www.cncf.io/certification/software-conformance/#logos) are likely to include a facility for retaining configuration settings across updates, but if you implement your own install and config toolchain, then retention, backup and recovery of the configuration artifacts is essential. Consider keeping your deployment components and settings under a version control system such as Git.
diff --git a/content/en/docs/concepts/configuration/manage-resources-containers.md b/content/en/docs/concepts/configuration/manage-resources-containers.md
index 575f96f04d..76c26a2c65 100644
--- a/content/en/docs/concepts/configuration/manage-resources-containers.md
+++ b/content/en/docs/concepts/configuration/manage-resources-containers.md
@@ -229,9 +229,9 @@ see the [Troubleshooting](#troubleshooting) section.
 The kubelet reports the resource usage of a Pod as part of the Pod
 [`status`](/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status).
 
-If optional [tools for monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
+If optional [tools for monitoring](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)
 are available in your cluster, then Pod resource usage can be retrieved either
-from the [Metrics API](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-api)
+from the [Metrics API](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api)
 directly or from your monitoring tools.
 
 ## Local ephemeral storage
diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md
index 60433f63e5..387fc157d9 100644
--- a/content/en/docs/concepts/overview/components.md
+++ b/content/en/docs/concepts/overview/components.md
@@ -114,7 +114,7 @@ Containers started by Kubernetes automatically include this DNS server in their
 
 ### Container Resource Monitoring
 
-[Container Resource Monitoring](/docs/tasks/debug-application-cluster/resource-usage-monitoring/) records generic time-series metrics
+[Container Resource Monitoring](/docs/tasks/debug/debug-cluster/resource-usage-monitoring/) records generic time-series metrics
 about containers in a central database, and provides a UI for browsing that data.
 
 ### Cluster-level Logging
diff --git a/content/en/docs/setup/production-environment/_index.md b/content/en/docs/setup/production-environment/_index.md
index 1611170ec9..7d8200a6b3 100644
--- a/content/en/docs/setup/production-environment/_index.md
+++ b/content/en/docs/setup/production-environment/_index.md
@@ -197,7 +197,7 @@ are some virtualization platforms that can be scripted to spin up new nodes
   based on demand.
 - *Set up node health checks*: For important workloads, you want to make sure that the nodes and pods running on those nodes are healthy.
 Using the
-[Node Problem Detector](/docs/tasks/debug-application-cluster/monitor-node-health/)
+[Node Problem Detector](/docs/tasks/debug/debug-cluster/monitor-node-health/)
 daemon, you can ensure your nodes are healthy.
 
 ## Production user management
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
index 9108ecafcd..7cfa5b1281 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
@@ -163,7 +163,7 @@ services](/docs/concepts/services-networking/service/#type-nodeport) or use `Hos
 
 ## Pods are not accessible via their Service IP
 
-- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug-application-cluster/debug-service/#a-pod-fails-to-reach-itself-via-the-service-ip)
+- Many network add-ons do not yet enable [hairpin mode](/docs/tasks/debug/debug-application/debug-service/#a-pod-fails-to-reach-itself-via-the-service-ip)
   which allows pods to access themselves via their Service IP. This is an issue related to [CNI](https://github.com/containernetworking/cni/issues/476). Please contact the network add-on provider to get the latest status of their support for hairpin mode.
diff --git a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
index add81bd71e..e14ca99cac 100644
--- a/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
+++ b/content/en/docs/setup/production-environment/windows/intro-windows-in-kubernetes.md
@@ -609,7 +609,7 @@ None of the Pod [`securityContext`](/docs/reference/kubernetes-api/workload-reso
 ### Node problem detector
 
 The node problem detector (see
-[Monitor Node Health](/docs/tasks/debug-application-cluster/monitor-node-health/))
+[Monitor Node Health](/docs/tasks/debug/debug-cluster/monitor-node-health/))
 is not compatible with Windows.
 
 ### Pause container
@@ -705,7 +705,7 @@ Privileged containers are [not supported](#compatibility-v1-pod-spec-containers-
 
 ## Getting help and troubleshooting {#troubleshooting}
 
 Your main source of help for troubleshooting your Kubernetes cluster should start
-with the [Troubleshooting](/docs/tasks/debug-application-cluster/troubleshooting/)
+with the [Troubleshooting](/docs/tasks/debug/debug-cluster/)
 page. Some additional, Windows-specific troubleshooting help is included
diff --git a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
index 0001f4c9f4..a4a145f5b4 100644
--- a/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
+++ b/content/en/docs/tasks/run-application/force-delete-stateful-set-pod.md
@@ -90,6 +90,6 @@ Always perform force deletion of StatefulSet Pods carefully and with complete kn
 
 ## {{% heading "whatsnext" %}}
 
-Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/).
+Learn more about [debugging a StatefulSet](/docs/tasks/debug/debug-application/debug-statefulset/).
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index 0039254f7e..e9a446287c 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -90,7 +90,7 @@ The common use for HorizontalPodAutoscaler is to configure it to fetch metrics f
 (`metrics.k8s.io`, `custom.metrics.k8s.io`, or `external.metrics.k8s.io`).
 The `metrics.k8s.io` API is usually provided by an add-on named Metrics Server, which needs to be launched separately. For more information about resource metrics, see
-[Metrics Server](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server).
+[Metrics Server](/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server).
 
 [Support for metrics APIs](#support-for-metrics-apis) explains the stability guarantees and support status for these different APIs.
diff --git a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
index e98830b9e3..03da601a48 100644
--- a/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
+++ b/content/en/docs/tasks/run-application/run-replicated-stateful-application.md
@@ -532,7 +532,7 @@ kubectl delete pvc data-mysql-4
 
 ## {{% heading "whatsnext" %}}
 
 * Learn more about [scaling a StatefulSet](/docs/tasks/run-application/scale-stateful-set/).
-* Learn more about [debugging a StatefulSet](/docs/tasks/debug-application-cluster/debug-stateful-set/).
+* Learn more about [debugging a StatefulSet](/docs/tasks/debug/debug-application/debug-statefulset/).
 * Learn more about [deleting a StatefulSet](/docs/tasks/run-application/delete-stateful-set/).
 * Learn more about [force deleting StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/).
 * Look in the [Helm Charts repository](https://artifacthub.io/)
diff --git a/content/en/docs/tutorials/stateful-application/cassandra.md b/content/en/docs/tutorials/stateful-application/cassandra.md
index ffbf65286b..b6b656b3f5 100644
--- a/content/en/docs/tutorials/stateful-application/cassandra.md
+++ b/content/en/docs/tutorials/stateful-application/cassandra.md
@@ -93,7 +93,7 @@ cassandra ClusterIP None 9042/TCP 45s
 ```
 
 If you don't see a Service named `cassandra`, that means creation failed. Read
-[Debug Services](/docs/tasks/debug-application-cluster/debug-service/)
+[Debug Services](/docs/tasks/debug/debug-application/debug-service/)
 for help troubleshooting common issues.
 
 ## Using a StatefulSet to create a Cassandra ring
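
Not part of the patch itself: the URL rewrites above are uniform enough that a migration like this could be scripted rather than edited by hand. The sketch below is a hypothetical helper (the function name `rewrite_debug_links` and the usage paths are illustrative, not from this patch); it covers only the mappings that appear in the hunks above, where the old `/docs/tasks/debug-application-cluster/` tree was split into `/docs/tasks/debug/debug-cluster/` and `/docs/tasks/debug/debug-application/`.

```shell
# Sketch only: mechanical rewrite of the old debug task links, mirroring the
# by-hand renames in the patch above. Covers only the URLs this patch touches.
# Uses GNU sed's in-place flag; on BSD/macOS sed, use `sed -i ''` instead.
rewrite_debug_links() {
  # $1: path of a Markdown file to rewrite in place.
  sed -i \
    -e 's|/docs/tasks/debug-application-cluster/resource-usage-monitoring/|/docs/tasks/debug/debug-cluster/resource-usage-monitoring/|g' \
    -e 's|/docs/tasks/debug-application-cluster/resource-metrics-pipeline/|/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/|g' \
    -e 's|/docs/tasks/debug-application-cluster/monitor-node-health/|/docs/tasks/debug/debug-cluster/monitor-node-health/|g' \
    -e 's|/docs/tasks/debug-application-cluster/troubleshooting/|/docs/tasks/debug/debug-cluster/|g' \
    -e 's|/docs/tasks/debug-application-cluster/debug-service/|/docs/tasks/debug/debug-application/debug-service/|g' \
    -e 's|/docs/tasks/debug-application-cluster/debug-stateful-set/|/docs/tasks/debug/debug-application/debug-statefulset/|g' \
    "$1"
}

# Illustrative usage, from the website repo root: rewrite every English page
# that still references the old tree.
# git grep -lF 'debug-application-cluster' -- content/en | while read -r f; do
#   rewrite_debug_links "$f"
# done
```

Because the old paths are strict prefixes of full link targets, URL fragments such as `#metrics-server` survive the substitution unchanged, which matches what the hunks above do.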