diff --git a/content/en/docs/concepts/architecture/garbage-collection.md b/content/en/docs/concepts/architecture/garbage-collection.md
index 8e149b76c4..aa743d3856 100644
--- a/content/en/docs/concepts/architecture/garbage-collection.md
+++ b/content/en/docs/concepts/architecture/garbage-collection.md
@@ -118,7 +118,7 @@ break the kubelet behavior and remove containers that should exist.
 To configure options for unused container and image garbage collection, tune the
 kubelet using a [configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
 and change the parameters related to garbage collection using the
-[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/)
 resource type.
 
 ### Container image lifecycle
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index c165106de6..399102a97f 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -506,7 +506,7 @@ in a cluster,
 |`custom-class-c` | 1000 |
 |`regular/unset` | 0 |
 
-Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+Within the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/)
 the settings for `shutdownGracePeriodByPodPriority` could look like:
 
 |Pod priority class value|Shutdown period|
@@ -625,7 +625,7 @@ onwards, swap memory support can be enabled on a per-node basis.
 
 To enable swap on a node, the `NodeSwap` feature gate must be enabled on
 the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
-[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
 must be set to false.
 
 {{< warning >}}
diff --git a/content/en/docs/concepts/cluster-administration/logging.md b/content/en/docs/concepts/cluster-administration/logging.md
index f6c3684a9b..110026f9df 100644
--- a/content/en/docs/concepts/cluster-administration/logging.md
+++ b/content/en/docs/concepts/cluster-administration/logging.md
@@ -81,15 +81,16 @@ See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl
 
 ![Node level logging](/images/docs/user-guide/logging/logging-node-level.png)
 
-A container runtime handles and redirects any output generated to a containerized application's `stdout` and `stderr` streams.
-Different container runtimes implement this in different ways; however, the integration with the kubelet is standardized
-as the _CRI logging format_.
+A container runtime handles and redirects any output generated to a containerized
+application's `stdout` and `stderr` streams.
+Different container runtimes implement this in different ways; however, the integration
+with the kubelet is standardized as the _CRI logging format_.
 
-By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node,
-all corresponding containers are also evicted, along with their logs.
+By default, if a container restarts, the kubelet keeps one terminated container with its logs.
+If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.
 
-The kubelet makes logs available to clients via a special feature of the Kubernetes API. The usual way to access this is
-by running `kubectl logs`.
+The kubelet makes logs available to clients via a special feature of the Kubernetes API.
+The usual way to access this is by running `kubectl logs`.
 
 ### Log rotation
 
@@ -101,7 +102,7 @@ If you configure rotation, the kubelet is responsible for rotating container log
 The kubelet sends this information to the container runtime (using CRI),
 and the runtime writes the container logs to the given location.
 
-You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration),
+You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
 `containerLogMaxSize` and `containerLogMaxFiles`,
 using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
 These settings let you configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
@@ -201,7 +202,8 @@ as your responsibility.
 
 ## Cluster-level logging architectures
 
-While Kubernetes does not provide a native solution for cluster-level logging, there are several common approaches you can consider. Here are some options:
+While Kubernetes does not provide a native solution for cluster-level logging, there are
+several common approaches you can consider. Here are some options:
 
 * Use a node-level logging agent that runs on every node.
 * Include a dedicated sidecar container for logging in an application pod.
@@ -211,14 +213,18 @@ While Kubernetes does not provide a native solution for cluster-level logging, t
 
 ![Using a node level logging agent](/images/docs/user-guide/logging/logging-with-node-agent.png)
 
-You can implement cluster-level logging by including a _node-level logging agent_ on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
+You can implement cluster-level logging by including a _node-level logging agent_ on each node.
+The logging agent is a dedicated tool that exposes logs or pushes logs to a backend.
+Commonly, the logging agent is a container that has access to a directory with log files from all of the
+application containers on that node.
 
 Because the logging agent must run on every node, it is recommended to run
 the agent as a `DaemonSet`.
 Node-level logging creates only one agent per node and doesn't require any
 changes to the applications running on the node.
 
-Containers write to stdout and stderr, but with no agreed format. A node-level agent collects these logs and forwards them for aggregation.
+Containers write to stdout and stderr, but with no agreed format. A node-level agent collects
+these logs and forwards them for aggregation.
 
 ### Using a sidecar container with the logging agent {#sidecar-container-with-logging-agent}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index 1ad41353c0..bea19dcf2b 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -53,13 +53,13 @@ setting up a cluster to use an external CA.
 
 You can use the `check-expiration` subcommand to check when certificates expire:
 
-```
+```shell
 kubeadm certs check-expiration
 ```
 
 The output is similar to this:
 
-```
+```console
 CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
 admin.conf                 Dec 30, 2020 23:36 UTC   364d                                    no
 apiserver                  Dec 30, 2020 23:36 UTC   364d            ca                      no
@@ -268,7 +268,7 @@ serverTLSBootstrap: true
 If you have already created the cluster you must adapt it by doing the following:
 - Find and edit the `kubelet-config-{{< skew currentVersion >}}` ConfigMap in the `kube-system`
 namespace. In that ConfigMap, the `kubelet` key has a
-[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+[KubeletConfiguration](/docs/reference/config-api/kubelet-config.v1beta1/)
 document as its value. Edit the KubeletConfiguration document to set `serverTLSBootstrap: true`.
 - On each node, add the `serverTLSBootstrap: true` field in `/var/lib/kubelet/config.yaml`
 and restart the kubelet with `systemctl restart kubelet`
@@ -284,6 +284,8 @@ These CSRs can be viewed using:
 
 ```shell
 kubectl get csr
+```
+```console
 NAME        AGE     SIGNERNAME                      REQUESTOR                      CONDITION
 csr-9wvgt   112s    kubernetes.io/kubelet-serving   system:node:worker-1           Pending
 csr-lz97v   1m58s   kubernetes.io/kubelet-serving   system:node:control-plane-1    Pending
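For reviewers cross-checking the retargeted links: every setting these pages mention (`containerLogMaxSize`, `containerLogMaxFiles`, `failSwapOn` with the `NodeSwap` feature gate, `shutdownGracePeriodByPodPriority`, and `serverTLSBootstrap`) lives in the same `kubelet.config.k8s.io/v1beta1` `KubeletConfiguration` document that the shortened URLs now point to. A combined sketch, with illustrative values only (not recommendations; the priority values below echo the `custom-class-c`/`regular` example table in nodes.md):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Container log rotation (logging.md): per-file size cap and file count
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
# Swap on a node (nodes.md): needs the NodeSwap feature gate
# plus failSwapOn set to false
failSwapOn: false
featureGates:
  NodeSwap: true
# Graceful node shutdown ordered by pod priority (nodes.md)
shutdownGracePeriodByPodPriority:
- priority: 1000
  shutdownGracePeriodSeconds: 120
- priority: 0
  shutdownGracePeriodSeconds: 60
# Kubelet serving-certificate bootstrap (kubeadm-certs.md)
serverTLSBootstrap: true
```

A document like this is passed to the kubelet via the kubelet configuration file mechanism (`--config`) described in the task page each of these docs links to.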