[KEP-2731] Add docs for Kubelet OpenTelemetry Tracing graduation

Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Sascha Grunert 2023-03-10 08:24:11 +01:00
parent e1c71e37c9
commit 53641dfce9
GPG Key ID: 09D97D153EF94D93
2 changed files with 19 additions and 6 deletions


@@ -76,7 +76,7 @@ For more information about the `TracingConfiguration` struct, see
 ### kubelet traces
-{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}
 The kubelet CRI interface and authenticated http servers are instrumented to generate
 trace spans. As with the apiserver, the endpoint and sampling rate are configurable.
@@ -86,10 +86,7 @@ Enabled without a configured endpoint, the default OpenTelemetry Collector recei
 #### Enabling tracing in the kubelet
-To enable tracing, enable the `KubeletTracing`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-on the kubelet. Also, provide the kubelet with a
-[tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.25/tracing/api/v1/types.go).
+To enable tracing, apply the [tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.27/tracing/api/v1/types.go).
 This is an example snippet of a kubelet config that records spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:
 ```yaml
@@ -103,6 +100,21 @@ tracing:
   samplingRatePerMillion: 100
 ```
+
+If `samplingRatePerMillion` is set to one million (`1000000`), then every
+span will be sent to the exporter.
+
+The kubelet in Kubernetes v{{< skew currentVersion >}} collects spans from
+the garbage collection and pod synchronization routines as well as from
+every gRPC method. Connected container runtimes like CRI-O and containerd
+can link these traces to their own exported spans to provide additional
+context.
+
+Please note that exporting spans always comes with a small networking and
+CPU overhead, depending on the overall configuration of the system. If such
+an issue occurs in a cluster that is running with tracing enabled, mitigate
+it by either reducing the `samplingRatePerMillion` or by disabling tracing
+completely by removing the configuration.
 ## Stability
 Tracing instrumentation is still under active development, and may change
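For orientation, here is a sketch of what the full kubelet configuration referenced by the snippet in the hunks above might look like. Only `samplingRatePerMillion: 100` and the sentence about the default OpenTelemetry endpoint appear in the diff context; the `apiVersion`/`kind` header, the explicit `featureGates` entry, and the commented `endpoint` value (the default OTLP gRPC receiver on `localhost:4317`) are assumptions added for illustration.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Assumed redundant once KubeletTracing is Beta and on by default in v1.27,
  # but kept here to make the intent explicit.
  KubeletTracing: true
tracing:
  # Leaving the endpoint unset uses the default OpenTelemetry Collector
  # receiver, localhost:4317 (OTLP over gRPC).
  # endpoint: localhost:4317
  # Record spans for 1 in 10000 requests; 1000000 would export every span.
  samplingRatePerMillion: 100
```

Setting `samplingRatePerMillion` to `1000000` exports every span, which can help while debugging but increases the networking and CPU overhead described in the paragraph added above.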


@@ -131,7 +131,8 @@ For a reference to old feature gates that are removed, please refer to
 | `KubeletPodResourcesGetAllocatable` | `false` | Alpha | 1.21 | 1.22 |
 | `KubeletPodResourcesGetAllocatable` | `true` | Beta | 1.23 | |
 | `KubeletPodResourcesDynamicResources` | `false` | Alpha | 1.27 | |
-| `KubeletTracing` | `false` | Alpha | 1.25 | |
+| `KubeletTracing` | `false` | Alpha | 1.25 | 1.26 |
+| `KubeletTracing` | `true` | Beta | 1.27 | |
 | `LegacyServiceAccountTokenTracking` | `false` | Alpha | 1.26 | 1.26 |
 | `LegacyServiceAccountTokenTracking` | `true` | Beta | 1.27 | |
 | `LocalStorageCapacityIsolationFSQuotaMonitoring` | `false` | Alpha | 1.15 | - |
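The feature-gate rows above show `KubeletTracing` defaulting to `true` from v1.27. An operator who wants to opt out, for example because of the overhead discussed in the first file, can override the gate explicitly. A minimal sketch, assuming the gate is set through the kubelet configuration file rather than the `--feature-gates` command-line flag:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Override the v1.27 Beta default of true to turn kubelet tracing off entirely.
  KubeletTracing: false
```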