---
title: Important Components for Kubernetes
linkTitle: Components
# prettier-ignore
cSpell:ignore: alertmanagers containerd crio filelog gotime horizontalpodautoscalers hostfs hostmetrics iostream k8sattributes kubelet kubeletstats logtag replicasets replicationcontrollers resourcequotas statefulsets varlibdockercontainers varlogpods
---

The [OpenTelemetry Collector](/docs/collector/) supports many different receivers and processors to facilitate monitoring Kubernetes. This section covers the components that are most important for collecting Kubernetes data and enhancing it.

Components covered in this page:

- [Kubernetes Attributes Processor](#kubernetes-attributes-processor): adds Kubernetes metadata to incoming application telemetry.
- [Kubeletstats Receiver](#kubeletstats-receiver): pulls node, pod, and container metrics from the API server on a kubelet.
- [Filelog Receiver](#filelog-receiver): collects Kubernetes logs and application logs written to stdout/stderr.
- [Kubernetes Cluster Receiver](#kubernetes-cluster-receiver): collects cluster-level metrics and entity events.
- [Kubernetes Objects Receiver](#kubernetes-objects-receiver): collects objects, such as events, from the Kubernetes API server.
- [Prometheus Receiver](#prometheus-receiver): receives metrics in [Prometheus](https://prometheus.io/) format.
- [Host Metrics Receiver](#host-metrics-receiver): scrapes host metrics from Kubernetes nodes.

For application traces, metrics, or logs, we recommend the [OTLP receiver](https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver), but any receiver that fits your data is appropriate.

## Kubernetes Attributes Processor

| Deployment Pattern   | Usable |
| -------------------- | ------ |
| DaemonSet (agent)    | Yes    |
| Deployment (gateway) | Yes    |
| Sidecar              | No     |

The Kubernetes Attributes Processor automatically discovers Kubernetes pods, extracts their metadata, and adds the extracted metadata to spans, metrics, and logs as resource attributes.

**The Kubernetes Attributes Processor is one of the most important components for a collector running in Kubernetes. Any collector receiving application data should use it.** Because it adds Kubernetes context to your telemetry, the Kubernetes Attributes Processor lets you correlate your application's traces, metrics, and logs with your Kubernetes telemetry, such as pod metrics and traces.

The Kubernetes Attributes Processor uses the Kubernetes API to discover all pods running in a cluster and keeps a record of their IP addresses, pod UIDs, and interesting metadata. By default, data passing through the processor is associated with a pod via the incoming request's IP address, but different rules can be configured. Since the processor uses the Kubernetes API, it requires special permissions (see example below). If you're using the [OpenTelemetry Collector Helm chart](../../helm/collector/), you can use the [`kubernetesAttributes` preset](../../helm/collector/#kubernetes-attributes-preset) to get started.

The following attributes are added by default:

- `k8s.namespace.name`
- `k8s.pod.name`
- `k8s.pod.uid`
- `k8s.pod.start_time`
- `k8s.deployment.name`
- `k8s.node.name`

The Kubernetes Attributes Processor can also set custom resource attributes for traces, metrics, and logs using the Kubernetes labels and Kubernetes annotations you've added to your pods and namespaces.
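To put the processor in context, the following is a minimal sketch of a collector configuration that applies it to all incoming application telemetry. It assumes the OTLP receiver as the data source and uses the `debug` exporter only as a placeholder for whatever exporter your backend requires:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  # Enrich incoming telemetry with Kubernetes metadata using the default settings.
  k8sattributes:

exporters:
  # Placeholder exporter; replace with the exporter for your backend.
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [debug]
    logs:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [debug]
```

The following, more complete configuration shows the extraction options, including labels and annotations: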
```yaml
k8sattributes:
  auth_type: 'serviceAccount'
  extract:
    metadata: # extracted from the pod
      - k8s.namespace.name
      - k8s.pod.name
      - k8s.pod.start_time
      - k8s.pod.uid
      - k8s.deployment.name
      - k8s.node.name
    annotations:
      # Extracts the value of a pod annotation with key `annotation-one` and inserts it as a resource attribute with key `a1`
      - tag_name: a1
        key: annotation-one
        from: pod
      # Extracts the value of a namespace annotation with key `annotation-two` with a regex and inserts it as a resource attribute with key `a2`
      - tag_name: a2
        key: annotation-two
        regex: field=(?P<value>.+)
        from: namespace
    labels:
      # Extracts the value of a namespace label with key `label1` and inserts it as a resource attribute with key `l1`
      - tag_name: l1
        key: label1
        from: namespace
      # Extracts the value of a pod label with key `label2` with a regex and inserts it as a resource attribute with key `l2`
      - tag_name: l2
        key: label2
        regex: field=(?P<value>.+)
        from: pod
  pod_association: # How to associate the data to a pod (order matters)
    # First try to use the value of the resource attribute k8s.pod.ip
    - sources:
        - from: resource_attribute
          name: k8s.pod.ip
    # Then try to use the value of the resource attribute k8s.pod.uid
    - sources:
        - from: resource_attribute
          name: k8s.pod.uid
    # If neither of those work, use the request's connection to get the pod IP.
    - sources:
        - from: connection
```

There are also special configuration options for when the collector is deployed as a Kubernetes DaemonSet (agent) or as a Kubernetes Deployment (gateway). For details, see [Deployment Scenarios](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/k8sattributesprocessor#deployment-scenarios).

For Kubernetes Attributes Processor configuration details, see [Kubernetes Attributes Processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/k8sattributesprocessor).

Since the processor uses the Kubernetes API, it needs the correct permissions to work. For most use cases, you should give the service account running the collector the following permissions via a ClusterRole.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: collector
  namespace: <OTEL_COL_NAMESPACE>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups:
      - ''
    resources:
      - 'pods'
      - 'namespaces'
    verbs:
      - 'get'
      - 'watch'
      - 'list'
  - apiGroups:
      - 'apps'
    resources:
      - 'replicasets'
    verbs:
      - 'get'
      - 'list'
      - 'watch'
  - apiGroups:
      - 'extensions'
    resources:
      - 'replicasets'
    verbs:
      - 'get'
      - 'list'
      - 'watch'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: collector
    namespace: <OTEL_COL_NAMESPACE>
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
```

## Kubeletstats Receiver

| Deployment Pattern   | Usable                                                              |
| -------------------- | ------------------------------------------------------------------- |
| DaemonSet (agent)    | Preferred                                                           |
| Deployment (gateway) | Yes, but will only collect metrics from the node it is deployed on  |
| Sidecar              | No                                                                  |

Each Kubernetes node runs a kubelet that includes an API server. The Kubeletstats Receiver connects to that kubelet via the API server to collect metrics about the node and the workloads running on the node.

There are different methods for authentication, but typically a service account is used. The service account also needs proper permissions to pull data from the kubelet (see below).
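When the collector runs as a DaemonSet, each collector pod scrapes the kubelet on its own node. A common pattern, and the one assumed by the example configuration below, is to expose the node name to the collector container through the Kubernetes Downward API and reference it in the receiver's endpoint; the variable name `K8S_NODE_NAME` is a convention, not a requirement:

```yaml
# Excerpt from the collector DaemonSet pod spec: make the node name
# available as an environment variable for the kubeletstats endpoint.
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```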
If you're using the [OpenTelemetry Collector Helm chart](../../helm/collector/), you can use the [`kubeletMetrics` preset](../../helm/collector/#kubelet-metrics-preset) to get started.

By default, metrics will be collected for pods and nodes, but you can configure the receiver to collect container and volume metrics as well. The receiver also allows configuring how often the metrics are collected:

```yaml
receivers:
  kubeletstats:
    collection_interval: 10s
    auth_type: 'serviceAccount'
    endpoint: '${env:K8S_NODE_NAME}:10250'
    insecure_skip_verify: true
    metric_groups:
      - node
      - pod
      - container
```

For specific details about which metrics are collected, see [Default Metrics](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kubeletstatsreceiver/documentation.md). For specific configuration details, see [Kubeletstats Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/kubeletstatsreceiver).

Since the receiver uses the Kubernetes API, it needs the correct permissions to work. For most use cases, you should give the service account running the collector the following permissions via a ClusterRole.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: ['']
    resources: ['nodes/stats']
    verbs: ['get', 'watch', 'list']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: default
```

## Filelog Receiver

| Deployment Pattern   | Usable                                                           |
| -------------------- | ---------------------------------------------------------------- |
| DaemonSet (agent)    | Preferred                                                         |
| Deployment (gateway) | Yes, but will only collect logs from the node it is deployed on  |
| Sidecar              | Yes, but this would be considered advanced configuration          |

The Filelog Receiver tails and parses logs from files. Although it's not a Kubernetes-specific receiver, it is still the de facto solution for collecting any logs from Kubernetes.

The Filelog Receiver is composed of Operators that are chained together to process a log. Each Operator performs a simple responsibility, such as parsing a timestamp or JSON. Configuring a Filelog Receiver is not trivial. If you're using the [OpenTelemetry Collector Helm chart](../../helm/collector/), you can use the [`logsCollection` preset](../../helm/collector/#logs-collection-preset) to get started.

Since Kubernetes logs normally fit a set of standard formats, a typical Filelog Receiver configuration for Kubernetes looks like:

```yaml
filelog:
  include:
    - /var/log/pods/*/*/*.log
  exclude:
    # Exclude logs from all containers named otel-collector
    - /var/log/pods/*/otel-collector/*.log
  start_at: beginning
  include_file_path: true
  include_file_name: false
  operators:
    # Find out which format is used by Kubernetes
    - type: router
      id: get-format
      routes:
        - output: parser-docker
          expr: 'body matches "^\\{"'
        - output: parser-crio
          expr: 'body matches "^[^ Z]+ "'
        - output: parser-containerd
          expr: 'body matches "^[^ Z]+Z"'
    # Parse CRI-O format
    - type: regex_parser
      id: parser-crio
      regex: '^(?P