opentelemetry-collector/processor/batchprocessor
Bogdan Drutu 94a6cdde67
Rename pdata Size to OtlpProtoSize, fix some other comments (#2726)
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
2021-03-19 14:30:25 -04:00
| File | Last commit | Date |
| --- | --- | --- |
| testdata | Add new clean nop components and use them in config tests (#2655) | 2021-03-10 10:45:18 -08:00 |
| README.md | Support max batch size for metrics (#2422) | 2021-02-09 12:28:23 -08:00 |
| batch_processor.go | Rename pdata Size to OtlpProtoSize, fix some other comments (#2726) | 2021-03-19 14:30:25 -04:00 |
| batch_processor_test.go | Rename pdata Size to OtlpProtoSize, fix some other comments (#2726) | 2021-03-19 14:30:25 -04:00 |
| config.go | Update copyright (#1597) | 2020-08-19 18:25:44 -07:00 |
| config_test.go | Add new clean nop components and use them in config tests (#2655) | 2021-03-10 10:45:18 -08:00 |
| factory.go | Add config settings for component telemetry, move the flag (#2148) | 2020-11-14 08:49:31 -05:00 |
| factory_test.go | Rename processor component.TraceProcessor to component.TracesProcessor, and equivalent Create method feature request (#2026) | 2020-10-28 18:11:25 -04:00 |
| metrics.go | Remove level from all the MetricViews calls (#2149) | 2020-11-16 23:11:27 -05:00 |
| metrics_test.go | Remove level from all the MetricViews calls (#2149) | 2020-11-16 23:11:27 -05:00 |
| splitmetrics.go | Support max batch size for metrics (#2422) | 2021-02-09 12:28:23 -08:00 |
| splitmetrics_test.go | Support max batch size for metrics (#2422) | 2021-02-09 12:28:23 -08:00 |
| splittraces.go | Avoid extra allocations (#2449) | 2021-02-09 12:27:38 -08:00 |
| splittraces_test.go | Move testdata up one level, no point in being data/testdata (#2198) | 2020-11-24 11:53:36 -08:00 |

README.md

Batch Processor

Supported pipeline types: metrics, traces, logs

The batch processor accepts spans, metrics, or logs and places them into batches. Batching improves compression of the data and reduces the number of outgoing connections required to transmit it. This processor supports both size-based and time-based batching.

It is highly recommended to configure the batch processor on every collector. The batch processor should be placed in the pipeline after the memory_limiter and after any sampling processors, because batching should happen after any data drops such as sampling.
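The recommended ordering can be sketched as follows. This is an illustrative configuration, not a recommendation: the receiver and exporter names are placeholders for whatever your deployment uses, and the memory_limiter values are examples only.

```yaml
processors:
  memory_limiter:
    check_interval: 1s   # illustrative value
    limit_mib: 2000      # illustrative value
  batch:

service:
  pipelines:
    traces:
      receivers: [otlp]                      # placeholder receiver
      processors: [memory_limiter, batch]    # memory_limiter before batch
      exporters: [otlp]                      # placeholder exporter
```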

Please refer to config.go for the config spec.

The following configuration options can be modified:

  • send_batch_size (default = 8192): Number of spans or metrics after which a batch will be sent regardless of the timeout.
  • timeout (default = 200ms): Time duration after which a batch will be sent regardless of size.
  • send_batch_max_size (default = 0): The maximum number of items in a batch. When set, batches that exceed this limit are split into smaller batches before being sent. The default (0) means there is no upper limit on batch size. This option is currently supported only for the traces and metrics pipelines.

Examples:

```yaml
processors:
  batch:
  batch/2:
    send_batch_size: 10000
    timeout: 10s
```
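As a further sketch, the example below illustrates how send_batch_max_size combines with the other options; the specific values are made up for illustration and are not tuning recommendations:

```yaml
processors:
  batch/capped:
    send_batch_size: 1000      # send once 1000 items have accumulated
    send_batch_max_size: 10000 # split any batch larger than 10000 items
    timeout: 5s                # or send whatever has accumulated after 5s
```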

Refer to config.yaml for detailed examples of using the processor.