opentelemetry-collector/processor/batchprocessor

README.md

Batch Processor

Supported pipeline types: metrics, traces, logs

The batch processor accepts spans, metrics, or logs and places them into batches. Batching helps better compress the data and reduces the number of outgoing connections required to transmit it. This processor supports both size-based and time-based batching.

It is highly recommended to configure the batch processor on every collector. The batch processor should be defined in the pipeline after the memory_limiter as well as any sampling processors, because batching should happen after any data drops such as sampling.
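For instance, a trace pipeline honoring this ordering might look like the following sketch (the otlp receiver/exporter and the probabilistic_sampler processor are illustrative placeholders; your components may differ):

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]                                           # placeholder receiver
      processors: [memory_limiter, probabilistic_sampler, batch]  # batch comes last
      exporters: [otlp]                                           # placeholder exporter
```

The key point is only the relative ordering: memory_limiter and any sampling processors appear before batch.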

Please refer to config.go for the config spec.

The following configuration options can be modified:

  • send_batch_size (default = 8192): Number of spans, metric data points, or log records after which a batch will be sent.
  • timeout (default = 200ms): Time duration after which a batch will be sent regardless of size.
  • send_batch_max_size (default = 0): The maximum number of items in a batch. This property ensures that larger batches are split into smaller units. By default (0), there is no upper limit on the batch size. It is currently supported only for the trace pipeline.

Examples:

processors:
  batch:
  batch/2:
    send_batch_size: 10000
    timeout: 10s
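To cap batch sizes in a trace pipeline, send_batch_max_size can be combined with send_batch_size. The following sketch (the batch/capped name and the specific values are chosen for illustration) sends a batch once 10000 items have accumulated, and splits any batch exceeding 11000 items:

```yaml
processors:
  batch/capped:                # hypothetical named instance for illustration
    send_batch_size: 10000     # send once this many items are buffered
    send_batch_max_size: 11000 # split batches larger than this (traces only)
    timeout: 5s                # send partial batches after this duration
```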

Refer to config.yaml for detailed examples of using the processor.