Initial updates to migrate the processor metrics to the obsreport package, i.e., the new metrics.
Cleaned up some of the processor metrics and spelled out the rule names for the new metrics.
Related to https://github.com/open-telemetry/opentelemetry-collector/issues/141
Testing: Added a test for the common processor metrics and manually validated that the legacy metrics still work.
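For context, a rough sketch of the direction using raw OpenCensus (the measure name, description, and wiring below are illustrative assumptions, not the exact obsreport API): the new metrics hang off shared measures plus a common "processor" tag, with rule names spelled out in full, instead of each processor defining its own abbreviated measures.

```go
package main

import (
	"context"
	"log"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// Illustrative stand-ins for what obsreport centralizes: a shared
// measure and a common "processor" tag key. Names are assumptions.
var (
	processorTagKey = tag.MustNewKey("processor")
	mAcceptedSpans  = stats.Int64(
		"processor/accepted_spans",
		"Number of spans accepted by the processor",
		stats.UnitDimensionless)
)

func main() {
	// Rule name spelled out in full rather than abbreviated.
	err := view.Register(&view.View{
		Name:        "processor/accepted_spans",
		Description: mAcceptedSpans.Description(),
		Measure:     mAcceptedSpans,
		TagKeys:     []tag.Key{processorTagKey},
		Aggregation: view.Sum(),
	})
	if err != nil {
		log.Fatalf("failed to register view: %v", err)
	}

	// A processor stamps its name on the context once and records
	// against the shared measure.
	ctx, _ := tag.New(context.Background(),
		tag.Upsert(processorTagKey, "attributes"))
	stats.Record(ctx, mAcceptedSpans.M(100))
}
```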
Change "batches_dropped" to a .Sum(), and emit 0s for them on processor start.
Motivation:
Currently, with batches_dropped being a .Count(), we end up with a missing metric for bad_batches until one occurs. This makes discovering the "error" metrics I want to watch annoying, since I can't just look at the default Prometheus metric list and choose what to dashboard or alert on.
I also think that for things we KNOW are 0, we should be emitting a 0: if we send 8000 batches, bad_batches shouldn't be absent, it should be 0. I do think that view.Count()s should be initialized to 0 as well (but I may not know the whole story there).
I can add these to the other processors if we think this is a good idea. For now I just hope the metric name is correct, since it's in my current production dashboards. A minimal sketch of the change is below.
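A minimal sketch with OpenCensus (the metric name and description are illustrative): registering the view with view.Sum() and recording an explicit 0 at startup makes the time series exist from the first scrape, rather than only after the first drop.

```go
package main

import (
	"context"
	"log"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

var mBatchesDropped = stats.Int64(
	"batches_dropped",
	"Number of batches dropped by the batch processor",
	stats.UnitDimensionless)

func main() {
	// Sum() exports a cumulative counter whose value is the sum of
	// recorded measurements, so recording 0 yields a visible 0 series.
	// With Count(), a never-recorded measure simply stays absent (and
	// recording 0 would wrongly bump the count by one).
	if err := view.Register(&view.View{
		Name:        "batches_dropped",
		Description: mBatchesDropped.Description(),
		Measure:     mBatchesDropped,
		Aggregation: view.Sum(),
	}); err != nil {
		log.Fatalf("failed to register view: %v", err)
	}

	// On processor start, emit an explicit 0 so the series shows up
	// in the default Prometheus metric list before any drop occurs.
	stats.Record(context.Background(), mBatchesDropped.M(0))

	// Later, on an actual drop:
	stats.Record(context.Background(), mBatchesDropped.M(1))
}
```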
Testing:
I added some happy-path metric tests to the processor. I can figure out a way to add bad-path tests; it will just require a bunch of plumbing, I think.
Documentation: I think the metrics as a whole need better documentation; for example, "fail_sends" in the exporter wasn't actually an error. Maybe a distinction between events that involve data loss and those that don't?
For historical reasons, the tag associated with the name of a processor was still named "exporter"; changed that to "processor". Also added the name of the queued retry instance to be used as the name of the processor.
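Sketched with OpenCensus tags (key names per the description above; the surrounding wiring is assumed): the context is now stamped with a "processor" key carrying the queued retry instance name, where it previously used an "exporter" key.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"go.opencensus.io/tag"
)

// Tag key renamed from the historical "exporter" to "processor".
var processorTagKey = tag.MustNewKey("processor")

func main() {
	// Use the configured queued_retry instance name as the tag value,
	// so metrics are attributed to the processor instance.
	ctx, err := tag.New(context.Background(),
		tag.Upsert(processorTagKey, "queued_retry"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tag.FromContext(ctx))
}
```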