When servers are under high memory usage, the frequency of GC invocations creates high CPU usage which, combined with a high ingest rate, limits the system's ability to offload already-queued data.
Configuring a minimum interval between GCs, even when the hard limit is hit, allows the system to spend some CPU cycles between GCs offloading old data from the queues/batch processor. In my observations, most of the accumulated data is blocked in the batch processor queue (incoming channel), not in the exporter queue.
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
This PR only moves all the internal code used by `internal/memorylimiter` under the same directory; a follow-up PR will extract it into a separate module to limit dependencies.
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
**Description:**
- Adds `component.MustNewType` to create a type. This function panics if
the type contains invalid characters. Adds the similar functions
`component.MustNewID` and `component.MustNewIDWithName`.
- Adds `component.Type.String` to recover the underlying string.
- Uses `component.MustNewType`, `component.MustNewID`,
`component.MustNewIDWithName` and `component.Type.String` everywhere in
this codebase. To find all usages, I temporarily changed `component.Type`
into an opaque struct and fixed the resulting compile-time errors.
Some notes:
1. All components currently in core and contrib follow this rule. This
is still a breaking change for other components.
2. A future PR will change `component.Type` into a struct to actually
enforce validation (right now you can still do `component.Type("anything")`
to bypass it). I want to do this in two steps to avoid breaking contrib
tests: first introduce these functions, and then change the type into a
struct.
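A minimal sketch of what such a panicking constructor and opaque struct could look like; the exact validation rule and names are assumptions for illustration, not the collector's actual implementation.

```go
package main

import (
	"fmt"
	"regexp"
)

// typeRegexp approximates the validation rule: ASCII letters and digits,
// starting with a letter. The exact rule here is an assumption.
var typeRegexp = regexp.MustCompile(`^[a-zA-Z][0-9a-zA-Z]*$`)

// Type mirrors the idea of an opaque struct wrapping a validated string,
// so callers cannot construct an invalid value directly.
type Type struct{ name string }

// String recovers the underlying string.
func (t Type) String() string { return t.name }

// mustNewType panics on invalid input, like component.MustNewType.
func mustNewType(s string) Type {
	if !typeRegexp.MatchString(s) {
		panic(fmt.Sprintf("invalid type name %q", s))
	}
	return Type{name: s}
}

func main() {
	fmt.Println(mustNewType("otlp").String()) // prints "otlp"
	// mustNewType("not/valid") would panic.
}
```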
**Link to tracking Issue:** Updates #9208
**Description:**
Following
https://github.com/open-telemetry/opentelemetry-collector/issues/8632,
this change introduces the memory limiter as an extension. This allows
the component to be placed where it can reject incoming connections when
memory is limited, providing better protection against running out of
memory.
Missing feature: receiver fairness; an issue where one receiver hogs all
the resources can still happen.
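Rejecting incoming connections under memory pressure could look roughly like the sketch below: a gate that compares current heap usage to a soft limit and refuses new requests when it is exceeded. This is an assumption-laden illustration, not the extension's actual API.

```go
package main

import (
	"fmt"
	"net/http"
	"runtime"
)

// memoryGate admits or refuses incoming work based on current heap usage
// versus a configured soft limit (names here are hypothetical).
type memoryGate struct {
	softLimitBytes uint64
}

// mustRefuse reports whether the current heap allocation exceeds the limit.
func (m *memoryGate) mustRefuse() bool {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	return ms.HeapAlloc > m.softLimitBytes
}

// middleware wraps an http.Handler and returns 503 under memory pressure,
// so the request is rejected before it is admitted into the pipeline.
func (m *memoryGate) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if m.mustRefuse() {
			http.Error(w, "memory limit exceeded", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	g := &memoryGate{softLimitBytes: 1 << 40} // 1 TiB: effectively never refuses
	fmt.Println(g.mustRefuse())
}
```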
**Link to tracking Issue:**
https://github.com/open-telemetry/opentelemetry-collector/issues/8632
---------
Co-authored-by: Dmitry Anoshin <anoshindx@gmail.com>