This deprecates CreateSettings in favour of Settings.
NewNopCreateSettings is also being deprecated in favour of
NewNopSettings.
Part of #9428
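For downstream consumers the migration is a rename. A minimal sketch, using the `processor` package as one example (the same rename applies to the other component kinds); `newMyProcessor` is a hypothetical constructor, and names are as of the deprecating release:

````go
package main

import (
	"go.opentelemetry.io/collector/processor"
	"go.opentelemetry.io/collector/processor/processortest"
)

// Before (deprecated):
//   func newMyProcessor(set processor.CreateSettings) { ... }
//   set := processortest.NewNopCreateSettings()

// After: the struct is unchanged, only the type and constructor names differ.
func newMyProcessor(set processor.Settings) {
	_ = set.TelemetrySettings // settings fields carry over as-is
}

func main() {
	newMyProcessor(processortest.NewNopSettings())
}
````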
---------
Signed-off-by: Alex Boten <223565+codeboten@users.noreply.github.com>
**Description:**
Following
https://github.com/open-telemetry/opentelemetry-collector/issues/8632,
this change introduces the memory limiter as an extension. This allows
the component to be placed where it can reject incoming connections when
memory is limited, providing better protection from running out of memory.
Missing feature: receiver fairness; an issue where a single receiver hogs all the
resources can still happen.
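As a rough illustration of the intended usage, a receiver could look up the extension at start time and consult it per request. This is a minimal sketch, not the wiring from this PR; the `memLimiter` interface and the `MustRefuse` method name are assumptions:

````go
package myreceiver

import (
	"context"
	"errors"

	"go.opentelemetry.io/collector/component"
)

// memLimiter is the narrow interface this sketch assumes the memory
// limiter extension satisfies; MustRefuse is an assumed method name.
type memLimiter interface {
	MustRefuse() bool
}

type myReceiver struct {
	limiter memLimiter
}

// Start scans the host's extensions for a memory limiter so the
// receiver can consult it before accepting data.
func (r *myReceiver) Start(_ context.Context, host component.Host) error {
	for _, ext := range host.GetExtensions() {
		if ml, ok := ext.(memLimiter); ok {
			r.limiter = ml
			return nil
		}
	}
	return nil
}

// checkMemory would be called on each incoming request; it refuses
// work while the limiter reports memory pressure.
func (r *myReceiver) checkMemory() error {
	if r.limiter != nil && r.limiter.MustRefuse() {
		return errors.New("memory limit exceeded, refusing data")
	}
	return nil
}
````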
**Link to tracking Issue:**
https://github.com/open-telemetry/opentelemetry-collector/issues/8632
---------
Co-authored-by: Dmitry Anoshin <anoshindx@gmail.com>
* [chore] use license shortform
To remain consistent with the contrib repo, see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/22052
Signed-off-by: Alex Boten <aboten@lightstep.com>
* make goporto
Signed-off-by: Alex Boten <aboten@lightstep.com>
---------
Signed-off-by: Alex Boten <aboten@lightstep.com>
The main reason is to remove the circular dependency between the config (including sub-packages) and component. Here is the current state:
* component depends on config
* config sub-packages [grpc, http, etc.] depend on config & component
Because of this "circular" dependency, we cannot split, for example, "config" into its own module unless all the other config sub-packages are split as well.
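To make the shape concrete, the two problematic edges look like this (file layout only; underscore imports stand in for real usage of those packages):

````go
// component/component.go
package component

import _ "go.opentelemetry.io/collector/config" // component -> config
````

````go
// config/configgrpc/configgrpc.go
package configgrpc

import (
	_ "go.opentelemetry.io/collector/component" // sub-package -> component
	_ "go.opentelemetry.io/collector/config"    // sub-package -> config
)
````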
Signed-off-by: Bogdan <bogdandrutu@gmail.com>
On top of fixing the errorlint errors, this also replaces `fmt.Errorf("string literal")` with `errors.New("string literal")`.
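Illustrative before/after sketches of the two kinds of changes (hypothetical functions, not actual diffs from this PR):

````go
package sample

import (
	"errors"
	"fmt"
	"io/fs"
)

// errorlint-style fix: compare with errors.Is instead of ==, so wrapped
// errors are still matched.
func isNotExist(err error) bool {
	// before: return err == fs.ErrNotExist
	return errors.Is(err, fs.ErrNotExist)
}

// errorlint-style fix: wrap with %w instead of %v to keep the error chain.
func wrap(err error) error {
	// before: return fmt.Errorf("read config: %v", err)
	return fmt.Errorf("read config: %w", err)
}

// The extra change in this PR: fmt.Errorf with a plain string literal
// has no format verbs, so errors.New is the right call.
func bare() error {
	// before: return fmt.Errorf("string literal")
	return errors.New("string literal")
}
````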
Signed-off-by: Bogdan Drutu <bogdandrutu@gmail.com>
Signed-off-by: Dani Louca <dlouca@splunk.com>
**Description:**
Change 062e64f1d7 caused the memory limiter to start the `checkMemLimits` routine only for the memory limiter instance used by the traces processor.
In other words, the metrics and logs processors will NOT [drop/refuse](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiterprocessor/memorylimiter.go#L206) data; they will pass it down to the next consumer regardless of the current memory pressure, as their instance's `forcingDrop` flag will never be set.
The simplest solution is to call start for each processor (metrics, logs, traces), but this would be inefficient, as we would be running three instances of `checkMemLimits`, i.e. multiple GC runs.
At the same time, we need to allow multiple instances with different configs, for example `memory_limiter/another` and `memory_limiter`:
````
extensions:
  memory_ballast:
    size_mib: 4

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  memory_limiter:
    check_interval: 2s
    limit_mib: 10
  memory_limiter/another:
    check_interval: 1s
    limit_mib: 100

exporters:
  logging:
    logLevel: info

service:
  telemetry:
    logs:
      level: "info"
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [logging]
    metrics/default:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [logging]
    traces:
      receivers: [otlp]
      processors: [memory_limiter/another]
      exporters: [logging]
  extensions: [memory_ballast]
````
The fix adds a global map to keep track of the different instances and a ~~sync once~~ mutex around the start and shutdown calls, so that only the first processor launches the `checkMemLimits` routine and only the last one to call `shutdown` takes it down.
If `shutdown` is called while no `checkMemLimits` routine has started, we return an error; unit tests were updated to handle this.
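A minimal sketch of this pattern (hypothetical names and simplified bookkeeping; the real implementation lives in `processor/memorylimiterprocessor`):

````go
package memorylimiter

import (
	"errors"
	"sync"
)

var (
	mu sync.Mutex
	// One shared limiter per distinct config: "memory_limiter" and
	// "memory_limiter/another" each get their own entry.
	instances = map[string]*memoryLimiter{}
)

type memoryLimiter struct {
	refCount int
	done     chan struct{}
}

// getInstance returns the shared limiter for a given config key,
// creating it on first use.
func getInstance(key string) *memoryLimiter {
	mu.Lock()
	defer mu.Unlock()
	ml, ok := instances[key]
	if !ok {
		ml = &memoryLimiter{}
		instances[key] = ml
	}
	return ml
}

// start launches checkMemLimits only for the first processor sharing
// this instance; later starts just bump the refcount.
func (ml *memoryLimiter) start() {
	mu.Lock()
	defer mu.Unlock()
	if ml.refCount == 0 {
		ml.done = make(chan struct{})
		go ml.checkMemLimits(ml.done)
	}
	ml.refCount++
}

// shutdown stops the routine only when the last sharing processor goes
// away; a shutdown with no running routine is reported as an error.
func (ml *memoryLimiter) shutdown() error {
	mu.Lock()
	defer mu.Unlock()
	if ml.refCount == 0 {
		return errors.New("no checkMemLimits routine is running")
	}
	ml.refCount--
	if ml.refCount == 0 {
		close(ml.done)
	}
	return nil
}

// checkMemLimits stands in for the real polling loop that watches
// memory usage and flips forcingDrop under pressure.
func (ml *memoryLimiter) checkMemLimits(done <-chan struct{}) {
	<-done
}
````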
**Testing:**
Tested with the above config, using a Splunk OTel instance with valid data.
Made sure only a single `checkMemLimits` routine is running when there is a single memory_limiter config, and more than one when we have multiple.
I also verified that under memory pressure, once we pass the soft limit, all data types (traces, logs, and metrics) are dropped.
Once we agree on this solution, I will look into adding more unit tests to validate the change.