Depends on
https://github.com/open-telemetry/opentelemetry-collector/pull/12856
Resolves #12676
This is a reboot of #11311, incorporating metrics defined in the
[component telemetry
RFC](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/rfcs/component-universal-telemetry.md)
and attributes added in #12617.
The basic pattern is:
- When building any pipeline component which produces data, wrap the
"next consumer" with instrumentation to measure the number of items
being passed. This wrapped consumer is then passed into the constructor
of the component.
- When building any pipeline component which consumes data, wrap the
component itself. This wrapped consumer is saved onto the graph node so
that it can be retrieved during graph assembly.
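For illustration, here is a minimal sketch of the producer-side wrapping. This is not the actual graph code; the helper name `newCountingTraces` and the metric/attribute names are placeholders, not the names defined in the RFC:

```go
package wrapping

import (
	"context"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/ptrace"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// newCountingTraces wraps "next" so every batch handed downstream is counted
// before being forwarded to the real consumer.
func newCountingTraces(meter metric.Meter, componentID string, next consumer.Traces) (consumer.Traces, error) {
	counter, err := meter.Int64Counter("produced_items") // placeholder metric name
	if err != nil {
		return nil, err
	}
	return consumer.NewTraces(func(ctx context.Context, td ptrace.Traces) error {
		counter.Add(ctx, int64(td.SpanCount()),
			metric.WithAttributes(attribute.String("component.id", componentID))) // placeholder attribute
		return next.ConsumeTraces(ctx, td)
	})
}
```

The wrapped consumer returned here is what gets passed into the downstream component's constructor, so the component itself needs no knowledge of the instrumentation.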
---------
Co-authored-by: Pablo Baeyens <pablo.baeyens@datadoghq.com>
#### Description
Adds a CodeCov status badge unless explicitly disabled.
I have disabled this in core until we roll this out in contrib, show it to
the Go SIG, and move codecovgen to build tools.
#### Link to tracking issue
Updates open-telemetry/opentelemetry-collector-contrib/issues/39583
This sets the stability level of all metrics that were not previously
stabilized to alpha. Since many of these metrics will change as a result of
https://github.com/open-telemetry/opentelemetry-collector/pull/11406, it
made sense to me to set their stability to alpha.
---------
Signed-off-by: Alex Boten <223565+codeboten@users.noreply.github.com>
#### Description
[mdatagen] Support using a different GitHub project in the mdatagen README
issues list.
Uses a new configuration property `github_project` in the mdatagen
configuration.
#### Link to tracking issue
Fixes
https://github.com/open-telemetry/opentelemetry-collector/issues/9240
---------
Signed-off-by: Alex Boten <223565+codeboten@users.noreply.github.com>
Co-authored-by: Alex Boten <223565+codeboten@users.noreply.github.com>
**Description:**
This change adds `goleak` to check for memory leaks. Originally there
were 3 failing tests in the `service` package, so I'll describe changes
in relation to resolving each test's failing goleak check.
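For context, a minimal sketch of how `goleak` is typically wired into a package via `go.uber.org/goleak`; the exact wiring in this change may differ:

```go
package service

import (
	"testing"

	"go.uber.org/goleak"
)

// TestMain fails the package's tests if any goroutine is still running
// after all tests have completed.
func TestMain(m *testing.M) {
	goleak.VerifyTestMain(m)
}
```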
1. `TestServiceTelemetryRestart`: The simplest fix: close the response body
to make sure goroutines aren't leaked when the test reopens a server on the
same port. This was purely a test issue.
2. `TestTelemetryInit.UseOTelWithSDKConfiguration`: The [meter
provider](fb3ed1b0d6/service/telemetry.go (L57-L58))
was being started in the initialization process ([metrics
reference](fb3ed1b0d6/service/internal/proctelemetry/config.go (L135))),
but never shut down. The type originally being used
(`metric.MeterProvider`) was the base interface, which doesn't provide a
`Shutdown` method. I changed this to use the `sdk` interfaces that
provide the required `Shutdown` method. The code starting the providers
was already using and returning the `sdk` interface, so the underlying
type remains the same. Since `mp` is a private member and the `sdk` types
implement the original interface, I don't believe changing the type is a
breaking change (a sketch of this type change follows after the list).
3. `TestServiceTelemetryCleanupOnError`: This test starts a server in a
goroutine, cancels it, starts it again in a goroutine, and cancels it again
from the main goroutine. It exposed a race between the goroutine running
[`server.ListenAndServe`](fb3ed1b0d6/service/internal/proctelemetry/config.go (L148))
and the main goroutine [calling
close](fb3ed1b0d6/service/telemetry.go (L219))
and then starting the server again [right
away](fb3ed1b0d6/service/service_test.go (L244)).
The solution is to add a `sync.WaitGroup` that blocks until all servers are
closed before returning from `shutdown`, so we know the ports are free and
the servers are fully closed before proceeding (a sketch of this pattern
also follows after the list).
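For (2), a minimal sketch of the type change, assuming the `go.opentelemetry.io/otel/sdk/metric` package; the struct and field names here are illustrative, not the actual service code:

```go
package telemetry

import (
	"context"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

type telemetryInitializer struct {
	// Using the SDK type instead of the base metric.MeterProvider interface
	// exposes Shutdown, so the provider can be cleaned up when the service stops.
	mp *sdkmetric.MeterProvider
}

func (t *telemetryInitializer) shutdown(ctx context.Context) error {
	if t.mp == nil {
		return nil
	}
	// Flushes and stops the SDK meter provider, releasing its goroutines.
	return t.mp.Shutdown(ctx)
}
```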
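For (3), a minimal sketch of the `sync.WaitGroup` pattern; again the types and names are illustrative:

```go
package telemetry

import (
	"net/http"
	"sync"
)

type metricsServer struct {
	srv *http.Server
	wg  sync.WaitGroup
}

func (s *metricsServer) start() {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		// ErrServerClosed is expected on shutdown; error handling elided here.
		_ = s.srv.ListenAndServe()
	}()
}

func (s *metricsServer) shutdown() error {
	err := s.srv.Close()
	// Block until ListenAndServe has returned, so the port is known to be
	// free before the caller proceeds (e.g., before restarting the server).
	s.wg.Wait()
	return err
}
```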
The first test fix was just a test issue, but 2 and 3 were real bugs. I
realize it's a bit hard to read with them all together, but I assumed
adding PR dependency notes would be more complicated.
**Link to tracking Issue:**
#9165
**Testing:**
All tests pass, as does the goleak check.
---------
Co-authored-by: Pablo Baeyens <pablo.baeyens@datadoghq.com>
This removes the configuration of the OpenCensus bridge from the
Collector. As a result, any metrics still relying on the bridge will no
longer be emitted.
Closes #10414
Closes #8945
This is blocked until
https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/29867
is completed.
---------
Signed-off-by: Alex Boten <223565+codeboten@users.noreply.github.com>
This reverts the revert in
https://github.com/open-telemetry/opentelemetry-collector/pull/10271 and
adds a mechanism to skip adding a create settings method for the service
package component test. We will need to figure out whether
servicetelemetry.TelemetrySettings should be renamed to match the other
CreateSettings structs before removing this check.
Signed-off-by: Alex Boten <223565+codeboten@users.noreply.github.com>