diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index d018e8632a..3d8b527b44 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -1,10 +1,9 @@ --- name: Bug report about: Create a report to help us improve -title: '' +title: "" labels: bug -assignees: '' - +assignees: "" --- **Describe the bug** diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index 19f881377d..8e47557f98 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -1,10 +1,9 @@ --- name: Feature request about: Suggest an idea for this project -title: '' +title: "" labels: enhancement -assignees: '' - +assignees: "" --- **Is your feature request related to a problem? Please describe.** diff --git a/.github/repository-settings.md b/.github/repository-settings.md index 35b75d5bc3..824893a162 100644 --- a/.github/repository-settings.md +++ b/.github/repository-settings.md @@ -6,13 +6,13 @@ settings](https://github.com/open-telemetry/community/blob/main/docs/how-to-conf ## General > Pull Requests -* Allow squash merging > Default to pull request title +- Allow squash merging > Default to pull request title -* Allow auto-merge +- Allow auto-merge ## Actions > General -* Fork pull request workflows from outside collaborators: +- Fork pull request workflows from outside collaborators: "Require approval for first-time contributors who are new to GitHub" (To reduce friction for new contributors, @@ -22,14 +22,14 @@ settings](https://github.com/open-telemetry/community/blob/main/docs/how-to-conf ### `main` -* Require branches to be up to date before merging: UNCHECKED +- Require branches to be up to date before merging: UNCHECKED (PR jobs take too long, and leaving this unchecked has not been a significant problem) -* Status checks that are required: +- Status checks that are required: - * EasyCLA - * required-status-check + - 
EasyCLA + - required-status-check ### `release/*` @@ -42,7 +42,7 @@ for [`dependabot/**/**`](https://github.com/open-telemetry/community/blob/main/d ### `gh-pages` -* Everything UNCHECKED +- Everything UNCHECKED (This branch is currently only used for directly pushing benchmarking results from the [Nightly overhead benchmark](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/nightly-benchmark-overhead.yml) @@ -50,20 +50,20 @@ for [`dependabot/**/**`](https://github.com/open-telemetry/community/blob/main/d ## Code security and analysis -* Secret scanning: Enabled +- Secret scanning: Enabled ## Secrets and variables > Actions -* `GE_CACHE_PASSWORD` -* `GE_CACHE_USERNAME` -* `GPG_PASSWORD` - stored in OpenTelemetry-Java 1Password -* `GPG_PRIVATE_KEY` - stored in OpenTelemetry-Java 1Password -* `GRADLE_ENTERPRISE_ACCESS_KEY` - owned by [@trask](https://github.com/trask) - * Generated at https://ge.opentelemetry.io > My settings > Access keys - * format of env var is `ge.opentelemetry.io=`, +- `GE_CACHE_PASSWORD` +- `GE_CACHE_USERNAME` +- `GPG_PASSWORD` - stored in OpenTelemetry-Java 1Password +- `GPG_PRIVATE_KEY` - stored in OpenTelemetry-Java 1Password +- `GRADLE_ENTERPRISE_ACCESS_KEY` - owned by [@trask](https://github.com/trask) + - Generated at https://ge.opentelemetry.io > My settings > Access keys + - format of env var is `ge.opentelemetry.io=`, see [docs](https://docs.gradle.com/enterprise/gradle-plugin/#via_environment_variable) -* `GRADLE_PUBLISH_KEY` -* `GRADLE_PUBLISH_SECRET` -* `OPENTELEMETRYBOT_GITHUB_TOKEN` - owned by [@trask](https://github.com/trask) -* `SONATYPE_KEY` - owned by [@trask](https://github.com/trask) -* `SONATYPE_USER` - owned by [@trask](https://github.com/trask) +- `GRADLE_PUBLISH_KEY` +- `GRADLE_PUBLISH_SECRET` +- `OPENTELEMETRYBOT_GITHUB_TOKEN` - owned by [@trask](https://github.com/trask) +- `SONATYPE_KEY` - owned by [@trask](https://github.com/trask) +- `SONATYPE_USER` - owned by 
[@trask](https://github.com/trask) diff --git a/CHANGELOG.md b/CHANGELOG.md index 5d424a0e34..0e829222d1 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -79,16 +79,16 @@ - The `opentelemetry-runtime-metrics` artifact has been renamed and split into `opentelemetry-runtime-telemetry-java8` and `opentelemetry-runtime-telemetry-java17` ([#8165](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8165), - [#8715](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8715)) + [#8715](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8715)) - `InetSocketAddressNetServerAttributesGetter` and `InetSocketAddressNetClientAttributesGetter` have been deprecated ([#8341](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8341), - [#8591](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8591)) + [#8591](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8591)) - The new HTTP and network semantic conventions can be opted into using either the system property `otel.semconv-stability.opt-in` or the environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`, which support two values: - `http` - emit the new, stable HTTP and networking attributes, and stop emitting the old - experimental HTTP and networking attributes that the instrumentation emitted previously. + experimental HTTP and networking attributes that the instrumentation emitted previously. - `http/dup` - emit both the old and the stable HTTP and networking attributes, allowing for a more seamless transition. 
- The default behavior (in the absence of one of these values) is to continue emitting @@ -104,7 +104,7 @@ ([#8487](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8487)) - Reactor Kafka ([#8439](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8439), - [#8529](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8529)) + [#8529](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8529)) ### 📈 Enhancements @@ -286,7 +286,7 @@ ([#8174](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8174)) - Spring scheduling: run error handler with the same context as task ([#8220](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8220)) -- Switch from http.flavor to net.protocol.* +- Switch from http.flavor to net.protocol.\* ([#8131](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8131), [#8244](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/8244)) - Support latest Armeria release @@ -598,7 +598,7 @@ ### 🧰 Tooling -- Muzzle logs should be logged using the io.opentelemetry.* logger name +- Muzzle logs should be logged using the io.opentelemetry.\* logger name ([#7446](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/7446)) ## Version 1.21.0 (2022-12-13) @@ -840,12 +840,12 @@ The `opentelemetry-instrumentation-api` artifact is declared stable in this rele - There were a few late-breaking changes in `opentelemetry-instrumentation-api`, prior to it being declared stable: - * `InstrumenterBuilder.addAttributesExtractors(AttributesExtractor...)` was removed, use instead + - `InstrumenterBuilder.addAttributesExtractors(AttributesExtractor...)` was removed, use instead `addAttributesExtractors(AttributesExtractor)` or `addAttributesExtractors(Iterable)` - * `SpanLinksExtractor.extractFromRequest()` was removed, use instead manual extraction - * 
`ErrorCauseExtractor.jdk()` was renamed to `ErrorCauseExtractor.getDefault()` - * `ClassNames` utility was removed with no direct replacement + - `SpanLinksExtractor.extractFromRequest()` was removed, use instead manual extraction + - `ErrorCauseExtractor.jdk()` was renamed to `ErrorCauseExtractor.getDefault()` + - `ClassNames` utility was removed with no direct replacement - The deprecated `io.opentelemetry.instrumentation.api.config.Config` and related classes have been removed ([#6501](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/6501)) @@ -1208,15 +1208,15 @@ The `opentelemetry-instrumentation-api` artifact is declared stable in this rele - Micrometer instrumentation is now automatically applied to spring-boot-actuator apps - Some configuration properties have been renamed: - * `otel.instrumentation.common.experimental.suppress-controller-spans` + - `otel.instrumentation.common.experimental.suppress-controller-spans` → `otel.instrumentation.common.experimental.controller-telemetry.enabled` (important: note that the meaning is inverted) - * `otel.instrumentation.common.experimental.suppress-view-spans` + - `otel.instrumentation.common.experimental.suppress-view-spans` → `otel.instrumentation.common.experimental.view-telemetry.enabled` (important: note that the meaning is inverted) - * `otel.instrumentation.netty.always-create-connect-span` + - `otel.instrumentation.netty.always-create-connect-span` → `otel.instrumentation.netty.connection-telemetry.enabled` - * `otel.instrumentation.reactor-netty.always-create-connect-span` + - `otel.instrumentation.reactor-netty.always-create-connect-span` → `otel.instrumentation.reactor-netty.connection-telemetry.enabled` - Runtime memory metric names were updated to reflect [semantic conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/semantic_conventions/runtime-environment-metrics.md#jvm-metrics) @@ -1519,7 +1519,7 @@ The 
`opentelemetry-instrumentation-api` artifact is declared stable in this rele ([#5112](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/5112)) - Parameterize VirtualField field type ([#5165](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/5165)) -- Remove old TraceUtils and use InstrumentationTestRunner#run*Span() (almost) everywhere +- Remove old TraceUtils and use InstrumentationTestRunner#run\*Span() (almost) everywhere ([#5160](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/5160)) - Remove deprecated tracer API ([#5175](https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/5175)) diff --git a/RELEASING.md b/RELEASING.md index 2c4a7d8ad5..4be326ff4c 100644 --- a/RELEASING.md +++ b/RELEASING.md @@ -18,20 +18,20 @@ the second Monday of the month (roughly a few days after the monthly minor re ## Preparing a new major or minor release -* Check that +- Check that [dependabot has run](https://github.com/open-telemetry/opentelemetry-java-instrumentation/network/updates) sometime in the past day (this link is only accessible if you have write access to the repository), and check that all [dependabot PRs](https://github.com/open-telemetry/opentelemetry-java-contrib/pulls/app%2Fdependabot) have been merged. -* Close the [release milestone](https://github.com/open-telemetry/opentelemetry-java-instrumentation/milestones) +- Close the [release milestone](https://github.com/open-telemetry/opentelemetry-java-instrumentation/milestones) if there is one. -* Merge a pull request to `main` updating the `CHANGELOG.md`. - * The heading for the unreleased entries should be `## Unreleased`. - * Use `.github/scripts/draft-change-log-entries.sh` as a starting point for writing the change log. -* Run the [Prepare release branch workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/prepare-release-branch.yml). 
- * Press the "Run workflow" button, and leave the default branch `main` selected. - * Review and merge the two pull requests that it creates +- Merge a pull request to `main` updating the `CHANGELOG.md`. + - The heading for the unreleased entries should be `## Unreleased`. + - Use `.github/scripts/draft-change-log-entries.sh` as a starting point for writing the change log. +- Run the [Prepare release branch workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/prepare-release-branch.yml). + - Press the "Run workflow" button, and leave the default branch `main` selected. + - Review and merge the two pull requests that it creates (one is targeted to the release branch and one is targeted to `main`). ## Preparing a new patch release @@ -41,31 +41,31 @@ All patch releases should include only bug-fixes, and must avoid adding/modifyin In general, patch releases are only made for regressions, security vulnerabilities, memory leaks and deadlocks. -* Backport pull request(s) to the release branch. - * Run the [Backport workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/backport.yml). - * Press the "Run workflow" button, then select the release branch from the dropdown list, +- Backport pull request(s) to the release branch. + - Run the [Backport workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/backport.yml). + - Press the "Run workflow" button, then select the release branch from the dropdown list, e.g. `release/v1.9.x`, then enter the pull request number that you want to backport, then click the "Run workflow" button below that. - * Review and merge the backport pull request that it generates. - * Note: if the PR contains any changes to workflow files, it will have to be manually backported, + - Review and merge the backport pull request that it generates. 
+ - Note: if the PR contains any changes to workflow files, it will have to be manually backported, because the default `GITHUB_TOKEN` does not have permission to update workflow files (and the `opentelemetrybot` token doesn't have write permission to this repository at all, so while it can be used to open a PR, it can't be used to push to a local branch). -* Merge a pull request to the release branch updating the `CHANGELOG.md`. - * The heading for the unreleased entries should be `## Unreleased`. -* Run the [Prepare patch release workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/prepare-patch-release.yml). - * Press the "Run workflow" button, then select the release branch from the dropdown list, +- Merge a pull request to the release branch updating the `CHANGELOG.md`. + - The heading for the unreleased entries should be `## Unreleased`. +- Run the [Prepare patch release workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/prepare-patch-release.yml). + - Press the "Run workflow" button, then select the release branch from the dropdown list, e.g. `release/v1.9.x`, and click the "Run workflow" button below that. - * Review and merge the pull request that it creates for updating the version. + - Review and merge the pull request that it creates for updating the version. ## Making the release -* Run the [Release workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/release.yml). - * Press the "Run workflow" button, then select the release branch from the dropdown list, +- Run the [Release workflow](https://github.com/open-telemetry/opentelemetry-java-instrumentation/actions/workflows/release.yml). + - Press the "Run workflow" button, then select the release branch from the dropdown list, e.g. `release/v1.9.x`, and click the "Run workflow" button below that. 
- * This workflow will publish the artifacts to maven central and will publish a GitHub release + - This workflow will publish the artifacts to maven central and will publish a GitHub release with release notes based on the change log and with the javaagent jar attached. - * Review and merge the pull request that it creates for updating the change log in main + - Review and merge the pull request that it creates for updating the change log in main (note that if this is not a patch release then the change log on main may already be up-to-date, in which case no pull request will be created). diff --git a/VERSIONING.md b/VERSIONING.md index b2475a72a2..5c1d260575 100644 --- a/VERSIONING.md +++ b/VERSIONING.md @@ -9,15 +9,15 @@ Artifacts in this repository follow the same compatibility requirements describe EXCEPT for the following incompatible changes which are allowed in stable artifacts in this repository: -* Changes to the telemetry produced by instrumentation +- Changes to the telemetry produced by instrumentation (there will be some guarantees about telemetry stability in the future, see discussions in ) -* Changes to configuration properties that contain the word `experimental` -* Changes to configuration properties under the namespace `otel.javaagent.testing` +- Changes to configuration properties that contain the word `experimental` +- Changes to configuration properties under the namespace `otel.javaagent.testing` This means that: -* Changes to configuration properties (other than those that contain the word `experimental` +- Changes to configuration properties (other than those that contain the word `experimental` or are under the namespace `otel.javaagent.testing`) will be considered breaking changes (unless they only affect telemetry produced by instrumentation) @@ -39,24 +39,24 @@ where versioning stability is important. 
Bumping the minimum supported library version for library instrumentation is generally acceptable if there's a good reason because: -* Users of library instrumentation have to integrate the library instrumentation during build-time +- Users of library instrumentation have to integrate the library instrumentation during build-time of their application, and so have the option to bump the library version if they are using an unsupported library version. -* Users have the option of staying with the old version of library instrumentation, without being +- Users have the option of staying with the old version of library instrumentation, without being pinned on an old version of the OpenTelemetry API and SDK. -* Bumping the minimum library version changes the artifact name, so it is not technically a breaking +- Bumping the minimum library version changes the artifact name, so it is not technically a breaking change. ### Javaagent instrumentation The situation is much trickier for javaagent instrumentation: -* A common use case of the javaagent is applying instrumentation at deployment-time (including +- A common use case of the javaagent is applying instrumentation at deployment-time (including to third-party applications), where bumping the library version is frequently not an option. -* While users have the option of staying with the old version of javaagent, that pins them on +- While users have the option of staying with the old version of javaagent, that pins them on an old version of the OpenTelemetry API and SDK, which is problematic for the OpenTelemetry ecosystem. 
-* While bumping the minimum library version changes the instrumentation module name, it does not +- While bumping the minimum library version changes the instrumentation module name, it does not change the "aggregated" javaagent artifact name which most users depend on, so could be considered a breaking change for some users (though this is not a breaking change that we currently make any guarantees about). @@ -68,8 +68,8 @@ When there is functionality in a new library version that requires changes to th instrumentation which are incompatible with the current javaagent base library version, some options that do not require bumping the minimum supported library version include: -* Access the new functionality via reflection. This is a good technique only for very small changes. -* Create a new javaagent instrumentation module to support the new library version. This requires +- Access the new functionality via reflection. This is a good technique only for very small changes. +- Create a new javaagent instrumentation module to support the new library version. 
This requires configuring non-overlapping versions in the muzzle check and applying `assertInverse` to confirm that the two instrumentations are never applied to the same library version (see [class loader matchers](https://github.com/open-telemetry/opentelemetry-java-instrumentation/blob/main/docs/contributing/writing-instrumentation-module.md#restrict-the-criteria-for-applying-the-instrumentation-by-extending-the-classloadermatcher-method) diff --git a/benchmark-overhead/README.md b/benchmark-overhead/README.md index fe3fd11321..5aaef1af19 100644 --- a/benchmark-overhead/README.md +++ b/benchmark-overhead/README.md @@ -1,13 +1,12 @@ - # Overhead tests -* [Process](#process) -* [What do we measure?](#what-do-we-measure) -* [Config](#config) -* [Agents](#agents) -* [Automation](#automation) -* [Setup and Usage](#setup-and-usage) -* [Visualization](#visualization) +- [Process](#process) +- [What do we measure?](#what-do-we-measure) +- [Config](#config) +- [Agents](#agents) +- [Automation](#automation) +- [Setup and Usage](#setup-and-usage) +- [Visualization](#visualization) This directory will contain tools and utilities that help us to measure the performance overhead introduced by @@ -42,44 +41,44 @@ After all the tests are complete, the results are collected and committed back t For each test pass, we record the following metrics in order to compare agents and determine relative overhead. 
-| metric name | units | description | -|--------------------------|------------|----------------------------------------------------------| -| Startup time | ms | How long it takes for the spring app to report "healthy" -| Total allocated mem | bytes | Across the life of the application -| Heap (min) | bytes | Smallest observed heap size -| Heap (max) | bytes | Largest observed heap size -| Thread switch rate | # / s | Max observed thread context switch rate -| GC time | ms | Total amount of time spent paused for garbage collection -| Request mean | ms | Average time to handle a single web request (measured at the caller) -| Request p95 | ms | 95th percentile time to handle a single web request (measured at the caller) -| Iteration mean | ms | average time to do a single pass through the k6 test script -| Iteration p95 | ms | 95th percentile time to do a single pass through the k6 test script -| Peak threads | # | Highest number of running threads in the VM, including agent threads -| Network read mean | bits/s | Average network read rate -| Network write mean | bits/s | Average network write rate -| Average JVM user CPU | % | Average observed user CPU (range 0.0-1.0) -| Max JVM user CPU | % | Max observed user CPU used (range 0.0-1.0) -| Average machine tot. 
CPU | % | Average percentage of machine CPU used (range 0.0-1.0) -| Total GC pause nanos | ns | JVM time spent paused due to GC -| Run duration ms | ms | Duration of the test run, in ms +| metric name              | units  | description                                                                   | +| ------------------------ | ------ | ---------------------------------------------------------------------------- | +| Startup time             | ms     | How long it takes for the spring app to report "healthy"                      | +| Total allocated mem      | bytes  | Across the life of the application                                            | +| Heap (min)               | bytes  | Smallest observed heap size                                                   | +| Heap (max)               | bytes  | Largest observed heap size                                                    | +| Thread switch rate       | # / s  | Max observed thread context switch rate                                       | +| GC time                  | ms     | Total amount of time spent paused for garbage collection                      | +| Request mean             | ms     | Average time to handle a single web request (measured at the caller)          | +| Request p95              | ms     | 95th percentile time to handle a single web request (measured at the caller) | +| Iteration mean           | ms     | average time to do a single pass through the k6 test script                   | +| Iteration p95            | ms     | 95th percentile time to do a single pass through the k6 test script           | +| Peak threads             | #      | Highest number of running threads in the VM, including agent threads          | +| Network read mean        | bits/s | Average network read rate                                                     | +| Network write mean       | bits/s | Average network write rate                                                    | +| Average JVM user CPU     | %      | Average observed user CPU (range 0.0-1.0)                                     | +| Max JVM user CPU         | %      | Max observed user CPU used (range 0.0-1.0)                                    | +| Average machine tot. 
CPU | %      | Average percentage of machine CPU used (range 0.0-1.0)                       | +| Total GC pause nanos    | ns     | JVM time spent paused due to GC                                               | +| Run duration ms         | ms     | Duration of the test run, in ms                                               | ## Config Each config contains the following: -* name -* description -* list of agents (see below) -* maxRequestRate (optional, used to throttle traffic) -* concurrentConnections (number of concurrent virtual users [VUs]) -* totalIterations - the number of passes to make through the k6 test script -* warmupSeconds - how long to wait before starting to conduct measurements +- name +- description +- list of agents (see below) +- maxRequestRate (optional, used to throttle traffic) +- concurrentConnections (number of concurrent virtual users [VUs]) +- totalIterations - the number of passes to make through the k6 test script +- warmupSeconds - how long to wait before starting to conduct measurements Currently, we test: -* no agent versus latest released agent -* no agent versus latest snapshot -* latest release vs. latest snapshot +- no agent versus latest released agent +- no agent versus latest snapshot +- latest release vs. latest snapshot Additional configurations can be created by submitting a PR against the `Configs` class. diff --git a/docs/advanced-configuration-options.md b/docs/advanced-configuration-options.md index 959f4e87bb..bd513d885b 100644 --- a/docs/advanced-configuration-options.md +++ b/docs/advanced-configuration-options.md @@ -14,16 +14,16 @@ Or as a quick workaround for an instrumentation bug, when byte code in one speci This option should not be used lightly, as it can leave some instrumentation partially applied, which could have unknown side-effects. 
-| System property | Environment variable | Purpose | -|--------------------------------|--------------------------------|---------------------------------------------------------------------------------------------------| -| otel.javaagent.exclude-classes | OTEL_JAVAAGENT_EXCLUDE_CLASSES | Suppresses all instrumentation for specific classes, format is "my.package.MyClass,my.package2.*" | +| System property | Environment variable | Purpose | +| ------------------------------ | ------------------------------ | -------------------------------------------------------------------------------------------------- | +| otel.javaagent.exclude-classes | OTEL_JAVAAGENT_EXCLUDE_CLASSES | Suppresses all instrumentation for specific classes, format is "my.package.MyClass,my.package2.\*" | ## Running application with security manager This option can be used to let agent run with all privileges without being affected by security policy restricting some operations. | System property | Environment variable | Purpose | -|--------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------| +| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------- | | otel.javaagent.experimental.security-manager-support.enabled | OTEL_JAVAAGENT_EXPERIMENTAL_SECURITY_MANAGER_SUPPORT_ENABLED | Grant all privileges to agent code[1] | [1] Disclaimer: agent can provide application means for escaping security manager sandbox. Do not use diff --git a/docs/contributing/intellij-setup-and-troubleshooting.md b/docs/contributing/intellij-setup-and-troubleshooting.md index 831f47043b..a1ce9af3a4 100644 --- a/docs/contributing/intellij-setup-and-troubleshooting.md +++ b/docs/contributing/intellij-setup-and-troubleshooting.md @@ -26,14 +26,14 @@ maybe due to other reasons. 
In any case, here's some things that might help: ### Invalidate Caches > "Just restart" -* Go to File > Invalidate Caches... -* Unselect all the options -* Click the "Just restart" link +- Go to File > Invalidate Caches... +- Unselect all the options +- Click the "Just restart" link This seems to fix more issues than just closing and re-opening Intellij :shrug:. ### Delete your `.idea` directory -* Close Intellij -* Delete the `.idea` directory in the root directory of your local repository -* Open Intellij +- Close Intellij +- Delete the `.idea` directory in the root directory of your local repository +- Open Intellij diff --git a/docs/contributing/javaagent-structure.md b/docs/contributing/javaagent-structure.md index 90271abc08..01b4c74600 100644 --- a/docs/contributing/javaagent-structure.md +++ b/docs/contributing/javaagent-structure.md @@ -3,10 +3,10 @@ The javaagent can be logically divided into several parts, based on the class loader that contains particular classes (and resources) in the runtime: -* The main agent class living in the system class loader. -* Classes that live in the bootstrap class loader. -* Classes that live in the agent class loader. -* Javaagent extensions, and the extension class loader(s). +- The main agent class living in the system class loader. +- Classes that live in the bootstrap class loader. +- Classes that live in the agent class loader. +- Javaagent extensions, and the extension class loader(s). ## System class loader @@ -25,31 +25,31 @@ Inside the javaagent jar, this class is located in the `io/opentelemetry/javaage The bootstrap class loader contains several modules: -* **The `javaagent-bootstrap` module**: +- **The `javaagent-bootstrap` module**: it contains classes that continue the initialization work started by `OpenTelemetryAgent`, as well as some internal javaagent classes and interfaces that must be globally available to the whole application. This module is internal and its APIs are considered unstable. 
-* **The `instrumentation-api` and `instrumentation-api-semconv` modules**: +- **The `instrumentation-api` and `instrumentation-api-semconv` modules**: these modules contain the [Instrumenter API](using-instrumenter-api.md) and other related utilities. Because they are used by almost all instrumentations, they must be globally available to all classloaders running within the instrumented application. The classes located in these modules are used by both javaagent and library instrumentations - they all must be usable even without the javaagent present. -* **The `instrumentation-annotations-support` module**: +- **The `instrumentation-annotations-support` module**: it contains classes that provide support for annotation-based auto-instrumentation, e.g. the `@WithSpan` annotation. This module is internal and its APIs are considered unstable. -* **The `io.opentelemetry.javaagent.bootstrap` package from the `javaagent-extension-api` module**: +- **The `io.opentelemetry.javaagent.bootstrap` package from the `javaagent-extension-api` module**: this package contains several instrumentation utilities that are only usable when an application is instrumented with the javaagent; for example, the `Java8BytecodeBridge` that should be used inside advice classes. -* All modules using the `otel.javaagent-bootstrap` Gradle plugin: +- All modules using the `otel.javaagent-bootstrap` Gradle plugin: these modules contain instrumentation-specific classes that must be globally available in the bootstrap class loader. For example, classes that are used to coordinate different `InstrumentationModule`s, like the common utilities for storing Servlet context path, or the thread local switch used to coordinate different Kafka consumer instrumentations. By convention, all these modules are named according to this pattern: `:instrumentation:...:bootstrap`. -* The [OpenTelemetry API](https://github.com/open-telemetry/opentelemetry-java/tree/main/api/all). 
+- The [OpenTelemetry API](https://github.com/open-telemetry/opentelemetry-java/tree/main/api/all). Inside the javaagent jar, these classes are all located under the `io/opentelemetry/javaagent/` directory. Aside from the javaagent-specific `javaagent-bootstrap` and `javaagent-extension-api` @@ -61,27 +61,27 @@ versions of some of our APIs (`opentelemetry-api`, `instrumentation-api`). The agent class loader contains almost everything else not mentioned before, including: -* **The `javaagent-tooling` module**: +- **The `javaagent-tooling` module**: this module picks up the initialization process started by `OpenTelemetryAgent` and `javaagent-bootstrap` and actually finishes the work, starting up the OpenTelemetry SDK and building and installing the `ClassFileTransformer` in the JVM. The javaagent uses [ByteBuddy](https://bytebuddy.net) to configure and construct the `ClassFileTransformer`. This module is internal and its APIs are considered unstable. -* **The `muzzle` module**: +- **The `muzzle` module**: it contains classes that are internally used by [muzzle](muzzle.md), our safety net feature. This module is internal and its APIs are considered unstable. -* **The `io.opentelemetry.javaagent.extension` package from the `javaagent-extension-api` module**: +- **The `io.opentelemetry.javaagent.extension` package from the `javaagent-extension-api` module**: this package contains common extension points and SPIs that can be used to customize the agent behavior. -* All modules using the `otel.javaagent-instrumentation` Gradle plugin: +- All modules using the `otel.javaagent-instrumentation` Gradle plugin: these modules contain actual javaagent instrumentations. Almost all of them implement the `InstrumentationModule`, some of them include a library instrumentation as an `implementation` dependency. You can read more about writing instrumentations [here](writing-instrumentation.md). 
By convention, all these modules are named according to this pattern: `:instrumentation:...:javaagent`. -* The [OpenTelemetry SDK](https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk/all), +- The [OpenTelemetry SDK](https://github.com/open-telemetry/opentelemetry-java/tree/main/sdk/all), along with various exporters and SDK extensions. -* [ByteBuddy](https://bytebuddy.net). +- [ByteBuddy](https://bytebuddy.net). Inside the javaagent jar, all classes and resources that are meant to be loaded by the `AgentClassLoader` are placed inside the `inst/` directory. All Java class files have diff --git a/docs/contributing/javaagent-test-infra.md b/docs/contributing/javaagent-test-infra.md index 7ece25ae69..6249ec53eb 100644 --- a/docs/contributing/javaagent-test-infra.md +++ b/docs/contributing/javaagent-test-infra.md @@ -7,10 +7,10 @@ There are a few key components that make this possible, described below. ## gradle/instrumentation.gradle -* shades the instrumentation -* adds jvm args to the test configuration - * -javaagent:[agent for testing] - * -Dotel.javaagent.experimental.initializer.jar=[shaded instrumentation jar] +- shades the instrumentation +- adds jvm args to the test configuration + - -javaagent:[agent for testing] + - -Dotel.javaagent.experimental.initializer.jar=[shaded instrumentation jar] The `otel.javaagent.experimental.initializer.jar` property is used to load the shaded instrumentation jar into the `AgentClassLoader`, so that the javaagent jar doesn't need to be re-built each time. 
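Concretely, this means each instrumentation test JVM ends up launched with arguments along these lines (the real jar paths are resolved by the Gradle build; the paths below are illustrative placeholders):

```sh
-javaagent:/path/to/agent-for-testing.jar
-Dotel.javaagent.experimental.initializer.jar=/path/to/shaded-instrumentation-module.jar
```

Because the shaded instrumentation jar is injected via the system property at startup, iterating on an instrumentation only requires rebuilding that one module, not the whole javaagent.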
diff --git a/docs/contributing/muzzle.md b/docs/contributing/muzzle.md index da88097035..61272d3a03 100644 --- a/docs/contributing/muzzle.md +++ b/docs/contributing/muzzle.md @@ -13,8 +13,8 @@ Muzzle will prevent loading an instrumentation if it detects any mismatch or con Muzzle has two phases: -* at compile time it collects references to the third-party symbols and used helper classes; -* at runtime it compares those references to the actual API symbols on the classpath. +- at compile time it collects references to the third-party symbols and used helper classes; +- at runtime it compares those references to the actual API symbols on the classpath. ### Compile-time reference collection @@ -73,19 +73,19 @@ it's not an optional feature. The gradle plugin defines two tasks: -* `muzzle` task runs the runtime muzzle verification against different library versions: +- `muzzle` task runs the runtime muzzle verification against different library versions: - ```sh - ./gradlew :instrumentation:google-http-client-1.19:javaagent:muzzle - ``` + ```sh + ./gradlew :instrumentation:google-http-client-1.19:javaagent:muzzle + ``` - If a new, incompatible version of the instrumented library is published it fails the build. + If a new, incompatible version of the instrumented library is published it fails the build. -* `printMuzzleReferences` task prints all API references in a given module: +- `printMuzzleReferences` task prints all API references in a given module: - ```sh - ./gradlew :instrumentation:google-http-client-1.19:javaagent:printMuzzleReferences - ``` + ```sh + ./gradlew :instrumentation:google-http-client-1.19:javaagent:printMuzzleReferences + ``` The muzzle plugin needs to be configured in the module's `.gradle` file. 
Example:

@@ -117,14 +117,14 @@ muzzle {
 }
 ```

-* Using either `pass` or `fail` directive allows to specify whether muzzle should treat the
+- Using either the `pass` or `fail` directive lets you specify whether muzzle should treat the
  reference check failure as expected behavior;
-* `versions` is a version range, where `[]` is inclusive and `()` is exclusive. It is not needed to
+- `versions` is a version range, where `[]` is inclusive and `()` is exclusive. You do not need to
  specify the exact version to start/end, e.g. `[1.0.0,4)` would usually behave in the same way as
  `[1.0.0,4.0.0-Alpha)`;
-* `assertInverse` is basically a shortcut for adding an opposite directive for all library versions
+- `assertInverse` is basically a shortcut for adding an opposite directive for all library versions
  that are not included in the specified `versions` range;
-* `extraDependency` allows putting additional libs on the classpath just for the compile-time check;
+- `extraDependency` allows putting additional libs on the classpath just for the compile-time check;
  this is usually used for jars that are not bundled with the instrumented lib but always present
  in the runtime anyway.

diff --git a/docs/contributing/running-tests.md b/docs/contributing/running-tests.md
index 4ce4ac44f8..7cd3fa1cee 100644
--- a/docs/contributing/running-tests.md
+++ b/docs/contributing/running-tests.md
@@ -63,7 +63,6 @@ Some tests can be executed as GraalVM native executables:
 ./gradlew nativeTest
 ```

-
 ## Docker disk space

 Some of the instrumentation tests (and all of the smoke tests) spin up docker containers via

diff --git a/docs/contributing/style-guideline.md b/docs/contributing/style-guideline.md
index 6904e5e0d5..b21c6cda75 100644
--- a/docs/contributing/style-guideline.md
+++ b/docs/contributing/style-guideline.md
@@ -59,23 +59,23 @@ We leverage static imports for many common types of operations.
However, not all constants are necessarily good candidates for a static import.
The following list is a very rough guideline of what are commonly accepted static imports:

-* Test assertions (JUnit and AssertJ)
-* Mocking/stubbing in tests (with Mockito)
-* Collections helpers (such as `singletonList()` and `Collectors.toList()`)
-* ByteBuddy `ElementMatchers` (for building instrumentation modules)
-* Immutable constants (where clearly named)
-* Singleton instances (especially where clearly named an hopefully immutable)
-* `tracer()` methods that expose tracer singleton instances
+- Test assertions (JUnit and AssertJ)
+- Mocking/stubbing in tests (with Mockito)
+- Collections helpers (such as `singletonList()` and `Collectors.toList()`)
+- ByteBuddy `ElementMatchers` (for building instrumentation modules)
+- Immutable constants (where clearly named)
+- Singleton instances (especially where clearly named and hopefully immutable)
+- `tracer()` methods that expose tracer singleton instances

## Ordering of class contents

The following order is preferred:

-* Static fields (final before non-final)
-* Instance fields (final before non-final)
-* Constructors
-* Methods
-* Nested classes
+- Static fields (final before non-final)
+- Instance fields (final before non-final)
+- Constructors
+- Methods
+- Nested classes

If methods call each other, it's nice if the calling method is ordered (somewhere) above the
method that it calls. So, for one example, a private method would be ordered (somewhere) below
-* Optionally, the `RESPONSE` instance that ends the processing - it may be `null` in case it was not
+- The OpenTelemetry `Context` that was returned by the `start()` method.
+- The `REQUEST` instance that started the processing.
+- Optionally, the `RESPONSE` instance that ends the processing - it may be `null` if it was not
  received or an error has occurred.
-* Optionally, a `Throwable` error that was thrown by the operation.
+- Optionally, a `Throwable` error that was thrown by the operation.

Consider the following example:

@@ -105,13 +105,13 @@ An `Instrumenter` can be obtained by calling its static `builder()` method and u
returned `InstrumenterBuilder` to configure captured telemetry and apply customizations.
The `builder()` method accepts three arguments:

-* An `OpenTelemetry` instance, which is used to obtain the `Tracer` and `Meter` objects.
-* The instrumentation name, which indicates the _instrumentation_ library name, not the
+- An `OpenTelemetry` instance, which is used to obtain the `Tracer` and `Meter` objects.
+- The instrumentation name, which indicates the _instrumentation_ library name, not the
  _instrumented_ library name. The value passed here should uniquely identify the instrumentation
  library so that during troubleshooting it's possible to determine where the telemetry came from.
  Read more about instrumentation libraries in the
  [OpenTelemetry specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md#instrumentation-libraries).
-* A `SpanNameExtractor` that determines the span name.
+- A `SpanNameExtractor` that determines the span name.

An `Instrumenter` can be built from several smaller components. The following subsections describe
all interfaces that can be used to customize an `Instrumenter`.

@@ -122,17 +122,17 @@ By setting the instrumentation library version, you let users identify which ver
instrumentation produced the telemetry.
Make sure you always provide the version to the `Instrumenter`. You can do this in two ways: -* By calling the `setInstrumentationVersion()` method on the `InstrumenterBuilder`. -* By making sure that the JAR file with your instrumentation library contains a properties file in +- By calling the `setInstrumentationVersion()` method on the `InstrumenterBuilder`. +- By making sure that the JAR file with your instrumentation library contains a properties file in the `META-INF/io/opentelemetry/instrumentation/` directory. You must name the file `${instrumentationName}.properties`, where `${instrumentationName}` is the name of the instrumentation library passed to the `Instrumenter#builder()` method. The file must contain a single property, `version`. For example: - ```properties - # META-INF/io/opentelemetry/instrumentation/my-instrumentation.properties - version=1.2.3 - ``` + ```properties + # META-INF/io/opentelemetry/instrumentation/my-instrumentation.properties + version=1.2.3 + ``` The `Instrumenter` automatically detects the properties file and determines the instrumentation version based on its name. @@ -171,9 +171,9 @@ method. An `AttributesExtractor` is responsible for extracting span and metric attributes when the processing starts and ends. It contains two methods: -* The `onStart()` method is called when the instrumented operation starts. It accepts two +- The `onStart()` method is called when the instrumented operation starts. It accepts two parameters: an `AttributesBuilder` instance and the incoming `REQUEST` instance. -* The `onEnd()` method is called when the instrumented operation ends. It accepts the same two +- The `onEnd()` method is called when the instrumented operation ends. It accepts the same two parameters as `onStart()` and also an optional `RESPONSE` and an optional `Throwable` error. The aim of both methods is to extract interesting attributes from the received request (and response @@ -253,9 +253,9 @@ the `setSpanStatusExtractor()` method. 
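As a concrete illustration of the `AttributesExtractor` contract described above, here is a simplified, self-contained sketch. Note that this is a standalone model, not the real interface from `instrumentation-api`: the `Map`-based attributes builder, the request/response types, and the attribute keys are hypothetical stand-ins chosen for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of an AttributesExtractor: onStart() sees only the request,
// onEnd() additionally sees the (optional) response and the (optional) error.
public class AttributesExtractorSketch {

  // Hypothetical request/response types standing in for REQUEST and RESPONSE.
  record HttpRequest(String method, String path) {}

  record HttpResponse(int statusCode) {}

  interface AttributesExtractor<REQUEST, RESPONSE> {
    void onStart(Map<String, Object> attributes, REQUEST request);

    void onEnd(Map<String, Object> attributes, REQUEST request, RESPONSE response, Throwable error);
  }

  static class HttpAttributesExtractor implements AttributesExtractor<HttpRequest, HttpResponse> {
    @Override
    public void onStart(Map<String, Object> attributes, HttpRequest request) {
      // Attributes known when the operation starts come from the request.
      attributes.put("http.method", request.method());
      attributes.put("http.path", request.path());
    }

    @Override
    public void onEnd(
        Map<String, Object> attributes, HttpRequest request, HttpResponse response, Throwable error) {
      // The response may be null if the operation failed before producing one.
      if (response != null) {
        attributes.put("http.status_code", response.statusCode());
      }
    }
  }

  public static void main(String[] args) {
    Map<String, Object> attributes = new LinkedHashMap<>();
    HttpAttributesExtractor extractor = new HttpAttributesExtractor();
    HttpRequest request = new HttpRequest("GET", "/users");

    extractor.onStart(attributes, request);
    extractor.onEnd(attributes, request, new HttpResponse(200), null);

    System.out.println(attributes); // {http.method=GET, http.path=/users, http.status_code=200}
  }
}
```

In the real API, the `Instrumenter` drives these callbacks for you: it invokes `onStart()` from `Instrumenter#start()` and `onEnd()` from `Instrumenter#end()`, recording the collected attributes on the span (and metrics, if registered).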
The `SpanLinksExtractor` interface can be used to add links to other spans when the instrumented operation starts. It has a single `extract()` method that receives the following arguments: -* A `SpanLinkBuilder` that can be used to add the links. -* The parent `Context` that was passed in to `Instrumenter#start()`. -* The `REQUEST` instance that was passed in to `Instrumenter#start()`. +- A `SpanLinkBuilder` that can be used to add the links. +- The parent `Context` that was passed in to `Instrumenter#start()`. +- The `REQUEST` instance that was passed in to `Instrumenter#start()`. You can read more about span links and their use cases [here](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md#links-between-spans). @@ -326,10 +326,10 @@ and `OperationListener` interfaces. `OperationMetrics` is simply a factory inter the `OperationListener` - it receives an OpenTelemetry `Meter` and returns a new listener. The `OperationListener` contains two methods: -* `onStart()` that gets executed when the instrumented operation starts. It returns a `Context` - it +- `onStart()` that gets executed when the instrumented operation starts. It returns a `Context` - it can be used to store internal metrics state that should be propagated to the `onEnd()` call, if needed. -* `onEnd()` that gets executed when the instrumented operation ends. +- `onEnd()` that gets executed when the instrumented operation ends. Both methods accept a `Context`, an instance of `Attributes` that contains either attributes computed on instrumented operation start or end, and the start and end nanoseconds timestamp that @@ -418,16 +418,16 @@ method for that: passing `false` will turn the newly created `Instrumenter` into The `Instrumenter` creation process ends with calling one of the following `InstrumenterBuilder` methods: -* `newInstrumenter()`: the returned `Instrumenter` will always start spans with kind `INTERNAL`. 
-* `newInstrumenter(SpanKindExtractor)`: the returned `Instrumenter` will always start spans with
+- `newInstrumenter()`: the returned `Instrumenter` will always start spans with kind `INTERNAL`.
+- `newInstrumenter(SpanKindExtractor)`: the returned `Instrumenter` will always start spans with
  kind determined by the passed `SpanKindExtractor`.
-* `newClientInstrumenter(TextMapSetter)`: the returned `Instrumenter` will always start `CLIENT`
+- `newClientInstrumenter(TextMapSetter)`: the returned `Instrumenter` will always start `CLIENT`
  spans and will propagate operation context into the outgoing request.
-* `newServerInstrumenter(TextMapGetter)`: the returned `Instrumenter` will always start `SERVER`
+- `newServerInstrumenter(TextMapGetter)`: the returned `Instrumenter` will always start `SERVER`
  spans and will extract the parent span context from the incoming request.
-* `newProducerInstrumenter(TextMapSetter)`: the returned `Instrumenter` will always start `PRODUCER`
+- `newProducerInstrumenter(TextMapSetter)`: the returned `Instrumenter` will always start `PRODUCER`
  spans and will propagate operation context into the outgoing request.
-* `newConsumerInstrumenter(TextMapGetter)`: the returned `Instrumenter` will always start `SERVER`
+- `newConsumerInstrumenter(TextMapGetter)`: the returned `Instrumenter` will always start `CONSUMER`
  spans and will extract the parent span context from the incoming request.

The last four variants that create non-`INTERNAL` spans accept either `TextMapSetter`

diff --git a/docs/contributing/writing-instrumentation-module.md b/docs/contributing/writing-instrumentation-module.md
index c8c04b69b6..53bba6831f 100644
--- a/docs/contributing/writing-instrumentation-module.md
+++ b/docs/contributing/writing-instrumentation-module.md
@@ -190,11 +190,11 @@ This method describes what transformations should be applied to the matched type.
The interface `TypeTransformer`, implemented internally by the agent, defines a set of available
transformations that you can apply:

-* `applyAdviceToMethod(ElementMatcher, String)` lets you apply
+- `applyAdviceToMethod(ElementMatcher, String)` lets you apply
  an advice class (the second parameter) to all matching methods (the first parameter). We suggest
  to make the method matchers as strict as possible: the type instrumentation should only instrument
  the code that it targets.
-* `applyTransformer(AgentBuilder.Transformer)` lets you to inject an arbitrary ByteBuddy
+- `applyTransformer(AgentBuilder.Transformer)` lets you inject an arbitrary ByteBuddy
  transformer. This is an advanced, low-level option that is not subjected to muzzle safety checks
  and helper class detection. Use it responsibly.

@@ -238,15 +238,15 @@ the instrumented library class files. You should not treat them as ordinary, pla

Unfortunately many standard practices do not apply to advice classes:

-* If they're inner classes, they MUST be static.
-* They MUST only contain static methods.
-* They MUST NOT contain any state (fields) whatsoever, static constants included. Only the advice
+- If they're inner classes, they MUST be static.
+- They MUST only contain static methods.
+- They MUST NOT contain any state (fields) whatsoever, static constants included. Only the advice
  methods' content is copied to the instrumented code, constants are not.
-* Inner advice classes defined in an `InstrumentationModule` or a `TypeInstrumentation` MUST NOT use
+- Inner advice classes defined in an `InstrumentationModule` or a `TypeInstrumentation` MUST NOT use
  anything from the outer class (loggers, constants, etc).
-* Reusing code by extracting a common method and/or parent class won't work: create additional helper
+- Reusing code by extracting a common method and/or parent class won't work: create additional helper
  classes to store any reusable code instead.
-* They SHOULD NOT contain any methods other than `@Advice`-annotated method.
+- They SHOULD NOT contain any methods other than `@Advice`-annotated methods.

Consider the following example:

diff --git a/docs/contributing/writing-instrumentation.md b/docs/contributing/writing-instrumentation.md
index d320a7aa8b..93873c2734 100644
--- a/docs/contributing/writing-instrumentation.md
+++ b/docs/contributing/writing-instrumentation.md
@@ -360,8 +360,8 @@ instrumentation modules.

Some examples of this include:

-* Application server instrumentations communicating with Servlet API instrumentations.
-* Different high-level Kafka consumer instrumentations suppressing the low-level `kafka-clients`
+- Application server instrumentations communicating with Servlet API instrumentations.
+- Different high-level Kafka consumer instrumentations suppressing the low-level `kafka-clients`
  instrumentation.

Create a module named `bootstrap` and add a `build.gradle.kts` file with the following content:

diff --git a/docs/logger-mdc-instrumentation.md b/docs/logger-mdc-instrumentation.md
index 42aa1ffdbc..3eb5dc3bd6 100644
--- a/docs/logger-mdc-instrumentation.md
+++ b/docs/logger-mdc-instrumentation.md
@@ -28,11 +28,11 @@ logs can correlate traces/spans with log statements.

> Note: There are also log appenders for exporting logs to OpenTelemetry, not to be confused with
the MDC appenders.
-| Library | Auto-instrumented versions | Standalone Library Instrumentation |
-|---------|----------------------------|--------------------------------------------------------------------------------------|
-| Log4j 1 | 1.2+ | |
-| Log4j 2 | 2.7+ | [opentelemetry-log4j-context-data-2.17-autoconfigure](../instrumentation/log4j/log4j-context-data/log4j-context-data-2.17/library-autoconfigure) | |
-| Logback | 1.0+ | [opentelemetry-logback-mdc-1.0](../instrumentation/logback/logback-mdc-1.0/library) |
+| Library | Auto-instrumented versions | Standalone Library Instrumentation |
+| ------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Log4j 1 | 1.2+ | |
+| Log4j 2 | 2.7+ | [opentelemetry-log4j-context-data-2.17-autoconfigure](../instrumentation/log4j/log4j-context-data/log4j-context-data-2.17/library-autoconfigure) |
+| Logback | 1.0+ | [opentelemetry-logback-mdc-1.0](../instrumentation/logback/logback-mdc-1.0/library) |

## Frameworks

diff --git a/docs/misc/interop-design/interop-design.md b/docs/misc/interop-design/interop-design.md
index 11cfccfec3..4bf472210f 100644
--- a/docs/misc/interop-design/interop-design.md
+++ b/docs/misc/interop-design/interop-design.md
@@ -4,8 +4,8 @@

These two things must seamlessly interoperate:

-* Instrumentation provided by the Java agent
-* Instrumentation provided by the user app, using any 1.0+ version of the OpenTelemetry API
+- Instrumentation provided by the Java agent
+- Instrumentation provided by the user app, using any 1.0+ version of the OpenTelemetry API

## Design

diff --git a/docs/scope.md b/docs/scope.md
index be1ed2c17d..7eb742037e 100644
--- a/docs/scope.md
+++ b/docs/scope.md
@@ -2,9 +2,9 @@

Both javaagent and library-based approaches to the following:

-* Instrumentation for specific Java libraries and frameworks
-  * Emitting spans and metrics (and in the future logs)
-* 
System metrics -* MDC logging integrations - * Encoding traceId/spanId into logs -* Spring Boot starters +- Instrumentation for specific Java libraries and frameworks + - Emitting spans and metrics (and in the future logs) +- System metrics +- MDC logging integrations + - Encoding traceId/spanId into logs +- Spring Boot starters diff --git a/docs/supported-libraries.md b/docs/supported-libraries.md index a0f9f2a9fd..fdd35c3e3e 100644 --- a/docs/supported-libraries.md +++ b/docs/supported-libraries.md @@ -3,22 +3,22 @@ We automatically instrument and support a huge number of libraries, frameworks, and application servers... right out of the box! -Don't see your favorite tool listed here? Consider [filing an issue](https://github.com/open-telemetry/opentelemetry-java-instrumentation/issues), +Don't see your favorite tool listed here? Consider [filing an issue](https://github.com/open-telemetry/opentelemetry-java-instrumentation/issues), or [contributing](../CONTRIBUTING.md). ## Contents -* [Libraries / Frameworks](#libraries--frameworks) -* [Application Servers](#application-servers) -* [JVMs and Operating Systems](#jvms-and-operating-systems) -* [Disabled instrumentations](#disabled-instrumentations) +- [Libraries / Frameworks](#libraries--frameworks) +- [Application Servers](#application-servers) +- [JVMs and Operating Systems](#jvms-and-operating-systems) +- [Disabled instrumentations](#disabled-instrumentations) ## Libraries / Frameworks These are the supported libraries and frameworks: | Library/Framework | Auto-instrumented versions | Standalone Library Instrumentation [1] | Semantic Conventions | 
-|---------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| +| ------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- | | [Akka Actors](https://doc.akka.io/docs/akka/current/typed/index.html) | 2.5+ | N/A | Context propagation | | [Akka HTTP](https://doc.akka.io/docs/akka-http/current/index.html) | 10.0+ | N/A | [HTTP Client Spans], [HTTP Client Metrics], [HTTP Server Spans], [HTTP Server Metrics] | | [Apache Axis2](https://axis.apache.org/axis2/java/core/) | 1.6+ | N/A | Provides `http.route` [2], Controller Spans [3] | @@ -71,7 +71,7 @@ These are the supported libraries and frameworks: | [Java Executors](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executor.html) | Java 8+ | N/A | Context propagation | | [Java Http 
Client](https://docs.oracle.com/en/java/javase/11/docs/api/java.net.http/java/net/http/package-summary.html) | Java 11+ | [opentelemetry-java-http-client](../instrumentation/java-http-client/library) | [HTTP Client Spans], [HTTP Client Metrics] | | [java.util.logging](https://docs.oracle.com/javase/8/docs/api/java/util/logging/package-summary.html) | Java 8+ | N/A | none | -| [Java Platform](https://docs.oracle.com/javase/8/docs/api/java/lang/management/ManagementFactory.html) | Java 8+ | [opentelemetry-runtime-telemetry-java8](../instrumentation/runtime-telemetry/runtime-telemetry-java8/library),[opentelemetry-runtime-telemetry-java17](../instrumentation/runtime-telemetry/runtime-telemetry-java17/library),
[opentelemetry-resources](../instrumentation/resources/library) | [JVM Runtime Metrics] | +| [Java Platform](https://docs.oracle.com/javase/8/docs/api/java/lang/management/ManagementFactory.html) | Java 8+ | [opentelemetry-runtime-telemetry-java8](../instrumentation/runtime-telemetry/runtime-telemetry-java8/library),[opentelemetry-runtime-telemetry-java17](../instrumentation/runtime-telemetry/runtime-telemetry-java17/library),
[opentelemetry-resources](../instrumentation/resources/library) | [JVM Runtime Metrics] | | [JAX-RS](https://javaee.github.io/javaee-spec/javadocs/javax/ws/rs/package-summary.html) | 0.5+ | N/A | Provides `http.route` [2], Controller Spans [3] | | [JAX-RS Client](https://javaee.github.io/javaee-spec/javadocs/javax/ws/rs/client/package-summary.html) | 1.1+ | N/A | [HTTP Client Spans], [HTTP Client Metrics] | | [JAX-WS](https://jakarta.ee/specifications/xml-web-services/2.3/apidocs/javax/xml/ws/package-summary.html) | 2.0+ (not including 3.x yet) | N/A | Provides `http.route` [2], Controller Spans [3] | @@ -163,17 +163,17 @@ These are the supported libraries and frameworks: These are the application servers that the smoke tests are run against: -| Application server | Version | JVM | OS | -| ----------------------------------------------------------------------------------------- | --------------------------- | ----------------- | ------------------------------ | -| [Jetty](https://www.eclipse.org/jetty/) | 9.4.x, 10.0.x, 11.0.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | -| [Payara](https://www.payara.fish/) | 5.0.x, 5.1.x | OpenJDK 8, 11 | [`ubuntu-latest`], [`windows-latest`] | -| [Tomcat](http://tomcat.apache.org/) | 7.0.x | OpenJDK 8 | [`ubuntu-latest`], [`windows-latest`] | -| [Tomcat](http://tomcat.apache.org/) | 7.0.x, 8.5.x, 9.0.x, 10.0.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | -| [TomEE](https://tomee.apache.org/) | 7.x, 8.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | -| [Open Liberty](https://openliberty.io/) | 21.x, 22.x, 23.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | -| [Websphere Traditional](https://www.ibm.com/uk-en/cloud/websphere-application-server) | 8.5.5.x, 9.0.x | IBM JDK 8 | Red Hat Enterprise Linux 8.4 | -| [WildFly](https://www.wildfly.org/) | 13.x | OpenJDK 8 | [`ubuntu-latest`], [`windows-latest`] | -| [WildFly](https://www.wildfly.org/) | 17.x, 21.x, 25.x | 
OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | +| Application server | Version | JVM | OS | +| ------------------------------------------------------------------------------------- | --------------------------- | ----------------- | ------------------------------------- | +| [Jetty](https://www.eclipse.org/jetty/) | 9.4.x, 10.0.x, 11.0.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | +| [Payara](https://www.payara.fish/) | 5.0.x, 5.1.x | OpenJDK 8, 11 | [`ubuntu-latest`], [`windows-latest`] | +| [Tomcat](http://tomcat.apache.org/) | 7.0.x | OpenJDK 8 | [`ubuntu-latest`], [`windows-latest`] | +| [Tomcat](http://tomcat.apache.org/) | 7.0.x, 8.5.x, 9.0.x, 10.0.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | +| [TomEE](https://tomee.apache.org/) | 7.x, 8.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | +| [Open Liberty](https://openliberty.io/) | 21.x, 22.x, 23.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | +| [Websphere Traditional](https://www.ibm.com/uk-en/cloud/websphere-application-server) | 8.5.5.x, 9.0.x | IBM JDK 8 | Red Hat Enterprise Linux 8.4 | +| [WildFly](https://www.wildfly.org/) | 13.x | OpenJDK 8 | [`ubuntu-latest`], [`windows-latest`] | +| [WildFly](https://www.wildfly.org/) | 17.x, 21.x, 25.x | OpenJDK 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | [`ubuntu-latest`]: https://github.com/actions/runner-images#available-images [`windows-latest`]: https://github.com/actions/runner-images#available-images @@ -182,10 +182,10 @@ These are the application servers that the smoke tests are run against: These are the JVMs and operating systems that the integration tests are run against: -| JVM | Versions | OS | -| ------------------------------------------------------------------------------------------ | --------- | ------------------------------ | -| [OpenJDK (Eclipse Temurin)](https://adoptium.net/) | 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | -| [OpenJ9 (IBM Semeru 
Runtimes)](https://developer.ibm.com/languages/java/semeru-runtimes/) | 8, 11, 17 | [`ubuntu-latest`] | +| JVM | Versions | OS | +| ----------------------------------------------------------------------------------------- | --------- | ------------------------------------- | +| [OpenJDK (Eclipse Temurin)](https://adoptium.net/) | 8, 11, 17 | [`ubuntu-latest`], [`windows-latest`] | +| [OpenJ9 (IBM Semeru Runtimes)](https://developer.ibm.com/languages/java/semeru-runtimes/) | 8, 11, 17 | [`ubuntu-latest`] | ## Disabled instrumentations diff --git a/examples/distro/README.md b/examples/distro/README.md index 4ed7a2fe69..781be60215 100644 --- a/examples/distro/README.md +++ b/examples/distro/README.md @@ -11,20 +11,20 @@ its usage. This repository has four main submodules: -* `custom` contains all custom functionality, SPI and other extensions -* `agent` contains the main repackaging functionality and, optionally, an entry point to the agent, if one wishes to -customize that -* `instrumentation` contains custom instrumentations added by vendor -* `smoke-tests` contains simple tests to verify that resulting agent builds and applies correctly +- `custom` contains all custom functionality, SPI and other extensions +- `agent` contains the main repackaging functionality and, optionally, an entry point to the agent, if one wishes to + customize that +- `instrumentation` contains custom instrumentations added by vendor +- `smoke-tests` contains simple tests to verify that resulting agent builds and applies correctly ## Extensions examples -* [DemoIdGenerator](custom/src/main/java/com/example/javaagent/DemoIdGenerator.java) - custom `IdGenerator` -* [DemoPropagator](custom/src/main/java/com/example/javaagent/DemoPropagator.java) - custom `TextMapPropagator` -* [DemoSampler](custom/src/main/java/com/example/javaagent/DemoSampler.java) - custom `Sampler` -* [DemoSpanProcessor](custom/src/main/java/com/example/javaagent/DemoSpanProcessor.java) - custom `SpanProcessor` -* 
[DemoSpanExporter](custom/src/main/java/com/example/javaagent/DemoSpanExporter.java) - custom `SpanExporter` -* [DemoServlet3InstrumentationModule](instrumentation/servlet-3/src/main/java/com/example/javaagent/instrumentation/DemoServlet3InstrumentationModule.java) - additional instrumentation +- [DemoIdGenerator](custom/src/main/java/com/example/javaagent/DemoIdGenerator.java) - custom `IdGenerator` +- [DemoPropagator](custom/src/main/java/com/example/javaagent/DemoPropagator.java) - custom `TextMapPropagator` +- [DemoSampler](custom/src/main/java/com/example/javaagent/DemoSampler.java) - custom `Sampler` +- [DemoSpanProcessor](custom/src/main/java/com/example/javaagent/DemoSpanProcessor.java) - custom `SpanProcessor` +- [DemoSpanExporter](custom/src/main/java/com/example/javaagent/DemoSpanExporter.java) - custom `SpanExporter` +- [DemoServlet3InstrumentationModule](instrumentation/servlet-3/src/main/java/com/example/javaagent/instrumentation/DemoServlet3InstrumentationModule.java) - additional instrumentation ## Instrumentation customisation diff --git a/examples/extension/README.md b/examples/extension/README.md index 35d5af4b6b..28509661db 100644 --- a/examples/extension/README.md +++ b/examples/extension/README.md @@ -17,11 +17,11 @@ To add the extension to the instrumentation agent: 1. Copy the jar file to a host that is running an application to which you've attached the OpenTelemetry Java instrumentation. 2. Modify the startup command to add the full path to the extension file. 
For example:
-   ```bash
-   java -javaagent:path/to/opentelemetry-javaagent.jar \
-   -Dotel.javaagent.extensions=build/libs/opentelemetry-java-instrumentation-extension-demo-1.0-all.jar
-   -jar myapp.jar
-   ```
+   ```bash
+   java -javaagent:path/to/opentelemetry-javaagent.jar \
+   -Dotel.javaagent.extensions=build/libs/opentelemetry-java-instrumentation-extension-demo-1.0-all.jar \
+   -jar myapp.jar
+   ```

Note: to load multiple extensions, you can specify a comma-separated list of extension jars or directories (that
contain extension jars) for the `otel.javaagent.extensions` value.

@@ -34,12 +34,12 @@

For more information, see the `extendedAgent` task in [build.gradle](build.gradle).

## Extensions examples

-* Custom `IdGenerator`: [DemoIdGenerator](src/main/java/com/example/javaagent/DemoIdGenerator.java)
-* Custom `TextMapPropagator`: [DemoPropagator](src/main/java/com/example/javaagent/DemoPropagator.java)
-* Custom `Sampler`: [DemoSampler](src/main/java/com/example/javaagent/DemoSampler.java)
-* Custom `SpanProcessor`: [DemoSpanProcessor](src/main/java/com/example/javaagent/DemoSpanProcessor.java)
-* Custom `SpanExporter`: [DemoSpanExporter](src/main/java/com/example/javaagent/DemoSpanExporter.java)
-* Additional instrumentation: [DemoServlet3InstrumentationModule](src/main/java/com/example/javaagent/instrumentation/DemoServlet3InstrumentationModule.java)
+- Custom `IdGenerator`: [DemoIdGenerator](src/main/java/com/example/javaagent/DemoIdGenerator.java)
+- Custom `TextMapPropagator`: [DemoPropagator](src/main/java/com/example/javaagent/DemoPropagator.java)
+- Custom `Sampler`: [DemoSampler](src/main/java/com/example/javaagent/DemoSampler.java)
+- Custom `SpanProcessor`: [DemoSpanProcessor](src/main/java/com/example/javaagent/DemoSpanProcessor.java)
+- Custom `SpanExporter`: [DemoSpanExporter](src/main/java/com/example/javaagent/DemoSpanExporter.java)
+- Additional instrumentation: 
[DemoServlet3InstrumentationModule](src/main/java/com/example/javaagent/instrumentation/DemoServlet3InstrumentationModule.java) ## Sample use cases diff --git a/instrumentation/aws-lambda/README.md b/instrumentation/aws-lambda/README.md index 72fba2abab..9173db38f8 100644 --- a/instrumentation/aws-lambda/README.md +++ b/instrumentation/aws-lambda/README.md @@ -3,9 +3,9 @@ We provide two packages for instrumenting AWS lambda functions. - [aws-lambda-core-1.0](./aws-lambda-core-1.0/library) provides lightweight instrumentation of the Lambda core library, supporting -all versions. Use this package if you only use `RequestStreamHandler` or know you don't use any event classes from -`aws-lambda-java-events`. This also includes when you are using `aws-serverless-java-container` to run e.g., a -Spring Boot application on Lambda. + all versions. Use this package if you only use `RequestStreamHandler` or know you don't use any event classes from + `aws-lambda-java-events`. This also includes when you are using `aws-serverless-java-container` to run e.g., a + Spring Boot application on Lambda. - [aws-lambda-events-2.2](./aws-lambda-events-2.2/library) provides full instrumentation of the Lambda library, including standard -and custom event types, from `aws-lambda-java-events` 2.2+. + and custom event types, from `aws-lambda-java-events` 2.2+. diff --git a/instrumentation/aws-lambda/aws-lambda-core-1.0/library/README.md b/instrumentation/aws-lambda/aws-lambda-core-1.0/library/README.md index fa5cffc9d5..932bb41978 100644 --- a/instrumentation/aws-lambda/aws-lambda-core-1.0/library/README.md +++ b/instrumentation/aws-lambda/aws-lambda-core-1.0/library/README.md @@ -87,7 +87,7 @@ For API Gateway (HTTP) requests instrumented by using one of following methods: - extending `TracingRequestStreamHandler` or `TracingRequestHandler` - wrapping with `TracingRequestStreamWrapper` or `TracingRequestApiGatewayWrapper` -traces can be propagated with supported HTTP headers (see ). 
+ traces can be propagated with supported HTTP headers (see ). In order to enable requested propagation for a handler, configure it on the SDK you build. diff --git a/instrumentation/aws-lambda/aws-lambda-events-2.2/library/README.md b/instrumentation/aws-lambda/aws-lambda-events-2.2/library/README.md index c200496bbd..10beb9fe72 100644 --- a/instrumentation/aws-lambda/aws-lambda-events-2.2/library/README.md +++ b/instrumentation/aws-lambda/aws-lambda-events-2.2/library/README.md @@ -118,7 +118,7 @@ For API Gateway (HTTP) requests instrumented by using one of following methods: - extending `TracingRequestStreamHandler` or `TracingRequestHandler` - wrapping with `TracingRequestStreamWrapper` or `TracingRequestApiGatewayWrapper` -traces can be propagated with supported HTTP headers (see ). + traces can be propagated with supported HTTP headers (see ). In order to enable requested propagation for a handler, configure it on the SDK you build. diff --git a/instrumentation/aws-sdk/README.md b/instrumentation/aws-sdk/README.md index d3f4893f33..f0266e760f 100644 --- a/instrumentation/aws-sdk/README.md +++ b/instrumentation/aws-sdk/README.md @@ -2,10 +2,10 @@ For more information, see the respective public setters in the `AwsSdkTelemetryBuilder` classes: -* [SDK v1](./aws-sdk-1.11/library/src/main/java/io/opentelemetry/instrumentation/awssdk/v1_11/AwsSdkTelemetryBuilder.java) -* [SDK v2](./aws-sdk-2.2/library/src/main/java/io/opentelemetry/instrumentation/awssdk/v2_2/AwsSdkTelemetryBuilder.java) +- [SDK v1](./aws-sdk-1.11/library/src/main/java/io/opentelemetry/instrumentation/awssdk/v1_11/AwsSdkTelemetryBuilder.java) +- [SDK v2](./aws-sdk-2.2/library/src/main/java/io/opentelemetry/instrumentation/awssdk/v2_2/AwsSdkTelemetryBuilder.java) -| System property | Type | Default | Description | -|---|---|---|---------------------------------------------------------------------------------------------------------------------------------------| -| 
`otel.instrumentation.aws-sdk.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | +| System property | Type | Default | Description | +| ------------------------------------------------------------------------ | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------- | +| `otel.instrumentation.aws-sdk.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | | `otel.instrumentation.aws-sdk.experimental-use-propagator-for-messaging` | Boolean | `false` | v2 only, inject into SNS/SQS attributes with configured propagator: See [v2 README](aws-sdk-2.2/library/README.md#trace-propagation). | diff --git a/instrumentation/aws-sdk/aws-sdk-2.2/library/README.md b/instrumentation/aws-sdk/aws-sdk-2.2/library/README.md index 96fb50bf64..3e964b1037 100644 --- a/instrumentation/aws-sdk/aws-sdk-2.2/library/README.md +++ b/instrumentation/aws-sdk/aws-sdk-2.2/library/README.md @@ -26,9 +26,9 @@ propagating the trace through them. Additionally, you can enable an experimental option to use the configured propagator to inject into message attributes (see [parent README](../../README.md)). 
This currently supports the following AWS APIs: -* SQS.SendMessage -* SQS.SendMessageBatch -* SNS.Publish +- SQS.SendMessage +- SQS.SendMessageBatch +- SNS.Publish (SNS.PublishBatch is not supported at the moment because it is not available in the minimum SDK version targeted by the instrumentation) diff --git a/instrumentation/camel-2.20/README.md b/instrumentation/camel-2.20/README.md index 0d3dd09472..6b115d4de7 100644 --- a/instrumentation/camel-2.20/README.md +++ b/instrumentation/camel-2.20/README.md @@ -1,5 +1,5 @@ # Settings for the Apache Camel instrumentation -| System property | Type | Default | Description | -|---|---|---|---| -| `otel.instrumentation.camel.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | +| System property | Type | Default | Description | +| --------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | +| `otel.instrumentation.camel.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/couchbase/README.md b/instrumentation/couchbase/README.md index c27d0b2861..2258abb0a4 100644 --- a/instrumentation/couchbase/README.md +++ b/instrumentation/couchbase/README.md @@ -1,5 +1,5 @@ # Settings for the Couchbase instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ------------------------------------------------------------- | ------- | ------- | --------------------------------------------------------------------------------------------------------- | | `otel.instrumentation.couchbase.experimental-span-attributes` | Boolean | `false` | Enables the capture of experimental span attributes (for version 2.6 and higher of this instrumentation). 
|
diff --git a/instrumentation/elasticsearch/README.md b/instrumentation/elasticsearch/README.md
index 18135dee85..51fbbfd95e 100644
--- a/instrumentation/elasticsearch/README.md
+++ b/instrumentation/elasticsearch/README.md
@@ -1,12 +1,13 @@
# Settings for the elasticsearch instrumentation

## Settings for the [Elasticsearch Java API Client](https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/current/index.html) instrumentation
-| System property | Type | Default | Description |
-|---|---|---|----------------------------------------------------------------------------------------------------------------------------|
-| `otel.instrumentation.elasticsearch.capture-search-query` | `Boolean | `false` | Enable the capture of search query bodies. Attention: Elasticsearch queries may contain personal or sensitive information. |
+
+| System property | Type | Default | Description |
+| --------------------------------------------------------- | ------- | ------- | -------------------------------------------------------------------------------------------------------------------------- |
+| `otel.instrumentation.elasticsearch.capture-search-query` | Boolean | `false` | Enable the capture of search query bodies. Attention: Elasticsearch queries may contain personal or sensitive information. |
-
## Settings for the [Elasticsearch Transport Client](https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/index.html) instrumentation
-| System property | Type | Default | Description |
-|---|---|---|---|
+
+| System property | Type | Default | Description |
+| ----------------------------------------------------------------- | -------- | ------- | --------------------------------------------------- |
 | `otel.instrumentation.elasticsearch.experimental-span-attributes` | `Boolean | `false` | Enable the capture of experimental span attributes. 
| diff --git a/instrumentation/executors/README.md b/instrumentation/executors/README.md index 675b90d64f..deb0e9ed46 100644 --- a/instrumentation/executors/README.md +++ b/instrumentation/executors/README.md @@ -1,6 +1,6 @@ # Settings for the executors instrumentation -| System property | Type | Default | Description | -|---|---|---|---| -| `otel.instrumentation.executors.include` | List | Empty | List of `Executor` subclasses to be instrumented. | +| System property | Type | Default | Description | +| -------------------------------------------- | ------- | ------- | -------------------------------------------------------------------------- | +| `otel.instrumentation.executors.include` | List | Empty | List of `Executor` subclasses to be instrumented. | | `otel.instrumentation.executors.include-all` | Boolean | `false` | Whether to instrument all classes that implement the `Executor` interface. | diff --git a/instrumentation/external-annotations/README.md b/instrumentation/external-annotations/README.md index 143dbcbdd5..5dae19d769 100644 --- a/instrumentation/external-annotations/README.md +++ b/instrumentation/external-annotations/README.md @@ -1,6 +1,6 @@ # Settings for the external annotations instrumentation -| System property | Type | Default | Description | -|----------------- |------ |--------- |------------- | -| `otel.instrumentation.external-annotations.include` | String | Default annotations | Configuration for trace annotations, in the form of a pattern that matches `'package.Annotation$Name;*'`. -| `otel.instrumentation.external-annotations.exclude-methods` | String | | All methods to be excluded from auto-instrumentation by annotation-based advices. 
| +| System property | Type | Default | Description | +| ----------------------------------------------------------- | ------ | ------------------- | --------------------------------------------------------------------------------------------------------- | +| `otel.instrumentation.external-annotations.include` | String | Default annotations | Configuration for trace annotations, in the form of a pattern that matches `'package.Annotation$Name;*'`. | +| `otel.instrumentation.external-annotations.exclude-methods` | String | | All methods to be excluded from auto-instrumentation by annotation-based advices. | diff --git a/instrumentation/graphql-java-12.0/javaagent/README.md b/instrumentation/graphql-java-12.0/javaagent/README.md index 7bf9addd25..5554352bed 100644 --- a/instrumentation/graphql-java-12.0/javaagent/README.md +++ b/instrumentation/graphql-java-12.0/javaagent/README.md @@ -1,5 +1,5 @@ # Settings for the GraphQL instrumentation -| System property | Type | Default | Description | -|---|---|---------|--------------------------------------------------------------------------------------------| -| `otel.instrumentation.graphql.query-sanitizer.enabled` | Boolean | `true` | Whether to remove sensitive information from query source that is added as span attribute. | +| System property | Type | Default | Description | +| ------------------------------------------------------ | ------- | ------- | ------------------------------------------------------------------------------------------ | +| `otel.instrumentation.graphql.query-sanitizer.enabled` | Boolean | `true` | Whether to remove sensitive information from query source that is added as span attribute. 
| diff --git a/instrumentation/grpc-1.6/README.md b/instrumentation/grpc-1.6/README.md index 1ae85d8273..e392205b89 100644 --- a/instrumentation/grpc-1.6/README.md +++ b/instrumentation/grpc-1.6/README.md @@ -1,5 +1,5 @@ # Settings for the gRPC instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| -------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.grpc.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/guava-10.0/README.md b/instrumentation/guava-10.0/README.md index 9a6e8afbb5..6292d56a53 100644 --- a/instrumentation/guava-10.0/README.md +++ b/instrumentation/guava-10.0/README.md @@ -1,5 +1,5 @@ # Settings for the Guava instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| --------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.guava.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/hibernate/README.md b/instrumentation/hibernate/README.md index 3a74286829..726c930431 100644 --- a/instrumentation/hibernate/README.md +++ b/instrumentation/hibernate/README.md @@ -1,5 +1,5 @@ # Settings for the Hibernate instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ------------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.hibernate.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. 
| diff --git a/instrumentation/hystrix-1.4/javaagent/README.md b/instrumentation/hystrix-1.4/javaagent/README.md index 1aca029526..4ae238ad48 100644 --- a/instrumentation/hystrix-1.4/javaagent/README.md +++ b/instrumentation/hystrix-1.4/javaagent/README.md @@ -1,5 +1,5 @@ # Settings for the Hystrix instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ----------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.hystrix.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/java-util-logging/javaagent/README.md b/instrumentation/java-util-logging/javaagent/README.md index 0d29c36cb3..e841f9a3c6 100644 --- a/instrumentation/java-util-logging/javaagent/README.md +++ b/instrumentation/java-util-logging/javaagent/README.md @@ -1,5 +1,5 @@ # Settings for the Java Util Logging instrumentation -| System property | Type | Default | Description | -|---|---------|--|------------------------------------------------------| +| System property | Type | Default | Description | +| -------------------------------------------------------------------- | ------- | ------- | --------------------------------------------------------------------------------- | | `otel.instrumentation.java-util-logging.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental span attributes `thread.name` and `thread.id`. 
| diff --git a/instrumentation/jaxrs/README.md b/instrumentation/jaxrs/README.md index 4deba69a25..78b4109007 100644 --- a/instrumentation/jaxrs/README.md +++ b/instrumentation/jaxrs/README.md @@ -1,5 +1,5 @@ # Settings for the Jaxrs instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| --------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.jaxrs.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/jmx-metrics/javaagent/activemq.md b/instrumentation/jmx-metrics/javaagent/activemq.md index 8cdc14dec3..49a0198529 100644 --- a/instrumentation/jmx-metrics/javaagent/activemq.md +++ b/instrumentation/jmx-metrics/javaagent/activemq.md @@ -2,16 +2,16 @@ Here is the list of metrics based on MBeans exposed by ActiveMQ. -| Metric Name | Type | Attributes | Description | -| ---------------- | --------------- | ---------------- | --------------- | -| activemq.ProducerCount | UpDownCounter | destination, broker | The number of producers attached to this destination | -| activemq.ConsumerCount | UpDownCounter | destination, broker | The number of consumers subscribed to this destination | -| activemq.memory.MemoryPercentUsage | Gauge | destination, broker | The percentage of configured memory used | -| activemq.message.QueueSize | UpDownCounter | destination, broker | The current number of messages waiting to be consumed | -| activemq.message.ExpiredCount | Counter | destination, broker | The number of messages not delivered because they expired | -| activemq.message.EnqueueCount | Counter | destination, broker | The number of messages sent to this destination | -| activemq.message.DequeueCount | Counter | destination, broker | The number of messages acknowledged and removed from this destination | -| 
activemq.message.AverageEnqueueTime | Gauge | destination, broker | The average time a message was held on this destination | -| activemq.connections.CurrentConnectionsCount | UpDownCounter | | The total number of current connections | -| activemq.disc.StorePercentUsage | Gauge | | The percentage of configured disk used for persistent messages | -| activemq.disc.TempPercentUsage | Gauge | | The percentage of configured disk used for non-persistent messages | +| Metric Name | Type | Attributes | Description | +| -------------------------------------------- | ------------- | ------------------- | --------------------------------------------------------------------- | +| activemq.ProducerCount | UpDownCounter | destination, broker | The number of producers attached to this destination | +| activemq.ConsumerCount | UpDownCounter | destination, broker | The number of consumers subscribed to this destination | +| activemq.memory.MemoryPercentUsage | Gauge | destination, broker | The percentage of configured memory used | +| activemq.message.QueueSize | UpDownCounter | destination, broker | The current number of messages waiting to be consumed | +| activemq.message.ExpiredCount | Counter | destination, broker | The number of messages not delivered because they expired | +| activemq.message.EnqueueCount | Counter | destination, broker | The number of messages sent to this destination | +| activemq.message.DequeueCount | Counter | destination, broker | The number of messages acknowledged and removed from this destination | +| activemq.message.AverageEnqueueTime | Gauge | destination, broker | The average time a message was held on this destination | +| activemq.connections.CurrentConnectionsCount | UpDownCounter | | The total number of current connections | +| activemq.disc.StorePercentUsage | Gauge | | The percentage of configured disk used for persistent messages | +| activemq.disc.TempPercentUsage | Gauge | | The percentage of configured disk used for non-persistent 
messages | diff --git a/instrumentation/jmx-metrics/javaagent/hadoop.md b/instrumentation/jmx-metrics/javaagent/hadoop.md index 7e628fe046..24e2b7cf78 100644 --- a/instrumentation/jmx-metrics/javaagent/hadoop.md +++ b/instrumentation/jmx-metrics/javaagent/hadoop.md @@ -3,7 +3,7 @@ Here is the list of metrics based on MBeans exposed by Hadoop. | Metric Name | Type | Attributes | Description | -|-----------------------------------|---------------|------------------|-------------------------------------------------------| +| --------------------------------- | ------------- | ---------------- | ----------------------------------------------------- | | hadoop.capacity.CapacityUsed | UpDownCounter | node_name | Current used capacity across all data nodes | | hadoop.capacity.CapacityTotal | UpDownCounter | node_name | Current raw capacity of data nodes | | hadoop.block.BlocksTotal | UpDownCounter | node_name | Current number of allocated blocks in the system | diff --git a/instrumentation/jmx-metrics/javaagent/jetty.md b/instrumentation/jmx-metrics/javaagent/jetty.md index e04622a179..d771214cfc 100644 --- a/instrumentation/jmx-metrics/javaagent/jetty.md +++ b/instrumentation/jmx-metrics/javaagent/jetty.md @@ -3,7 +3,7 @@ Here is the list of metrics based on MBeans exposed by Jetty. 
| Metric Name | Type | Attributes | Description | -|--------------------------------|---------------|--------------|------------------------------------------------------| +| ------------------------------ | ------------- | ------------ | ---------------------------------------------------- | | jetty.session.sessionsCreated | Counter | resource | The number of sessions established in total | | jetty.session.sessionTimeTotal | Counter | resource | The total time sessions have been active | | jetty.session.sessionTimeMax | Gauge | resource | The maximum amount of time a session has been active | diff --git a/instrumentation/jmx-metrics/javaagent/kafka-broker.md b/instrumentation/jmx-metrics/javaagent/kafka-broker.md index 2dddfbb19d..c0b8f1d394 100644 --- a/instrumentation/jmx-metrics/javaagent/kafka-broker.md +++ b/instrumentation/jmx-metrics/javaagent/kafka-broker.md @@ -4,7 +4,7 @@ Here is the list of metrics based on MBeans exposed by Kafka broker.

Log metrics: | Metric Name | Type | Attributes | Description | -|---------------------------|---------|------------|----------------------------------| +| ------------------------- | ------- | ---------- | -------------------------------- | | kafka.logs.flush.count | Counter | | Log flush count | | kafka.logs.flush.time.50p | Gauge | | Log flush time - 50th percentile | | kafka.logs.flush.time.99p | Gauge | | Log flush time - 99th percentile | diff --git a/instrumentation/jmx-metrics/javaagent/tomcat.md b/instrumentation/jmx-metrics/javaagent/tomcat.md index a2ea859d80..6ac2e3fea1 100644 --- a/instrumentation/jmx-metrics/javaagent/tomcat.md +++ b/instrumentation/jmx-metrics/javaagent/tomcat.md @@ -3,7 +3,7 @@ Here is the list of metrics based on MBeans exposed by Tomcat. | Metric Name | Type | Attributes | Description | -|--------------------------------------------|---------------|-----------------|-----------------------------------------------------------------| +| ------------------------------------------ | ------------- | --------------- | --------------------------------------------------------------- | | http.server.tomcat.sessions.activeSessions | UpDownCounter | context | The number of active sessions | | http.server.tomcat.errorCount | Gauge | name | The number of errors per second on all request processors | | http.server.tomcat.requestCount | Gauge | name | The number of requests per second across all request processors | diff --git a/instrumentation/jmx-metrics/javaagent/wildfly.md b/instrumentation/jmx-metrics/javaagent/wildfly.md index c88eee63be..637453a4ee 100644 --- a/instrumentation/jmx-metrics/javaagent/wildfly.md +++ b/instrumentation/jmx-metrics/javaagent/wildfly.md @@ -3,7 +3,7 @@ Here is the list of metrics based on MBeans exposed by Wildfly. 
| Metric Name | Type | Attributes | Description | -|----------------------------------------------------|---------------|--------------------|-------------------------------------------------------------------------| +| -------------------------------------------------- | ------------- | ------------------ | ----------------------------------------------------------------------- | | wildfly.network.io | Counter | direction, server | Total number of bytes transferred | | wildfly.request.errorCount | Counter | server, listener | The number of 500 responses that have been sent by this listener | | wildfly.request.requestCount | Counter | server, listener | The number of requests this listener has served | diff --git a/instrumentation/jsp-2.3/README.md b/instrumentation/jsp-2.3/README.md index 5e32f64188..95306dc8e5 100644 --- a/instrumentation/jsp-2.3/README.md +++ b/instrumentation/jsp-2.3/README.md @@ -1,5 +1,5 @@ # Settings for the JSP instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.jsp.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/kafka/kafka-clients/kafka-clients-2.6/library/README.md b/instrumentation/kafka/kafka-clients/kafka-clients-2.6/library/README.md index 64481b7ac7..4f6864f1cd 100644 --- a/instrumentation/kafka/kafka-clients/kafka-clients-2.6/library/README.md +++ b/instrumentation/kafka/kafka-clients/kafka-clients-2.6/library/README.md @@ -90,207 +90,207 @@ OpenTelemetry metric each maps to (if available). Empty values in the Instrument Description, etc column indicates there is no registered mapping for the metric and data is NOT collected. 
-| Metric Group | Metric Name | Attribute Keys | Instrument Name | Instrument Description | Instrument Type | -|--------------|-------------|----------------|-----------------|------------------------|-----------------| -| `app-info` | `commit-id` | `client-id` | | | | -| `app-info` | `start-time-ms` | `client-id` | | | | -| `app-info` | `version` | `client-id` | | | | -| `consumer-coordinator-metrics` | `assigned-partitions` | `client-id` | `kafka.consumer.assigned_partitions` | The number of partitions currently assigned to this consumer | `DOUBLE_OBSERVABLE_GAUGE` | -| `consumer-coordinator-metrics` | `commit-latency-avg` | `client-id` | `kafka.consumer.commit_latency_avg` | The average time taken for a commit request | `DOUBLE_OBSERVABLE_GAUGE` | -| `consumer-coordinator-metrics` | `commit-latency-max` | `client-id` | `kafka.consumer.commit_latency_max` | The max time taken for a commit request | `DOUBLE_OBSERVABLE_GAUGE` | -| `consumer-coordinator-metrics` | `commit-rate` | `client-id` | `kafka.consumer.commit_rate` | The number of commit calls per second | `DOUBLE_OBSERVABLE_GAUGE` | -| `consumer-coordinator-metrics` | `commit-total` | `client-id` | `kafka.consumer.commit_total` | The total number of commit calls | `DOUBLE_OBSERVABLE_COUNTER` | -| `consumer-coordinator-metrics` | `failed-rebalance-rate-per-hour` | `client-id` | `kafka.consumer.failed_rebalance_rate_per_hour` | The number of failed rebalance events per hour | `DOUBLE_OBSERVABLE_GAUGE` | -| `consumer-coordinator-metrics` | `failed-rebalance-total` | `client-id` | `kafka.consumer.failed_rebalance_total` | The total number of failed rebalance events | `DOUBLE_OBSERVABLE_COUNTER` | -| `consumer-coordinator-metrics` | `heartbeat-rate` | `client-id` | `kafka.consumer.heartbeat_rate` | The number of heartbeats per second | `DOUBLE_OBSERVABLE_GAUGE` | -| `consumer-coordinator-metrics` | `heartbeat-response-time-max` | `client-id` | `kafka.consumer.heartbeat_response_time_max` | The max time taken to 
receive a response to a heartbeat request | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `heartbeat-total` | `client-id` | `kafka.consumer.heartbeat_total` | The total number of heartbeats | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-coordinator-metrics` | `join-rate` | `client-id` | `kafka.consumer.join_rate` | The number of group joins per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `join-time-avg` | `client-id` | `kafka.consumer.join_time_avg` | The average time taken for a group rejoin | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `join-time-max` | `client-id` | `kafka.consumer.join_time_max` | The max time taken for a group rejoin | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `join-total` | `client-id` | `kafka.consumer.join_total` | The total number of group joins | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-coordinator-metrics` | `last-heartbeat-seconds-ago` | `client-id` | `kafka.consumer.last_heartbeat_seconds_ago` | The number of seconds since the last coordinator heartbeat was sent | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `last-rebalance-seconds-ago` | `client-id` | `kafka.consumer.last_rebalance_seconds_ago` | The number of seconds since the last successful rebalance event | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `partition-assigned-latency-avg` | `client-id` | `kafka.consumer.partition_assigned_latency_avg` | The average time taken for a partition-assigned rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `partition-assigned-latency-max` | `client-id` | `kafka.consumer.partition_assigned_latency_max` | The max time taken for a partition-assigned rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `partition-lost-latency-avg` | `client-id` | `kafka.consumer.partition_lost_latency_avg` | The average time taken for a partition-lost rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `partition-lost-latency-max` | `client-id` | `kafka.consumer.partition_lost_latency_max` | The max time taken for a partition-lost rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `partition-revoked-latency-avg` | `client-id` | `kafka.consumer.partition_revoked_latency_avg` | The average time taken for a partition-revoked rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `partition-revoked-latency-max` | `client-id` | `kafka.consumer.partition_revoked_latency_max` | The max time taken for a partition-revoked rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `rebalance-latency-avg` | `client-id` | `kafka.consumer.rebalance_latency_avg` | The average time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `rebalance-latency-max` | `client-id` | `kafka.consumer.rebalance_latency_max` | The max time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `rebalance-latency-total` | `client-id` | `kafka.consumer.rebalance_latency_total` | The total number of milliseconds this consumer has spent in successful rebalances since creation | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-coordinator-metrics` | `rebalance-rate-per-hour` | `client-id` | `kafka.consumer.rebalance_rate_per_hour` | The number of successful rebalance events per hour, each event is composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `rebalance-total` | `client-id` | `kafka.consumer.rebalance_total` | The total number of successful rebalance events, each event is composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-coordinator-metrics` | `sync-rate` | `client-id` | `kafka.consumer.sync_rate` | The number of group syncs per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `sync-time-avg` | `client-id` | `kafka.consumer.sync_time_avg` | The average time taken for a group sync | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `sync-time-max` | `client-id` | `kafka.consumer.sync_time_max` | The max time taken for a group sync | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-coordinator-metrics` | `sync-total` | `client-id` | `kafka.consumer.sync_total` | The total number of group syncs | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-fetch-manager-metrics` | `bytes-consumed-rate` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `bytes-consumed-rate` | `client-id`,`topic` | `kafka.consumer.bytes_consumed_rate` | The average number of bytes consumed per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `bytes-consumed-total` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `bytes-consumed-total` | `client-id`,`topic` | `kafka.consumer.bytes_consumed_total` | The total number of bytes consumed | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-fetch-manager-metrics` | `fetch-latency-avg` | `client-id` | `kafka.consumer.fetch_latency_avg` | The average time taken for a fetch request. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-latency-max` | `client-id` | `kafka.consumer.fetch_latency_max` | The max time taken for any fetch request. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-rate` | `client-id` | `kafka.consumer.fetch_rate` | The number of fetch requests per second. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-size-avg` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `fetch-size-avg` | `client-id`,`topic` | `kafka.consumer.fetch_size_avg` | The average number of bytes fetched per request | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-size-max` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `fetch-size-max` | `client-id`,`topic` | `kafka.consumer.fetch_size_max` | The maximum number of bytes fetched per request | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-throttle-time-avg` | `client-id` | `kafka.consumer.fetch_throttle_time_avg` | The average throttle time in ms | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-throttle-time-max` | `client-id` | `kafka.consumer.fetch_throttle_time_max` | The maximum throttle time in ms | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `fetch-total` | `client-id` | `kafka.consumer.fetch_total` | The total number of fetch requests. | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-fetch-manager-metrics` | `preferred-read-replica` | `client-id`,`topic`,`partition` | | | |
-| `consumer-fetch-manager-metrics` | `records-consumed-rate` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `records-consumed-rate` | `client-id`,`topic` | `kafka.consumer.records_consumed_rate` | The average number of records consumed per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-consumed-total` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `records-consumed-total` | `client-id`,`topic` | `kafka.consumer.records_consumed_total` | The total number of records consumed | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-fetch-manager-metrics` | `records-lag` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lag` | The latest lag of the partition | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-lag-avg` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lag_avg` | The average lag of the partition | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-lag-max` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `records-lag-max` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lag_max` | The maximum lag in terms of number of records for any partition in this window | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-lead` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lead` | The latest lead of the partition | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-lead-avg` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lead_avg` | The average lead of the partition | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-lead-min` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `records-lead-min` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lead_min` | The minimum lead in terms of number of records for any partition in this window | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-fetch-manager-metrics` | `records-per-request-avg` | `client-id` | | | |
-| `consumer-fetch-manager-metrics` | `records-per-request-avg` | `client-id`,`topic` | `kafka.consumer.records_per_request_avg` | The average number of records in each request | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `connection-close-rate` | `client-id` | `kafka.consumer.connection_close_rate` | The number of connections closed per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `connection-close-total` | `client-id` | `kafka.consumer.connection_close_total` | The total number of connections closed | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `connection-count` | `client-id` | `kafka.consumer.connection_count` | The current number of active connections. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `connection-creation-rate` | `client-id` | `kafka.consumer.connection_creation_rate` | The number of new connections established per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `connection-creation-total` | `client-id` | `kafka.consumer.connection_creation_total` | The total number of new connections established | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `failed-authentication-rate` | `client-id` | `kafka.consumer.failed_authentication_rate` | The number of connections with failed authentication per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `failed-authentication-total` | `client-id` | `kafka.consumer.failed_authentication_total` | The total number of connections with failed authentication | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `failed-reauthentication-rate` | `client-id` | `kafka.consumer.failed_reauthentication_rate` | The number of failed re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `failed-reauthentication-total` | `client-id` | `kafka.consumer.failed_reauthentication_total` | The total number of failed re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `incoming-byte-rate` | `client-id` | | | |
-| `consumer-metrics` | `incoming-byte-total` | `client-id` | | | |
-| `consumer-metrics` | `io-ratio` | `client-id` | `kafka.consumer.io_ratio` | The fraction of time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `io-time-ns-avg` | `client-id` | `kafka.consumer.io_time_ns_avg` | The average length of time for I/O per select call in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `io-wait-ratio` | `client-id` | `kafka.consumer.io_wait_ratio` | The fraction of time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `io-wait-time-ns-avg` | `client-id` | `kafka.consumer.io_wait_time_ns_avg` | The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `io-waittime-total` | `client-id` | `kafka.consumer.io_waittime_total` | The total time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `iotime-total` | `client-id` | `kafka.consumer.iotime_total` | The total time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `last-poll-seconds-ago` | `client-id` | `kafka.consumer.last_poll_seconds_ago` | The number of seconds since the last poll() invocation. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `network-io-rate` | `client-id` | `kafka.consumer.network_io_rate` | The number of network operations (reads or writes) on all connections per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `network-io-total` | `client-id` | `kafka.consumer.network_io_total` | The total number of network operations (reads or writes) on all connections | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `outgoing-byte-rate` | `client-id` | | | |
-| `consumer-metrics` | `outgoing-byte-total` | `client-id` | | | |
-| `consumer-metrics` | `poll-idle-ratio-avg` | `client-id` | `kafka.consumer.poll_idle_ratio_avg` | The average fraction of time the consumer's poll() is idle as opposed to waiting for the user code to process records. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `reauthentication-latency-avg` | `client-id` | `kafka.consumer.reauthentication_latency_avg` | The average latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `reauthentication-latency-max` | `client-id` | `kafka.consumer.reauthentication_latency_max` | The max latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `request-rate` | `client-id` | | | |
-| `consumer-metrics` | `request-size-avg` | `client-id` | | | |
-| `consumer-metrics` | `request-size-max` | `client-id` | | | |
-| `consumer-metrics` | `request-total` | `client-id` | | | |
-| `consumer-metrics` | `response-rate` | `client-id` | | | |
-| `consumer-metrics` | `response-total` | `client-id` | | | |
-| `consumer-metrics` | `select-rate` | `client-id` | `kafka.consumer.select_rate` | The number of times the I/O layer checked for new I/O to perform per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `select-total` | `client-id` | `kafka.consumer.select_total` | The total number of times the I/O layer checked for new I/O to perform | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `successful-authentication-no-reauth-total` | `client-id` | `kafka.consumer.successful_authentication_no_reauth_total` | The total number of connections with successful authentication where the client does not support re-authentication | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `successful-authentication-rate` | `client-id` | `kafka.consumer.successful_authentication_rate` | The number of connections with successful authentication per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `successful-authentication-total` | `client-id` | `kafka.consumer.successful_authentication_total` | The total number of connections with successful authentication | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `successful-reauthentication-rate` | `client-id` | `kafka.consumer.successful_reauthentication_rate` | The number of successful re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `successful-reauthentication-total` | `client-id` | `kafka.consumer.successful_reauthentication_total` | The total number of successful re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-metrics` | `time-between-poll-avg` | `client-id` | `kafka.consumer.time_between_poll_avg` | The average delay between invocations of poll(). | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-metrics` | `time-between-poll-max` | `client-id` | `kafka.consumer.time_between_poll_max` | The max delay between invocations of poll(). | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `incoming-byte-rate` | `client-id`,`node-id` | `kafka.consumer.incoming_byte_rate` | The number of bytes read off all sockets per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `incoming-byte-total` | `client-id`,`node-id` | `kafka.consumer.incoming_byte_total` | The total number of bytes read off all sockets | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-node-metrics` | `outgoing-byte-rate` | `client-id`,`node-id` | `kafka.consumer.outgoing_byte_rate` | The number of outgoing bytes sent to all servers per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `outgoing-byte-total` | `client-id`,`node-id` | `kafka.consumer.outgoing_byte_total` | The total number of outgoing bytes sent to all servers | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-node-metrics` | `request-latency-avg` | `client-id`,`node-id` | `kafka.consumer.request_latency_avg` | | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `request-latency-max` | `client-id`,`node-id` | `kafka.consumer.request_latency_max` | | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `request-rate` | `client-id`,`node-id` | `kafka.consumer.request_rate` | The number of requests sent per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `request-size-avg` | `client-id`,`node-id` | `kafka.consumer.request_size_avg` | The average size of requests sent. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `request-size-max` | `client-id`,`node-id` | `kafka.consumer.request_size_max` | The maximum size of any request sent. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `request-total` | `client-id`,`node-id` | `kafka.consumer.request_total` | The total number of requests sent | `DOUBLE_OBSERVABLE_COUNTER` |
-| `consumer-node-metrics` | `response-rate` | `client-id`,`node-id` | `kafka.consumer.response_rate` | The number of responses received per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `consumer-node-metrics` | `response-total` | `client-id`,`node-id` | `kafka.consumer.response_total` | The total number of responses received | `DOUBLE_OBSERVABLE_COUNTER` |
-| `kafka-metrics-count` | `count` | `client-id` | | | |
-| `producer-metrics` | `batch-size-avg` | `client-id` | `kafka.producer.batch_size_avg` | The average number of bytes sent per partition per-request. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `batch-size-max` | `client-id` | `kafka.producer.batch_size_max` | The max number of bytes sent per partition per-request. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `batch-split-rate` | `client-id` | `kafka.producer.batch_split_rate` | The average number of batch splits per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `batch-split-total` | `client-id` | `kafka.producer.batch_split_total` | The total number of batch splits | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `buffer-available-bytes` | `client-id` | `kafka.producer.buffer_available_bytes` | The total amount of buffer memory that is not being used (either unallocated or in the free list). | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `buffer-exhausted-rate` | `client-id` | `kafka.producer.buffer_exhausted_rate` | The average per-second number of record sends that are dropped due to buffer exhaustion | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `buffer-exhausted-total` | `client-id` | `kafka.producer.buffer_exhausted_total` | The total number of record sends that are dropped due to buffer exhaustion | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `buffer-total-bytes` | `client-id` | `kafka.producer.buffer_total_bytes` | The maximum amount of buffer memory the client can use (whether or not it is currently used). | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `bufferpool-wait-ratio` | `client-id` | `kafka.producer.bufferpool_wait_ratio` | The fraction of time an appender waits for space allocation. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `bufferpool-wait-time-total` | `client-id` | `kafka.producer.bufferpool_wait_time_total` | The total time an appender waits for space allocation. | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `compression-rate-avg` | `client-id` | `kafka.producer.compression_rate_avg` | The average compression rate of record batches. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `connection-close-rate` | `client-id` | `kafka.producer.connection_close_rate` | The number of connections closed per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `connection-close-total` | `client-id` | `kafka.producer.connection_close_total` | The total number of connections closed | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `connection-count` | `client-id` | `kafka.producer.connection_count` | The current number of active connections. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `connection-creation-rate` | `client-id` | `kafka.producer.connection_creation_rate` | The number of new connections established per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `connection-creation-total` | `client-id` | `kafka.producer.connection_creation_total` | The total number of new connections established | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `failed-authentication-rate` | `client-id` | `kafka.producer.failed_authentication_rate` | The number of connections with failed authentication per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `failed-authentication-total` | `client-id` | `kafka.producer.failed_authentication_total` | The total number of connections with failed authentication | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `failed-reauthentication-rate` | `client-id` | `kafka.producer.failed_reauthentication_rate` | The number of failed re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `failed-reauthentication-total` | `client-id` | `kafka.producer.failed_reauthentication_total` | The total number of failed re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `incoming-byte-rate` | `client-id` | | | |
-| `producer-metrics` | `incoming-byte-total` | `client-id` | | | |
-| `producer-metrics` | `io-ratio` | `client-id` | `kafka.producer.io_ratio` | The fraction of time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `io-time-ns-avg` | `client-id` | `kafka.producer.io_time_ns_avg` | The average length of time for I/O per select call in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `io-wait-ratio` | `client-id` | `kafka.producer.io_wait_ratio` | The fraction of time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `io-wait-time-ns-avg` | `client-id` | `kafka.producer.io_wait_time_ns_avg` | The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `io-waittime-total` | `client-id` | `kafka.producer.io_waittime_total` | The total time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `iotime-total` | `client-id` | `kafka.producer.iotime_total` | The total time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `metadata-age` | `client-id` | `kafka.producer.metadata_age` | The age in seconds of the current producer metadata being used. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `network-io-rate` | `client-id` | `kafka.producer.network_io_rate` | The number of network operations (reads or writes) on all connections per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `network-io-total` | `client-id` | `kafka.producer.network_io_total` | The total number of network operations (reads or writes) on all connections | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `outgoing-byte-rate` | `client-id` | | | |
-| `producer-metrics` | `outgoing-byte-total` | `client-id` | | | |
-| `producer-metrics` | `produce-throttle-time-avg` | `client-id` | `kafka.producer.produce_throttle_time_avg` | The average time in ms a request was throttled by a broker | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `produce-throttle-time-max` | `client-id` | `kafka.producer.produce_throttle_time_max` | The maximum time in ms a request was throttled by a broker | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `reauthentication-latency-avg` | `client-id` | `kafka.producer.reauthentication_latency_avg` | The average latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `reauthentication-latency-max` | `client-id` | `kafka.producer.reauthentication_latency_max` | The max latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `record-error-rate` | `client-id` | | | |
-| `producer-metrics` | `record-error-total` | `client-id` | | | |
-| `producer-metrics` | `record-queue-time-avg` | `client-id` | `kafka.producer.record_queue_time_avg` | The average time in ms record batches spent in the send buffer. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `record-queue-time-max` | `client-id` | `kafka.producer.record_queue_time_max` | The maximum time in ms record batches spent in the send buffer. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `record-retry-rate` | `client-id` | | | |
-| `producer-metrics` | `record-retry-total` | `client-id` | | | |
-| `producer-metrics` | `record-send-rate` | `client-id` | | | |
-| `producer-metrics` | `record-send-total` | `client-id` | | | |
-| `producer-metrics` | `record-size-avg` | `client-id` | `kafka.producer.record_size_avg` | The average record size | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `record-size-max` | `client-id` | `kafka.producer.record_size_max` | The maximum record size | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `records-per-request-avg` | `client-id` | `kafka.producer.records_per_request_avg` | The average number of records per request. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `request-latency-avg` | `client-id` | | | |
-| `producer-metrics` | `request-latency-max` | `client-id` | | | |
-| `producer-metrics` | `request-rate` | `client-id` | | | |
-| `producer-metrics` | `request-size-avg` | `client-id` | | | |
-| `producer-metrics` | `request-size-max` | `client-id` | | | |
-| `producer-metrics` | `request-total` | `client-id` | | | |
-| `producer-metrics` | `requests-in-flight` | `client-id` | `kafka.producer.requests_in_flight` | The current number of in-flight requests awaiting a response. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `response-rate` | `client-id` | | | |
-| `producer-metrics` | `response-total` | `client-id` | | | |
-| `producer-metrics` | `select-rate` | `client-id` | `kafka.producer.select_rate` | The number of times the I/O layer checked for new I/O to perform per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `select-total` | `client-id` | `kafka.producer.select_total` | The total number of times the I/O layer checked for new I/O to perform | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `successful-authentication-no-reauth-total` | `client-id` | `kafka.producer.successful_authentication_no_reauth_total` | The total number of connections with successful authentication where the client does not support re-authentication | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `successful-authentication-rate` | `client-id` | `kafka.producer.successful_authentication_rate` | The number of connections with successful authentication per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `successful-authentication-total` | `client-id` | `kafka.producer.successful_authentication_total` | The total number of connections with successful authentication | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `successful-reauthentication-rate` | `client-id` | `kafka.producer.successful_reauthentication_rate` | The number of successful re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-metrics` | `successful-reauthentication-total` | `client-id` | `kafka.producer.successful_reauthentication_total` | The total number of successful re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-metrics` | `waiting-threads` | `client-id` | `kafka.producer.waiting_threads` | The number of user threads blocked waiting for buffer memory to enqueue their records | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `incoming-byte-rate` | `client-id`,`node-id` | `kafka.producer.incoming_byte_rate` | The number of bytes read off all sockets per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `incoming-byte-total` | `client-id`,`node-id` | `kafka.producer.incoming_byte_total` | The total number of bytes read off all sockets | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-node-metrics` | `outgoing-byte-rate` | `client-id`,`node-id` | `kafka.producer.outgoing_byte_rate` | The number of outgoing bytes sent to all servers per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `outgoing-byte-total` | `client-id`,`node-id` | `kafka.producer.outgoing_byte_total` | The total number of outgoing bytes sent to all servers | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-node-metrics` | `request-latency-avg` | `client-id`,`node-id` | `kafka.producer.request_latency_avg` | The average request latency in ms | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `request-latency-max` | `client-id`,`node-id` | `kafka.producer.request_latency_max` | The maximum request latency in ms | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `request-rate` | `client-id`,`node-id` | `kafka.producer.request_rate` | The number of requests sent per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `request-size-avg` | `client-id`,`node-id` | `kafka.producer.request_size_avg` | The average size of requests sent. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `request-size-max` | `client-id`,`node-id` | `kafka.producer.request_size_max` | The maximum size of any request sent. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `request-total` | `client-id`,`node-id` | `kafka.producer.request_total` | The total number of requests sent | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-node-metrics` | `response-rate` | `client-id`,`node-id` | `kafka.producer.response_rate` | The number of responses received per second | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-node-metrics` | `response-total` | `client-id`,`node-id` | `kafka.producer.response_total` | The total number of responses received | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-topic-metrics` | `byte-rate` | `client-id`,`topic` | `kafka.producer.byte_rate` | The average number of bytes sent per second for a topic. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-topic-metrics` | `byte-total` | `client-id`,`topic` | `kafka.producer.byte_total` | The total number of bytes sent for a topic. | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-topic-metrics` | `compression-rate` | `client-id`,`topic` | `kafka.producer.compression_rate` | The average compression rate of record batches for a topic. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-topic-metrics` | `record-error-rate` | `client-id`,`topic` | `kafka.producer.record_error_rate` | The average per-second number of record sends that resulted in errors | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-topic-metrics` | `record-error-total` | `client-id`,`topic` | `kafka.producer.record_error_total` | The total number of record sends that resulted in errors | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-topic-metrics` | `record-retry-rate` | `client-id`,`topic` | `kafka.producer.record_retry_rate` | The average per-second number of retried record sends | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-topic-metrics` | `record-retry-total` | `client-id`,`topic` | `kafka.producer.record_retry_total` | The total number of retried record sends | `DOUBLE_OBSERVABLE_COUNTER` |
-| `producer-topic-metrics` | `record-send-rate` | `client-id`,`topic` | `kafka.producer.record_send_rate` | The average number of records sent per second. | `DOUBLE_OBSERVABLE_GAUGE` |
-| `producer-topic-metrics` | `record-send-total` | `client-id`,`topic` | `kafka.producer.record_send_total` | The total number of records sent. | `DOUBLE_OBSERVABLE_COUNTER` |
+| Metric Group | Metric Name | Attribute Keys | Instrument Name | Instrument Description | Instrument Type |
+| -------------------------------- | ------------------------------------------- | ------------------------------- | ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- |
+| `app-info` | `commit-id` | `client-id` | | | |
+| `app-info` | `start-time-ms` | `client-id` | | | |
+| `app-info` | `version` | `client-id` | | | |
+| `consumer-coordinator-metrics` | `assigned-partitions` | `client-id` | `kafka.consumer.assigned_partitions` | The number of partitions currently assigned to this consumer | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `commit-latency-avg` | `client-id` | `kafka.consumer.commit_latency_avg` | The average time taken for a commit request | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `commit-latency-max` | `client-id` | `kafka.consumer.commit_latency_max` | The max time taken for a commit request | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `commit-rate` | `client-id` | `kafka.consumer.commit_rate` | The number of commit calls per second | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `commit-total` | `client-id` | `kafka.consumer.commit_total` | The total number of commit calls | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-coordinator-metrics` | `failed-rebalance-rate-per-hour` | `client-id` | `kafka.consumer.failed_rebalance_rate_per_hour` | The number of failed rebalance events per hour | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `failed-rebalance-total` | `client-id` | `kafka.consumer.failed_rebalance_total` | The total number of failed rebalance events | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-coordinator-metrics` | `heartbeat-rate` | `client-id` | `kafka.consumer.heartbeat_rate` | The number of heartbeats per second | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `heartbeat-response-time-max` | `client-id` | `kafka.consumer.heartbeat_response_time_max` | The max time taken to receive a response to a heartbeat request | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `heartbeat-total` | `client-id` | `kafka.consumer.heartbeat_total` | The total number of heartbeats | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-coordinator-metrics` | `join-rate` | `client-id` | `kafka.consumer.join_rate` | The number of group joins per second | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `join-time-avg` | `client-id` | `kafka.consumer.join_time_avg` | The average time taken for a group rejoin | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `join-time-max` | `client-id` | `kafka.consumer.join_time_max` | The max time taken for a group rejoin | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `join-total` | `client-id` | `kafka.consumer.join_total` | The total number of group joins | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-coordinator-metrics` | `last-heartbeat-seconds-ago` | `client-id` | `kafka.consumer.last_heartbeat_seconds_ago` | The number of seconds since the last coordinator heartbeat was sent | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `last-rebalance-seconds-ago` | `client-id` | `kafka.consumer.last_rebalance_seconds_ago` | The number of seconds since the last successful rebalance event | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `partition-assigned-latency-avg` | `client-id` | `kafka.consumer.partition_assigned_latency_avg` | The average time taken for a partition-assigned rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `partition-assigned-latency-max` | `client-id` | `kafka.consumer.partition_assigned_latency_max` | The max time taken for a partition-assigned rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `partition-lost-latency-avg` | `client-id` | `kafka.consumer.partition_lost_latency_avg` | The average time taken for a partition-lost rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `partition-lost-latency-max` | `client-id` | `kafka.consumer.partition_lost_latency_max` | The max time taken for a partition-lost rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `partition-revoked-latency-avg` | `client-id` | `kafka.consumer.partition_revoked_latency_avg` | The average time taken for a partition-revoked rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `partition-revoked-latency-max` | `client-id` | `kafka.consumer.partition_revoked_latency_max` | The max time taken for a partition-revoked rebalance listener callback | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `rebalance-latency-avg` | `client-id` | `kafka.consumer.rebalance_latency_avg` | The average time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `rebalance-latency-max` | `client-id` | `kafka.consumer.rebalance_latency_max` | The max time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `rebalance-latency-total` | `client-id` | `kafka.consumer.rebalance_latency_total` | The total number of milliseconds this consumer has spent in successful rebalances since creation | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-coordinator-metrics` | `rebalance-rate-per-hour` | `client-id` | `kafka.consumer.rebalance_rate_per_hour` | The number of successful rebalance events per hour, each event is composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `rebalance-total` | `client-id` | `kafka.consumer.rebalance_total` | The total number of successful rebalance events, each event is composed of several failed re-trials until it succeeded | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-coordinator-metrics` | `sync-rate` | `client-id` | `kafka.consumer.sync_rate` | The number of group syncs per second | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `sync-time-avg` | `client-id` | `kafka.consumer.sync_time_avg` | The average time taken for a group sync | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `sync-time-max` | `client-id` | `kafka.consumer.sync_time_max` | The max time taken for a group sync | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-coordinator-metrics` | `sync-total` | `client-id` | `kafka.consumer.sync_total` | The total number of group syncs | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-fetch-manager-metrics` | `bytes-consumed-rate` | `client-id` | | | |
+| `consumer-fetch-manager-metrics` | `bytes-consumed-rate` | `client-id`,`topic` | `kafka.consumer.bytes_consumed_rate` | The average number of bytes consumed per second | `DOUBLE_OBSERVABLE_GAUGE` |
+| `consumer-fetch-manager-metrics` | `bytes-consumed-total` | `client-id` | | | |
+| `consumer-fetch-manager-metrics` | `bytes-consumed-total` | `client-id`,`topic` | `kafka.consumer.bytes_consumed_total` | The total number of bytes consumed | `DOUBLE_OBSERVABLE_COUNTER` |
+| `consumer-fetch-manager-metrics` | `fetch-latency-avg` | `client-id` | `kafka.consumer.fetch_latency_avg` | The average time taken for a fetch request.
| `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-latency-max` | `client-id` | `kafka.consumer.fetch_latency_max` | The max time taken for any fetch request. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-rate` | `client-id` | `kafka.consumer.fetch_rate` | The number of fetch requests per second. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-size-avg` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `fetch-size-avg` | `client-id`,`topic` | `kafka.consumer.fetch_size_avg` | The average number of bytes fetched per request | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-size-max` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `fetch-size-max` | `client-id`,`topic` | `kafka.consumer.fetch_size_max` | The maximum number of bytes fetched per request | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-throttle-time-avg` | `client-id` | `kafka.consumer.fetch_throttle_time_avg` | The average throttle time in ms | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-throttle-time-max` | `client-id` | `kafka.consumer.fetch_throttle_time_max` | The maximum throttle time in ms | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `fetch-total` | `client-id` | `kafka.consumer.fetch_total` | The total number of fetch requests. 
| `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-fetch-manager-metrics` | `preferred-read-replica` | `client-id`,`topic`,`partition` | | | | +| `consumer-fetch-manager-metrics` | `records-consumed-rate` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `records-consumed-rate` | `client-id`,`topic` | `kafka.consumer.records_consumed_rate` | The average number of records consumed per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-consumed-total` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `records-consumed-total` | `client-id`,`topic` | `kafka.consumer.records_consumed_total` | The total number of records consumed | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-fetch-manager-metrics` | `records-lag` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lag` | The latest lag of the partition | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-lag-avg` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lag_avg` | The average lag of the partition | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-lag-max` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `records-lag-max` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lag_max` | The maximum lag in terms of number of records for any partition in this window | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-lead` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lead` | The latest lead of the partition | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-lead-avg` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lead_avg` | The average lead of the partition | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-lead-min` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `records-lead-min` | `client-id`,`topic`,`partition` | `kafka.consumer.records_lead_min` | The minimum lead 
in terms of number of records for any partition in this window | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-fetch-manager-metrics` | `records-per-request-avg` | `client-id` | | | | +| `consumer-fetch-manager-metrics` | `records-per-request-avg` | `client-id`,`topic` | `kafka.consumer.records_per_request_avg` | The average number of records in each request | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `connection-close-rate` | `client-id` | `kafka.consumer.connection_close_rate` | The number of connections closed per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `connection-close-total` | `client-id` | `kafka.consumer.connection_close_total` | The total number of connections closed | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `connection-count` | `client-id` | `kafka.consumer.connection_count` | The current number of active connections. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `connection-creation-rate` | `client-id` | `kafka.consumer.connection_creation_rate` | The number of new connections established per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `connection-creation-total` | `client-id` | `kafka.consumer.connection_creation_total` | The total number of new connections established | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `failed-authentication-rate` | `client-id` | `kafka.consumer.failed_authentication_rate` | The number of connections with failed authentication per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `failed-authentication-total` | `client-id` | `kafka.consumer.failed_authentication_total` | The total number of connections with failed authentication | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `failed-reauthentication-rate` | `client-id` | `kafka.consumer.failed_reauthentication_rate` | The number of failed re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `failed-reauthentication-total` | `client-id` | 
`kafka.consumer.failed_reauthentication_total` | The total number of failed re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `incoming-byte-rate` | `client-id` | | | | +| `consumer-metrics` | `incoming-byte-total` | `client-id` | | | | +| `consumer-metrics` | `io-ratio` | `client-id` | `kafka.consumer.io_ratio` | The fraction of time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `io-time-ns-avg` | `client-id` | `kafka.consumer.io_time_ns_avg` | The average length of time for I/O per select call in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `io-wait-ratio` | `client-id` | `kafka.consumer.io_wait_ratio` | The fraction of time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `io-wait-time-ns-avg` | `client-id` | `kafka.consumer.io_wait_time_ns_avg` | The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `io-waittime-total` | `client-id` | `kafka.consumer.io_waittime_total` | The total time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `iotime-total` | `client-id` | `kafka.consumer.iotime_total` | The total time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `last-poll-seconds-ago` | `client-id` | `kafka.consumer.last_poll_seconds_ago` | The number of seconds since the last poll() invocation. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `network-io-rate` | `client-id` | `kafka.consumer.network_io_rate` | The number of network operations (reads or writes) on all connections per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `network-io-total` | `client-id` | `kafka.consumer.network_io_total` | The total number of network operations (reads or writes) on all connections | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `outgoing-byte-rate` | `client-id` | | | | +| `consumer-metrics` | `outgoing-byte-total` | `client-id` | | | | +| `consumer-metrics` | `poll-idle-ratio-avg` | `client-id` | `kafka.consumer.poll_idle_ratio_avg` | The average fraction of time the consumer's poll() is idle as opposed to waiting for the user code to process records. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `reauthentication-latency-avg` | `client-id` | `kafka.consumer.reauthentication_latency_avg` | The average latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `reauthentication-latency-max` | `client-id` | `kafka.consumer.reauthentication_latency_max` | The max latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `request-rate` | `client-id` | | | | +| `consumer-metrics` | `request-size-avg` | `client-id` | | | | +| `consumer-metrics` | `request-size-max` | `client-id` | | | | +| `consumer-metrics` | `request-total` | `client-id` | | | | +| `consumer-metrics` | `response-rate` | `client-id` | | | | +| `consumer-metrics` | `response-total` | `client-id` | | | | +| `consumer-metrics` | `select-rate` | `client-id` | `kafka.consumer.select_rate` | The number of times the I/O layer checked for new I/O to perform per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `select-total` | `client-id` | `kafka.consumer.select_total` | The total number of times the I/O layer checked for new I/O to perform | `DOUBLE_OBSERVABLE_COUNTER` | +| 
`consumer-metrics` | `successful-authentication-no-reauth-total` | `client-id` | `kafka.consumer.successful_authentication_no_reauth_total` | The total number of connections with successful authentication where the client does not support re-authentication | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `successful-authentication-rate` | `client-id` | `kafka.consumer.successful_authentication_rate` | The number of connections with successful authentication per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `successful-authentication-total` | `client-id` | `kafka.consumer.successful_authentication_total` | The total number of connections with successful authentication | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `successful-reauthentication-rate` | `client-id` | `kafka.consumer.successful_reauthentication_rate` | The number of successful re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `successful-reauthentication-total` | `client-id` | `kafka.consumer.successful_reauthentication_total` | The total number of successful re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-metrics` | `time-between-poll-avg` | `client-id` | `kafka.consumer.time_between_poll_avg` | The average delay between invocations of poll(). | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-metrics` | `time-between-poll-max` | `client-id` | `kafka.consumer.time_between_poll_max` | The max delay between invocations of poll(). 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `incoming-byte-rate` | `client-id`,`node-id` | `kafka.consumer.incoming_byte_rate` | The number of bytes read off all sockets per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `incoming-byte-total` | `client-id`,`node-id` | `kafka.consumer.incoming_byte_total` | The total number of bytes read off all sockets | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-node-metrics` | `outgoing-byte-rate` | `client-id`,`node-id` | `kafka.consumer.outgoing_byte_rate` | The number of outgoing bytes sent to all servers per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `outgoing-byte-total` | `client-id`,`node-id` | `kafka.consumer.outgoing_byte_total` | The total number of outgoing bytes sent to all servers | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-node-metrics` | `request-latency-avg` | `client-id`,`node-id` | `kafka.consumer.request_latency_avg` | | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `request-latency-max` | `client-id`,`node-id` | `kafka.consumer.request_latency_max` | | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `request-rate` | `client-id`,`node-id` | `kafka.consumer.request_rate` | The number of requests sent per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `request-size-avg` | `client-id`,`node-id` | `kafka.consumer.request_size_avg` | The average size of requests sent. | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `request-size-max` | `client-id`,`node-id` | `kafka.consumer.request_size_max` | The maximum size of any request sent. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `request-total` | `client-id`,`node-id` | `kafka.consumer.request_total` | The total number of requests sent | `DOUBLE_OBSERVABLE_COUNTER` | +| `consumer-node-metrics` | `response-rate` | `client-id`,`node-id` | `kafka.consumer.response_rate` | The number of responses received per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `consumer-node-metrics` | `response-total` | `client-id`,`node-id` | `kafka.consumer.response_total` | The total number of responses received | `DOUBLE_OBSERVABLE_COUNTER` | +| `kafka-metrics-count` | `count` | `client-id` | | | | +| `producer-metrics` | `batch-size-avg` | `client-id` | `kafka.producer.batch_size_avg` | The average number of bytes sent per partition per-request. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `batch-size-max` | `client-id` | `kafka.producer.batch_size_max` | The max number of bytes sent per partition per-request. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `batch-split-rate` | `client-id` | `kafka.producer.batch_split_rate` | The average number of batch splits per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `batch-split-total` | `client-id` | `kafka.producer.batch_split_total` | The total number of batch splits | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `buffer-available-bytes` | `client-id` | `kafka.producer.buffer_available_bytes` | The total amount of buffer memory that is not being used (either unallocated or in the free list). 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `buffer-exhausted-rate` | `client-id` | `kafka.producer.buffer_exhausted_rate` | The average per-second number of record sends that are dropped due to buffer exhaustion | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `buffer-exhausted-total` | `client-id` | `kafka.producer.buffer_exhausted_total` | The total number of record sends that are dropped due to buffer exhaustion | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `buffer-total-bytes` | `client-id` | `kafka.producer.buffer_total_bytes` | The maximum amount of buffer memory the client can use (whether or not it is currently used). | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `bufferpool-wait-ratio` | `client-id` | `kafka.producer.bufferpool_wait_ratio` | The fraction of time an appender waits for space allocation. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `bufferpool-wait-time-total` | `client-id` | `kafka.producer.bufferpool_wait_time_total` | The total time an appender waits for space allocation. | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `compression-rate-avg` | `client-id` | `kafka.producer.compression_rate_avg` | The average compression rate of record batches. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `connection-close-rate` | `client-id` | `kafka.producer.connection_close_rate` | The number of connections closed per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `connection-close-total` | `client-id` | `kafka.producer.connection_close_total` | The total number of connections closed | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `connection-count` | `client-id` | `kafka.producer.connection_count` | The current number of active connections. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `connection-creation-rate` | `client-id` | `kafka.producer.connection_creation_rate` | The number of new connections established per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `connection-creation-total` | `client-id` | `kafka.producer.connection_creation_total` | The total number of new connections established | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `failed-authentication-rate` | `client-id` | `kafka.producer.failed_authentication_rate` | The number of connections with failed authentication per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `failed-authentication-total` | `client-id` | `kafka.producer.failed_authentication_total` | The total number of connections with failed authentication | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `failed-reauthentication-rate` | `client-id` | `kafka.producer.failed_reauthentication_rate` | The number of failed re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `failed-reauthentication-total` | `client-id` | `kafka.producer.failed_reauthentication_total` | The total number of failed re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `incoming-byte-rate` | `client-id` | | | | +| `producer-metrics` | `incoming-byte-total` | `client-id` | | | | +| `producer-metrics` | `io-ratio` | `client-id` | `kafka.producer.io_ratio` | The fraction of time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `io-time-ns-avg` | `client-id` | `kafka.producer.io_time_ns_avg` | The average length of time for I/O per select call in nanoseconds. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `io-wait-ratio` | `client-id` | `kafka.producer.io_wait_ratio` | The fraction of time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `io-wait-time-ns-avg` | `client-id` | `kafka.producer.io_wait_time_ns_avg` | The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `io-waittime-total` | `client-id` | `kafka.producer.io_waittime_total` | The total time the I/O thread spent waiting | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `iotime-total` | `client-id` | `kafka.producer.iotime_total` | The total time the I/O thread spent doing I/O | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `metadata-age` | `client-id` | `kafka.producer.metadata_age` | The age in seconds of the current producer metadata being used. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `network-io-rate` | `client-id` | `kafka.producer.network_io_rate` | The number of network operations (reads or writes) on all connections per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `network-io-total` | `client-id` | `kafka.producer.network_io_total` | The total number of network operations (reads or writes) on all connections | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `outgoing-byte-rate` | `client-id` | | | | +| `producer-metrics` | `outgoing-byte-total` | `client-id` | | | | +| `producer-metrics` | `produce-throttle-time-avg` | `client-id` | `kafka.producer.produce_throttle_time_avg` | The average time in ms a request was throttled by a broker | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `produce-throttle-time-max` | `client-id` | `kafka.producer.produce_throttle_time_max` | The maximum time in ms a request was throttled by a broker | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `reauthentication-latency-avg` | `client-id` | 
`kafka.producer.reauthentication_latency_avg` | The average latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `reauthentication-latency-max` | `client-id` | `kafka.producer.reauthentication_latency_max` | The max latency observed due to re-authentication | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `record-error-rate` | `client-id` | | | | +| `producer-metrics` | `record-error-total` | `client-id` | | | | +| `producer-metrics` | `record-queue-time-avg` | `client-id` | `kafka.producer.record_queue_time_avg` | The average time in ms record batches spent in the send buffer. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `record-queue-time-max` | `client-id` | `kafka.producer.record_queue_time_max` | The maximum time in ms record batches spent in the send buffer. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `record-retry-rate` | `client-id` | | | | +| `producer-metrics` | `record-retry-total` | `client-id` | | | | +| `producer-metrics` | `record-send-rate` | `client-id` | | | | +| `producer-metrics` | `record-send-total` | `client-id` | | | | +| `producer-metrics` | `record-size-avg` | `client-id` | `kafka.producer.record_size_avg` | The average record size | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `record-size-max` | `client-id` | `kafka.producer.record_size_max` | The maximum record size | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `records-per-request-avg` | `client-id` | `kafka.producer.records_per_request_avg` | The average number of records per request. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `request-latency-avg` | `client-id` | | | | +| `producer-metrics` | `request-latency-max` | `client-id` | | | | +| `producer-metrics` | `request-rate` | `client-id` | | | | +| `producer-metrics` | `request-size-avg` | `client-id` | | | | +| `producer-metrics` | `request-size-max` | `client-id` | | | | +| `producer-metrics` | `request-total` | `client-id` | | | | +| `producer-metrics` | `requests-in-flight` | `client-id` | `kafka.producer.requests_in_flight` | The current number of in-flight requests awaiting a response. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `response-rate` | `client-id` | | | | +| `producer-metrics` | `response-total` | `client-id` | | | | +| `producer-metrics` | `select-rate` | `client-id` | `kafka.producer.select_rate` | The number of times the I/O layer checked for new I/O to perform per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `select-total` | `client-id` | `kafka.producer.select_total` | The total number of times the I/O layer checked for new I/O to perform | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `successful-authentication-no-reauth-total` | `client-id` | `kafka.producer.successful_authentication_no_reauth_total` | The total number of connections with successful authentication where the client does not support re-authentication | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `successful-authentication-rate` | `client-id` | `kafka.producer.successful_authentication_rate` | The number of connections with successful authentication per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `successful-authentication-total` | `client-id` | `kafka.producer.successful_authentication_total` | The total number of connections with successful authentication | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `successful-reauthentication-rate` | `client-id` | `kafka.producer.successful_reauthentication_rate` | The number of 
successful re-authentication of connections per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-metrics` | `successful-reauthentication-total` | `client-id` | `kafka.producer.successful_reauthentication_total` | The total number of successful re-authentication of connections | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-metrics` | `waiting-threads` | `client-id` | `kafka.producer.waiting_threads` | The number of user threads blocked waiting for buffer memory to enqueue their records | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `incoming-byte-rate` | `client-id`,`node-id` | `kafka.producer.incoming_byte_rate` | The number of bytes read off all sockets per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `incoming-byte-total` | `client-id`,`node-id` | `kafka.producer.incoming_byte_total` | The total number of bytes read off all sockets | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-node-metrics` | `outgoing-byte-rate` | `client-id`,`node-id` | `kafka.producer.outgoing_byte_rate` | The number of outgoing bytes sent to all servers per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `outgoing-byte-total` | `client-id`,`node-id` | `kafka.producer.outgoing_byte_total` | The total number of outgoing bytes sent to all servers | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-node-metrics` | `request-latency-avg` | `client-id`,`node-id` | `kafka.producer.request_latency_avg` | The average request latency in ms | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `request-latency-max` | `client-id`,`node-id` | `kafka.producer.request_latency_max` | The maximum request latency in ms | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `request-rate` | `client-id`,`node-id` | `kafka.producer.request_rate` | The number of requests sent per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `request-size-avg` | `client-id`,`node-id` | `kafka.producer.request_size_avg` | The average size of requests sent. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `request-size-max` | `client-id`,`node-id` | `kafka.producer.request_size_max` | The maximum size of any request sent. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `request-total` | `client-id`,`node-id` | `kafka.producer.request_total` | The total number of requests sent | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-node-metrics` | `response-rate` | `client-id`,`node-id` | `kafka.producer.response_rate` | The number of responses received per second | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-node-metrics` | `response-total` | `client-id`,`node-id` | `kafka.producer.response_total` | The total number of responses received | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-topic-metrics` | `byte-rate` | `client-id`,`topic` | `kafka.producer.byte_rate` | The average number of bytes sent per second for a topic. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-topic-metrics` | `byte-total` | `client-id`,`topic` | `kafka.producer.byte_total` | The total number of bytes sent for a topic. | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-topic-metrics` | `compression-rate` | `client-id`,`topic` | `kafka.producer.compression_rate` | The average compression rate of record batches for a topic. 
| `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-topic-metrics` | `record-error-rate` | `client-id`,`topic` | `kafka.producer.record_error_rate` | The average per-second number of record sends that resulted in errors | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-topic-metrics` | `record-error-total` | `client-id`,`topic` | `kafka.producer.record_error_total` | The total number of record sends that resulted in errors | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-topic-metrics` | `record-retry-rate` | `client-id`,`topic` | `kafka.producer.record_retry_rate` | The average per-second number of retried record sends | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-topic-metrics` | `record-retry-total` | `client-id`,`topic` | `kafka.producer.record_retry_total` | The total number of retried record sends | `DOUBLE_OBSERVABLE_COUNTER` | +| `producer-topic-metrics` | `record-send-rate` | `client-id`,`topic` | `kafka.producer.record_send_rate` | The average number of records sent per second. | `DOUBLE_OBSERVABLE_GAUGE` | +| `producer-topic-metrics` | `record-send-total` | `client-id`,`topic` | `kafka.producer.record_send_total` | The total number of records sent. | `DOUBLE_OBSERVABLE_COUNTER` | diff --git a/instrumentation/kubernetes-client-7.0/README.md b/instrumentation/kubernetes-client-7.0/README.md index 90b1a67139..ce23687523 100644 --- a/instrumentation/kubernetes-client-7.0/README.md +++ b/instrumentation/kubernetes-client-7.0/README.md @@ -1,5 +1,5 @@ # Settings for the Kubernetes client instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| --------------------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.kubernetes-client.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. 
| diff --git a/instrumentation/lettuce/README.md b/instrumentation/lettuce/README.md index d2c67a3941..55b3eaca49 100644 --- a/instrumentation/lettuce/README.md +++ b/instrumentation/lettuce/README.md @@ -1,5 +1,5 @@ # Settings for the Lettuce instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ----------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.lettuce.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/log4j/log4j-appender-2.17/javaagent/README.md b/instrumentation/log4j/log4j-appender-2.17/javaagent/README.md index ad1fde9bb3..b4b1938f04 100644 --- a/instrumentation/log4j/log4j-appender-2.17/javaagent/README.md +++ b/instrumentation/log4j/log4j-appender-2.17/javaagent/README.md @@ -1,10 +1,10 @@ # Settings for the Log4j Appender instrumentation -| System property | Type | Default | Description | -|---|---------|--|------------------------------------------------------| -| `otel.instrumentation.log4j-appender.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental span attributes `thread.name` and `thread.id`. | -| `otel.instrumentation.log4j-appender.experimental.capture-map-message-attributes` | Boolean | `false` | Enable the capture of `MapMessage` attributes. | -| `otel.instrumentation.log4j-appender.experimental.capture-marker-attribute` | Boolean | `false` | Enable the capture of Log4j markers as attributes. | -| `otel.instrumentation.log4j-appender.experimental.capture-context-data-attributes` | String | | List of context data attributes to capture. Use the wildcard character `*` to capture all attributes. 
|
+| System property | Type | Default | Description |
+| ---------------------------------------------------------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------- |
+| `otel.instrumentation.log4j-appender.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental log attributes `thread.name` and `thread.id`. |
+| `otel.instrumentation.log4j-appender.experimental.capture-map-message-attributes` | Boolean | `false` | Enable the capture of `MapMessage` attributes. |
+| `otel.instrumentation.log4j-appender.experimental.capture-marker-attribute` | Boolean | `false` | Enable the capture of Log4j markers as attributes. |
+| `otel.instrumentation.log4j-appender.experimental.capture-context-data-attributes` | String | | List of context data attributes to capture. Use the wildcard character `*` to capture all attributes. |
[source code attributes]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/span-general.md#source-code-attributes
diff --git a/instrumentation/logback/logback-appender-1.0/javaagent/README.md b/instrumentation/logback/logback-appender-1.0/javaagent/README.md
index 5216777963..1aedcb5f56 100644
--- a/instrumentation/logback/logback-appender-1.0/javaagent/README.md
+++ b/instrumentation/logback/logback-appender-1.0/javaagent/README.md
@@ -1,11 +1,11 @@
# Settings for the Logback Appender instrumentation
-| System property | Type | Default | Description |
-|---|---------|--|------------------------------------------------------|
-| `otel.instrumentation.logback-appender.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental span attributes `thread.name` and `thread.id`. |
-| `otel.instrumentation.logback-appender.experimental.capture-code-attributes` | Boolean | `false` | Enable the capture of [source code attributes]. 
Note that capturing source code attributes at logging sites might add a performance overhead. |
-| `otel.instrumentation.logback-appender.experimental.capture-marker-attribute` | Boolean | `false` | Enable the capture of Logback markers as attributes. |
-| `otel.instrumentation.logback-appender.experimental.capture-key-value-pair-attributes` | Boolean | `false` | Enable the capture of Logback key value pairs as attributes. |
-| `otel.instrumentation.logback-appender.experimental.capture-mdc-attributes` | String | | List of MDC attributes to capture. Use the wildcard character `*` to capture all attributes. |
+| System property | Type | Default | Description |
+| -------------------------------------------------------------------------------------- | ------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| `otel.instrumentation.logback-appender.experimental-log-attributes` | Boolean | `false` | Enable the capture of experimental log attributes `thread.name` and `thread.id`. |
+| `otel.instrumentation.logback-appender.experimental.capture-code-attributes` | Boolean | `false` | Enable the capture of [source code attributes]. Note that capturing source code attributes at logging sites might add a performance overhead. |
+| `otel.instrumentation.logback-appender.experimental.capture-marker-attribute` | Boolean | `false` | Enable the capture of Logback markers as attributes. |
+| `otel.instrumentation.logback-appender.experimental.capture-key-value-pair-attributes` | Boolean | `false` | Enable the capture of Logback key value pairs as attributes. |
+| `otel.instrumentation.logback-appender.experimental.capture-mdc-attributes` | String | | List of MDC attributes to capture. Use the wildcard character `*` to capture all attributes. 
| [source code attributes]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/span-general.md#source-code-attributes diff --git a/instrumentation/logback/logback-mdc-1.0/javaagent/README.md b/instrumentation/logback/logback-mdc-1.0/javaagent/README.md index ab349cf6a2..cc0970e509 100644 --- a/instrumentation/logback/logback-mdc-1.0/javaagent/README.md +++ b/instrumentation/logback/logback-mdc-1.0/javaagent/README.md @@ -1,5 +1,5 @@ # Settings for the Logback MDC instrumentation -| System property | Type | Default | Description | -|---|---|---|---| -| `otel.instrumentation.logback-mdc.add-baggage` | Boolean | `false` | Enable exposing baggage attributes through MDC. | +| System property | Type | Default | Description | +| ---------------------------------------------- | ------- | ------- | ----------------------------------------------- | +| `otel.instrumentation.logback-mdc.add-baggage` | Boolean | `false` | Enable exposing baggage attributes through MDC. | diff --git a/instrumentation/logback/logback-mdc-1.0/library/README.md b/instrumentation/logback/logback-mdc-1.0/library/README.md index cd71b6beb5..23d8fc3ff3 100644 --- a/instrumentation/logback/logback-mdc-1.0/library/README.md +++ b/instrumentation/logback/logback-mdc-1.0/library/README.md @@ -57,7 +57,7 @@ The following demonstrates how you might configure the appender in your `logback ``` > It's important to note you can also use other encoders in the `ConsoleAppender` like [logstash-logback-encoder](https://github.com/logfellow/logstash-logback-encoder). - This can be helpful when the `Span` is invalid and the `trace_id`, `span_id`, and `trace_flags` are all `null` and are hidden entirely from the logs. +> This can be helpful when the `Span` is invalid and the `trace_id`, `span_id`, and `trace_flags` are all `null` and are hidden entirely from the logs. Logging events will automatically have context information from the span context injected. 
The following attributes are available for use: diff --git a/instrumentation/methods/README.md b/instrumentation/methods/README.md index 217691c322..79f3221a5e 100644 --- a/instrumentation/methods/README.md +++ b/instrumentation/methods/README.md @@ -1,7 +1,7 @@ # Settings for the methods instrumentation -| System property | Type | Default | Description | -|----------------- |------ |--------- |------------- | -| `otel.instrumentation.methods.include` | String| None | List of methods to include for tracing. For more information, see [Creating spans around methods with `otel.instrumentation.methods.include`][cs]. +| System property | Type | Default | Description | +| -------------------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | +| `otel.instrumentation.methods.include` | String | None | List of methods to include for tracing. For more information, see [Creating spans around methods with `otel.instrumentation.methods.include`][cs]. 
| [cs]: https://opentelemetry.io/docs/instrumentation/java/annotations/#creating-spans-around-methods-with-otelinstrumentationmethodsinclude diff --git a/instrumentation/micrometer/micrometer-1.5/README.md b/instrumentation/micrometer/micrometer-1.5/README.md index 8978c6ffed..865b96339d 100644 --- a/instrumentation/micrometer/micrometer-1.5/README.md +++ b/instrumentation/micrometer/micrometer-1.5/README.md @@ -1,7 +1,7 @@ # Settings for the Micrometer bridge instrumentation | System property | Type | Default | Description | -|------------------------------------------------------------|---------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| ---------------------------------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `otel.instrumentation.micrometer.base-time-unit` | String | `s` | Set the base time unit for the OpenTelemetry `MeterRegistry` implementation.
Valid values: `ns`, `nanoseconds`, `us`, `microseconds`, `ms`, `milliseconds`, `s`, `seconds`, `min`, `minutes`, `h`, `hours`, `d`, `days`
|
| `otel.instrumentation.micrometer.prometheus-mode.enabled` | boolean | false | Enable the "Prometheus mode": this will simulate the behavior of Micrometer's `PrometheusMeterRegistry`. The instruments will be renamed to match Micrometer instrument naming, and the base time unit will be set to seconds. |
| `otel.instrumentation.micrometer.histogram-gauges.enabled` | boolean | false | Enables the generation of gauge-based Micrometer histograms for `DistributionSummary` and `Timer` instruments. |
diff --git a/instrumentation/netty/README.md b/instrumentation/netty/README.md
index d21cd145e6..0ca7e54154 100644
--- a/instrumentation/netty/README.md
+++ b/instrumentation/netty/README.md
@@ -1,6 +1,6 @@
# Settings for the Netty instrumentation
| System property | Type | Default | Description |
-|-----------------------------------------------------------|---------|---------|---------------------------------------------------------------------------------------------------|
+| --------------------------------------------------------- | ------- | ------- | ------------------------------------------------------------------------------------------------- |
| `otel.instrumentation.netty.connection-telemetry.enabled` | Boolean | `false` | Enable the creation of Connect and DNS spans by default for Netty 4.0 and higher instrumentation. |
| `otel.instrumentation.netty.ssl-telemetry.enabled` | Boolean | `false` | Enable SSL telemetry for Netty 4.0 and higher instrumentation. 
| diff --git a/instrumentation/opensearch/README.md b/instrumentation/opensearch/README.md index 74c57e2180..1cab980dff 100644 --- a/instrumentation/opensearch/README.md +++ b/instrumentation/opensearch/README.md @@ -1,5 +1,5 @@ # Settings for the OpenSearch instrumentation | System property | Type | Default | Description | -|----------------------------------------------------------------|-----------|---------|-----------------------------------------------------| +| -------------------------------------------------------------- | --------- | ------- | --------------------------------------------------- | | `otel.instrumentation.opensearch.experimental-span-attributes` | `Boolean` | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/opentelemetry-extension-annotations-1.0/README.md b/instrumentation/opentelemetry-extension-annotations-1.0/README.md index 6c4aae743f..f36aec9fc6 100644 --- a/instrumentation/opentelemetry-extension-annotations-1.0/README.md +++ b/instrumentation/opentelemetry-extension-annotations-1.0/README.md @@ -1,5 +1,5 @@ # Settings for the OpenTelemetry Extension Annotations integration -| Environment variable | Type | Default | Description | -|----------------- |------ |--------- |------------- | -| `otel.instrumentation.opentelemetry-annotations.exclude-methods` | String | | All methods to be excluded from auto-instrumentation by annotation-based advices. | +| Environment variable | Type | Default | Description | +| ---------------------------------------------------------------- | ------ | ------- | --------------------------------------------------------------------------------- | +| `otel.instrumentation.opentelemetry-annotations.exclude-methods` | String | | All methods to be excluded from auto-instrumentation by annotation-based advices. 
| diff --git a/instrumentation/opentelemetry-instrumentation-annotations-1.16/README.md b/instrumentation/opentelemetry-instrumentation-annotations-1.16/README.md index ca1f2498c9..33bec5e257 100644 --- a/instrumentation/opentelemetry-instrumentation-annotations-1.16/README.md +++ b/instrumentation/opentelemetry-instrumentation-annotations-1.16/README.md @@ -1,5 +1,5 @@ # Settings for the OpenTelemetry Instrumentation Annotations integration -| Environment variable | Type | Default | Description | -|----------------- |------ |--------- |------------- | -| `otel.instrumentation.opentelemetry-instrumentation-annotations.exclude-methods` | String | | All methods to be excluded from auto-instrumentation by annotation-based advices. | +| Environment variable | Type | Default | Description | +| -------------------------------------------------------------------------------- | ------ | ------- | --------------------------------------------------------------------------------- | +| `otel.instrumentation.opentelemetry-instrumentation-annotations.exclude-methods` | String | | All methods to be excluded from auto-instrumentation by annotation-based advices. | diff --git a/instrumentation/pulsar/pulsar-2.8/README.md b/instrumentation/pulsar/pulsar-2.8/README.md index 50278457c8..ee9fb2e642 100644 --- a/instrumentation/pulsar/pulsar-2.8/README.md +++ b/instrumentation/pulsar/pulsar-2.8/README.md @@ -1,5 +1,5 @@ # Settings for the Apache Pulsar instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ---------------------------------------------------------- | --------- | ------- | --------------------------------------------------- | | `otel.instrumentation.pulsar.experimental-span-attributes` | `Boolean` | `false` | Enable the capture of experimental span attributes. 
| diff --git a/instrumentation/rabbitmq-2.7/README.md b/instrumentation/rabbitmq-2.7/README.md index cd3f82a5b1..7021d3cf9a 100644 --- a/instrumentation/rabbitmq-2.7/README.md +++ b/instrumentation/rabbitmq-2.7/README.md @@ -1,5 +1,5 @@ # Settings for the RabbitMQ instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ------------------------------------------------------------ | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.rabbitmq.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | diff --git a/instrumentation/reactor/reactor-3.1/README.md b/instrumentation/reactor/reactor-3.1/README.md index 7807abfa0a..fbbab7c17a 100644 --- a/instrumentation/reactor/reactor-3.1/README.md +++ b/instrumentation/reactor/reactor-3.1/README.md @@ -1,5 +1,5 @@ # Settings for the Reactor 3.1 instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ----------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.reactor.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. 
| diff --git a/instrumentation/reactor/reactor-netty/README.md b/instrumentation/reactor/reactor-netty/README.md index b1717e45a5..413d40321e 100644 --- a/instrumentation/reactor/reactor-netty/README.md +++ b/instrumentation/reactor/reactor-netty/README.md @@ -1,5 +1,5 @@ # Settings for the Reactor Netty instrumentation | System property | Type | Default | Description | -|-------------------------------------------------------------------|---------|---------|----------------------------------------------------------| +| ----------------------------------------------------------------- | ------- | ------- | -------------------------------------------------------- | | `otel.instrumentation.reactor-netty.connection-telemetry.enabled` | Boolean | `false` | Enable the creation of Connect and DNS spans by default. | diff --git a/instrumentation/rocketmq/rocketmq-client/rocketmq-client-4.8/README.md b/instrumentation/rocketmq/rocketmq-client/rocketmq-client-4.8/README.md index c200fdef1c..5918e2241a 100644 --- a/instrumentation/rocketmq/rocketmq-client/rocketmq-client-4.8/README.md +++ b/instrumentation/rocketmq/rocketmq-client/rocketmq-client-4.8/README.md @@ -1,5 +1,5 @@ # Settings for the Apache RocketMQ remoting-based client instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ------------------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | | `otel.instrumentation.rocketmq-client.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. 
| diff --git a/instrumentation/runtime-telemetry/runtime-telemetry-java17/library/README.md b/instrumentation/runtime-telemetry/runtime-telemetry-java17/library/README.md index b3d81e11f6..07a3d9342f 100644 --- a/instrumentation/runtime-telemetry/runtime-telemetry-java17/library/README.md +++ b/instrumentation/runtime-telemetry/runtime-telemetry-java17/library/README.md @@ -1,4 +1,3 @@ - The main entry point is the `RuntimeMetrics` class in the package `io.opentelemetry.instrumentation.runtimemetrics.java17`: ```java @@ -36,16 +35,16 @@ default, and the telemetry each produces: -| JfrFeature | Default Enabled | Metrics | -|---|---|---| -| BUFFER_METRICS | false | `process.runtime.jvm.buffer.count`, `process.runtime.jvm.buffer.limit`, `process.runtime.jvm.buffer.usage` | -| CLASS_LOAD_METRICS | false | `process.runtime.jvm.classes.current_loaded`, `process.runtime.jvm.classes.loaded`, `process.runtime.jvm.classes.unloaded` | -| CONTEXT_SWITCH_METRICS | true | `process.runtime.jvm.cpu.context_switch` | -| CPU_COUNT_METRICS | true | `process.runtime.jvm.cpu.limit` | -| CPU_UTILIZATION_METRICS | false | `process.runtime.jvm.cpu.utilization`, `process.runtime.jvm.system.cpu.utilization` | -| GC_DURATION_METRICS | false | `process.runtime.jvm.gc.duration` | -| LOCK_METRICS | true | `process.runtime.jvm.cpu.longlock` | -| MEMORY_ALLOCATION_METRICS | true | `process.runtime.jvm.memory.allocation` | -| MEMORY_POOL_METRICS | false | `process.runtime.jvm.memory.committed`, `process.runtime.jvm.memory.init`, `process.runtime.jvm.memory.limit`, `process.runtime.jvm.memory.usage`, `process.runtime.jvm.memory.usage_after_last_gc` | -| NETWORK_IO_METRICS | true | `process.runtime.jvm.network.io`, `process.runtime.jvm.network.time` | -| THREAD_METRICS | false | `process.runtime.jvm.threads.count` | +| JfrFeature | Default Enabled | Metrics | +| ------------------------- | --------------- | 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| BUFFER_METRICS | false | `process.runtime.jvm.buffer.count`, `process.runtime.jvm.buffer.limit`, `process.runtime.jvm.buffer.usage` | +| CLASS_LOAD_METRICS | false | `process.runtime.jvm.classes.current_loaded`, `process.runtime.jvm.classes.loaded`, `process.runtime.jvm.classes.unloaded` | +| CONTEXT_SWITCH_METRICS | true | `process.runtime.jvm.cpu.context_switch` | +| CPU_COUNT_METRICS | true | `process.runtime.jvm.cpu.limit` | +| CPU_UTILIZATION_METRICS | false | `process.runtime.jvm.cpu.utilization`, `process.runtime.jvm.system.cpu.utilization` | +| GC_DURATION_METRICS | false | `process.runtime.jvm.gc.duration` | +| LOCK_METRICS | true | `process.runtime.jvm.cpu.longlock` | +| MEMORY_ALLOCATION_METRICS | true | `process.runtime.jvm.memory.allocation` | +| MEMORY_POOL_METRICS | false | `process.runtime.jvm.memory.committed`, `process.runtime.jvm.memory.init`, `process.runtime.jvm.memory.limit`, `process.runtime.jvm.memory.usage`, `process.runtime.jvm.memory.usage_after_last_gc` | +| NETWORK_IO_METRICS | true | `process.runtime.jvm.network.io`, `process.runtime.jvm.network.time` | +| THREAD_METRICS | false | `process.runtime.jvm.threads.count` | diff --git a/instrumentation/runtime-telemetry/runtime-telemetry-java8/library/README.md b/instrumentation/runtime-telemetry/runtime-telemetry-java8/library/README.md index bd710c66ae..05b640d9e2 100644 --- a/instrumentation/runtime-telemetry/runtime-telemetry-java8/library/README.md +++ b/instrumentation/runtime-telemetry/runtime-telemetry-java8/library/README.md @@ -48,45 +48,45 @@ The attributes reported on the memory metrics (`process.runtime.jvm.memory.*`) a The following lists attributes reported for a variety of garbage collectors. 
Notice that attributes are not necessarily constant across `*.init`, `*.usage`, `*.committed`, and `*.limit` since not all memory pools report a limit. -* CMS Garbage Collector - * `process.runtime.jvm.memory.init`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.usage`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.committed`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.limit`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap} - * `process.runtime.jvm.memory.usage_after_last_gc`: {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap} - * `process.runtime.jvm.gc.duration`: {action=end of minor GC,gc=ParNew}, {action=end of major GC,gc=MarkSweepCompact} -* G1 Garbage Collector - * `process.runtime.jvm.memory.init`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.usage`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.committed`: {pool=G1 Survivor 
Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage_after_last_gc`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=G1 Old Gen,type=heap} - * `process.runtime.jvm.gc.duration`: {action=end of minor GC,gc=G1 Young Generation}, {action=end of major GC,gc=G1 Old Generation} -* Parallel Garbage Collector - * `process.runtime.jvm.memory.init`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.usage`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.committed`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage_after_last_gc`: {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap} - * `process.runtime.jvm.gc.duration`: {action=end of major GC,gc=PS MarkSweep}, {action=end of minor GC,gc=PS Scavenge} -* Serial Garbage Collector - * `process.runtime.jvm.memory.init`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, 
{pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.usage`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.committed`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} - * `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage_after_last_gc`: {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap} - * `process.runtime.jvm.gc.duration`: {action=end of minor GC,gc=Copy}, {action=end of major GC,gc=MarkSweepCompact} -* Shenandoah Garbage Collector - * `process.runtime.jvm.memory.init`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.committed`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage_after_last_gc`: {pool=Shenandoah,type=heap} - * `process.runtime.jvm.gc.duration`: {action=end of GC cycle,gc=Shenandoah Cycles}, {action=end of GC 
pause,gc=Shenandoah Pauses} -* Z Garbage Collector - * `process.runtime.jvm.memory.init`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.committed`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} - * `process.runtime.jvm.memory.usage_after_last_gc`: {pool=ZHeap,type=heap} - * `process.runtime.jvm.gc.duration`: {action=end of GC cycle,gc=ZGC Cycles}, {action=end of GC pause,gc=ZGC Pauses} +- CMS Garbage Collector + - `process.runtime.jvm.memory.init`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.usage`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.committed`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.limit`: {pool=Compressed Class Space,type=non_heap}, {pool=Par Eden Space,type=heap}, {pool=Tenured Gen,type=heap}, {pool=Par Survivor Space,type=heap}, {pool=Code Cache,type=non_heap} + - `process.runtime.jvm.memory.usage_after_last_gc`: {pool=Par Eden Space,type=heap}, {pool=Tenured 
Gen,type=heap}, {pool=Par Survivor Space,type=heap} + - `process.runtime.jvm.gc.duration`: {action=end of minor GC,gc=ParNew}, {action=end of major GC,gc=MarkSweepCompact} +- G1 Garbage Collector + - `process.runtime.jvm.memory.init`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.usage`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.committed`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=G1 Old Gen,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage_after_last_gc`: {pool=G1 Survivor Space,type=heap}, {pool=G1 Eden Space,type=heap}, {pool=G1 Old Gen,type=heap} + - `process.runtime.jvm.gc.duration`: {action=end of minor GC,gc=G1 Young Generation}, {action=end of major GC,gc=G1 Old Generation} +- Parallel Garbage Collector + - `process.runtime.jvm.memory.init`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.usage`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.committed`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, 
{pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage_after_last_gc`: {pool=PS Survivor Space,type=heap}, {pool=PS Old Gen,type=heap}, {pool=PS Eden Space,type=heap} + - `process.runtime.jvm.gc.duration`: {action=end of major GC,gc=PS MarkSweep}, {action=end of minor GC,gc=PS Scavenge} +- Serial Garbage Collector + - `process.runtime.jvm.memory.init`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.usage`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.committed`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap}, {pool=Metaspace,type=non_heap} + - `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage_after_last_gc`: {pool=Tenured Gen,type=heap}, {pool=Eden Space,type=heap}, {pool=Survivor Space,type=heap} + - `process.runtime.jvm.gc.duration`: {action=end of minor GC,gc=Copy}, {action=end of major GC,gc=MarkSweepCompact} +- Shenandoah Garbage Collector + - `process.runtime.jvm.memory.init`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, 
{pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.committed`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=Shenandoah,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage_after_last_gc`: {pool=Shenandoah,type=heap} + - `process.runtime.jvm.gc.duration`: {action=end of GC cycle,gc=Shenandoah Cycles}, {action=end of GC pause,gc=Shenandoah Pauses} +- Z Garbage Collector + - `process.runtime.jvm.memory.init`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.committed`: {pool=Metaspace,type=non_heap}, {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.limit`: {pool=CodeCache,type=non_heap}, {pool=ZHeap,type=heap}, {pool=Compressed Class Space,type=non_heap} + - `process.runtime.jvm.memory.usage_after_last_gc`: {pool=ZHeap,type=heap} + - `process.runtime.jvm.gc.duration`: {action=end of GC cycle,gc=ZGC Cycles}, {action=end of GC pause,gc=ZGC Pauses} diff --git a/instrumentation/rxjava/README.md b/instrumentation/rxjava/README.md index 096ffbf706..cafdfa41b6 100644 --- a/instrumentation/rxjava/README.md +++ b/instrumentation/rxjava/README.md @@ -1,5 +1,5 @@ # Settings for the RxJava instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| 
System property | Type | Default | Description | +| ---------------------------------------------------------- | ------- | ------- | -------------------------------------------------------------------------------------- | | `otel.instrumentation.rxjava.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes for RxJava 2 and 3 instrumentation. | diff --git a/instrumentation/servlet/README.md b/instrumentation/servlet/README.md index c9c0ba87cd..54a9a407d7 100644 --- a/instrumentation/servlet/README.md +++ b/instrumentation/servlet/README.md @@ -2,10 +2,10 @@ ## Settings -| System property | Type | Default | Description | -|---|---|---|---| -| `otel.instrumentation.servlet.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | -| `otel.instrumentation.servlet.experimental.capture-request-parameters` | List | Empty | Request parameters to be captured (experimental). | +| System property | Type | Default | Description | +| ---------------------------------------------------------------------- | ------- | ------- | --------------------------------------------------- | +| `otel.instrumentation.servlet.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes. | +| `otel.instrumentation.servlet.experimental.capture-request-parameters` | List | Empty | Request parameters to be captured (experimental). 
| ### A word about version diff --git a/instrumentation/spring/README.md b/instrumentation/spring/README.md index 58f39393d6..287738ad79 100644 --- a/instrumentation/spring/README.md +++ b/instrumentation/spring/README.md @@ -1,4 +1,5 @@ # OpenTelemetry Instrumentation: Spring and Spring Boot + @@ -6,7 +7,7 @@ This package streamlines the manual instrumentation process of OpenTelemetry for The [first section](#manual-instrumentation-with-java-sdk) will walk you through span creation and propagation using the OpenTelemetry Java API and [Spring's RestTemplate Http Web Client](https://spring.io/guides/gs/consuming-rest/). This approach will use the "vanilla" OpenTelemetry API to make explicit tracing calls within an application's controller. -The [second section](#manual-instrumentation-using-handlers-and-filters) will build on the first. It will walk you through implementing spring-web handler and filter interfaces to create traces with minimal changes to existing application code. Using the OpenTelemetry API, this approach involves copy and pasting files and a significant amount of manual configurations. +The [second section](#manual-instrumentation-using-handlers-and-filters) will build on the first. It will walk you through implementing spring-web handler and filter interfaces to create traces with minimal changes to existing application code. Using the OpenTelemetry API, this approach involves copying and pasting files and a significant amount of manual configuration. The [third section](#auto-instrumentation-using-spring-starters) will build on the first two sections. We will use spring auto-configurations and instrumentation tools packaged in OpenTelemetry [Spring Starters](starters) to streamline the setup of OpenTelemetry using Spring. With these tools you will be able to set up distributed tracing with little to no changes to existing configurations and easily customize traces with minor additions to application code. 
@@ -14,18 +15,18 @@ In this guide we will be using a running example. In section one and two, we wil ## Settings -| System property | Type | Default | Description | -|---|---|---|---| -| `otel.instrumentation.spring-integration.global-channel-interceptor-patterns` | List | `*` | An array of Spring channel name patterns that will be intercepted. See [Spring Integration docs](https://docs.spring.io/spring-integration/reference/html/channel.html#global-channel-configuration-interceptors) for more details. | -| `otel.instrumentation.spring-integration.producer.enabled` | Boolean | `false` | Create producer spans when messages are sent to an output channel. Enable when you're using a messaging library that doesn't have its own instrumentation for generating producer spans. Note that the detection of output channels only works for [Spring Cloud Stream](https://spring.io/projects/spring-cloud-stream) `DirectWithAttributesChannel`. | -| `otel.instrumentation.spring-webflux.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes for Spring WebFlux version 5.0. | -| `otel.instrumentation.spring-webmvc.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes for Spring Web MVC 3.1. | +| System property | Type | Default | Description | +| ----------------------------------------------------------------------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `otel.instrumentation.spring-integration.global-channel-interceptor-patterns` | List | `*` | An array of Spring channel name patterns that will be intercepted. 
See [Spring Integration docs](https://docs.spring.io/spring-integration/reference/html/channel.html#global-channel-configuration-interceptors) for more details. | +| `otel.instrumentation.spring-integration.producer.enabled` | Boolean | `false` | Create producer spans when messages are sent to an output channel. Enable when you're using a messaging library that doesn't have its own instrumentation for generating producer spans. Note that the detection of output channels only works for [Spring Cloud Stream](https://spring.io/projects/spring-cloud-stream) `DirectWithAttributesChannel`. | +| `otel.instrumentation.spring-webflux.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes for Spring WebFlux version 5.0. | +| `otel.instrumentation.spring-webmvc.experimental-span-attributes` | Boolean | `false` | Enable the capture of experimental span attributes for Spring Web MVC 3.1. | ## Manual Instrumentation Guide ### Create two Spring Projects -Using the [spring project initializer](https://start.spring.io/), we will create two spring projects. Name one project `MainService` and the other `TimeService`. In this example `MainService` will be a client of `TimeService` and they will be dealing with time. Make sure to select maven, Spring Boot 2.3, Java, and add the spring-web dependency. After downloading the two projects include the OpenTelemetry dependencies and configuration listed below. +Using the [spring project initializer](https://start.spring.io/), we will create two spring projects. Name one project `MainService` and the other `TimeService`. In this example `MainService` will be a client of `TimeService` and they will be dealing with time. Make sure to select maven, Spring Boot 2.3, Java, and add the spring-web dependency. After downloading the two projects include the OpenTelemetry dependencies and configuration listed below. 
### Setup for Manual Instrumentation @@ -333,7 +334,7 @@ public class TimeServiceController { ### Run MainService and TimeService -***To view your distributed traces ensure either LogExporter or Jaeger is configured in the OtelConfig.java file*** +**_To view your distributed traces, ensure either LogExporter or Jaeger is configured in the OtelConfig.java file_** To view traces on the Jaeger UI, deploy a Jaeger Exporter on localhost by running the command in terminal: @@ -343,9 +344,9 @@ After running Jaeger locally, navigate to the url below. Make sure to refresh th `http://localhost:16686` -Run MainService and TimeService from command line or using an IDE. The end point of interest for MainService is `http://localhost:8080/message` and `http://localhost:8081/time` for TimeService. Entering `localhost:8080/message` in a browser should call MainService and then TimeService, creating a trace. +Run MainService and TimeService from the command line or using an IDE. The endpoint of interest for MainService is `http://localhost:8080/message` and `http://localhost:8081/time` for TimeService. Entering `localhost:8080/message` in a browser should call MainService and then TimeService, creating a trace. -***Note: The default port for the Apache Tomcat is 8080. On localhost both MainService and TimeService services will attempt to run on this port raising an error. To avoid this add `server.port=8081` to the resources/application.properties file. Ensure the port specified corresponds to port referenced by MainServiceController.TIME_SERVICE_URL.*** +**_Note: The default port for Apache Tomcat is 8080. On localhost, both MainService and TimeService will attempt to run on this port, raising an error. To avoid this, add `server.port=8081` to the resources/application.properties file. Ensure the port specified corresponds to the port referenced by MainServiceController.TIME_SERVICE_URL._** Congrats, we just created a distributed service with OpenTelemetry! 
@@ -636,7 +637,7 @@ implementation("io.opentelemetry.instrumentation:opentelemetry-spring-boot-start ### Create two Spring Projects -Using the [spring project initializer](https://start.spring.io/), we will create two spring projects. Name one project `MainService` and the other `TimeService`. Make sure to select maven, Spring Boot 2.3, Java, and add the spring-web dependency. After downloading the two projects include the OpenTelemetry dependencies listed above. +Using the [spring project initializer](https://start.spring.io/), we will create two spring projects. Name one project `MainService` and the other `TimeService`. Make sure to select maven, Spring Boot 2.3, Java, and add the spring-web dependency. After downloading the two projects include the OpenTelemetry dependencies listed above. ### Main Service Application @@ -877,12 +878,12 @@ Add the following configurations to overwrite the default exporter values listed To generate a trace using the zipkin exporter follow the steps below: - 1. Replace `opentelemetry-spring-boot-starter` with `opentelemetry-zipkin-spring-boot-starter` in your pom or gradle build file - 2. Use the Zipkin [quick starter](https://zipkin.io/pages/quickstart) to download and run the zipkin executable jar +1. Replace `opentelemetry-spring-boot-starter` with `opentelemetry-zipkin-spring-boot-starter` in your pom or gradle build file +2. Use the Zipkin [quick starter](https://zipkin.io/pages/quickstart) to download and run the zipkin executable jar - Ensure the zipkin endpoint matches the default value listed in your application properties - 3. Run `MainServiceApplication.java` and `TimeServiceApplication.java` - 4. Use your favorite browser to send a request to `http://localhost:8080/message` - 5. Navigate to `http://localhost:9411` to see your trace +3. Run `MainServiceApplication.java` and `TimeServiceApplication.java` +4. Use your favorite browser to send a request to `http://localhost:8080/message` +5. 
Navigate to `http://localhost:9411` to see your trace Shown below is the sample trace generated by `MainService` and `TimeService` using the opentelemetry-zipkin-spring-boot-starter. diff --git a/instrumentation/spring/spring-boot-autoconfigure/README.md b/instrumentation/spring/spring-boot-autoconfigure/README.md index 7811567fe8..a175ea8e17 100644 --- a/instrumentation/spring/spring-boot-autoconfigure/README.md +++ b/instrumentation/spring/spring-boot-autoconfigure/README.md @@ -181,7 +181,7 @@ OpenTelemetry WebClientFilter. #### Manual Instrumentation Support - @WithSpan -This feature uses spring-aop to wrap methods annotated with `@WithSpan` in a span. The arguments +This feature uses spring-aop to wrap methods annotated with `@WithSpan` in a span. The arguments to the method can be captured as attributes on the created span by annotating the method parameters with `@SpanAttribute`. @@ -262,51 +262,51 @@ The traces below were exported using Zipkin. ##### Spring Web - RestTemplate Client Span ```json - { - "traceId":"0371febbbfa76b2e285a08b53a055d17", - "parentId":"9b782243ad7df179", - "id":"43990118a8bdbdf5", - "kind":"CLIENT", - "name":"http get", - "timestamp":1596841405949825, - "duration":21288, - "localEndpoint":{ - "serviceName":"sample_trace", - "ipv4":"XXX.XXX.X.XXX" - }, - "tags":{ - "http.method":"GET", - "http.status_code":"200", - "http.url":"/spring-web/sample/rest-template", - "net.peer.name":"localhost", - "net.peer.port":"8081" - } - } +{ + "traceId": "0371febbbfa76b2e285a08b53a055d17", + "parentId": "9b782243ad7df179", + "id": "43990118a8bdbdf5", + "kind": "CLIENT", + "name": "http get", + "timestamp": 1596841405949825, + "duration": 21288, + "localEndpoint": { + "serviceName": "sample_trace", + "ipv4": "XXX.XXX.X.XXX" + }, + "tags": { + "http.method": "GET", + "http.status_code": "200", + "http.url": "/spring-web/sample/rest-template", + "net.peer.name": "localhost", + "net.peer.port": "8081" + } +} ``` ##### Spring Web-Flux - WebClient Span 
```json - { - "traceId":"0371febbbfa76b2e285a08b53a055d17", - "parentId":"9b782243ad7df179", - "id":"1b14a2fc89d7a762", - "kind":"CLIENT", - "name":"http post", - "timestamp":1596841406109125, - "duration":25137, - "localEndpoint":{ - "serviceName":"sample_trace", - "ipv4":"XXX.XXX.X.XXX" - }, - "tags":{ - "http.method":"POST", - "http.status_code":"200", - "http.url":"/spring-webflux/sample/web-client", - "net.peer.name":"localhost", - "net.peer.port":"8082" - } - } +{ + "traceId": "0371febbbfa76b2e285a08b53a055d17", + "parentId": "9b782243ad7df179", + "id": "1b14a2fc89d7a762", + "kind": "CLIENT", + "name": "http post", + "timestamp": 1596841406109125, + "duration": 25137, + "localEndpoint": { + "serviceName": "sample_trace", + "ipv4": "XXX.XXX.X.XXX" + }, + "tags": { + "http.method": "POST", + "http.status_code": "200", + "http.url": "/spring-webflux/sample/web-client", + "net.peer.name": "localhost", + "net.peer.port": "8082" + } +} ``` ##### @WithSpan Instrumentation @@ -398,23 +398,23 @@ If an exporter is present in the classpath during runtime and a spring bean of t ##### Enabling/Disabling Features -| Feature | Property | Default Value | ConditionalOnClass | -|------------------|------------------------------------------|---------------|------------------------| -| spring-web | otel.springboot.httpclients.enabled | `true` | RestTemplate | -| spring-webmvc | otel.springboot.httpclients.enabled | `true` | OncePerRequestFilter | -| spring-webflux | otel.springboot.httpclients.enabled | `true` | WebClient | -| @WithSpan | otel.springboot.aspects.enabled | `true` | WithSpan, Aspect | -| Otlp Exporter | otel.exporter.otlp.enabled | `true` | OtlpGrpcSpanExporter | -| Jaeger Exporter | otel.exporter.jaeger.enabled | `true` | JaegerGrpcSpanExporter | -| Zipkin Exporter | otel.exporter.zipkin.enabled | `true` | ZipkinSpanExporter | -| Logging Exporter | otel.exporter.logging.enabled | `true` | LoggingSpanExporter | +| Feature | Property | Default Value | 
ConditionalOnClass | +| ---------------- | ----------------------------------- | ------------- | ---------------------- | +| spring-web | otel.springboot.httpclients.enabled | `true` | RestTemplate | +| spring-webmvc | otel.springboot.httpclients.enabled | `true` | OncePerRequestFilter | +| spring-webflux | otel.springboot.httpclients.enabled | `true` | WebClient | +| @WithSpan | otel.springboot.aspects.enabled | `true` | WithSpan, Aspect | +| Otlp Exporter | otel.exporter.otlp.enabled | `true` | OtlpGrpcSpanExporter | +| Jaeger Exporter | otel.exporter.jaeger.enabled | `true` | JaegerGrpcSpanExporter | +| Zipkin Exporter | otel.exporter.zipkin.enabled | `true` | ZipkinSpanExporter | +| Logging Exporter | otel.exporter.logging.enabled | `true` | LoggingSpanExporter | ##### Resource Properties | Feature | Property | Default Value | -|----------|--------------------------------------------------|------------------------| +| -------- | ------------------------------------------------ | ---------------------- | | Resource | otel.springboot.resource.enabled | `true` | | | otel.springboot.resource.attributes.service.name | `unknown_service:java` | | | otel.springboot.resource.attributes | `empty map` | @@ -434,7 +434,7 @@ otel.springboot.resource.attributes.xyz=foo ##### Exporter Properties | Feature | Property | Default Value | -|-----------------|-------------------------------|--------------------------------------| +| --------------- | ----------------------------- | ------------------------------------ | | Otlp Exporter | otel.exporter.otlp.endpoint | `localhost:4317` | | | otel.exporter.otlp.timeout | `1s` | | Jaeger Exporter | otel.exporter.jaeger.endpoint | `localhost:14250` | @@ -444,7 +444,7 @@ otel.springboot.resource.attributes.xyz=foo ##### Tracer Properties | Feature | Property | Default Value | -|---------|---------------------------------|---------------| +| ------- | ------------------------------- | ------------- | | Tracer | 
otel.traces.sampler.probability | `1.0` | ### Starter Guide diff --git a/instrumentation/spring/starters/jaeger-spring-boot-starter/README.md b/instrumentation/spring/starters/jaeger-spring-boot-starter/README.md index 04e16f76d1..aeecd57447 100644 --- a/instrumentation/spring/starters/jaeger-spring-boot-starter/README.md +++ b/instrumentation/spring/starters/jaeger-spring-boot-starter/README.md @@ -1,6 +1,6 @@ # OpenTelemetry Jaeger Exporter Starter -OpenTelemetry Jaeger Exporter Starter is a starter package that includes the opentelemetry-api, opentelemetry-sdk, opentelemetry-extension-annotations, opentelmetry-logging-exporter, opentelemetry-spring-boot-autoconfigurations and spring framework starters required to setup distributed tracing. It also provides the [opentelemetry-exporters-jaeger](https://github.com/open-telemetry/opentelemetry-java/tree/main/exporters/jaeger) artifact and corresponding exporter auto-configuration. Check out [opentelemetry-spring-boot-autoconfigure](../../spring-boot-autoconfigure/README.md#features) for the list of supported libraries and features. +OpenTelemetry Jaeger Exporter Starter is a starter package that includes the opentelemetry-api, opentelemetry-sdk, opentelemetry-extension-annotations, opentelemetry-logging-exporter, opentelemetry-spring-boot-autoconfigurations and spring framework starters required to set up distributed tracing. It also provides the [opentelemetry-exporters-jaeger](https://github.com/open-telemetry/opentelemetry-java/tree/main/exporters/jaeger) artifact and corresponding exporter auto-configuration. Check out [opentelemetry-spring-boot-autoconfigure](../../spring-boot-autoconfigure/README.md#features) for the list of supported libraries and features. 
## Quickstart diff --git a/instrumentation/spring/starters/zipkin-spring-boot-starter/README.md b/instrumentation/spring/starters/zipkin-spring-boot-starter/README.md index 7286cdebae..bbb1b59731 100644 --- a/instrumentation/spring/starters/zipkin-spring-boot-starter/README.md +++ b/instrumentation/spring/starters/zipkin-spring-boot-starter/README.md @@ -1,8 +1,8 @@ # OpenTelemetry Zipkin Exporter Starter -The OpenTelemetry Exporter Starter for Java is a starter package that includes packages required to enable tracing using OpenTelemetry. It also provides the dependency and corresponding auto-configuration. Check out [opentelemetry-spring-boot-autoconfigure](../../spring-boot-autoconfigure/README.md#features) for the list of supported libraries and features. +The OpenTelemetry Exporter Starter for Java is a starter package that includes packages required to enable tracing using OpenTelemetry. It also provides the dependency and corresponding auto-configuration. Check out [opentelemetry-spring-boot-autoconfigure](../../spring-boot-autoconfigure/README.md#features) for the list of supported libraries and features. -OpenTelemetry Zipkin Exporter Starter is a starter package that includes the opentelemetry-api, opentelemetry-sdk, opentelemetry-extension-annotations, opentelmetry-logging-exporter, opentelemetry-spring-boot-autoconfigurations and spring framework starters required to setup distributed tracing. It also provides the [opentelemetry-exporters-zipkin](https://github.com/open-telemetry/opentelemetry-java/tree/main/exporters/zipkin) artifact and corresponding exporter auto-configuration. Check out [opentelemetry-spring-boot-autoconfigure](../../spring-boot-autoconfigure/README.md#features) for the list of supported libraries and features. 
+OpenTelemetry Zipkin Exporter Starter is a starter package that includes the opentelemetry-api, opentelemetry-sdk, opentelemetry-extension-annotations, opentelemetry-logging-exporter, opentelemetry-spring-boot-autoconfigurations and spring framework starters required to set up distributed tracing. It also provides the [opentelemetry-exporters-zipkin](https://github.com/open-telemetry/opentelemetry-java/tree/main/exporters/zipkin) artifact and corresponding exporter auto-configuration. Check out [opentelemetry-spring-boot-autoconfigure](../../spring-boot-autoconfigure/README.md#features) for the list of supported libraries and features. ## Quickstart @@ -10,7 +10,7 @@ OpenTelemetry Zipkin Exporter Starter is a starter package that includes the ope Replace `OPENTELEMETRY_VERSION` with the latest stable [release](https://search.maven.org/search?q=g:io.opentelemetry). -* Minimum version: `1.1.0` +- Minimum version: `1.1.0` #### Maven diff --git a/instrumentation/spymemcached-2.12/README.md b/instrumentation/spymemcached-2.12/README.md index 016c7f1346..a3c74c69a1 100644 --- a/instrumentation/spymemcached-2.12/README.md +++ b/instrumentation/spymemcached-2.12/README.md @@ -1,5 +1,5 @@ # Settings for the Spymemcached instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ---------------------------------------------------------------- | ------- | ------- | ---------------------------------------------------- | | `otel.instrumentation.spymemcached.experimental-span-attributes` | Boolean | `false` | Enables the capture of experimental span attributes. 
| diff --git a/instrumentation/twilio-6.6/README.md b/instrumentation/twilio-6.6/README.md index 9bfbe30f2f..23e186fd09 100644 --- a/instrumentation/twilio-6.6/README.md +++ b/instrumentation/twilio-6.6/README.md @@ -1,5 +1,5 @@ # Settings for the Twilio instrumentation -| System property | Type | Default | Description | -|---|---|---|---| +| System property | Type | Default | Description | +| ---------------------------------------------------------- | ------- | ------- | ---------------------------------------------------- | | `otel.instrumentation.twilio.experimental-span-attributes` | Boolean | `false` | Enables the capture of experimental span attributes. |