Compare commits

...

134 Commits

Author SHA1 Message Date
Alexandru-Liviu Bratosin 063107068c
fix(async-processor): concurrent exports actually serialised (#3028) 2025-07-14 10:37:21 -07:00
Reiley Yang 8925f064d2
chore: Remove file .github/repository-settings.md (#3067) 2025-07-14 10:31:30 -07:00
Lalit Kumar Bhasin 8aba0913e9
chore: Bump semantic-conventions to v1.36.0 (#3064) 2025-07-14 10:04:21 -07:00
OpenTelemetry Bot 34d6d5082e
Sort contributor listings and remove affiliation from emeriti (#3060) 2025-07-09 22:11:59 +02:00
Berkus Decker 5e447d02cc
chore: Switch from unmaintained hex dependency to const-hex (#3053) 2025-07-09 08:54:12 -07:00
Whoemoon Jang 8d46c40b60
fix: Support HttpClient implementation for HyperClient with custom connectors (#3057) 2025-07-07 11:36:38 -07:00
Copilot eac368a7e4
chore: Fix spelling errors and typos in documentation (#3044)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-07-02 09:53:05 -07:00
dependabot[bot] 2bf8175d07
chore(deps): bump taiki-e/install-action from 2.52.4 to 2.56.0 (#3051)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-07-01 13:33:02 -07:00
dependabot[bot] 3fc7194796
chore(deps): bump EmbarkStudios/cargo-deny-action from 2.0.11 to 2.0.12 (#3052)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 13:24:26 -07:00
dependabot[bot] db15ecb541
chore(deps): bump obi1kenobi/cargo-semver-checks-action from 2.6 to 2.8 (#3050)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 13:24:14 -07:00
dependabot[bot] 674914a8ef
chore(deps): bump github/codeql-action from 3.28.16 to 3.29.2 (#3049)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-07-01 13:23:57 -07:00
dependabot[bot] 6bc2b19b85
chore(deps): bump step-security/harden-runner from 2.12.0 to 2.12.2 (#3048)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-01 12:49:44 -07:00
Lushe Shipkov d59aded375
docs: A few small doc touch-ups in some of the various in_memory_exporter modules (#3042) 2025-06-30 09:24:40 -07:00
OpenTelemetry Bot e7784bb78f
docs: Update community member listings (#3038)
Co-authored-by: otelbot <197425009+otelbot@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-06-27 09:43:55 -06:00
Anton Grübel 5e29598369
chore: fix format lint (#3039) 2025-06-27 09:33:16 -06:00
yoshi-taka af2f1449e8
chore: remove unused glob (#3035) 2025-06-22 17:00:42 -07:00
Scott Gerring 0c2f808ec2
ci: Run benchmarks on main on the new oracle dedicated workers (#2942) 2025-06-20 08:37:17 -07:00
Cijo Thomas d4eb35a0cc
docs: on how to set right cardinality limit (#2998)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
2025-06-12 17:03:39 -07:00
Lalit Kumar Bhasin 1f0d9a9f62
chore: Prepare for opentelemetry-appender-tracing 0.30.1 - bump tracing-opentelemetry to 0.31 (#3022) 2025-06-05 11:43:18 -07:00
dependabot[bot] 51dc2f04b7
chore(deps): update dtolnay/rust-toolchain requirement to b3b07ba8b418998c39fb20f53e8b695cdcc8de1b (#3016)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
2025-06-05 10:43:48 -07:00
Igor Unanua eaca267d04
feat: support multi-value key propagation extraction (#3008)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-06-04 14:52:57 -07:00
Lalit Kumar Bhasin 7b3db0b6a6
chore: Bump otel-proto v1.7.0 (#3018) 2025-06-03 08:20:25 -07:00
Lalit Kumar Bhasin 082213e4e9
chore: bump semcon 1.34.0 (#3019) 2025-06-02 11:24:51 -07:00
dependabot[bot] c473db0788
chore(deps): bump taiki-e/install-action from 2.50.4 to 2.52.4 (#3015) 2025-06-01 21:31:03 -07:00
dependabot[bot] 85e639aef9
chore(deps): bump ossf/scorecard-action from 2.4.1 to 2.4.2 (#3014) 2025-06-01 21:21:45 -07:00
dependabot[bot] f1a541c3ca
chore(deps): bump codecov/codecov-action from 5.4.2 to 5.4.3 (#3013)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-06-01 20:34:16 -07:00
dependabot[bot] c30dc37002
chore(deps): bump fossas/fossa-action from 1.6.0 to 1.7.0 (#3012)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-01 13:22:45 -07:00
Gabriel 28becc0674
fix: with_cleared_baggage (#3006)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-30 08:19:46 -07:00
Joonas Bergius cab5565ba1
fix: use default endpoint for endpoint when provided empty string (#3000)
Signed-off-by: Joonas Bergius <joonas@cosmonic.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-30 07:51:40 -07:00
Cijo Thomas 62790608e1
fix: Small improvement to OTLP Exporter logs (#3007) 2025-05-30 07:13:26 -07:00
paullegranddc 167c94663a
fix(span_processor): only call on_start with recording spans (#3011) 2025-05-30 06:53:57 -07:00
Cijo Thomas 8e47d84922
chore: Add release notes for 0.30 (#3001)
Co-authored-by: Zhongyang Wu <zhongyang.wu@outlook.com>
2025-05-27 06:43:34 -07:00
Cijo Thomas 8882c31c95
chore: Nit fixes to examples (#3002) 2025-05-23 10:56:42 -07:00
Cijo Thomas c811cde1ae
chore: Prepare release 0.30.0 (#2999) 2025-05-23 09:52:50 -07:00
SF-Zhou 200885a6c3
fix: fix trace id in logs when using set_parent nested in a trace span (#2924)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-23 08:01:46 -07:00
Cijo Thomas c24369e86a
chore: Update metric sdk to stable status (#2996) 2025-05-23 07:51:41 -07:00
Cijo Thomas bf22aeb7cc
fix: Remove pub fields and replace with getter method consistently across … (#2997) 2025-05-22 22:41:49 -07:00
Cijo Thomas 4be1a32d3f
fix: remove cardinality capping via instrument advice (#2995) 2025-05-22 13:46:27 -07:00
Cijo Thomas 3d04c16e39
docs: Add metric doc (#2946)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-05-22 13:32:37 -07:00
Cijo Thomas 2018959eec
fix: Fix validation in Metric stream (#2991) 2025-05-22 11:18:44 -07:00
Anton Grübel 8c29ca7e21
chore: leverage fallback resolver for MSRV check (#2993) 2025-05-22 08:39:53 -07:00
Cijo Thomas 4b3a383267
chore: add required features to benches (#2990) 2025-05-21 20:45:20 -07:00
Cijo Thomas ebbebf57ba
fix: Further trim public API on views (#2989)
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-05-21 19:07:08 -07:00
Cijo Thomas e123996d80
feat: View cleanups (#2988) 2025-05-21 16:40:32 -07:00
Adrian Garcia Badaracco 3cdc62e716
feat: add generated proto models for profiles signal (#2979)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-21 09:24:47 -07:00
Cijo Thomas f04e9ec6cd
feat: Use builder pattern for constructing Metric Streams (#2984) 2025-05-21 07:37:16 -07:00
Cijo Thomas 7cfe8cd883
chore: fix changelogs (#2983) 2025-05-21 07:07:07 -07:00
Cijo Thomas aeb38a02c1
feat: Promote subset of Metric Views to stable (#2982) 2025-05-20 21:09:20 -07:00
Elichai Turkel 857a38b191
fix: Expose SpanExporterBuilder and MetricExporterBuilder (#2966)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-20 20:21:04 -07:00
Cijo Thomas cc93ead2df
fix: Metrics Views - fix a bug that causes unit, description to be lost when applying views that influence other aspects (#2981) 2025-05-20 17:59:17 -07:00
Cijo Thomas d52dcef07d
fix: MetricExporters use getter methods instead of direct access (#2973)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
2025-05-20 10:07:07 -07:00
Lalit Kumar Bhasin 84999140a7
chore: bump semconv 1.33.0 (#2975) 2025-05-16 20:14:16 -07:00
Cijo Thomas 9a0099ab8d
chore: remove unused (and incorrect!) doc links (#2974) 2025-05-16 13:21:00 -07:00
Mohammad Vatandoost c5f97180a3
feat: add shutdown with timeout for traces (#2956)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-16 07:12:50 -07:00
Cijo Thomas 8f4fe23bb1
fix: Avoid exposing HistogramBuckets and bounds (#2969) 2025-05-14 22:11:59 -07:00
StepSecurity Bot fed6fee190
ci: Harden GitHub Actions (#2971)
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
2025-05-14 22:03:27 -07:00
Cijo Thomas 970bb1e4b6
fix: Avoid exposing implementation detail in public API for PushMetricExporter (#2968) 2025-05-14 10:15:50 -07:00
Utkarsh Umesan Pillai a4575af593
fix: Update ResourceMetrics public API (#2965)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-14 07:39:02 -07:00
Cijo Thomas 82e5ed405e
fix: PeriodicReader to reuse data structures across collect (#2963) 2025-05-14 07:29:37 -07:00
paullegranddc 8fe3dcccc4
fix(CI): patch dependencies before running external-types check (#2967) 2025-05-14 06:44:27 -07:00
Cijo Thomas 3f29f6d8bf
chore: fix cargo deny check by updating unicode allowed list (#2964) 2025-05-14 06:39:46 +02:00
Björn Antonsson f771404f82
fix: allow span links to be added to a SpanRef (#2959) 2025-05-07 08:34:42 -07:00
Lalit Kumar Bhasin 1d9bd25ec8
chore: Fix CI coverage error for failing to install llvm-cov (#2958) 2025-05-05 11:32:10 -07:00
Lalit Kumar Bhasin 377fe5db7c
chore: publish otel-proto v1.6.0 (#2955) 2025-05-05 10:32:16 -07:00
Cijo Thomas 3d589d6449
ci: Try to build examples in CI (#2711)
Co-authored-by: Harold Dost <h.dost@criteo.com>
2025-05-05 08:08:50 +02:00
dependabot[bot] fa692d8c5c
chore(deps): bump taiki-e/install-action from 2.49.45 to 2.50.4 (#2952)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-05-01 14:13:49 -07:00
dependabot[bot] c6c2453ac4
chore(deps): update dtolnay/rust-toolchain requirement to b3b07ba8b418998c39fb20f53e8b695cdcc8de1b (#2953)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-05-01 14:05:18 -07:00
dependabot[bot] 06c6dfd6a8
chore(deps): bump github/codeql-action from 3.28.13 to 3.28.16 (#2954)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-05-01 13:57:25 -07:00
dependabot[bot] 175c7c6e9c
chore(deps): bump step-security/harden-runner from 2.11.1 to 2.12.0 (#2951)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-05-01 13:43:03 -07:00
dependabot[bot] 225bc0ebfa
chore(deps): bump codecov/codecov-action from 4.6.0 to 5.4.2 (#2950)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-01 13:17:20 -07:00
Mohammad Vatandoost 1d37e07529
feat: add-shutdown-with-timeout-for-log-provider-and-processor (#2941)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-05-01 09:09:01 -07:00
Cijo Thomas 4f2de12350
fix: Pass immutable metrics to PushMetricExporter (#2947) 2025-04-30 10:01:52 -07:00
Mathieu Tricoire 1d610a211a
feat(otlp): Re-export tonic crate (#2898)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
2025-04-30 09:44:23 -07:00
Cijo Thomas 409713d2d9
fix: Allow Histograms with no buckets (#2948) 2025-04-30 09:19:05 -07:00
bestgopher 7f8adcc3f5
docs(semconv): fix some invalid urls (#2944) 2025-04-29 10:49:48 -07:00
Anton Grübel 02c290de84
chore: change webpki-roots exception license to CDLA (#2945) 2025-04-28 16:18:24 -07:00
Cijo Thomas 9dc727ef58
chore: Add Bjorn as approver (#2937)
Björn has been actively helping the repo for last few months, and is leading the exploration/development of tokio-tracing interop, among many other contributions/reviews.
He has agreed to volunteer time as an Approver for the repo.

In my view, https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#requirements-2 requirements have been met.
2025-04-22 22:25:22 +02:00
Mohammad Vatandoost 5c60f12f04
feat: add shutdown with timeout for log exporter (#2909)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-04-21 18:17:42 -07:00
Utkarsh Umesan Pillai d7f824486a
ci: Add cargo deny license check (#2936) 2025-04-21 14:55:29 -07:00
Utkarsh Umesan Pillai d8d57d834d
ci: Add cargo deny checks for bans and sources (#2935) 2025-04-21 11:34:20 +02:00
Gilles Henaux b5d31f11fa
docs: fix the HTTP and gRPC transports quickstart guides (#2933) 2025-04-17 07:43:06 -07:00
Lalit Kumar Bhasin 9cdc93161d
chore: bump semconv 1.32.0 (#2932) 2025-04-15 13:28:43 -07:00
Cijo Thomas 10cf02c458
chore: Fix changelogs and few nits (#2929) 2025-04-13 18:24:26 -07:00
Cijo Thomas 4ce765567c
feat: Hide MetricReader and friends (#2928) 2025-04-11 16:48:34 -07:00
Cijo Thomas 64cf2916c4
chore: Patch release prometheus to fix security vulnerability (#2927)
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-04-10 21:55:25 -07:00
houseme 431689dd04
chore: Upgrade `prometheus` to 0.14 and clean up protobuf-related code in `lib.rs` (#2920)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-04-10 19:33:36 -07:00
Utkarsh Umesan Pillai 8b3fc06555
ci: Update permissions for workflow (#2923) 2025-04-09 15:51:59 -07:00
Utkarsh Umesan Pillai 130e178ad3
docs: Add openssf scorecard badge (#2919) 2025-04-08 16:32:20 -07:00
Utkarsh Umesan Pillai 6b5251f0d0
ci: Update CodeQL workflow (#2918) 2025-04-08 14:46:01 -07:00
Cijo Thomas 4ff8e02031
fix: Cardinality overflow to use bool value instead of string (#2916) 2025-04-08 12:52:16 -07:00
Cijo Thomas d4c646738f
fix: cleanup MetricError (#2906) 2025-04-08 08:23:51 -07:00
Cijo Thomas df262401da
feat: Add ability to specify cardinality limit via Instrument advice (#2903) 2025-04-07 22:52:25 -07:00
Utkarsh Umesan Pillai 1760889e27
ci: Harden GitHub Actions (#2915) 2025-04-07 22:41:11 -07:00
Utkarsh Umesan Pillai e680514e4f
ci: Harden GitHub Actions (#2913) 2025-04-07 19:55:50 -07:00
Utkarsh Umesan Pillai bef0523b68
ci: Harden GitHub Actions (#2914) 2025-04-07 19:46:09 -07:00
StepSecurity Bot 72fc1b60a5
ci: [StepSecurity] Harden GitHub Actions (#2910)
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-04-07 18:03:12 -07:00
StepSecurity Bot f99f20a87d
ci: [StepSecurity] Harden GitHub Actions (#2912)
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-04-07 17:28:49 -07:00
Utkarsh Umesan Pillai 940ec2304b
ci: Harden GitHub Actions (#2911) 2025-04-07 15:25:42 -07:00
StepSecurity Bot 9a0ffc4adf
fix: [StepSecurity] ci: Harden GitHub Actions (#2907)
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io>
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-04-07 13:01:32 -07:00
Lalit Kumar Bhasin 16ff4b0575
chore: CI lint fix for dead link (#2908) 2025-04-07 12:22:18 -07:00
Cijo Thomas bc82d4f66d
fix: Cleanup MetricError and use OTelSdkResult instead (#2905) 2025-04-06 11:29:55 -07:00
Mohammad Vatandoost e9ae9f90ef
feat: add shutdown with timeout for metric reader and provider (#2890)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-04-06 09:37:37 -07:00
Cijo Thomas 4791aae19f
test: Add ignored code in test to prove known issue (#2872) 2025-04-06 07:59:30 -07:00
Cijo Thomas 4b56ee354c
fix: Remove logging for cardinality overflow (#2904) 2025-04-06 07:38:55 -07:00
Cijo Thomas 2564a71808
feat: Add and enabled Metric cardinality capping by default (#2901) 2025-04-04 16:04:45 -07:00
Björn Antonsson 86e842ca5e
chore: fix clippy lint errors for rust 1.86.0 (#2896) 2025-04-03 09:14:04 -07:00
dependabot[bot] 24b92cb7c7
chore(deps): bump fossas/fossa-action from 1.5.0 to 1.6.0 (#2892)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 12:50:07 -07:00
dependabot[bot] 93a151e720
chore(deps): bump github/codeql-action from 3.28.12 to 3.28.13 (#2891)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 12:49:36 -07:00
Scott Gerring 7bdd2f4160
fix: re-export WithContext in the same place (#2879)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-04-01 08:26:38 -07:00
Mohammad Vatandoost af3a33e1b3
feat: Add shutdown with timeout for metric exporter (#2854)
Co-authored-by: Braden Steffaniak <BradenSteffaniak+github@gmail.com>
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
Co-authored-by: Cijo Thomas <cithomas@microsoft.com>
2025-04-01 08:02:17 -07:00
Scott Gerring 37d794788e
chore: Move 'main' benchmark to shared workers temporarily (#2889) 2025-04-01 07:49:30 -07:00
OpenTelemetry Bot 36633015ab
ci: Add ossf-scorecard scanning workflow (#2887)
Co-authored-by: otelbot <197425009+otelbot@users.noreply.github.com>
2025-04-01 07:43:12 -07:00
Björn Antonsson 867e2a172c
perf: Run all benchmarks in one action (#2885)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-03-31 08:03:31 -07:00
Anton Grübel d5e409ce1f
refactor: re-export tracing for internal-logs (#2867)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-03-31 07:47:41 -07:00
Cijo Thomas 99cb67d19c
perf: Nit fix to benchmarks (#2884) 2025-03-28 15:33:23 -07:00
Cijo Thomas 303803e304
chore: Add Anton Grübel as approver (#2863) 2025-03-28 14:55:40 -07:00
Cijo Thomas 62e43c5489
feat: Leverage Suppression Context in Sdk (#2868)
Co-authored-by: Utkarsh Umesan Pillai <66651184+utpilla@users.noreply.github.com>
2025-03-28 10:51:40 -07:00
Björn Antonsson 50f0bb82f8
ci: run clippy on features separately to find issues (#2866)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-03-28 09:38:51 -07:00
Björn Antonsson da2029ea17
perf: Run all benchmarks for shorter time (#2870) 2025-03-28 09:23:06 -07:00
Anton Grübel b2de6cc5a3
chore: update tonic to 0.13 (#2876) 2025-03-27 14:47:02 -07:00
Anton Grübel a071d8fc39
ci: update deny GHA and its config (#2875) 2025-03-27 12:03:54 -07:00
Cijo Thomas 297146701d
feat: Add Suppression flag to context (#2821)
Co-authored-by: Lalit Kumar Bhasin <lalit_fin@yahoo.com>
2025-03-26 16:14:41 -07:00
Mindaugas Vinkelis f3e93a09ea
refactor: AggregatedMetrics as enum instead of dyn Aggregation (#2857)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-03-26 10:39:57 -07:00
Cijo Thomas f12833f383
docs: Modify example to use logs, baggage (#2855)
Co-authored-by: Zhongyang Wu <zhongyang.wu@outlook.com>
2025-03-26 07:55:06 -07:00
Scott Gerring fb3699bc05
chore: Update error handling ADR - mention non_exhaustive (#2865) 2025-03-26 06:41:38 -07:00
Anton Grübel a711ae91c7
ci: add cargo machete and remove unused dependencies (#2864) 2025-03-25 15:01:24 -07:00
Cijo Thomas 5bfa70ef23
chore: Add company affiliation to maintainers and approvers (#2859) 2025-03-25 10:26:49 -07:00
houseme e9b27a4df6
chore: update from tracing-opentelemetry 0.29.0 to 0.30.0 (#2856)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-03-24 09:56:56 -07:00
Cijo Thomas f1c7ce9225
chore: fix few nit build warnings (#2848) 2025-03-24 08:11:49 -07:00
Cijo Thomas fa170f3258
fix: LogEnabled benchmarks to use blackbox (#2853) 2025-03-23 17:09:58 -07:00
tison 369b952baf
chore: Add link to sdk's CHANGELOG.md (#2850) 2025-03-22 10:28:12 -07:00
Braden Steffaniak e994d5237e
chore: Upgrade opentelemetry-prometheus to 0.29 (#2851) 2025-03-22 10:15:28 -07:00
Anton Grübel c5d5a1cc69
perf: small perf improvements in OTel API (#2842)
Co-authored-by: Cijo Thomas <cijo.thomas@gmail.com>
2025-03-21 20:20:52 -07:00
Cijo Thomas d32d34c8e0
chore: fix and release appender-tracing (#2847) 2025-03-21 17:47:50 -07:00
211 changed files with 10082 additions and 2784 deletions

.cargo/config.toml (new file, +3)

@ -0,0 +1,3 @@
[resolver]
# https://doc.rust-lang.org/cargo/reference/config.html#resolverincompatible-rust-versions
incompatible-rust-versions = "fallback"

.cspell.json (spell-check configuration)

@ -12,7 +12,8 @@
"ignoreWords": [
"otel",
"rustdoc",
"rustfilt"
"rustfilt",
"webkpi"
],
// these are words that are always considered incorrect.
"flagWords": [
@ -26,50 +27,69 @@
// workspace dictionary.
"words": [
"actix",
"Antonsson",
"anyvalue",
"appender",
"appenders",
"autobenches",
"Bhasin",
"Björn",
"BLRP",
"chrono",
"Cijo",
"clippy",
"clonable",
"codecov",
"dashmap",
"datapoint",
"deque",
"Dirkjan",
"docsrs",
"Dwarnings",
"eprintln",
"EPYC",
"flamegraph",
"Gerring",
"grpcio",
"Grübel",
"hasher",
"impls",
"isahc",
"Isobel",
"jaegertracing",
"Kühle",
"Kumar",
"Lalit",
"LIBCLANG",
"logrecord",
"MILLIS",
"mpsc",
"msrv",
"mykey",
"myunit",
"myvalue",
"nocapture",
"Ochtman",
"opentelemetry",
"openzipkin",
"otcorrelations",
"OTELCOL",
"OTLP",
"periodicreader",
"Pillai",
"pprof",
"protos",
"prost",
"protoc",
"quantile",
"quantiles",
"Redelmeier",
"reqwest",
"rstest",
"runtimes",
"rustc",
"rustls",
"schemars",
"semconv",
"serde",
"shoppingcart",
@ -78,10 +98,18 @@
"testcontainers",
"testresults",
"thiserror",
"traceparent",
"Traceparent",
"tracerprovider",
"tracestate",
"UCUM",
"Umesan",
"unsampled",
"updown",
"urlencoding",
"usize",
"Utkarsh",
"webpki",
"Zhongyang",
"zipkin"
],

.github/repository-settings.md (deleted)

@ -1,18 +0,0 @@
# Log of local changes
Maintainers are expected to maintain this log. This is required as per
[OpenTelemetry Community
guidelines](https://github.com/open-telemetry/community/blob/main/docs/how-to-configure-new-repository.md#collaborators-and-teams).
## May 6th 2024
Modified branch protection for main branch to require the following CI checks as
we now added Windows to CI.
test (ubuntu-latest, stable)
test (stable, windows-latest)
## April 30th 2024
Modified branch protection for main branch to require the following CI checks:
docs
test (stable)

.github/workflows/benchmark.yml

@ -13,42 +13,47 @@ on:
branches:
- main
name: benchmark pull requests
permissions:
contents: read
jobs:
runBenchmark:
name: run benchmark
permissions:
pull-requests: write
# If we're running on a PR, use ubuntu-latest - a shared runner. We can't use the self-hosted
# runners on arbitrary PRs, and we don't want to unleash that load on the pool anyway.
# If we're running on main, use the OTEL self-hosted runner pool.
runs-on: ${{ github.event_name == 'pull_request' && 'ubuntu-latest' || 'self-hosted' }}
# If we're running on main, use our oracle bare-metal runner for accuracy.
# If we're running on a PR, use github's shared workers to save resources.
runs-on: ${{ github.event_name == 'pull_request' && 'ubuntu-latest' || 'oracle-bare-metal-64cpu-512gb-x86-64' }}
if: ${{ (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'performance')) || github.event_name == 'push' }}
container:
image: rust:slim-bullseye
env:
# For PRs, compare against the base branch - e.g., 'main'.
# For pushes to main, compare against the previous commit
BRANCH_NAME: ${{ github.event_name == 'pull_request' && github.base_ref || github.event.before }}
GIT_DISCOVERY_ACROSS_FILESYSTEM: 1
steps:
- uses: actions/checkout@v4
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
fetch-depth: 10 # Fetch current commit and its parent
- uses: arduino/setup-protoc@v3
egress-policy: audit
- name: Setup container environment
run: |
apt-get update && apt-get install --fix-missing -y unzip cmake build-essential pkg-config curl git
cargo install cargo-criterion
- name: Make repo safe for Git inside container
run: git config --global --add safe.directory "$GITHUB_WORKSPACE"
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 10 # Fetch a bit of history so we can do perf diffs
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
- uses: boa-dev/criterion-compare-action@adfd3a94634fe2041ce5613eb7df09d247555b87 # v3.2.4
with:
toolchain: stable
- uses: boa-dev/criterion-compare-action@v3
with:
cwd: opentelemetry
branchName: ${{ env.BRANCH_NAME }}
- uses: boa-dev/criterion-compare-action@v3
with:
cwd: opentelemetry-appender-tracing
features: spec_unstable_logs_enabled
branchName: ${{ env.BRANCH_NAME }}
- uses: boa-dev/criterion-compare-action@v3
with:
cwd: opentelemetry-sdk
features: rt-tokio,testing,metrics,logs,spec_unstable_metrics_views
branchName: ${{ env.BRANCH_NAME }}

.github/workflows/ci.yml

@ -1,6 +1,8 @@
name: CI
env:
CI: true
permissions:
contents: read
on:
pull_request:
push:
@ -29,6 +31,11 @@ jobs:
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.rust == 'beta' }}
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- name: Free disk space
if: ${{ matrix.os == 'ubuntu-latest'}}
run: |
@ -36,16 +43,16 @@ jobs:
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
df -h
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- uses: dtolnay/rust-toolchain@master
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: ${{ matrix.rust }}
components: rustfmt
- name: "Set rustup profile"
run: rustup set profile minimal
- uses: arduino/setup-protoc@v3
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Test
@ -53,13 +60,22 @@ jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
components: rustfmt
- uses: arduino/setup-protoc@v3
toolchain: stable
components: rustfmt, clippy
- uses: taiki-e/install-action@0eee80d37f55e834144deec670972c19e81a85b0 # v2.56.0
with:
tool: cargo-hack
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Format
@ -69,55 +85,99 @@ jobs:
external-types:
strategy:
matrix:
example: [opentelemetry, opentelemetry-sdk, opentelemetry-otlp, opentelemetry-zipkin]
member: [opentelemetry, opentelemetry-sdk, opentelemetry-otlp, opentelemetry-zipkin]
runs-on: ubuntu-latest # TODO: Check if this could be covered for Windows. The step used currently fails on Windows.
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@nightly
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
toolchain: nightly-2024-06-30
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
# Rust version should be kept in sync with the one the release was tested with
# https://github.com/awslabs/cargo-check-external-types/releases
toolchain: nightly-2025-05-04
components: rustfmt
- uses: taiki-e/install-action@0eee80d37f55e834144deec670972c19e81a85b0 # v2.56.0
with:
tool: cargo-check-external-types@0.2.0
- name: external-type-check
run: |
cargo install cargo-check-external-types@0.1.13
cd ${{ matrix.example }}
cargo check-external-types --config allowed-external-types.toml
working-directory: ${{ matrix.member }}
run: cargo check-external-types --all-features --config allowed-external-types.toml
msrv:
strategy:
matrix:
os: [windows-latest, ubuntu-latest]
rust: [1.75.0]
runs-on: ${{ matrix.os }}
continue-on-error: true
steps:
- uses: actions/checkout@v4
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Set up Rust ${{ matrix.rust }}
uses: dtolnay/rust-toolchain@master
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: ${{ matrix.rust }}
- name: Patch dependencies versions
run: bash ./scripts/patch_dependencies.sh
toolchain: stable
- uses: taiki-e/install-action@0eee80d37f55e834144deec670972c19e81a85b0 # v2.56.0
with:
tool: cargo-msrv
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Check MSRV for all crates
run: bash ./scripts/msrv.sh ${{ matrix.rust }}
run: bash ./scripts/msrv.sh
cargo-deny:
runs-on: ubuntu-latest # This uses the step `EmbarkStudios/cargo-deny-action@v1` which is only supported on Linux
continue-on-error: true # Prevent sudden announcement of a new advisory from failing ci
steps:
- uses: actions/checkout@v4
- uses: EmbarkStudios/cargo-deny-action@v1
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Check advisories
uses: EmbarkStudios/cargo-deny-action@30f817c6f72275c6d54dc744fbca09ebc958599f # v2.0.12
with:
command: check advisories
- name: Check licenses
uses: EmbarkStudios/cargo-deny-action@30f817c6f72275c6d54dc744fbca09ebc958599f # v2.0.12
with:
command: check licenses
- name: Check bans
uses: EmbarkStudios/cargo-deny-action@30f817c6f72275c6d54dc744fbca09ebc958599f # v2.0.12
with:
command: check bans
- name: Check sources
uses: EmbarkStudios/cargo-deny-action@30f817c6f72275c6d54dc744fbca09ebc958599f # v2.0.12
with:
command: check sources
docs:
continue-on-error: true
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: stable
components: rustfmt
- uses: arduino/setup-protoc@v3
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: doc
@ -130,26 +190,72 @@ jobs:
runs-on: ubuntu-latest
if: ${{ ! contains(github.event.pull_request.labels.*.name, 'dependencies') }}
steps:
- uses: actions/checkout@v4
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: stable
components: rustfmt,llvm-tools-preview
- uses: arduino/setup-protoc@v3
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: cargo install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@0eee80d37f55e834144deec670972c19e81a85b0 # v2.56.0
with:
tool: cargo-llvm-cov
- name: cargo generate-lockfile
if: hashFiles('Cargo.lock') == ''
run: cargo generate-lockfile
- name: cargo llvm-cov
run: cargo llvm-cov --locked --all-features --workspace --lcov --lib --output-path lcov.info
- name: Upload to codecov.io
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
with:
fail_ci_if_error: true
build-examples:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # stable
with:
toolchain: stable
components: rustfmt
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Build examples
run: |
for example in examples/*; do
if [ -d "$example" ]; then
echo "Building $example"
cargo build
fi
done
cargo-machete:
continue-on-error: true
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: stable
- uses: taiki-e/install-action@0eee80d37f55e834144deec670972c19e81a85b0 # v2.56.0
with:
tool: cargo-machete
- name: cargo machete
run: cargo machete

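As written, the `build-examples` loop above echoes each example directory but then runs a bare `cargo build` from the repository root on every iteration. A hedged sketch of a variant that builds each example crate individually (assuming each `examples/*` directory is a standalone Cargo package):

```sh
for example in examples/*; do
  if [ -d "$example" ]; then
    echo "Building $example"
    # Point cargo at the example's own manifest rather than the
    # workspace root, so each iteration builds that example.
    cargo build --manifest-path "$example/Cargo.toml"
  fi
done
```
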
.github/workflows/codeql-analysis.yml (new file, +45)

@ -0,0 +1,45 @@
name: "CodeQL Analysis"
env:
CODEQL_ENABLE_EXPERIMENTAL_FEATURES : true # CodeQL support for Rust is experimental
permissions:
contents: read
on:
pull_request:
push:
branches: [main]
workflow_dispatch:
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
security-events: write # for github/codeql-action/autobuild to send a status report
strategy:
fail-fast: false
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Initialize CodeQL
uses: github/codeql-action/init@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
with:
languages: rust
- name: Autobuild
uses: github/codeql-action/autobuild@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2

.github/workflows/fossa.yml

@ -12,9 +12,14 @@ jobs:
fossa:
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: fossas/fossa-action@93a52ecf7c3ac7eb40f5de77fd69b1a19524de94 # v1.5.0
- uses: fossas/fossa-action@3ebcea1862c6ffbd5cf1b4d0bd6b3fe7bd6f2cac # v1.7.0
with:
api-key: ${{secrets.FOSSA_API_KEY}}
team: OpenTelemetry

Integration tests workflow

@ -5,24 +5,33 @@ on:
pull_request:
types: [ labeled, synchronize, opened, reopened ]
permissions:
contents: read
jobs:
integration_tests:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- name: Free disk space
run: |
df -h
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
df -h
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: stable
components: rustfmt
- uses: arduino/setup-protoc@v3
- uses: arduino/setup-protoc@c65c819552d16ad3c9b72d9dfd5ba5237b9c906b # v3.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run integration tests

Markdown link check workflow

@ -8,14 +8,22 @@ on:
paths:
- '**/*.md'
permissions:
contents: read
jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install markdown-link-check
run: npm install -g markdown-link-check@3.11.2
run: npm install -g "git://github.com/tcort/markdown-link-check.git#ef7e09486e579ba7479700b386e7ca90f34cbd0a" # v3.13.7
- name: Run markdown-link-check
run: |

.github/workflows/ossf-scorecard.yml (new file, +53)

@ -0,0 +1,53 @@
name: OSSF Scorecard
on:
push:
branches:
- main
schedule:
- cron: "50 3 * * 0" # once a week
workflow_dispatch:
permissions:
contents: read
jobs:
analysis:
runs-on: ubuntu-latest
permissions:
# Needed for Code scanning upload
security-events: write
# Needed for GitHub OIDC token if publish_results is true
id-token: write
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable
# uploads of run results in SARIF format to the repository Actions tab.
# https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
- name: "Upload artifact"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@181d5eefc20863364f96762470ba6f862bdef56b # v3.29.2
with:
sarif_file: results.sarif

PR title validation workflow

@ -4,12 +4,20 @@ on:
pull_request:
types: [opened, synchronize, reopened, edited]
permissions:
contents: read
jobs:
validate-pr-title:
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- name: PR Conventional Commit Validation
uses: ytanikin/pr-conventional-commits@1.4.1
uses: ytanikin/pr-conventional-commits@8267db1bacc237419f9ed0228bb9d94e94271a1d # 1.4.1
with:
task_types: '["build","chore","ci","docs","feat","fix","perf","refactor","revert","test"]'
add_label: 'false'

Semver compliance workflow

@ -1,6 +1,8 @@
name: Semver compliance
env:
CI: true
permissions:
contents: read
on:
pull_request:
types: [ labeled, synchronize, opened, reopened ]
@ -10,12 +12,18 @@ jobs:
timeout-minutes: 10
if: ${{ github.event.label.name == 'semver-check' || contains(github.event.pull_request.labels.*.name, 'semver-check') }}
steps:
- uses: actions/checkout@v4
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@6c439dc8bdf85cadbbce9ed30d1c7b959517bc49 # v2.12.2
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Install stable
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
with:
toolchain: stable
components: rustfmt
- name: cargo-semver-checks
uses: obi1kenobi/cargo-semver-checks-action@v2.6
uses: obi1kenobi/cargo-semver-checks-action@5b298c9520f7096a4683c0bd981a7ac5a7e249ae # v2.8

.gitmodules (+2)

@ -1,4 +1,4 @@
[submodule "opentelemetry-proto/src/proto/opentelemetry-proto"]
path = opentelemetry-proto/src/proto/opentelemetry-proto
url = https://github.com/open-telemetry/opentelemetry-proto
branch = tags/v1.0.0
branch = tags/v1.5.0

Cargo.toml (workspace manifest)

@ -19,7 +19,6 @@ exclude = ["opentelemetry-prometheus"]
debug = 1
[workspace.dependencies]
async-std = "1.13"
async-trait = "0.1"
bytes = "1"
criterion = "0.5"
@ -42,15 +41,15 @@ serde = { version = "1.0", default-features = false }
serde_json = "1.0"
temp-env = "0.3.6"
thiserror = { version = "2", default-features = false }
tonic = { version = "0.12.3", default-features = false }
tonic-build = "0.12"
tonic = { version = "0.13", default-features = false }
tonic-build = "0.13"
tokio = { version = "1", default-features = false }
tokio-stream = "0.1"
# Using `tracing 0.1.40` because 0.1.39 (which is yanked) introduces the ability to set event names in macros,
# required for OpenTelemetry's internal logging macros.
tracing = { version = ">=0.1.40", default-features = false }
# `tracing-core >=0.1.33` is required for compatibility with `tracing >=0.1.40`.
tracing-core = { version = ">=0.1.33", default-features = false }
tracing-subscriber = { version = "0.3", default-features = false }
url = { version = "2.5", default-features = false }
anyhow = "1.0.94"
@ -60,8 +59,7 @@ ctor = "0.2.9"
ctrlc = "3.2.5"
futures-channel = "0.3"
futures-sink = "0.3"
glob = "0.3.1"
hex = "0.4.3"
const-hex = "1.14.1"
lazy_static = "1.4.0"
num-format = "0.4.4"
num_cpus = "1.15.0"
@ -75,7 +73,7 @@ sysinfo = "0.32"
tempfile = "3.3.0"
testcontainers = "0.23.1"
tracing-log = "0.2"
tracing-opentelemetry = "0.29"
tracing-opentelemetry = "0.31"
typed-builder = "0.20"
uuid = "1.3"

README.md

@ -3,10 +3,11 @@
The Rust [OpenTelemetry](https://opentelemetry.io/) implementation.
[![Crates.io: opentelemetry](https://img.shields.io/crates/v/opentelemetry.svg)](https://crates.io/crates/opentelemetry)
[![Documentation](https://docs.rs/opentelemetry/badge.svg)](https://docs.rs/opentelemetry)
[![LICENSE](https://img.shields.io/crates/l/opentelemetry)](./LICENSE)
[![GitHub Actions CI](https://github.com/open-telemetry/opentelemetry-rust/workflows/CI/badge.svg)](https://github.com/open-telemetry/opentelemetry-rust/actions?query=workflow%3ACI+branch%3Amain)
[![codecov](https://codecov.io/gh/open-telemetry/opentelemetry-rust/branch/main/graph/badge.svg)](https://codecov.io/gh/open-telemetry/opentelemetry-rust)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/open-telemetry/opentelemetry-rust/badge)](https://scorecard.dev/viewer/?uri=github.com/open-telemetry/opentelemetry-rust)
[![Slack](https://img.shields.io/badge/slack-@cncf/otel/rust-brightgreen.svg?logo=slack)](https://cloud-native.slack.com/archives/C03GDP0H023)
## Overview
@ -34,11 +35,11 @@ documentation.
| Baggage | RC |
| Propagators | Beta |
| Logs-API | Stable* |
| Logs-SDK | Stable |
| Logs-OTLP Exporter | RC |
| Logs-Appender-Tracing | Stable |
| Metrics-API | Stable |
| Metrics-SDK | RC |
| Metrics-SDK | Stable |
| Metrics-OTLP Exporter | RC |
| Traces-API | Beta |
| Traces-SDK | Beta |
@ -179,25 +180,33 @@ you're more than welcome to participate!
### Maintainers
* [Cijo Thomas](https://github.com/cijothomas)
* [Cijo Thomas](https://github.com/cijothomas), Microsoft
* [Harold Dost](https://github.com/hdost)
* [Lalit Kumar Bhasin](https://github.com/lalitb)
* [Utkarsh Umesan Pillai](https://github.com/utpilla)
* [Lalit Kumar Bhasin](https://github.com/lalitb), Microsoft
* [Utkarsh Umesan Pillai](https://github.com/utpilla), Microsoft
* [Zhongyang Wu](https://github.com/TommyCpp)
For more information about the maintainer role, see the [community repository](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#maintainer).
### Approvers
* [Shaun Cox](https://github.com/shaun-cox)
* [Scott Gerring](https://github.com/scottgerring)
* [Anton Grübel](https://github.com/gruebel), Baz
* [Björn Antonsson](https://github.com/bantonsson), Datadog
* [Scott Gerring](https://github.com/scottgerring), Datadog
* [Shaun Cox](https://github.com/shaun-cox), Microsoft
For more information about the approver role, see the [community repository](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#approver).
### Emeritus
* [Dirkjan Ochtman](https://github.com/djc)
* [Isobel Redelmeier](https://github.com/iredelmeier)
* [Jan Kühle](https://github.com/frigus02)
* [Julian Tescher](https://github.com/jtescher)
* [Mike Goldsmith](https://github.com/MikeGoldsmith)
For more information about the emeritus role, see the [community repository](https://github.com/open-telemetry/community/blob/main/guides/contributor/membership.md#emeritus-maintainerapprovertriager).
### Thanks to all the people who have contributed
[![contributors](https://contributors-img.web.app/image?repo=open-telemetry/opentelemetry-rust)](https://github.com/open-telemetry/opentelemetry-rust/graphs/contributors)

deny.toml

@ -1,20 +1,39 @@
exclude=[
"actix-http",
"actix-http-tracing",
"actix-udp",
"actix-udp-example",
"tracing-grpc",
"http"
]
[graph]
exclude=[]
[licenses]
unlicensed = "deny"
allow = [
"MIT",
"Apache-2.0",
"ISC",
"BSD-3-Clause",
"OpenSSL"
]
exceptions = [
{ allow = ["CDLA-Permissive-2.0"], crate = "webpki-roots" }, # This crate is a dependency of `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_collections" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_locid" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_locid_transform" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_locid_transform_data" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_locale_core" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_normalizer" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_normalizer_data" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_properties" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_properties_data" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_provider" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "icu_provider_macros" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "potential_utf" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "litemap" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "tinystr" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "writeable" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "unicode-ident" }, # This crate gets used transitively by `reqwest` and other crates.
{ allow = ["Unicode-3.0"], crate = "yoke" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "yoke-derive" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "zerovec" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "zerotrie" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "zerovec-derive" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "zerofrom" }, # This crate gets used transitively by `reqwest`.
{ allow = ["Unicode-3.0"], crate = "zerofrom-derive" }, # This crate gets used transitively by `reqwest`.
]
[licenses.private]
@ -28,6 +47,31 @@ license-files = [
{ path = "LICENSE", hash = 0xbd0eed23 }
]
# This section is considered when running `cargo deny check advisories`
# More documentation for the advisories section can be found here:
# https://embarkstudios.github.io/cargo-deny/checks/advisories/cfg.html
[advisories]
unmaintained = "allow"
yanked = "allow"
unmaintained = "none"
yanked = "deny"
# This section is considered when running `cargo deny check bans`.
# More documentation about the 'bans' section can be found here:
# https://embarkstudios.github.io/cargo-deny/checks/bans/cfg.html
[bans]
# Lint level for when multiple versions of the same crate are detected
multiple-versions = "warn"
# Lint level for when a crate version requirement is `*`
wildcards = "warn"
# The graph highlighting used when creating dotgraphs for crates
# with multiple versions
# * lowest-version - The path to the lowest versioned duplicate is highlighted
# * simplest-path - The path to the version with the fewest edges is highlighted
# * all - Both lowest-version and simplest-path are used
highlight = "all"
# This section is considered when running `cargo deny check sources`.
# More documentation about the 'sources' section can be found here:
# https://embarkstudios.github.io/cargo-deny/checks/sources/cfg.html
[sources]
unknown-registry = "deny"
unknown-git = "deny"

Error handling ADR

@ -4,13 +4,13 @@
## Summary
This ADR describes the general pattern we will follow when modelling errors in public API interfaces - that is, APIs that are exposed to users of the project's published crates. It summarises the discussion and final option from [#2571](https://github.com/open-telemetry/opentelemetry-rust/issues/2571); for more context check out that issue.
This ADR describes the general pattern we will follow when modelling errors in public API interfaces - that is, APIs that are exposed to users of the project's published crates. It summarizes the discussion and final option from [#2571](https://github.com/open-telemetry/opentelemetry-rust/issues/2571); for more context check out that issue.
We will focus on the exporter traits in this example, but the outcome should be applied to _all_ public traits and their fallible operations.
These include [SpanExporter](https://github.com/open-telemetry/opentelemetry-rust/blob/eca1ce87084c39667061281e662d5edb9a002882/opentelemetry-sdk/src/trace/export.rs#L18), [LogExporter](https://github.com/open-telemetry/opentelemetry-rust/blob/eca1ce87084c39667061281e662d5edb9a002882/opentelemetry-sdk/src/logs/export.rs#L115), and [PushMetricExporter](https://github.com/open-telemetry/opentelemetry-rust/blob/eca1ce87084c39667061281e662d5edb9a002882/opentelemetry-sdk/src/metrics/exporter.rs#L11) which form part of the API surface of `opentelemetry-sdk`.
There are various ways to handle errors on trait methods, including swallowing them and logging, panicing, returning a shared global error, or returning a method-specific error. We strive for consistency, and we want to be sure that we've put enough thought into what this looks like that we don't have to make breaking interface changes unecessarily in the future.
There are various ways to handle errors on trait methods, including swallowing them and logging, panicking, returning a shared global error, or returning a method-specific error. We strive for consistency, and we want to be sure that we've put enough thought into what this looks like that we don't have to make breaking interface changes unnecessarily in the future.
## Design Guidance
@ -69,7 +69,7 @@ trait MyTrait {
## 3. Consolidate error types between signals where we can, let them diverge where we can't
Consider the `Exporter`s mentioned earlier. Each of them has the same failure indicators - as dicated by the OpenTelemetry spec - and we will
Consider the `Exporter`s mentioned earlier. Each of them has the same failure indicators - as dictated by the OpenTelemetry spec - and we will
share the error types accordingly:
**Don't do this** - each signal has its own error type, despite having exactly the same failure cases:
@ -171,3 +171,5 @@ Note that at the time of writing, there is no instance we have identified within
We will use [thiserror](https://docs.rs/thiserror/latest/thiserror/) by default to implement Rust's [error trait](https://doc.rust-lang.org/core/error/trait.Error.html).
This keeps our code clean, and as it does not appear in our interface, we can choose to replace any particular usage with a hand-rolled implementation should we need to.
### 6. Don't use `#[non_exhaustive]` by default
If an `Error` response set is closed - if we can confidently say it is very unlikely to gain new variants in the future - we should not annotate it with `#[non_exhaustive]`. By way of example, the variants of the exporter error types described above are exhaustively documented in the OpenTelemetry Specification, and we can confidently say that we do not expect new variants.
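As a hedged illustration (the variant set below is a sketch, not the exact SDK definition), a closed error enum modeled this way would simply omit the attribute:

```rust
use std::time::Duration;
use thiserror::Error;

// Sketch of a closed error set: the failure cases come from the
// OpenTelemetry Specification, so no #[non_exhaustive] is needed
// and callers can match on it exhaustively.
#[derive(Debug, Error)]
pub enum ExporterError {
    #[error("exporter already shut down")]
    AlreadyShutdown,
    #[error("export timed out after {0:?}")]
    Timeout(Duration),
    #[error("internal failure: {0}")]
    InternalFailure(String),
}
```

Because the enum is exhaustive, adding a variant later would be a breaking change, which is exactly the trade-off the ADR accepts for closed sets.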

Internal logging documentation

@ -345,7 +345,25 @@ only meant for OTel components itself and anyone writing extensions like custom
Exporters etc.
// TODO: Document the principles followed when selecting severity for internal
logs // TODO: Document how this can cause circular loop and plans to address it.
logs
When OpenTelemetry components generate logs that could potentially feed back
into OpenTelemetry, this can result in what is known as "telemetry-induced
telemetry." To address this, OpenTelemetry provides a mechanism to suppress such
telemetry using the `Context`. Components are expected to mark telemetry as
suppressed within a specific `Context` by invoking
`Context::enter_telemetry_suppressed_scope()`. The Logs SDK implementation
checks this flag in the current `Context` and ignores logs if suppression is
enabled.
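A minimal sketch of entering a suppressed scope (only the method name comes from this document; the guard-based usage shown here is an assumption to be checked against the crate docs):

```rust
use opentelemetry::Context;

fn export_batch() {
    // Mark telemetry as suppressed for everything under this scope.
    // The returned guard is assumed to restore the previous context
    // when dropped.
    let _guard = Context::enter_telemetry_suppressed_scope();

    // Any log records emitted here, e.g. by an HTTP client used for
    // exporting, are checked against the suppression flag in the
    // current Context and ignored by the Logs SDK.
}
```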
This mechanism relies on proper in-process propagation of the `Context`.
However, external libraries like `hyper` and `tonic`, which are used by
OpenTelemetry in its OTLP Exporters, do not propagate OpenTelemetry's `Context`.
As a result, the suppression mechanism does not work out-of-the-box to suppress
logs originating from these libraries.
// TODO: Document how OTLP can solve this issue without asking external
crates to respect and propagate OTel Context.
## Summary

docs/metrics.md (new file, +767)

@ -0,0 +1,767 @@
# OpenTelemetry Rust Metrics
Status: **Work-In-Progress**
<details>
<summary>Table of Contents</summary>
* [Introduction](#introduction)
* [Best Practices](#best-practices)
* [Metrics API](#metrics-api)
* [Meter](#meter)
* [Instruments](#instruments)
* [Reporting measurements - use array slices for
attributes](#reporting-measurements---use-array-slices-for-attributes)
* [Reporting measurements via synchronous
instruments](#reporting-measurements-via-synchronous-instruments)
* [Reporting measurements via asynchronous
instruments](#reporting-measurements-via-asynchronous-instruments)
* [MeterProvider Management](#meterprovider-management)
* [Memory Management](#memory-management)
* [Example](#example)
* [Pre-Aggregation](#pre-aggregation)
* [Pre-Aggregation Benefits](#pre-aggregation-benefits)
* [Cardinality Limits](#cardinality-limits)
* [Cardinality Limits - Implications](#cardinality-limits---implications)
* [Cardinality Limits - Example](#cardinality-limits---example)
* [Memory Preallocation](#memory-preallocation)
* [Metrics Correlation](#metrics-correlation)
* [Modelling Metric Attributes](#modelling-metric-attributes)
* [Common Issues Leading to Missing
Metrics](#common-issues-that-lead-to-missing-metrics)
</details>
## Introduction
This document provides comprehensive guidance on leveraging OpenTelemetry
metrics in Rust applications. Whether you're tracking request counts, monitoring
response times, or analyzing resource utilization, this guide equips you with
the knowledge to implement robust and efficient metrics collection.
It covers best practices, API usage patterns, memory management techniques, and
advanced topics to help you design effective metrics solutions while steering
clear of common challenges.
## Best Practices
// TODO: Add link to the examples, once they are modified to show best
practices.
## Metrics API
### Meter
[Meter](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.Meter.html)
provides the ability to create instruments for recording measurements or
accepting callbacks to report measurements.
:stop_sign: You should avoid creating duplicate
[`Meter`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.Meter.html)
instances with the same name. `Meter` is fairly expensive and meant to be reused
throughout the application. For most applications, a `Meter` should be obtained
from `global` and saved for re-use.
> [!IMPORTANT] Create your `Meter` instance once at initialization time and
> store it for reuse throughout your application's lifecycle.
The fully qualified module name might be a good option for the Meter name.
Optionally, one may create a meter with version, schema_url, and additional
meter-level attributes as well. Both approaches are demonstrated below.
```rust
use opentelemetry::global;
use opentelemetry::InstrumentationScope;
use opentelemetry::KeyValue;
let scope = InstrumentationScope::builder("my_company.my_product.my_library")
    .with_version("0.17")
    .with_schema_url("https://opentelemetry.io/schema/1.2.0")
    .with_attributes([KeyValue::new("key", "value")])
    .build();

// Creating a Meter with an InstrumentationScope, comprising
// name, version, schema_url, and attributes.
let meter = global::meter_with_scope(scope);

// Creating a Meter with just a name.
let meter = global::meter("my_company.my_product.my_library");
```
### Instruments
OpenTelemetry defines several types of metric instruments, each optimized for
specific usage patterns. The following table maps OpenTelemetry Specification
instruments to their corresponding Rust SDK types.
:heavy_check_mark: You should understand and pick the right instrument type.
> [!NOTE] Picking the right instrument type for your use case is crucial to
> ensure the correct semantics and performance. Check the [Instrument
Selection](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/supplementary-guidelines.md#instrument-selection)
section from the supplementary guidelines for more information.
| OpenTelemetry Specification | OpenTelemetry Rust Instrument Type |
| --------------------------- | -------------------- |
| [Asynchronous Counter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#asynchronous-counter) | [`ObservableCounter`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.ObservableCounter.html) |
| [Asynchronous Gauge](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#asynchronous-gauge) | [`ObservableGauge`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.ObservableGauge.html) |
| [Asynchronous UpDownCounter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#asynchronous-updowncounter) | [`ObservableUpDownCounter`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.ObservableUpDownCounter.html) |
| [Counter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#counter) | [`Counter`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.Counter.html) |
| [Gauge](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#gauge) | [`Gauge`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.Gauge.html) |
| [Histogram](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#histogram) | [`Histogram`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.Histogram.html) |
| [UpDownCounter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#updowncounter) | [`UpDownCounter`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/struct.UpDownCounter.html) |
:stop_sign: You should avoid creating duplicate instruments (e.g., `Counter`)
with the same name. Instruments are fairly expensive and meant to be reused
throughout the application. For most applications, an instrument should be
created once and saved for re-use. Instruments can also be cloned to create
multiple handles to the same instrument, but cloning should not occur on the hot
path. Instead, the cloned instance should be stored and reused.
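A minimal sketch of this pattern, using a hypothetical `requests_counter`
helper, might look like:

```rust
use std::sync::OnceLock;

use opentelemetry::global;
use opentelemetry::metrics::Counter;

// Create the instrument once and hand out the same handle everywhere.
fn requests_counter() -> &'static Counter<u64> {
    static COUNTER: OnceLock<Counter<u64>> = OnceLock::new();
    COUNTER.get_or_init(|| {
        global::meter("my_company.my_product.my_library")
            .u64_counter("requests")
            .build()
    })
}
```

Call sites can then invoke `requests_counter().add(1, &[...])` on the hot path
without re-creating the instrument.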
:stop_sign: Do NOT use invalid instrument names.
> [!NOTE] OpenTelemetry will not collect metrics from instruments that are using
> invalid names. Refer to the [OpenTelemetry
Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument-name-syntax)
for the valid syntax.
:stop_sign: You should avoid changing the order of attributes while reporting
measurements.
> [!WARNING] The last line of code below performs poorly because its attributes
> do not follow the same order as the preceding calls:
```rust
let counter = meter.u64_counter("fruits_sold").build();
counter.add(2, &[KeyValue::new("color", "red"), KeyValue::new("name", "apple")]);
counter.add(3, &[KeyValue::new("color", "green"), KeyValue::new("name", "lime")]);
counter.add(5, &[KeyValue::new("color", "yellow"), KeyValue::new("name", "lemon")]);
counter.add(8, &[KeyValue::new("name", "lemon"), KeyValue::new("color", "yellow")]); // bad performance
```
:heavy_check_mark: If feasible, provide the attributes sorted by `Key`s in
ascending order to minimize memory usage within the Metrics SDK. Using
consistent attribute ordering allows the SDK to efficiently reuse internal data
structures.
```rust
// Good practice: Consistent attribute ordering
let counter = meter.u64_counter("fruits_sold").build();
counter.add(2, &[KeyValue::new("color", "red"), KeyValue::new("name", "apple")]);
```
### Reporting measurements - use array slices for attributes
:heavy_check_mark: When reporting measurements, use array slices for attributes
rather than creating vectors. Arrays are more efficient as they avoid
unnecessary heap allocations on the measurement path. This is true for both
synchronous and observable instruments.
```rust
// Good practice: Using an array slice directly
counter.add(2, &[KeyValue::new("color", "red"), KeyValue::new("name", "apple")]);
// Observable instrument
let _observable_counter = meter
    .u64_observable_counter("request_count")
    .with_description("Counts HTTP requests")
    .with_unit("requests") // Optional: Adding units improves context
    .with_callback(|observer| {
        // Good practice: Using an array slice directly
        observer.observe(
            100,
            &[KeyValue::new("endpoint", "/api")],
        )
    })
    .build();

// Avoid this: Creating a Vec is unnecessary, and it allocates on the heap each time
// counter.add(2, &vec![KeyValue::new("color", "red"), KeyValue::new("name", "apple")]);
```
### Reporting measurements via synchronous instruments
:heavy_check_mark: Use synchronous Counter when you need to increment counts at
specific points in your code:
```rust
// Example: Using Counter when incrementing at specific code points
use opentelemetry::KeyValue;
fn process_item(counter: &opentelemetry::metrics::Counter<u64>, item_type: &str) {
    // Process item...

    // Increment the counter with the item type as an attribute
    counter.add(1, &[KeyValue::new("type", item_type)]);
}
```
### Reporting measurements via asynchronous instruments
Asynchronous instruments like `ObservableCounter` are ideal for reporting
metrics that are already being tracked or stored elsewhere in your application.
These instruments allow you to observe and report the current state of such
metrics.
:heavy_check_mark: Use `ObservableCounter` when you already have a variable
tracking a count:
```rust
// Example: Using ObservableCounter when you already have a variable tracking counts
use opentelemetry::KeyValue;
use std::sync::atomic::{AtomicU64, Ordering};
// An existing variable in your application
static REQUEST_COUNT: AtomicU64 = AtomicU64::new(0);
// In your application code, you update this directly
fn handle_request() {
    // Process request...
    REQUEST_COUNT.fetch_add(1, Ordering::SeqCst);
}

// When setting up metrics, register an observable counter that reads from your variable
fn setup_metrics(meter: &opentelemetry::metrics::Meter) {
    let _observable_counter = meter
        .u64_observable_counter("request_count")
        .with_description("Number of requests processed")
        .with_unit("requests")
        .with_callback(|observer| {
            // Read the current value from your existing counter
            observer.observe(
                REQUEST_COUNT.load(Ordering::SeqCst),
                &[KeyValue::new("endpoint", "/api")],
            )
        })
        .build();
}
```
> [!NOTE] The callbacks in the Observable instruments are invoked by the SDK
> during each export cycle.
## MeterProvider Management
Most use-cases require you to create ONLY one instance of MeterProvider. You
should NOT create multiple instances of MeterProvider unless you have some
unusual requirement of having different export strategies within the same
application. Using multiple instances of MeterProvider requires users to
exercise caution.
// TODO: Mention creating a per-thread MeterProvider, as shown in
// [this PR](https://github.com/open-telemetry/opentelemetry-rust/pull/2659).
:heavy_check_mark: Properly manage the lifecycle of `MeterProvider` instances if
you create them. Creating a MeterProvider is typically done at application
startup. Follow these guidelines:
* **Cloning**: A `MeterProvider` is a handle to an underlying provider. Cloning
it creates a new handle pointing to the same provider. Clone the
`MeterProvider` when necessary, but re-use the cloned instance instead of
repeatedly cloning it.
* **Set as Global Provider**: Use `opentelemetry::global::set_meter_provider` to
set a clone of the `MeterProvider` as the global provider. This ensures
consistent usage across the application, allowing applications and libraries
to obtain `Meter` from the global instance.
* **Shutdown**: Explicitly call `shutdown` on the `MeterProvider` at the end of
your application to ensure all metrics are properly flushed and exported.
> [!NOTE] If you did not use `opentelemetry::global::set_meter_provider` to set
> a clone of the `MeterProvider` as the global provider, then you should be
> aware that dropping the last instance of `MeterProvider` implicitly calls
> shutdown on the provider.
:heavy_check_mark: Always call `shutdown` on the `MeterProvider` at the end of
your application to ensure proper cleanup.
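Putting these guidelines together, a minimal sketch of the lifecycle (reader
and exporter configuration omitted) might look like:

```rust
use opentelemetry::global;
use opentelemetry_sdk::metrics::SdkMeterProvider;

fn main() {
    // Created once at application startup.
    let provider = SdkMeterProvider::builder().build();

    // Set a clone as the global provider; libraries can now obtain
    // Meters via `global::meter(...)`.
    global::set_meter_provider(provider.clone());

    // ... application work ...

    // Explicitly flush and export all pending metrics before exit.
    provider.shutdown().expect("MeterProvider shutdown failed");
}
```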
## Memory Management
In OpenTelemetry,
[measurements](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#measurement)
are reported via the metrics API. The SDK
[aggregates](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#aggregation)
metrics using certain algorithms and memory management strategies to achieve
good performance and efficiency. Here are the rules which OpenTelemetry Rust
follows while implementing the metrics aggregation logic:
1. [**Pre-Aggregation**](#pre-aggregation): aggregation occurs within the SDK.
2. [**Cardinality Limits**](#cardinality-limits): the aggregation logic respects
[cardinality
limits](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#cardinality-limits),
so the SDK does not use an indefinite amount of memory in the event of a
cardinality explosion.
3. [**Memory Preallocation**](#memory-preallocation): the SDK tries to
pre-allocate the memory it needs at instrument creation time.
### Example
Let us take the following example of OpenTelemetry Rust metrics being used to
track the number of fruits sold:
* During the time range (T0, T1]:
* value = 1, color = `red`, name = `apple`
* value = 2, color = `yellow`, name = `lemon`
* During the time range (T1, T2]:
* no fruit has been sold
* During the time range (T2, T3]:
* value = 5, color = `red`, name = `apple`
* value = 2, color = `green`, name = `apple`
* value = 4, color = `yellow`, name = `lemon`
* value = 2, color = `yellow`, name = `lemon`
* value = 1, color = `yellow`, name = `lemon`
* value = 3, color = `yellow`, name = `lemon`
### Example - Cumulative Aggregation Temporality
If we aggregate and export the metrics using [Cumulative Aggregation
Temporality](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md#temporality):
* (T0, T1]
* attributes: {color = `red`, name = `apple`}, count: `1`
* attributes: {color = `yellow`, name = `lemon`}, count: `2`
* (T0, T2]
* attributes: {color = `red`, name = `apple`}, count: `1`
* attributes: {color = `yellow`, name = `lemon`}, count: `2`
* (T0, T3]
* attributes: {color = `red`, name = `apple`}, count: `6`
* attributes: {color = `green`, name = `apple`}, count: `2`
* attributes: {color = `yellow`, name = `lemon`}, count: `12`
Note that the start time is not advanced, and the exported values are the
cumulative total of what happened since the beginning.
### Example - Delta Aggregation Temporality
If we aggregate and export the metrics using [Delta Aggregation
Temporality](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/data-model.md#temporality):
* (T0, T1]
* attributes: {color = `red`, name = `apple`}, count: `1`
* attributes: {color = `yellow`, name = `lemon`}, count: `2`
* (T1, T2]
* nothing, since no measurements were received
* (T2, T3]
* attributes: {color = `red`, name = `apple`}, count: `5`
* attributes: {color = `green`, name = `apple`}, count: `2`
* attributes: {color = `yellow`, name = `lemon`}, count: `10`
Note that the start time is advanced after each export, and only the delta since
last export is exported, allowing the SDK to "forget" previous state.
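Temporality is typically chosen when configuring the exporter. As a rough
sketch with the stdout exporter (assuming the exporter's builder exposes
`with_temporality`):

```rust
use opentelemetry_sdk::metrics::{SdkMeterProvider, Temporality};

// Build a stdout exporter that reports deltas rather than cumulative totals.
let exporter = opentelemetry_stdout::MetricExporterBuilder::default()
    .with_temporality(Temporality::Delta)
    .build();

let provider = SdkMeterProvider::builder()
    .with_periodic_exporter(exporter)
    .build();
```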
### Pre-Aggregation
Rather than exporting every individual measurement to the backend, OpenTelemetry
Rust aggregates data locally and only exports the aggregated metrics.
Using the [fruit example](#example), there are six measurements reported during
the time range `(T2, T3]`. Instead of exporting each individual measurement
event, the SDK aggregates them and exports only the summarized results. This
summarization process, illustrated in the following diagram, is known as
pre-aggregation:
```mermaid
graph LR
  subgraph SDK
    Instrument --> | Measurements | Pre-Aggregation[Pre-Aggregation]
  end
  subgraph Collector
    Aggregation
  end
  Pre-Aggregation --> | Metrics | Aggregation
```
In addition to the in-process aggregation performed by the OpenTelemetry Rust
Metrics SDK, further aggregations can be carried out by the Collector and/or the
metrics backend.
### Pre-Aggregation Benefits
Pre-aggregation offers several advantages:
1. **Reduced Data Volume**: Summarizes measurements before export, minimizing
network overhead and improving efficiency.
2. **Predictable Resource Usage**: Ensures consistent resource consumption by
applying [cardinality limits](#cardinality-limits) and [memory
preallocation](#memory-preallocation) during SDK initialization. In other
words, metrics memory/network usage remains capped, regardless of the volume
of measurements being made. This keeps resource utilization stable despite
fluctuations in traffic volume.
3. **Improved Performance**: Reduces serialization costs as we work with
aggregated data and not the numerous individual measurements. It also reduces
computational load on downstream systems, enabling them to focus on analysis
and storage.
> [!NOTE] There is no ability to opt out of pre-aggregation in OpenTelemetry.
### Cardinality Limits
The number of distinct combinations of attributes for a given metric is referred
to as the cardinality of that metric. Taking the [fruit example](#example), if
we know that we can only have apple/lemon as the name, red/yellow/green as the
color, then we can say the cardinality is 6 (i.e., 2 names × 3 colors = 6
combinations). No matter how many fruits we sell, we can always use the
following table to summarize the total number of fruits based on the name and
color.
| Color | Name | Count |
| ------ | ----- | ----- |
| red | apple | 6 |
| yellow | apple | 0 |
| green | apple | 2 |
| red | lemon | 0 |
| yellow | lemon | 12 |
| green | lemon | 0 |
In other words, we know how much memory and network are needed to collect and
transmit these metrics, regardless of the traffic pattern or volume.
In real-world applications, the cardinality can be extremely high. Imagine we
have a long-running service and we collect metrics with 7 attributes, each of
which can take 30 different values. We might eventually end up having to
remember the complete set of 30⁷ - or 21.87 billion - combinations! This
cardinality explosion is a well-known challenge in the metrics space. For
example, it can cause:
* Surprisingly high costs in the observability system
* Excessive memory consumption in your application
* Poor query performance in your metrics backend
* Potential denial-of-service vulnerability that could be exploited by bad
actors
[Cardinality
limit](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#cardinality-limits)
is a throttling mechanism which allows the metrics collection system to have
predictable and reliable behavior when there is a cardinality explosion, be it
due to a malicious attack or a developer making mistakes while writing code.
OpenTelemetry has a default cardinality limit of `2000` per metric. This limit
can be configured at the individual metric level using the [View
API](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#view)
leveraging the
[`cardinality_limit`](https://docs.rs/opentelemetry_sdk/latest/opentelemetry_sdk/metrics/struct.Stream.html#structfield.cardinality_limit)
setting.
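For example, a view along these lines can override the limit for a single
metric; the metric name and limit below are illustrative:

```rust
use opentelemetry_sdk::metrics::{Instrument, SdkMeterProvider, Stream};

// A view that overrides the cardinality limit for one metric.
let set_cardinality_limit = |i: &Instrument| {
    if i.name() == "http.server.request.duration" {
        Stream::builder().with_cardinality_limit(500).build().ok()
    } else {
        None
    }
};

let provider = SdkMeterProvider::builder()
    .with_view(set_cardinality_limit)
    .build();
```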
It's important to understand that this cardinality limit applies only at the
OpenTelemetry SDK level, not to the ultimate cardinality of the metric as seen
by the backend system. For example, while a single process might be limited to
2000 attribute combinations per metric, the actual backend metrics system might
see much higher cardinality due to:
1. Resource attributes (such as `service.instance.id`, `host.name`, etc.) that
can be added to each metric
2. Multiple process instances running the same application across your
infrastructure
3. The possibility of reporting different key-value pair combinations in each
export interval, as the cardinality limit only applies to the number of
distinct attribute combinations tracked during a single export interval.
(This is only applicable to Delta temporality)
Therefore, the actual cardinality in your metrics backend can be orders of
magnitude higher than what any single OpenTelemetry SDK process handles in an
export cycle.
#### Cardinality Limits - Implications
Cardinality limits are enforced for each export interval, meaning the metrics
aggregation system only allows up to the configured cardinality limit of
distinct attribute combinations per metric. Understanding how this works in
practice is important:
* **Cardinality Capping**: When the limit is reached within an export interval,
any new attribute combination is not individually tracked but instead folded
into a single aggregation with the attribute `{"otel.metric.overflow": true}`.
This preserves the overall accuracy of aggregates (such as Sum, Count, etc.)
even though information about specific attribute combinations is lost. Every
measurement is accounted for - either with its original attributes or within
the overflow bucket.
* **Temporality Effects**: The impact of cardinality limits differs based on the
temporality mode:
* **Delta Temporality**: The SDK "forgets" the state after each
collection/export cycle. This means in each new interval, the SDK can track
up to the cardinality limit of distinct attribute combinations. Over time,
your metrics backend might see far more than the configured limit of
distinct combinations from a single process.
* **Cumulative Temporality**: Since the SDK maintains state across export
intervals, once the cardinality limit is reached, new attribute combinations
will continue to be folded into the overflow bucket. The total number of
distinct attribute combinations exported cannot exceed the cardinality limit
for the lifetime of that metric instrument.
* **Impact on Monitoring**: While cardinality limits protect your system from
unbounded resource consumption, they do mean that high-cardinality attributes
may not be fully represented in your metrics. Since cardinality capping can
cause metrics to be folded into the overflow bucket, it becomes impossible to
predict which specific attribute combinations were affected across multiple
collection cycles or different service instances.
This unpredictability creates several important considerations when querying
metrics in any backend system:
* **Total Accuracy**: OpenTelemetry Metrics always ensures the total
aggregation (sum of metric values across all attributes) remains accurate,
even when overflow occurs.
* **Attribute-Based Query Limitations**: Any metric query based on specific
attributes could be misleading, as it's possible that measurements recorded
with a superset of those attributes were folded into the overflow bucket due
to cardinality capping.
* **All Attributes Affected**: When overflow occurs, it's not just
high-cardinality attributes that are affected. The entire attribute
combination is replaced with the `{"otel.metric.overflow": true}` attribute,
meaning queries for any attribute in that combination will miss data points.
#### Cardinality Limits - Example
Extending our fruit sales tracking example, imagine we set a cardinality limit
of 3 and we're tracking sales with attributes for `name`, `color`, and
`store_location`:
During a busy sales period at time (T3, T4], we record:
1. 10 red apples sold at Downtown store
2. 5 yellow lemons sold at Uptown store
3. 8 green apples sold at Downtown store
4. 3 red apples sold at Midtown store (at this point, the cardinality limit is
hit, and attributes are replaced with overflow attribute.)
The exported metrics would be:
* attributes: {color = `red`, name = `apple`, store_location = `Downtown`},
count: `10`
* attributes: {color = `yellow`, name = `lemon`, store_location = `Uptown`},
count: `5`
* attributes: {color = `green`, name = `apple`, store_location = `Downtown`},
count: `8`
* attributes: {`otel.metric.overflow` = `true`}, count: `3` ← Notice this
special overflow attribute
If we later query "How many red apples were sold?", the answer would be 10, not
13, because the Midtown sales were folded into the overflow bucket. Similarly, a
query like "How many items were sold in Midtown?" would return 0, not 3.
However, the total count across all attributes (i.e., the total number of
fruits sold in (T3, T4], which is 26) remains accurate.
This limitation applies regardless of whether the attribute in question is
naturally high-cardinality. Even low-cardinality attributes like "color"
become unreliable for querying if they were part of attribute combinations
that triggered overflow.
OpenTelemetry's cardinality capping is only applied to attributes provided
when reporting measurements via the [Metrics API](#metrics-api). In other
words, attributes used to create `Meter` or `Resource` attributes are not
subject to this cap.
#### Cardinality Limits - How to Choose the Right Limit
Choosing the right cardinality limit is crucial for maintaining efficient memory
usage and predictable performance in your metrics system. The optimal limit
depends on your temporality choice and application characteristics.
Setting the limit incorrectly can have consequences:
* **Limit too high**: Due to the SDK's [memory
preallocation](#memory-preallocation) strategy, excess memory will be
allocated upfront and remain unused, leading to resource waste.
* **Limit too low**: Measurements will be folded into the overflow bucket
(`{"otel.metric.overflow": true}`), losing granular attribute information and
making attribute-based queries unreliable.
Consider these guidelines when determining the appropriate limit:
##### Choosing the Right Limit for Cumulative Temporality
Cumulative metrics retain every unique attribute combination that has *ever*
been observed since the start of the process.
* You must account for the theoretical maximum number of attribute combinations.
* This can be estimated by multiplying the number of possible values for each
attribute.
* If certain attribute combinations are invalid or will never occur in practice,
you can reduce the limit accordingly.
###### Example - Fruit Sales Scenario
Attributes:
* `name` can be "apple" or "lemon" (2 values)
* `color` can be "red", "yellow", or "green" (3 values)
The theoretical maximum is 2 × 3 = 6 unique attribute sets.
For this example, the simplest approach is to use the theoretical maximum and **set the cardinality limit to 6**.
However, if you know that certain combinations will never occur (for example, if "red lemons" don't exist in your application domain), you could reduce the limit to only account for valid combinations. In this case, if only 5 combinations are valid, **setting the cardinality limit to 5** would be more memory-efficient.
##### Choosing the Right Limit for Delta Temporality
Delta metrics reset their aggregation state after every export interval. This
approach enables more efficient memory utilization by focusing only on attributes
observed during each interval rather than maintaining state for all combinations.
* **When attributes are low-cardinality** (as in the fruit example), use the
same calculation method as with cumulative temporality.
* **When high-cardinality attribute(s) exist** like `user_id`, leverage Delta
temporality's "forget state" nature to set a much lower limit based on active
usage patterns. This is where Delta temporality truly excels - when the set of
active values changes dynamically and only a small subset is active during any
given interval.
###### Example - High Cardinality Attribute Scenario
Export interval: 60 sec
Attributes:
* `user_id` (up to 1 million unique users)
* `success` (true or false, 2 values)
Theoretical limit: 1 million users × 2 = 2 million attribute sets
But if only 10,000 users are typically active during a 60 sec export interval:
10,000 × 2 = 20,000
**You can set the limit to 20,000, dramatically reducing memory usage during
normal operation.**
###### Export Interval Tuning
Shorter export intervals further reduce the required cardinality:
* If your interval is halved (e.g., from 60 sec to 30 sec), the number of unique
attribute sets seen per interval may also be halved.
> [!NOTE] More frequent exports increase CPU/network overhead due to
> serialization and transmission costs.
##### Choosing the Right Limit - Backend Considerations
While delta temporality offers certain advantages for cardinality management,
your choice may be constrained by backend support:
* **Backend Restrictions:** Some metrics backends only support cumulative
temporality. For example, Prometheus requires cumulative temporality and
cannot directly consume delta metrics.
* **Collector Conversion:** To leverage delta temporality's memory advantages
while maintaining backend compatibility, configure your SDK to use delta
temporality and deploy an OpenTelemetry Collector with a delta-to-cumulative
conversion processor. This approach pushes the memory overhead from your
application to the collector, which can be more easily scaled and managed
independently.
TODO: Add the memory cost incurred by each data point, so users can know the
memory impact of setting a higher limit.
TODO: Add example of how query can be affected when overflow occurs, use
[Aspire](https://github.com/dotnet/aspire/pull/7784) tool.
### Memory Preallocation
The OpenTelemetry Rust SDK aims to avoid memory allocation on the hot code
path. When combined with [proper use of the Metrics API](#metrics-api), heap
allocation can be avoided entirely while recording measurements.
## Metrics Correlation
Including `TraceId` and `SpanId` as attributes in metrics might seem like an
intuitive way to achieve correlation with traces or logs. However, this approach
is ineffective and can make metrics practically unusable. Moreover, it can
quickly lead to cardinality issues, resulting in metrics being capped.
A better alternative is to use a concept in OpenTelemetry called
[Exemplars](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/sdk.md#exemplar).
Exemplars provide a mechanism to correlate metrics with traces by sampling
specific measurements and attaching trace context to them.
> [!NOTE] Currently, exemplars are not yet implemented in the OpenTelemetry Rust
> SDK.
## Modelling Metric Attributes
When metrics are being collected, they normally get stored in a [time series
database](https://en.wikipedia.org/wiki/Time_series_database). From storage and
consumption perspective, metrics can be multi-dimensional. Taking the [fruit
example](#example), there are two attributes - "name" and "color". For basic
scenarios, all the attributes can be reported during the [Metrics
API](#metrics-api) invocation. However, for less trivial scenarios, the
attributes can come from different sources:
* [Measurements](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#measurement)
reported via the [Metrics API](#metrics-api).
* Additional attributes provided at meter creation time via
[`meter_with_scope`](https://docs.rs/opentelemetry/latest/opentelemetry/metrics/trait.MeterProvider.html#tymethod.meter_with_scope).
* [Resources](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md)
configured at the `MeterProvider` level.
* Additional attributes provided by the collector. For example, [jobs and
instances](https://prometheus.io/docs/concepts/jobs_instances/) in Prometheus.
### Best Practices for Modeling Attributes
Follow these guidelines when deciding where to attach metric attributes:
* **For static attributes** (constant throughout the process lifetime):
* **Resource-level attributes**: If the dimension applies to all metrics
(e.g., hostname, datacenter), model it as a Resource attribute, or better
yet, let the collector add these automatically.
```rust
// Example: Setting resource-level attributes
let resource = Resource::new(vec![
    KeyValue::new("service.name", "payment-processor"),
    KeyValue::new("deployment.environment", "production"),
]);
```
* **Meter-level attributes**: If the dimension applies only to a subset of
metrics (e.g., library version), model it as meter-level attributes via
`meter_with_scope`.
```rust
// Example: Setting meter-level attributes
let scope = InstrumentationScope::builder("payment_library")
.with_version("1.2.3")
.with_attributes([KeyValue::new("payment.gateway", "stripe")])
.build();
let meter = global::meter_with_scope(scope);
```
* **For dynamic attributes** (values that change during execution):
* Report these via the Metrics API with each measurement.
* Be mindful that [cardinality limits](#cardinality-limits) apply to these
attributes.
```rust
// Example: Using dynamic attributes with each measurement
counter.add(1, &[
    KeyValue::new("customer.tier", customer.tier),
    KeyValue::new("transaction.status", status.to_string()),
]);
```
## Common issues that lead to missing metrics
Common pitfalls that can result in missing metrics include:
1. **Invalid instrument names** - OpenTelemetry will not collect metrics from
instruments using invalid names. See the [specification for valid
syntax](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument-name-syntax).
2. **Not calling `shutdown` on the MeterProvider** - Ensure you properly call
`shutdown` at application termination to flush any pending metrics.
3. **Cardinality explosion** - When too many unique attribute combinations are
used, some metrics may be placed in the overflow bucket.
// TODO: Add more specific examples
## References
[OTel Metrics Specification - Supplementary Guidelines](https://opentelemetry.io/docs/specs/otel/metrics/supplementary-guidelines/)

View File

@ -79,7 +79,7 @@ maintain any instrumentations directly. This has recently changed with a
[contribution](https://github.com/open-telemetry/opentelemetry-rust-contrib/pull/202)
from one of the founding members of the OpenTelemetry Rust project to the
contrib repository, providing an instrumentation library for
[`actix-web`](https://github.com/open-telemetry/opentelemetry-rust-contrib/tree/main/actix-web-opentelemetry).
[`actix-web`](https://github.com/open-telemetry/opentelemetry-rust-contrib/tree/main/opentelemetry-instrumentation-actix-web).
We expect that this instrumentation will serve as a reference implementation demonstrating best practices for
creating OpenTelemetry instrumentations in Rust.

68
docs/release_0.30.md Normal file
View File

@ -0,0 +1,68 @@
# Release Notes 0.30
OpenTelemetry Rust 0.30 introduces a few breaking changes to the
`opentelemetry_sdk` crate in the `metrics` feature. These changes were essential
to drive the Metrics SDK towards stability. With this release, the Metrics SDK
is officially declared stable. The Metrics API was declared stable last year,
and previously, the Logs API, SDK, and OTel-Appender-Tracing were also marked
stable. Importantly, no breaking changes have been introduced to components
already marked as stable.
It is worth noting that the `opentelemetry-otlp` crate remains in a
Release-Candidate state and is not yet considered stable. With the API and SDK
for Logs and Metrics now stable, the focus will shift towards further refining
and stabilizing the OTLP Exporters in upcoming releases. Additionally,
Distributed Tracing is expected to progress towards stability, addressing key
interoperability challenges.
For detailed changelogs of individual crates, please refer to their respective
changelog files. This document serves as a summary of the main changes.
## Key Changes
### Metrics SDK Improvements
1. **Stabilized "view" features**: Previously under an experimental feature
flag, views can now be used to modify the name, unit, description, and
cardinality limit of a metric. Advanced view capabilities, such as changing
aggregation or dropping attributes, remain under the experimental feature
flag.
2. **Cardinality capping**: Introduced the ability to cap cardinality and
configure limits using views.
3. **Polished public API**: Refined the public API to hide implementation
details from exporters, enabling future internal optimizations and ensuring
consistency. Some APIs related to authoring custom metric readers have been
moved behind experimental feature flags. These advanced use cases require
more time to finalize the API surface before being included in the stable
release.
### Context-Based Suppression
Added the ability to suppress telemetry based on Context. This feature prevents
telemetry-induced-telemetry scenarios and addresses a long-standing issue. Note
that suppression relies on proper context propagation. Certain libraries used in
OTLP Exporters utilize `tracing` but do not adopt OpenTelemetry's context
propagation. As a result, not all telemetry is automatically suppressed with
this feature. Improvements in this area are expected in future releases.
## Next Release
In the [next
release](https://github.com/open-telemetry/opentelemetry-rust/milestone/22), the
focus will shift to OTLP Exporters and Distributed Tracing, specifically
resolving
[interoperability](https://github.com/open-telemetry/opentelemetry-rust/issues/2420)
issues with `tokio-tracing` and other fixes required to drive Distributed
Tracing towards stability.
## Acknowledgments
Thank you to everyone who contributed to this milestone. We welcome your
feedback through GitHub issues or discussions in the OTel-Rust Slack channel
[here](https://cloud-native.slack.com/archives/C03GDP0H023).
We are also excited to announce that [Anton Grübel](https://github.com/gruebel)
and [Björn Antonsson](https://github.com/bantonsson) have joined the OTel Rust
project as Approvers.

View File

@ -3,7 +3,14 @@ name = "logs-basic"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]]
name = "logs-basic"
path = "src/main.rs"
bench = false
[dependencies]
opentelemetry_sdk = { path = "../../opentelemetry-sdk", features = ["logs"] }

View File

@ -15,19 +15,21 @@ fn main() {
.with_simple_exporter(exporter)
.build();
// For the OpenTelemetry layer, add a tracing filter to filter events from
// OpenTelemetry and its dependent crates (opentelemetry-otlp uses crates
// like reqwest/tonic etc.) from being sent back to OTel itself, thus
// preventing infinite telemetry generation. The filter levels are set as
// follows:
// To prevent a telemetry-induced-telemetry loop, OpenTelemetry's own internal
// logging is properly suppressed. However, logs emitted by external components
// (such as reqwest, tonic, etc.) are not suppressed as they do not propagate
// OpenTelemetry context. Until this issue is addressed
// (https://github.com/open-telemetry/opentelemetry-rust/issues/2877),
// filtering like this is the best way to suppress such logs.
//
// The filter levels are set as follows:
// - Allow `info` level and above by default.
// - Restrict `opentelemetry`, `hyper`, `tonic`, and `reqwest` completely.
// Note: This will also drop events from crates like `tonic` etc. even when
// they are used outside the OTLP Exporter. For more details, see:
// https://github.com/open-telemetry/opentelemetry-rust/issues/761
// - Completely restrict logs from `hyper`, `tonic`, `h2`, and `reqwest`.
//
// Note: This filtering will also drop logs from these components even when
// they are used outside of the OTLP Exporter.
let filter_otel = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap())
.add_directive("opentelemetry=off".parse().unwrap())
.add_directive("tonic=off".parse().unwrap())
.add_directive("h2=off".parse().unwrap())
.add_directive("reqwest=off".parse().unwrap());

View File

@ -3,10 +3,17 @@ name = "metrics-advanced"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]]
name = "metrics-advanced"
path = "src/main.rs"
bench = false
[dependencies]
opentelemetry = { path = "../../opentelemetry", features = ["metrics"] }
opentelemetry_sdk = { path = "../../opentelemetry-sdk", features = ["spec_unstable_metrics_views", "rt-tokio"] }
opentelemetry_sdk = { path = "../../opentelemetry-sdk" }
opentelemetry-stdout = { workspace = true, features = ["metrics"] }
tokio = { workspace = true, features = ["full"] }

View File

@ -12,6 +12,3 @@ Run the following, and the Metrics will be written out to stdout.
```shell
$ cargo run
```

View File

@ -1,18 +1,19 @@
use opentelemetry::global;
use opentelemetry::Key;
use opentelemetry::KeyValue;
use opentelemetry_sdk::metrics::{Aggregation, Instrument, SdkMeterProvider, Stream, Temporality};
use opentelemetry_sdk::metrics::{Instrument, SdkMeterProvider, Stream, Temporality};
use opentelemetry_sdk::Resource;
use std::error::Error;
fn init_meter_provider() -> opentelemetry_sdk::metrics::SdkMeterProvider {
// for example 1
let my_view_rename_and_unit = |i: &Instrument| {
if i.name == "my_histogram" {
if i.name() == "my_histogram" {
Some(
Stream::new()
.name("my_histogram_renamed")
.unit("milliseconds"),
Stream::builder()
.with_name("my_histogram_renamed")
.with_unit("milliseconds")
.build()
.unwrap(),
)
} else {
None
@ -20,23 +21,13 @@ fn init_meter_provider() -> opentelemetry_sdk::metrics::SdkMeterProvider {
};
// for example 2
let my_view_drop_attributes = |i: &Instrument| {
if i.name == "my_counter" {
Some(Stream::new().allowed_attribute_keys(vec![Key::from("mykey1")]))
} else {
None
}
};
// for example 3
let my_view_change_aggregation = |i: &Instrument| {
if i.name == "my_second_histogram" {
Some(
Stream::new().aggregation(Aggregation::ExplicitBucketHistogram {
boundaries: vec![0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5],
record_min_max: false,
}),
)
let my_view_change_cardinality = |i: &Instrument| {
if i.name() == "my_second_histogram" {
// Note: If Stream is invalid, build() will return an error. By
// calling `.ok()`, any such error is ignored and treated as if the
// view does not match the instrument. If this is not the desired
// behavior, consider handling the error explicitly.
Stream::builder().with_cardinality_limit(2).build().ok()
} else {
None
}
@ -55,8 +46,7 @@ fn init_meter_provider() -> opentelemetry_sdk::metrics::SdkMeterProvider {
.with_periodic_exporter(exporter)
.with_resource(resource)
.with_view(my_view_rename_and_unit)
.with_view(my_view_drop_attributes)
.with_view(my_view_change_aggregation)
.with_view(my_view_change_cardinality)
.build();
global::set_meter_provider(provider.clone());
provider
@ -88,69 +78,43 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
],
);
// Example 2 - Drop unwanted attributes using view.
let counter = meter.u64_counter("my_counter").build();
// Record measurements using the Counter instrument.
// Though we are passing 4 attributes here, only 1 will be used
// for aggregation as view is configured to use only "mykey1"
// attribute.
counter.add(
10,
&[
KeyValue::new("mykey1", "myvalue1"),
KeyValue::new("mykey2", "myvalue2"),
KeyValue::new("mykey3", "myvalue3"),
KeyValue::new("mykey4", "myvalue4"),
],
);
// Example 3 - Change Aggregation configuration using View.
// Histograms are by default aggregated using ExplicitBucketHistogram
// with default buckets. The configured view will change the aggregation to
// use a custom set of boundaries, and min/max values will not be recorded.
// Example 2 - Change cardinality using View.
let histogram2 = meter
.f64_histogram("my_second_histogram")
.with_unit("ms")
.with_description("My histogram example description")
.build();
// Record measurements using the histogram instrument.
// The values recorded are in the range of 1.2 to 1.5, warranting
// the change of boundaries.
histogram2.record(
1.5,
&[
KeyValue::new("mykey1", "myvalue1"),
KeyValue::new("mykey2", "myvalue2"),
KeyValue::new("mykey3", "myvalue3"),
KeyValue::new("mykey4", "myvalue4"),
],
);
// Record measurements using the histogram instrument. This metric will have
// a cardinality limit of 2, as set in the view. Because of this, only the
// first two distinct attribute combinations will be recorded, and the rest
// will be folded into the overflow attribute. Any number of measurements
// can be recorded as long as they use the same or already-seen attribute
// combinations.
histogram2.record(1.5, &[KeyValue::new("mykey1", "v1")]);
histogram2.record(1.2, &[KeyValue::new("mykey1", "v2")]);
histogram2.record(
1.2,
&[
KeyValue::new("mykey1", "myvalue1"),
KeyValue::new("mykey2", "myvalue2"),
KeyValue::new("mykey3", "myvalue3"),
KeyValue::new("mykey4", "myvalue4"),
],
);
// Repeatedly emitting measurements for "v1" and "v2" will not
// trigger overflow, as they are already seen attribute combinations.
histogram2.record(1.7, &[KeyValue::new("mykey1", "v1")]);
histogram2.record(1.8, &[KeyValue::new("mykey1", "v2")]);
histogram2.record(
1.23,
&[
KeyValue::new("mykey1", "myvalue1"),
KeyValue::new("mykey2", "myvalue2"),
KeyValue::new("mykey3", "myvalue3"),
KeyValue::new("mykey4", "myvalue4"),
],
);
// Emitting measurements for new attribute combinations will trigger
// overflow, as the cardinality limit of 2 has been reached.
// All the below measurements will be folded into the overflow attribute.
histogram2.record(1.23, &[KeyValue::new("mykey1", "v3")]);
// Metrics are exported by default every 30 seconds when using stdout exporter,
histogram2.record(1.4, &[KeyValue::new("mykey1", "v4")]);
histogram2.record(1.6, &[KeyValue::new("mykey1", "v5")]);
histogram2.record(1.7, &[KeyValue::new("mykey1", "v6")]);
histogram2.record(1.8, &[KeyValue::new("mykey1", "v7")]);
// Metrics are exported by default every 60 seconds when using stdout exporter,
// however shutting down the MeterProvider here instantly flushes
// the metrics, instead of waiting for the 30 sec interval.
// the metrics, instead of waiting for the 60 sec interval.
meter_provider.shutdown()?;
Ok(())
}

View File

@ -3,11 +3,18 @@ name = "metrics-basic"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]]
name = "metrics-basic"
path = "src/main.rs"
bench = false
[dependencies]
opentelemetry = { path = "../../opentelemetry", features = ["metrics"] }
opentelemetry_sdk = { path = "../../opentelemetry-sdk", features = ["metrics", "rt-tokio"] }
opentelemetry_sdk = { path = "../../opentelemetry-sdk", features = ["metrics"] }
opentelemetry-stdout = { workspace = true, features = ["metrics"] }
tokio = { workspace = true, features = ["full"] }

View File

@ -11,6 +11,3 @@ Run the following, and the Metrics will be written out to stdout.
```shell
$ cargo run
```

View File

@ -136,9 +136,9 @@ async fn main() -> Result<(), Box<dyn Error>> {
})
.build();
// Metrics are exported by default every 30 seconds when using stdout
// Metrics are exported by default every 60 seconds when using stdout
// exporter, however shutting down the MeterProvider here instantly flushes
// the metrics, instead of waiting for the 30 sec interval. Shutdown returns
// the metrics, instead of waiting for the 60 sec interval. Shutdown returns
// a result, which is bubbled up to the caller. The commented code below
// demonstrates handling the shutdown result, instead of bubbling up the
// error.

View File

@ -3,15 +3,19 @@ name = "tracing-grpc"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]] # Bin to run the gRPC server
name = "grpc-server"
path = "src/server.rs"
bench = false
[[bin]] # Bin to run the gRPC client
name = "grpc-client"
path = "src/client.rs"
bench = false
[dependencies]
opentelemetry = { path = "../../opentelemetry" }
@ -19,7 +23,12 @@ opentelemetry_sdk = { path = "../../opentelemetry-sdk", features = ["rt-tokio"]
opentelemetry-stdout = { workspace = true, features = ["trace"] }
prost = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tonic = { workspace = true, features = ["server", "codegen", "channel", "prost"] }
tonic = { workspace = true, features = ["server", "codegen", "channel", "prost", "router"] }
[build-dependencies]
tonic-build = { workspace = true }
[package.metadata.cargo-machete]
ignored = [
"prost" # needed for `tonic-build`
]

View File

@ -3,17 +3,21 @@ name = "tracing-http-propagator"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]] # Bin to run the http server
name = "http-server"
path = "src/server.rs"
doc = false
bench = false
[[bin]] # Bin to run the client
name = "http-client"
path = "src/client.rs"
doc = false
bench = false
[dependencies]
http-body-util = { workspace = true }
@ -23,5 +27,8 @@ tokio = { workspace = true, features = ["full"] }
opentelemetry = { path = "../../opentelemetry" }
opentelemetry_sdk = { path = "../../opentelemetry-sdk" }
opentelemetry-http = { path = "../../opentelemetry-http" }
opentelemetry-stdout = { workspace = true, features = ["trace"] }
opentelemetry-stdout = { workspace = true, features = ["trace", "logs"] }
opentelemetry-semantic-conventions = { path = "../../opentelemetry-semantic-conventions" }
opentelemetry-appender-tracing = { workspace = true }
tracing = { workspace = true, features = ["std"]}
tracing-subscriber = { workspace = true, features = ["env-filter","registry", "std", "fmt"] }

View File

@ -3,13 +3,18 @@ use hyper_util::{client::legacy::Client, rt::TokioExecutor};
use opentelemetry::{
global,
trace::{SpanKind, TraceContextExt, Tracer},
Context, KeyValue,
Context,
};
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_http::{Bytes, HeaderInjector};
use opentelemetry_sdk::{propagation::TraceContextPropagator, trace::SdkTracerProvider};
use opentelemetry_stdout::SpanExporter;
use opentelemetry_sdk::{
logs::SdkLoggerProvider, propagation::TraceContextPropagator, trace::SdkTracerProvider,
};
use opentelemetry_stdout::{LogExporter, SpanExporter};
use tracing::info;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
fn init_tracer() {
fn init_tracer() -> SdkTracerProvider {
global::set_text_map_propagator(TraceContextPropagator::new());
// Install stdout exporter pipeline to be able to retrieve the collected spans.
// For the demonstration, use `Sampler::AlwaysOn` sampler to sample all traces.
@ -17,7 +22,23 @@ fn init_tracer() {
.with_simple_exporter(SpanExporter::default())
.build();
global::set_tracer_provider(provider);
global::set_tracer_provider(provider.clone());
provider
}
fn init_logs() -> SdkLoggerProvider {
// Setup LoggerProvider with a stdout exporter
// that prints the logs to stdout.
let logger_provider = SdkLoggerProvider::builder()
.with_simple_exporter(LogExporter::default())
.build();
let otel_layer = OpenTelemetryTracingBridge::new(&logger_provider);
tracing_subscriber::registry()
.with(otel_layer)
.with(tracing_subscriber::filter::LevelFilter::INFO)
.init();
logger_provider
}
async fn send_request(
@ -37,21 +58,22 @@ async fn send_request(
global::get_text_map_propagator(|propagator| {
propagator.inject_context(&cx, &mut HeaderInjector(req.headers_mut().unwrap()))
});
req.headers_mut()
.unwrap()
.insert("baggage", "is_synthetic=true".parse().unwrap());
let res = client
.request(req.body(Full::new(Bytes::from(body_content.to_string())))?)
.await?;
cx.span().add_event(
"Got response!",
vec![KeyValue::new("status", res.status().to_string())],
);
info!(name: "ResponseReceived", status = res.status().to_string(), message = "Response received");
Ok(())
}
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
init_tracer();
let tracer_provider = init_tracer();
let logger_provider = init_logs();
send_request(
"http://127.0.0.1:3000/health",
@ -66,5 +88,11 @@ async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sy
)
.await?;
tracer_provider
.shutdown()
.expect("Shutdown provider failed");
logger_provider
.shutdown()
.expect("Shutdown provider failed");
Ok(())
}

View File

@ -2,16 +2,28 @@ use http_body_util::{combinators::BoxBody, BodyExt, Full};
use hyper::{body::Incoming, service::service_fn, Request, Response, StatusCode};
use hyper_util::rt::{TokioExecutor, TokioIo};
use opentelemetry::{
baggage::BaggageExt,
global::{self, BoxedTracer},
logs::LogRecord,
propagation::TextMapCompositePropagator,
trace::{FutureExt, Span, SpanKind, TraceContextExt, Tracer},
Context, KeyValue,
Context, InstrumentationScope, KeyValue,
};
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_http::{Bytes, HeaderExtractor};
use opentelemetry_sdk::{propagation::TraceContextPropagator, trace::SdkTracerProvider};
use opentelemetry_sdk::{
error::OTelSdkResult,
logs::{LogProcessor, SdkLogRecord, SdkLoggerProvider},
propagation::{BaggagePropagator, TraceContextPropagator},
trace::{SdkTracerProvider, SpanProcessor},
};
use opentelemetry_semantic_conventions::trace;
use opentelemetry_stdout::SpanExporter;
use opentelemetry_stdout::{LogExporter, SpanExporter};
use std::time::Duration;
use std::{convert::Infallible, net::SocketAddr, sync::OnceLock};
use tokio::net::TcpListener;
use tracing::info;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
fn get_tracer() -> &'static BoxedTracer {
static TRACER: OnceLock<BoxedTracer> = OnceLock::new();
@ -30,11 +42,11 @@ async fn handle_health_check(
_req: Request<Incoming>,
) -> Result<Response<BoxBody<Bytes, hyper::Error>>, Infallible> {
let tracer = get_tracer();
let mut span = tracer
let _span = tracer
.span_builder("health_check")
.with_kind(SpanKind::Internal)
.start(tracer);
span.add_event("Health check accessed", vec![]);
info!(name: "health_check", message = "Health check endpoint hit");
let res = Response::new(
Full::new(Bytes::from_static(b"Server is up and running!"))
@ -50,11 +62,11 @@ async fn handle_echo(
req: Request<Incoming>,
) -> Result<Response<BoxBody<Bytes, hyper::Error>>, Infallible> {
let tracer = get_tracer();
let mut span = tracer
let _span = tracer
.span_builder("echo")
.with_kind(SpanKind::Internal)
.start(tracer);
span.add_event("Echoing back the request", vec![]);
info!(name = "echo", message = "Echo endpoint hit");
let res = Response::new(req.into_body().boxed());
@ -69,14 +81,14 @@ async fn router(
let response = {
// Create a span parenting the remote client span.
let tracer = get_tracer();
let mut span = tracer
let span = tracer
.span_builder("router")
.with_kind(SpanKind::Server)
.start_with_context(tracer, &parent_cx);
span.add_event("dispatching request", vec![]);
info!(name = "router", message = "Dispatching request");
let cx = Context::default().with_span(span);
let cx = parent_cx.with_span(span);
match (req.method(), req.uri().path()) {
(&hyper::Method::GET, "/health") => handle_health_check(req).with_context(cx).await,
(&hyper::Method::GET, "/echo") => handle_echo(req).with_context(cx).await,
@ -93,12 +105,60 @@ async fn router(
response
}
/// A custom log processor that enriches LogRecords with baggage attributes.
/// Baggage information is not added automatically without this processor.
#[derive(Debug)]
struct EnrichWithBaggageLogProcessor;
impl LogProcessor for EnrichWithBaggageLogProcessor {
fn emit(&self, data: &mut SdkLogRecord, _instrumentation: &InstrumentationScope) {
Context::map_current(|cx| {
for (kk, vv) in cx.baggage().iter() {
data.add_attribute(kk.clone(), vv.0.clone());
}
});
}
fn force_flush(&self) -> OTelSdkResult {
Ok(())
}
}
/// A custom span processor that enriches spans with baggage attributes. Baggage
/// information is not added automatically without this processor.
#[derive(Debug)]
struct EnrichWithBaggageSpanProcessor;
impl SpanProcessor for EnrichWithBaggageSpanProcessor {
fn force_flush(&self) -> OTelSdkResult {
Ok(())
}
fn shutdown_with_timeout(&self, _timeout: Duration) -> OTelSdkResult {
Ok(())
}
fn on_start(&self, span: &mut opentelemetry_sdk::trace::Span, cx: &Context) {
for (kk, vv) in cx.baggage().iter() {
span.set_attribute(KeyValue::new(kk.clone(), vv.0.clone()));
}
}
fn on_end(&self, _span: opentelemetry_sdk::trace::SpanData) {}
}
fn init_tracer() -> SdkTracerProvider {
global::set_text_map_propagator(TraceContextPropagator::new());
let baggage_propagator = BaggagePropagator::new();
let trace_context_propagator = TraceContextPropagator::new();
let composite_propagator = TextMapCompositePropagator::new(vec![
Box::new(baggage_propagator),
Box::new(trace_context_propagator),
]);
global::set_text_map_propagator(composite_propagator);
// Setup tracerprovider with stdout exporter
// that prints the spans to stdout.
let provider = SdkTracerProvider::builder()
.with_span_processor(EnrichWithBaggageSpanProcessor)
.with_simple_exporter(SpanExporter::default())
.build();
@ -106,11 +166,25 @@ fn init_tracer() -> SdkTracerProvider {
provider
}
fn init_logs() -> SdkLoggerProvider {
// Setup LoggerProvider with a stdout exporter
// that prints the logs to stdout.
let logger_provider = SdkLoggerProvider::builder()
.with_log_processor(EnrichWithBaggageLogProcessor)
.with_simple_exporter(LogExporter::default())
.build();
let otel_layer = OpenTelemetryTracingBridge::new(&logger_provider);
tracing_subscriber::registry().with(otel_layer).init();
logger_provider
}
#[tokio::main]
async fn main() {
use hyper_util::server::conn::auto::Builder;
let provider = init_tracer();
let logger_provider = init_logs();
let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
let listener = TcpListener::bind(addr).await.unwrap();
@ -124,4 +198,7 @@ async fn main() {
}
provider.shutdown().expect("Shutdown provider failed");
logger_provider
.shutdown()
.expect("Shutdown provider failed");
}

View File

@ -2,6 +2,12 @@
## vNext
## 0.30.0
Released 2025-May-23
- Updated `opentelemetry` and `opentelemetry-semantic-conventions` dependencies to version 0.30.0.
## 0.29.0
Released 2025-Mar-21

View File

@ -1,6 +1,6 @@
[package]
name = "opentelemetry-appender-log"
version = "0.29.0"
version = "0.30.0"
description = "An OpenTelemetry appender for the log crate"
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-appender-log"
repository = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-appender-log"
@ -9,14 +9,18 @@ keywords = ["opentelemetry", "log", "logs"]
license = "Apache-2.0"
rust-version = "1.75.0"
edition = "2021"
autobenches = false
[lib]
bench = false
[dependencies]
opentelemetry = { version = "0.29", path = "../opentelemetry", features = [
opentelemetry = { version = "0.30", path = "../opentelemetry", features = [
"logs",
] }
log = { workspace = true, features = ["kv", "std"] }
serde = { workspace = true, optional = true, features = ["std"] }
opentelemetry-semantic-conventions = { version = "0.29", path = "../opentelemetry-semantic-conventions", optional = true, features = [
opentelemetry-semantic-conventions = { version = "0.30", path = "../opentelemetry-semantic-conventions", optional = true, features = [
"semconv_experimental",
] }

View File

@ -117,7 +117,7 @@ use opentelemetry::{
};
#[cfg(feature = "experimental_metadata_attributes")]
use opentelemetry_semantic_conventions::attribute::{
CODE_FILEPATH, CODE_LINE_NUMBER, CODE_NAMESPACE,
CODE_FILE_PATH, CODE_FUNCTION_NAME, CODE_LINE_NUMBER,
};
pub struct OpenTelemetryLogBridge<P, L>
@ -156,7 +156,7 @@ where
{
if let Some(filepath) = record.file() {
log_record.add_attribute(
Key::new(CODE_FILEPATH),
Key::new(CODE_FILE_PATH),
AnyValue::from(filepath.to_string()),
);
}
@ -167,7 +167,7 @@ where
if let Some(module) = record.module_path() {
log_record.add_attribute(
Key::new(CODE_NAMESPACE),
Key::new(CODE_FUNCTION_NAME),
AnyValue::from(module.to_string()),
);
}
@ -687,7 +687,7 @@ mod any_value {
) -> Result<(), Self::Error> {
let key = match key.serialize(ValueSerializer)? {
Some(AnyValue::String(key)) => Key::from(String::from(key)),
key => Key::from(format!("{:?}", key)),
key => Key::from(format!("{key:?}")),
};
self.key = Some(key);
@ -1180,7 +1180,7 @@ mod tests {
#[test]
fn logbridge_code_attributes() {
use opentelemetry_semantic_conventions::attribute::{
CODE_FILEPATH, CODE_LINE_NUMBER, CODE_NAMESPACE,
CODE_FILE_PATH, CODE_FUNCTION_NAME, CODE_LINE_NUMBER,
};
let exporter = InMemoryLogExporter::default();
@ -1219,11 +1219,11 @@ mod tests {
assert_eq!(
Some(AnyValue::String(StringValue::from("src/main.rs"))),
get(CODE_FILEPATH)
get(CODE_FILE_PATH)
);
assert_eq!(
Some(AnyValue::String(StringValue::from("service"))),
get(CODE_NAMESPACE)
get(CODE_FUNCTION_NAME)
);
assert_eq!(Some(AnyValue::Int(101)), get(CODE_LINE_NUMBER));
}
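The wiring around these renamed constants is unchanged; a minimal sketch of installing the bridge so `log` records carry the code attributes (assumes the `experimental_metadata_attributes` feature is enabled):

use opentelemetry_appender_log::OpenTelemetryLogBridge;
use opentelemetry_sdk::logs::{InMemoryLogExporter, SdkLoggerProvider};

let exporter = InMemoryLogExporter::default();
let provider = SdkLoggerProvider::builder()
    .with_simple_exporter(exporter.clone())
    .build();
// Route `log` records through OpenTelemetry; file, module path and line map
// to CODE_FILE_PATH, CODE_FUNCTION_NAME and CODE_LINE_NUMBER respectively.
log::set_boxed_logger(Box::new(OpenTelemetryLogBridge::new(&provider))).unwrap();
log::set_max_level(log::LevelFilter::Info);
log::info!("hello otel");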

View File

@ -2,6 +2,26 @@
## vNext
## 0.30.1
Released 2025-June-05
- Bump `tracing-opentelemetry` to 0.31
## 0.30.0
Released 2025-May-23
- Updated `opentelemetry` dependency to version 0.30.0.
## 0.29.1
Released 2025-Mar-24
- Bump `tracing-opentelemetry` to 0.30
## 0.29.0
Released 2025-Mar-21

View File

@ -1,6 +1,6 @@
[package]
name = "opentelemetry-appender-tracing"
version = "0.28.1"
version = "0.30.1"
edition = "2021"
description = "An OpenTelemetry log appender for the tracing crate"
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-appender-tracing"
@ -9,15 +9,16 @@ readme = "README.md"
keywords = ["opentelemetry", "log", "logs", "tracing"]
license = "Apache-2.0"
rust-version = "1.75.0"
autobenches = false
[dependencies]
log = { workspace = true, optional = true }
opentelemetry = { version = "0.29", path = "../opentelemetry", features = ["logs"] }
opentelemetry = { version = "0.30", path = "../opentelemetry", features = ["logs"] }
tracing = { workspace = true, features = ["std"]}
tracing-core = { workspace = true }
tracing-log = { workspace = true, optional = true }
tracing-subscriber = { workspace = true, features = ["registry", "std"] }
# tracing-opentelemetry = { workspace = true, optional = true }
tracing-opentelemetry = { workspace = true, optional = true }
[dev-dependencies]
log = { workspace = true }
@ -36,8 +37,7 @@ pprof = { version = "0.14", features = ["flamegraph", "criterion"] }
default = []
experimental_metadata_attributes = ["dep:tracing-log"]
spec_unstable_logs_enabled = ["opentelemetry/spec_unstable_logs_enabled"]
# TODO - Enable this in 0.29.1 (once tracing-opentelemetry v0.30 is released)
# experimental_use_tracing_span_context = ["tracing-opentelemetry"]
experimental_use_tracing_span_context = ["tracing-opentelemetry"]
[[bench]]

View File

@ -43,10 +43,6 @@ impl LogProcessor for NoopProcessor {
fn force_flush(&self) -> OTelSdkResult {
Ok(())
}
fn shutdown(&self) -> OTelSdkResult {
Ok(())
}
}
/// Creates a single benchmark for a specific number of attributes
@ -64,7 +60,7 @@ fn create_benchmark(c: &mut Criterion, num_attributes: usize) {
let subscriber = Registry::default().with(ot_layer);
tracing::subscriber::with_default(subscriber, || {
c.bench_function(&format!("otel_{}_attributes", num_attributes), |b| {
c.bench_function(&format!("otel_{num_attributes}_attributes"), |b| {
b.iter(|| {
// Dynamically generate the error! macro call based on the number of attributes
match num_attributes {
@ -258,14 +254,19 @@ fn criterion_benchmark(c: &mut Criterion) {
#[cfg(not(target_os = "windows"))]
criterion_group! {
name = benches;
config = Criterion::default().with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
config = Criterion::default()
.warm_up_time(std::time::Duration::from_secs(1))
.measurement_time(std::time::Duration::from_secs(2))
.with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
targets = criterion_benchmark
}
#[cfg(target_os = "windows")]
criterion_group! {
name = benches;
config = Criterion::default();
config = Criterion::default()
.warm_up_time(std::time::Duration::from_secs(1))
.measurement_time(std::time::Duration::from_secs(2));
targets = criterion_benchmark
}

View File

@ -54,10 +54,6 @@ impl LogProcessor for NoopProcessor {
Ok(())
}
fn shutdown(&self) -> OTelSdkResult {
Ok(())
}
fn event_enabled(
&self,
_level: opentelemetry::logs::Severity,
@ -168,13 +164,18 @@ fn criterion_benchmark(c: &mut Criterion) {
#[cfg(not(target_os = "windows"))]
criterion_group! {
name = benches;
config = Criterion::default().with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
config = Criterion::default()
.warm_up_time(std::time::Duration::from_secs(1))
.measurement_time(std::time::Duration::from_secs(2))
.with_profiler(PProfProfiler::new(100, Output::Flamegraph(None)));
targets = criterion_benchmark
}
#[cfg(target_os = "windows")]
criterion_group! {
name = benches;
config = Criterion::default();
config = Criterion::default()
.warm_up_time(std::time::Duration::from_secs(1))
.measurement_time(std::time::Duration::from_secs(2));
targets = criterion_benchmark
}
criterion_main!(benches);

View File

@ -16,16 +16,19 @@ fn main() {
.with_simple_exporter(exporter)
.build();
// For the OpenTelemetry layer, add a tracing filter to filter events from
// OpenTelemetry and its dependent crates (opentelemetry-otlp uses crates
// like reqwest/tonic etc.) from being sent back to OTel itself, thus
// preventing infinite telemetry generation. The filter levels are set as
// follows:
// To prevent a telemetry-induced-telemetry loop, OpenTelemetry's own internal
// logging is properly suppressed. However, logs emitted by external components
// (such as reqwest, tonic, etc.) are not suppressed as they do not propagate
// OpenTelemetry context. Until this issue is addressed
// (https://github.com/open-telemetry/opentelemetry-rust/issues/2877),
// filtering like this is the best way to suppress such logs.
//
// The filter levels are set as follows:
// - Allow `info` level and above by default.
// - Restrict `opentelemetry`, `hyper`, `tonic`, and `reqwest` completely.
// Note: This will also drop events from crates like `tonic` etc. even when
// they are used outside the OTLP Exporter. For more details, see:
// https://github.com/open-telemetry/opentelemetry-rust/issues/761
// - Completely restrict logs from `hyper`, `tonic`, `h2`, and `reqwest`.
//
// Note: This filtering will also drop logs from these components even when
// they are used outside of the OTLP Exporter.
let filter_otel = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap())
.add_directive("opentelemetry=off".parse().unwrap())

View File

@ -73,7 +73,7 @@ impl<LR: LogRecord> tracing::field::Visit for EventVisitor<'_, LR> {
return;
}
if field.name() == "message" {
self.log_record.set_body(format!("{:?}", value).into());
self.log_record.set_body(format!("{value:?}").into());
} else {
self.log_record
.add_attribute(Key::new(field.name()), AnyValue::from(format!("{value:?}")));
@ -244,25 +244,28 @@ where
// Visit fields.
event.record(&mut visitor);
// #[cfg(feature = "experimental_use_tracing_span_context")]
// if let Some(span) = _ctx.event_span(event) {
// use tracing_opentelemetry::OtelData;
// let opt_span_id = span
// .extensions()
// .get::<OtelData>()
// .and_then(|otd| otd.builder.span_id);
// let opt_trace_id = span.scope().last().and_then(|root_span| {
// root_span
// .extensions()
// .get::<OtelData>()
// .and_then(|otd| otd.builder.trace_id)
// });
// if let Some((trace_id, span_id)) = opt_trace_id.zip(opt_span_id) {
// log_record.set_trace_context(trace_id, span_id, None);
// }
// }
#[cfg(feature = "experimental_use_tracing_span_context")]
if let Some(span) = _ctx.event_span(event) {
use opentelemetry::trace::TraceContextExt;
use tracing_opentelemetry::OtelData;
if let Some(otd) = span.extensions().get::<OtelData>() {
if let Some(span_id) = otd.builder.span_id {
let opt_trace_id = if otd.parent_cx.has_active_span() {
Some(otd.parent_cx.span().span_context().trace_id())
} else {
span.scope().last().and_then(|root_span| {
root_span
.extensions()
.get::<OtelData>()
.and_then(|otd| otd.builder.trace_id)
})
};
if let Some(trace_id) = opt_trace_id {
log_record.set_trace_context(trace_id, span_id, None);
}
}
}
}
//emit record
self.logger.emit(log_record);
@ -289,13 +292,11 @@ mod tests {
use opentelemetry::{logs::AnyValue, Key};
use opentelemetry_sdk::error::{OTelSdkError, OTelSdkResult};
use opentelemetry_sdk::logs::{InMemoryLogExporter, LogProcessor};
use opentelemetry_sdk::logs::{LogBatch, LogExporter};
use opentelemetry_sdk::logs::{SdkLogRecord, SdkLoggerProvider};
use opentelemetry_sdk::trace::{Sampler, SdkTracerProvider};
use tracing::{error, warn};
use tracing::error;
use tracing_subscriber::prelude::__tracing_subscriber_SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use tracing_subscriber::{EnvFilter, Layer};
use tracing_subscriber::Layer;
pub fn attributes_contains(log_record: &SdkLogRecord, key: &Key, value: &AnyValue) -> bool {
log_record
@ -313,69 +314,6 @@ mod tests {
}
// cargo test --features=testing
#[derive(Clone, Debug, Default)]
struct ReentrantLogExporter;
impl LogExporter for ReentrantLogExporter {
async fn export(&self, _batch: LogBatch<'_>) -> OTelSdkResult {
// This will cause a deadlock as the export itself creates a log
// while still within the lock of the SimpleLogProcessor.
warn!(name: "my-event-name", target: "reentrant", event_id = 20, user_name = "otel", user_email = "otel@opentelemetry.io");
Ok(())
}
}
#[test]
#[ignore = "See issue: https://github.com/open-telemetry/opentelemetry-rust/issues/1745"]
fn simple_processor_deadlock() {
let exporter: ReentrantLogExporter = ReentrantLogExporter;
let logger_provider = SdkLoggerProvider::builder()
.with_simple_exporter(exporter.clone())
.build();
let layer = layer::OpenTelemetryTracingBridge::new(&logger_provider);
// Setting subscriber as global as that is the only way to test this scenario.
tracing_subscriber::registry().with(layer).init();
warn!(name: "my-event-name", target: "my-system", event_id = 20, user_name = "otel", user_email = "otel@opentelemetry.io");
}
#[test]
#[ignore = "While this test runs fine, this uses global subscriber and does not play well with other tests."]
fn simple_processor_no_deadlock() {
let exporter: ReentrantLogExporter = ReentrantLogExporter;
let logger_provider = SdkLoggerProvider::builder()
.with_simple_exporter(exporter.clone())
.build();
let layer = layer::OpenTelemetryTracingBridge::new(&logger_provider);
// This filter will prevent the deadlock as the reentrant log will be
// ignored.
let filter = EnvFilter::new("debug").add_directive("reentrant=error".parse().unwrap());
// Setting subscriber as global as that is the only way to test this scenario.
tracing_subscriber::registry()
.with(filter)
.with(layer)
.init();
warn!(name: "my-event-name", target: "my-system", event_id = 20, user_name = "otel", user_email = "otel@opentelemetry.io");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 1)]
#[ignore = "While this test runs fine, this uses global subscriber and does not play well with other tests."]
async fn batch_processor_no_deadlock() {
let exporter: ReentrantLogExporter = ReentrantLogExporter;
let logger_provider = SdkLoggerProvider::builder()
.with_batch_exporter(exporter.clone())
.build();
let layer = layer::OpenTelemetryTracingBridge::new(&logger_provider);
tracing_subscriber::registry().with(layer).init();
warn!(name: "my-event-name", target: "my-system", event_id = 20, user_name = "otel", user_email = "otel@opentelemetry.io");
}
#[test]
fn tracing_appender_standalone() {
// Arrange
@ -673,66 +611,117 @@ mod tests {
}
}
// #[cfg(feature = "experimental_use_tracing_span_context")]
// #[test]
// fn tracing_appender_inside_tracing_crate_context() {
// use opentelemetry_sdk::trace::InMemorySpanExporterBuilder;
#[cfg(feature = "experimental_use_tracing_span_context")]
#[test]
fn tracing_appender_inside_tracing_crate_context() {
use opentelemetry::{trace::SpanContext, Context, SpanId, TraceFlags, TraceId};
use opentelemetry_sdk::trace::InMemorySpanExporterBuilder;
use tracing_opentelemetry::OpenTelemetrySpanExt;
// // Arrange
// let exporter: InMemoryLogExporter = InMemoryLogExporter::default();
// let logger_provider = SdkLoggerProvider::builder()
// .with_simple_exporter(exporter.clone())
// .build();
// Arrange
let exporter: InMemoryLogExporter = InMemoryLogExporter::default();
let logger_provider = SdkLoggerProvider::builder()
.with_simple_exporter(exporter.clone())
.build();
// // setup tracing layer to compare trace/span IDs against
// let span_exporter = InMemorySpanExporterBuilder::new().build();
// let tracer_provider = SdkTracerProvider::builder()
// .with_simple_exporter(span_exporter.clone())
// .build();
// let tracer = tracer_provider.tracer("test-tracer");
// setup tracing layer to compare trace/span IDs against
let span_exporter = InMemorySpanExporterBuilder::new().build();
let tracer_provider = SdkTracerProvider::builder()
.with_simple_exporter(span_exporter.clone())
.build();
let tracer = tracer_provider.tracer("test-tracer");
// let level_filter = tracing_subscriber::filter::LevelFilter::ERROR;
// let log_layer =
// layer::OpenTelemetryTracingBridge::new(&logger_provider).with_filter(level_filter);
let level_filter = tracing_subscriber::filter::LevelFilter::ERROR;
let log_layer =
layer::OpenTelemetryTracingBridge::new(&logger_provider).with_filter(level_filter);
// let subscriber = tracing_subscriber::registry()
// .with(log_layer)
// .with(tracing_opentelemetry::layer().with_tracer(tracer));
let subscriber = tracing_subscriber::registry()
.with(log_layer)
.with(tracing_opentelemetry::layer().with_tracer(tracer));
// // Avoiding global subscriber.init() as that does not play well with unit tests.
// let _guard = tracing::subscriber::set_default(subscriber);
// Avoiding global subscriber.init() as that does not play well with unit tests.
let _guard = tracing::subscriber::set_default(subscriber);
// // Act
// tracing::error_span!("outer-span").in_scope(|| {
// error!("first-event");
// Act
tracing::error_span!("outer-span").in_scope(|| {
error!("first-event");
// tracing::error_span!("inner-span").in_scope(|| {
// error!("second-event");
// });
// });
tracing::error_span!("inner-span").in_scope(|| {
error!("second-event");
});
});
// assert!(logger_provider.force_flush().is_ok());
assert!(logger_provider.force_flush().is_ok());
// let logs = exporter.get_emitted_logs().expect("No emitted logs");
// assert_eq!(logs.len(), 2, "Expected 2 logs, got: {logs:?}");
let logs = exporter.get_emitted_logs().expect("No emitted logs");
assert_eq!(logs.len(), 2, "Expected 2 logs, got: {logs:?}");
// let spans = span_exporter.get_finished_spans().unwrap();
// assert_eq!(spans.len(), 2);
let spans = span_exporter.get_finished_spans().unwrap();
assert_eq!(spans.len(), 2);
// let trace_id = spans[0].span_context.trace_id();
// assert_eq!(trace_id, spans[1].span_context.trace_id());
// let inner_span_id = spans[0].span_context.span_id();
// let outer_span_id = spans[1].span_context.span_id();
// assert_eq!(outer_span_id, spans[0].parent_span_id);
let trace_id = spans[0].span_context.trace_id();
assert_eq!(trace_id, spans[1].span_context.trace_id());
let inner_span_id = spans[0].span_context.span_id();
let outer_span_id = spans[1].span_context.span_id();
assert_eq!(outer_span_id, spans[0].parent_span_id);
// let trace_ctx0 = logs[0].record.trace_context().unwrap();
// let trace_ctx1 = logs[1].record.trace_context().unwrap();
let trace_ctx0 = logs[0].record.trace_context().unwrap();
let trace_ctx1 = logs[1].record.trace_context().unwrap();
// assert_eq!(trace_ctx0.trace_id, trace_id);
// assert_eq!(trace_ctx1.trace_id, trace_id);
// assert_eq!(trace_ctx0.span_id, outer_span_id);
// assert_eq!(trace_ctx1.span_id, inner_span_id);
// }
assert_eq!(trace_ctx0.trace_id, trace_id);
assert_eq!(trace_ctx1.trace_id, trace_id);
assert_eq!(trace_ctx0.span_id, outer_span_id);
assert_eq!(trace_ctx1.span_id, inner_span_id);
// Set context from remote.
let remote_trace_id = TraceId::from_u128(233);
let remote_span_id = SpanId::from_u64(2333);
let remote_span_context = SpanContext::new(
remote_trace_id,
remote_span_id,
TraceFlags::SAMPLED,
true,
Default::default(),
);
// Act again.
tracing::error_span!("outer-span").in_scope(|| {
let span = tracing::Span::current();
let parent_context = Context::current().with_remote_span_context(remote_span_context);
span.set_parent(parent_context);
error!("first-event");
tracing::error_span!("inner-span").in_scope(|| {
error!("second-event");
});
});
assert!(logger_provider.force_flush().is_ok());
let logs = exporter.get_emitted_logs().expect("No emitted logs");
assert_eq!(logs.len(), 4, "Expected 4 logs, got: {logs:?}");
let logs = &logs[2..];
let spans = span_exporter.get_finished_spans().unwrap();
assert_eq!(spans.len(), 4);
let spans = &spans[2..];
let trace_id = spans[0].span_context.trace_id();
assert_eq!(trace_id, remote_trace_id);
assert_eq!(trace_id, spans[1].span_context.trace_id());
let inner_span_id = spans[0].span_context.span_id();
let outer_span_id = spans[1].span_context.span_id();
assert_eq!(outer_span_id, spans[0].parent_span_id);
let trace_ctx0 = logs[0].record.trace_context().unwrap();
let trace_ctx1 = logs[1].record.trace_context().unwrap();
assert_eq!(trace_ctx0.trace_id, trace_id);
assert_eq!(trace_ctx1.trace_id, trace_id);
assert_eq!(trace_ctx0.span_id, outer_span_id);
assert_eq!(trace_ctx1.span_id, inner_span_id);
}
#[test]
fn tracing_appender_standalone_with_tracing_log() {
@ -942,10 +931,6 @@ mod tests {
fn force_flush(&self) -> OTelSdkResult {
Ok(())
}
fn shutdown(&self) -> OTelSdkResult {
Ok(())
}
}
#[cfg(feature = "spec_unstable_logs_enabled")]

View File

@ -2,6 +2,15 @@
## vNext
- Implementation of `Extractor::get_all` for `HeaderExtractor`
- Support `HttpClient` implementation for `HyperClient<C>` with custom connectors beyond `HttpConnector`, enabling Unix Domain Socket connections and other custom transports
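A sketch of what the custom-connector support enables. The constructor shown is an assumption (check the crate docs for the exact signature), and `UnixConnector` stands in for any third-party connector, such as the one from the `hyperlocal` crate, that satisfies the `Connect` bound:

// Sketch only: `HyperClient::new(...)` is an assumed constructor and
// `hyperlocal::UnixConnector` is a third-party connector type.
use std::time::Duration;
use opentelemetry_http::hyper::HyperClient;

let client = HyperClient::new(hyperlocal::UnixConnector, Duration::from_secs(5), None);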
## 0.30.0
Released 2025-May-23
- Updated `opentelemetry` dependency to version 0.30.0.
## 0.29.0
Released 2025-Mar-21
@ -45,7 +54,7 @@ Released 2024-Sep-30
## v0.12.0
- Add `reqwest-rustls-webkpi-roots` feature flag to configure [`reqwest`](https://docs.rs/reqwest/0.11.27/reqwest/index.html#optional-features) to use embedded `webkpi-roots`.
- Add `reqwest-rustls-webpki-roots` feature flag to configure [`reqwest`](https://docs.rs/reqwest/0.11.27/reqwest/index.html#optional-features) to use embedded `webpki-roots`.
- Update `opentelemetry` dependency version to 0.23
## v0.11.1

View File

@ -1,6 +1,6 @@
[package]
name = "opentelemetry-http"
version = "0.29.0"
version = "0.30.0"
description = "Helper implementations for sending HTTP requests. Uses include propagating and extracting context over http, exporting telemetry, requesting sampling strategies."
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-http"
repository = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-http"
@ -8,13 +8,14 @@ keywords = ["opentelemetry", "tracing", "context", "propagation"]
license = "Apache-2.0"
edition = "2021"
rust-version = "1.75.0"
autobenches = false
[features]
default = ["internal-logs"]
hyper = ["dep:http-body-util", "dep:hyper", "dep:hyper-util", "dep:tokio"]
reqwest-rustls = ["reqwest", "reqwest/rustls-tls-native-roots"]
reqwest-rustls-webpki-roots = ["reqwest", "reqwest/rustls-tls-webpki-roots"]
internal-logs = ["tracing", "opentelemetry/internal-logs"]
internal-logs = ["opentelemetry/internal-logs"]
[dependencies]
async-trait = { workspace = true }
@ -23,10 +24,12 @@ http = { workspace = true }
http-body-util = { workspace = true, optional = true }
hyper = { workspace = true, optional = true }
hyper-util = { workspace = true, features = ["client-legacy", "http1", "http2"], optional = true }
opentelemetry = { version = "0.29", path = "../opentelemetry", features = ["trace"] }
opentelemetry = { version = "0.30", path = "../opentelemetry", features = ["trace"] }
reqwest = { workspace = true, features = ["blocking"], optional = true }
tokio = { workspace = true, features = ["time"], optional = true }
tracing = {workspace = true, optional = true}
[lints]
workspace = true
[lib]
bench = false

View File

@ -43,6 +43,16 @@ impl Extractor for HeaderExtractor<'_> {
.map(|value| value.as_str())
.collect::<Vec<_>>()
}
/// Get all the values for a key from the HeaderMap
fn get_all(&self, key: &str) -> Option<Vec<&str>> {
let all_iter = self.0.get_all(key).iter();
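// A size hint of exactly (0, Some(0)) means the key has no entries at all;
// return None so a missing key is distinguishable from an empty value list.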
if let (0, Some(0)) = all_iter.size_hint() {
return None;
}
Some(all_iter.filter_map(|value| value.to_str().ok()).collect())
}
}
pub type HttpError = Box<dyn std::error::Error + Send + Sync + 'static>;
@ -169,7 +179,11 @@ pub mod hyper {
}
#[async_trait]
impl HttpClient for HyperClient {
impl<C> HttpClient for HyperClient<C>
where
C: Connect + Clone + Send + Sync + 'static,
HyperClient<C>: Debug,
{
async fn send_bytes(&self, request: Request<Bytes>) -> Result<Response<Bytes>, HttpError> {
otel_debug!(name: "HyperClient.Send");
let (parts, body) = request.into_parts();
@ -236,6 +250,8 @@ impl<T> ResponseExt for Response<T> {
#[cfg(test)]
mod tests {
use http::HeaderValue;
use super::*;
#[test]
@ -250,6 +266,32 @@ mod tests {
)
}
#[test]
fn http_headers_get_all() {
let mut carrier = http::HeaderMap::new();
carrier.append("headerName", HeaderValue::from_static("value"));
carrier.append("headerName", HeaderValue::from_static("value2"));
carrier.append("headerName", HeaderValue::from_static("value3"));
assert_eq!(
HeaderExtractor(&carrier).get_all("HEADERNAME"),
Some(vec!["value", "value2", "value3"]),
"all values from a key extraction"
)
}
#[test]
fn http_headers_get_all_missing_key() {
let mut carrier = http::HeaderMap::new();
carrier.append("headerName", HeaderValue::from_static("value"));
assert_eq!(
HeaderExtractor(&carrier).get_all("not_existing"),
None,
"all values from a missing key extraction"
)
}
#[test]
fn http_headers_keys() {
let mut carrier = http::HeaderMap::new();

View File

@ -2,6 +2,12 @@
## vNext
## 0.30.0
Released 2025-May-23
- Updated `opentelemetry` dependency to version 0.30.0.
## 0.29.0
Released 2025-Mar-21

View File

@ -1,6 +1,6 @@
[package]
name = "opentelemetry-jaeger-propagator"
version = "0.29.0"
version = "0.30.0"
description = "Jaeger propagator for OpenTelemetry"
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-jaeger-propagator"
repository = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-jaeger-propagator"
@ -14,23 +14,26 @@ keywords = ["opentelemetry", "jaeger", "propagator"]
license = "Apache-2.0"
edition = "2021"
rust-version = "1.75.0"
autobenches = false
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
opentelemetry = { version = "0.29", default-features = false, features = [
opentelemetry = { version = "0.30", default-features = false, features = [
"trace",
], path = "../opentelemetry" }
tracing = {workspace = true, optional = true} # optional for opentelemetry internal logging
[dev-dependencies]
opentelemetry = { features = ["testing"], path = "../opentelemetry" }
[features]
default = ["internal-logs"]
internal-logs = ["tracing"]
internal-logs = ["opentelemetry/internal-logs"]
[lints]
workspace = true
[lib]
bench = false

View File

@ -325,7 +325,7 @@ mod tests {
true,
TraceState::default(),
),
format!("{}:{}:0:1", LONG_TRACE_ID_STR, SPAN_ID_STR),
format!("{LONG_TRACE_ID_STR}:{SPAN_ID_STR}:0:1"),
),
(
SpanContext::new(
@ -335,7 +335,7 @@ mod tests {
true,
TraceState::default(),
),
format!("{}:{}:0:0", LONG_TRACE_ID_STR, SPAN_ID_STR),
format!("{LONG_TRACE_ID_STR}:{SPAN_ID_STR}:0:0"),
),
(
SpanContext::new(
@ -345,7 +345,7 @@ mod tests {
true,
TraceState::default(),
),
format!("{}:{}:0:3", LONG_TRACE_ID_STR, SPAN_ID_STR),
format!("{LONG_TRACE_ID_STR}:{SPAN_ID_STR}:0:3"),
),
]
}
@ -356,7 +356,7 @@ mod tests {
let propagator = Propagator::with_custom_header(construct_header);
for (trace_id, span_id, flag, expected) in get_extract_data() {
let mut map: HashMap<String, String> = HashMap::new();
map.set(context_key, format!("{}:{}:0:{}", trace_id, span_id, flag));
map.set(context_key, format!("{trace_id}:{span_id}:0:{flag}"));
let context = propagator.extract(&map);
assert_eq!(context.span().span_context(), &expected);
}
@ -392,7 +392,7 @@ mod tests {
// Propagators implement debug
assert_eq!(
format!("{:?}", default_propagator),
format!("{default_propagator:?}"),
format!(
"Propagator {{ baggage_prefix: \"{}\", header_name: \"{}\", fields: [\"{}\"] }}",
JAEGER_BAGGAGE_PREFIX, JAEGER_HEADER, JAEGER_HEADER
@ -641,10 +641,7 @@ mod tests {
}
for (trace_id, span_id, flag, expected) in get_extract_data() {
let mut map: HashMap<String, String> = HashMap::new();
map.set(
JAEGER_HEADER,
format!("{}:{}:0:{}", trace_id, span_id, flag),
);
map.set(JAEGER_HEADER, format!("{trace_id}:{span_id}:0:{flag}"));
let context = propagator.extract(&map);
assert_eq!(context.span().span_context(), &expected);
}
@ -655,7 +652,7 @@ mod tests {
let mut map: HashMap<String, String> = HashMap::new();
map.set(
JAEGER_HEADER,
format!("{}:{}:0:1:aa", LONG_TRACE_ID_STR, SPAN_ID_STR),
format!("{LONG_TRACE_ID_STR}:{SPAN_ID_STR}:0:1:aa"),
);
let propagator = Propagator::new();
let context = propagator.extract(&map);
@ -667,7 +664,7 @@ mod tests {
let mut map: HashMap<String, String> = HashMap::new();
map.set(
JAEGER_HEADER,
format!("{}:{}:0:aa", LONG_TRACE_ID_STR, SPAN_ID_STR),
format!("{LONG_TRACE_ID_STR}:{SPAN_ID_STR}:0:aa"),
);
let propagator = Propagator::new();
let context = propagator.extract(&map);
@ -679,7 +676,7 @@ mod tests {
let mut map: HashMap<String, String> = HashMap::new();
map.set(
JAEGER_HEADER,
format!("{}%3A{}%3A0%3A1", LONG_TRACE_ID_STR, SPAN_ID_STR),
format!("{LONG_TRACE_ID_STR}%3A{SPAN_ID_STR}%3A0%3A1"),
);
let propagator = Propagator::new();
let context = propagator.extract(&map);

View File

@ -2,6 +2,22 @@
## vNext
## 0.30.0
Released 2025-May-23
- Update `opentelemetry` dependency version to 0.30
- Update `opentelemetry_sdk` dependency version to 0.30
- Update `opentelemetry-http` dependency version to 0.30
- Update `opentelemetry-proto` dependency version to 0.30
- Update `tonic` dependency version to 0.13
- Re-export `tonic` types under `tonic_types`
[2898](https://github.com/open-telemetry/opentelemetry-rust/pull/2898)
- Publicly re-exported `MetricExporterBuilder`, `SpanExporterBuilder`, and
`LogExporterBuilder` types, enabling users to directly reference and use these
builder types for metrics, traces, and logs exporters.
[2966](https://github.com/open-telemetry/opentelemetry-rust/pull/2966)
## 0.29.0
Released 2025-Mar-21
@ -11,7 +27,7 @@ Released 2025-Mar-21
- Update `opentelemetry-http` dependency version to 0.29
- Update `opentelemetry-proto` dependency version to 0.29
- The `OTEL_EXPORTER_OTLP_TIMEOUT`, `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT`, `OTEL_EXPORTER_OTLP_METRICS_TIMEOUT` and `OTEL_EXPORTER_OTLP_LOGS_TIMEOUT` are changed from seconds to miliseconds.
- The `OTEL_EXPORTER_OTLP_TIMEOUT`, `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT`, `OTEL_EXPORTER_OTLP_METRICS_TIMEOUT` and `OTEL_EXPORTER_OTLP_LOGS_TIMEOUT` are changed from seconds to milliseconds.
- Fixed `.with_headers()` in `HttpExporterBuilder` to correctly support multiple key/value pairs. [#2699](https://github.com/open-telemetry/opentelemetry-rust/pull/2699)
- Fixed
[#2770](https://github.com/open-telemetry/opentelemetry-rust/issues/2770)
@ -183,7 +199,7 @@ now use `.with_resource(RESOURCE::default())` to configure Resource when using
### Added
- Added `DeltaTemporalitySelector` ([#1568])
- Add `webkpi-roots` features to `reqwest` and `tonic` backends
- Add `webpki-roots` features to `reqwest` and `tonic` backends
[#1568]: https://github.com/open-telemetry/opentelemetry-rust/pull/1568

View File

@ -1,6 +1,6 @@
[package]
name = "opentelemetry-otlp"
version = "0.29.0"
version = "0.30.0"
description = "Exporter for the OpenTelemetry Collector"
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-otlp"
repository = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-otlp"
@ -15,6 +15,7 @@ license = "Apache-2.0"
edition = "2021"
rust-version = "1.75.0"
autotests = false
autobenches = false
[[test]]
name = "smoke"
@ -26,11 +27,10 @@ all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
futures-core = { workspace = true }
opentelemetry = { version = "0.29", default-features = false, path = "../opentelemetry" }
opentelemetry_sdk = { version = "0.29", default-features = false, path = "../opentelemetry-sdk" }
opentelemetry-http = { version = "0.29", path = "../opentelemetry-http", optional = true }
opentelemetry-proto = { version = "0.29", path = "../opentelemetry-proto", default-features = false }
opentelemetry = { version = "0.30", default-features = false, path = "../opentelemetry" }
opentelemetry_sdk = { version = "0.30", default-features = false, path = "../opentelemetry-sdk" }
opentelemetry-http = { version = "0.30", path = "../opentelemetry-http", optional = true }
opentelemetry-proto = { version = "0.30", path = "../opentelemetry-proto", default-features = false }
tracing = {workspace = true, optional = true}
prost = { workspace = true, optional = true }
@ -45,12 +45,11 @@ serde_json = { workspace = true, optional = true }
[dev-dependencies]
tokio-stream = { workspace = true, features = ["net"] }
# need tokio runtime to run smoke tests.
opentelemetry_sdk = { features = ["trace", "rt-tokio", "testing"], path = "../opentelemetry-sdk" }
opentelemetry_sdk = { features = ["trace", "testing"], path = "../opentelemetry-sdk" }
tokio = { workspace = true, features = ["macros", "rt-multi-thread"] }
futures-util = { workspace = true }
temp-env = { workspace = true }
tonic = { workspace = true, features = ["server"] }
tonic = { workspace = true, features = ["router", "server"] }
[features]
# telemetry pillars and functions
@ -68,8 +67,8 @@ default = ["http-proto", "reqwest-blocking-client", "trace", "metrics", "logs",
grpc-tonic = ["tonic", "prost", "http", "tokio", "opentelemetry-proto/gen-tonic"]
gzip-tonic = ["tonic/gzip"]
zstd-tonic = ["tonic/zstd"]
tls = ["tonic/tls"]
tls-roots = ["tls", "tonic/tls-roots"]
tls = ["tonic/tls-ring"]
tls-roots = ["tls", "tonic/tls-native-roots"]
tls-webpki-roots = ["tls", "tonic/tls-webpki-roots"]
# http binary
@ -86,3 +85,6 @@ integration-testing = ["tonic", "prost", "tokio/full", "trace", "logs"]
[lints]
workspace = true
[lib]
bench = false

View File

@ -3,20 +3,16 @@
# This is used with cargo-check-external-types to reduce the surface area of downstream crates from
# the public API. Ideally this has as few exceptions as possible.
allowed_external_types = [
"opentelemetry::*",
"opentelemetry_http::*",
"opentelemetry_sdk::*",
# http is a pre 1.0 crate
"http::uri::InvalidUri",
"http::header::name::InvalidHeaderName",
"http::header::value::InvalidHeaderValue",
# prost is a pre 1.0 crate
"prost::error::EncodeError",
# serde
"serde::de::Deserialize",
"serde::ser::Serialize",
# tonic is a pre 1.0 crate
"tonic::status::Code",
"tonic::status::Status",
"tonic::metadata::map::MetadataMap",
"tonic::transport::channel::tls::ClientTlsConfig",
"tonic::transport::tls::Certificate",
"tonic::transport::tls::Identity",
"tonic::transport::channel::Channel",
"tonic::transport::error::Error",
"tonic::service::interceptor::Interceptor",
]

View File

@ -3,7 +3,14 @@ name = "basic-otlp-http"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]]
name = "basic-otlp-http"
path = "src/main.rs"
bench = false
[features]
default = ["reqwest-blocking"]

View File

@ -72,19 +72,21 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
// Create a new OpenTelemetryTracingBridge using the above LoggerProvider.
let otel_layer = OpenTelemetryTracingBridge::new(&logger_provider);
// For the OpenTelemetry layer, add a tracing filter to filter events from
// OpenTelemetry and its dependent crates (opentelemetry-otlp uses crates
// like reqwest/tonic etc.) from being sent back to OTel itself, thus
// preventing infinite telemetry generation. The filter levels are set as
// follows:
// To prevent a telemetry-induced-telemetry loop, OpenTelemetry's own internal
// logging is properly suppressed. However, logs emitted by external components
// (such as reqwest, tonic, etc.) are not suppressed as they do not propagate
// OpenTelemetry context. Until this issue is addressed
// (https://github.com/open-telemetry/opentelemetry-rust/issues/2877),
// filtering like this is the best way to suppress such logs.
//
// The filter levels are set as follows:
// - Allow `info` level and above by default.
// - Restrict `opentelemetry`, `hyper`, `tonic`, and `reqwest` completely.
// Note: This will also drop events from crates like `tonic` etc. even when
// they are used outside the OTLP Exporter. For more details, see:
// https://github.com/open-telemetry/opentelemetry-rust/issues/761
// - Completely restrict logs from `hyper`, `tonic`, `h2`, and `reqwest`.
//
// Note: This filtering will also drop logs from these components even when
// they are used outside of the OTLP Exporter.
let filter_otel = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap())
.add_directive("opentelemetry=off".parse().unwrap())
.add_directive("tonic=off".parse().unwrap())
.add_directive("h2=off".parse().unwrap())
.add_directive("reqwest=off".parse().unwrap());
@ -167,15 +169,15 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
// Collect all shutdown errors
let mut shutdown_errors = Vec::new();
if let Err(e) = tracer_provider.shutdown() {
shutdown_errors.push(format!("tracer provider: {}", e));
shutdown_errors.push(format!("tracer provider: {e}"));
}
if let Err(e) = meter_provider.shutdown() {
shutdown_errors.push(format!("meter provider: {}", e));
shutdown_errors.push(format!("meter provider: {e}"));
}
if let Err(e) = logger_provider.shutdown() {
shutdown_errors.push(format!("logger provider: {}", e));
shutdown_errors.push(format!("logger provider: {e}"));
}
// Return an error if any shutdown failed

View File

@ -3,7 +3,14 @@ name = "basic-otlp"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[[bin]]
name = "basic-otlp"
path = "src/main.rs"
bench = false
[dependencies]
opentelemetry = { path = "../../../opentelemetry" }

View File

@ -66,19 +66,21 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
// Create a new OpenTelemetryTracingBridge using the above LoggerProvider.
let otel_layer = OpenTelemetryTracingBridge::new(&logger_provider);
// For the OpenTelemetry layer, add a tracing filter to filter events from
// OpenTelemetry and its dependent crates (opentelemetry-otlp uses crates
// like reqwest/tonic etc.) from being sent back to OTel itself, thus
// preventing infinite telemetry generation. The filter levels are set as
// follows:
// To prevent a telemetry-induced-telemetry loop, OpenTelemetry's own internal
// logging is properly suppressed. However, logs emitted by external components
// (such as reqwest, tonic, etc.) are not suppressed as they do not propagate
// OpenTelemetry context. Until this issue is addressed
// (https://github.com/open-telemetry/opentelemetry-rust/issues/2877),
// filtering like this is the best way to suppress such logs.
//
// The filter levels are set as follows:
// - Allow `info` level and above by default.
// - Restrict `opentelemetry`, `hyper`, `tonic`, and `reqwest` completely.
// Note: This will also drop events from crates like `tonic` etc. even when
// they are used outside the OTLP Exporter. For more details, see:
// https://github.com/open-telemetry/opentelemetry-rust/issues/761
// - Completely restrict logs from `hyper`, `tonic`, `h2`, and `reqwest`.
//
// Note: This filtering will also drop logs from these components even when
// they are used outside of the OTLP Exporter.
let filter_otel = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap())
.add_directive("opentelemetry=off".parse().unwrap())
.add_directive("tonic=off".parse().unwrap())
.add_directive("h2=off".parse().unwrap())
.add_directive("reqwest=off".parse().unwrap());
@ -160,15 +162,15 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync + 'static>> {
// Collect all shutdown errors
let mut shutdown_errors = Vec::new();
if let Err(e) = tracer_provider.shutdown() {
shutdown_errors.push(format!("tracer provider: {}", e));
shutdown_errors.push(format!("tracer provider: {e}"));
}
if let Err(e) = meter_provider.shutdown() {
shutdown_errors.push(format!("meter provider: {}", e));
shutdown_errors.push(format!("meter provider: {e}"));
}
if let Err(e) = logger_provider.shutdown() {
shutdown_errors.push(format!("logger provider: {}", e));
shutdown_errors.push(format!("logger provider: {e}"));
}
// Return an error if any shutdown failed

View File

@ -3,13 +3,14 @@ use http::{header::CONTENT_TYPE, Method};
use opentelemetry::otel_debug;
use opentelemetry_sdk::error::{OTelSdkError, OTelSdkResult};
use opentelemetry_sdk::logs::{LogBatch, LogExporter};
use std::time;
impl LogExporter for OtlpHttpClient {
async fn export(&self, batch: LogBatch<'_>) -> OTelSdkResult {
let client = self
.client
.lock()
.map_err(|e| OTelSdkError::InternalFailure(format!("Mutex lock failed: {}", e)))?
.map_err(|e| OTelSdkError::InternalFailure(format!("Mutex lock failed: {e}")))?
.clone()
.ok_or(OTelSdkError::AlreadyShutdown)?;
@ -29,11 +30,12 @@ impl LogExporter for OtlpHttpClient {
}
let request_uri = request.uri().to_string();
otel_debug!(name: "HttpLogsClient.CallingExport");
otel_debug!(name: "HttpLogsClient.ExportStarted");
let response = client
.send_bytes(request)
.await
.map_err(|e| OTelSdkError::InternalFailure(format!("{e:?}")))?;
if !response.status().is_success() {
let error = format!(
"OpenTelemetry logs export failed. Url: {}, Status Code: {}, Response: {:?}",
@ -41,14 +43,17 @@ impl LogExporter for OtlpHttpClient {
response.status().as_u16(),
response.body()
);
otel_debug!(name: "HttpLogsClient.ExportFailed", error = &error);
return Err(OTelSdkError::InternalFailure(error));
}
otel_debug!(name: "HttpLogsClient.ExportSucceeded");
Ok(())
}
fn shutdown(&self) -> OTelSdkResult {
fn shutdown_with_timeout(&self, _timeout: time::Duration) -> OTelSdkResult {
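// Note: the timeout is currently unused; shutdown just takes the client out
// of the mutex so subsequent export calls fail with AlreadyShutdown.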
let mut client_guard = self.client.lock().map_err(|e| {
OTelSdkError::InternalFailure(format!("Failed to acquire client lock: {}", e))
OTelSdkError::InternalFailure(format!("Failed to acquire client lock: {e}"))
})?;
if client_guard.take().is_none() {

View File

@ -9,7 +9,7 @@ use opentelemetry_sdk::metrics::data::ResourceMetrics;
use super::OtlpHttpClient;
impl MetricsClient for OtlpHttpClient {
async fn export(&self, metrics: &mut ResourceMetrics) -> OTelSdkResult {
async fn export(&self, metrics: &ResourceMetrics) -> OTelSdkResult {
let client = self
.client
.lock()
@ -19,8 +19,8 @@ impl MetricsClient for OtlpHttpClient {
_ => Err(OTelSdkError::AlreadyShutdown),
})?;
let (body, content_type) = self.build_metrics_export_body(metrics).map_err(|e| {
OTelSdkError::InternalFailure(format!("Failed to serialize metrics: {e:?}"))
let (body, content_type) = self.build_metrics_export_body(metrics).ok_or_else(|| {
OTelSdkError::InternalFailure("Failed to serialize metrics".to_string())
})?;
let mut request = http::Request::builder()
.method(Method::POST)
@ -33,19 +33,36 @@ impl MetricsClient for OtlpHttpClient {
request.headers_mut().insert(k.clone(), v.clone());
}
otel_debug!(name: "HttpMetricsClient.CallingExport");
client
.send_bytes(request)
.await
.map_err(|e| OTelSdkError::InternalFailure(format!("{e:?}")))?;
otel_debug!(name: "HttpMetricsClient.ExportStarted");
let result = client.send_bytes(request).await;
Ok(())
match result {
Ok(response) => {
if response.status().is_success() {
otel_debug!(name: "HttpMetricsClient.ExportSucceeded");
Ok(())
} else {
let error = format!(
"OpenTelemetry metrics export failed. Status Code: {}, Response: {:?}",
response.status().as_u16(),
response.body()
);
otel_debug!(name: "HttpMetricsClient.ExportFailed", error = &error);
Err(OTelSdkError::InternalFailure(error))
}
}
Err(e) => {
let error = format!("{e:?}");
otel_debug!(name: "HttpMetricsClient.ExportFailed", error = &error);
Err(OTelSdkError::InternalFailure(error))
}
}
}
fn shutdown(&self) -> OTelSdkResult {
self.client
.lock()
.map_err(|e| OTelSdkError::InternalFailure(format!("Failed to acquire lock: {}", e)))?
.map_err(|e| OTelSdkError::InternalFailure(format!("Failed to acquire lock: {e}")))?
.take();
Ok(())

View File

@ -4,6 +4,8 @@ use super::{
};
use crate::{ExportConfig, Protocol, OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_HEADERS};
use http::{HeaderName, HeaderValue, Uri};
#[cfg(feature = "http-json")]
use opentelemetry::otel_debug;
use opentelemetry_http::HttpClient;
use opentelemetry_proto::transform::common::tonic::ResourceAttributesWithSchema;
#[cfg(feature = "logs")]
@ -324,21 +326,22 @@ impl OtlpHttpClient {
#[cfg(feature = "metrics")]
fn build_metrics_export_body(
&self,
metrics: &mut ResourceMetrics,
) -> opentelemetry_sdk::metrics::MetricResult<(Vec<u8>, &'static str)> {
metrics: &ResourceMetrics,
) -> Option<(Vec<u8>, &'static str)> {
use opentelemetry_proto::tonic::collector::metrics::v1::ExportMetricsServiceRequest;
let req: ExportMetricsServiceRequest = (&*metrics).into();
let req: ExportMetricsServiceRequest = metrics.into();
match self.protocol {
#[cfg(feature = "http-json")]
Protocol::HttpJson => match serde_json::to_string_pretty(&req) {
Ok(json) => Ok((json.into(), "application/json")),
Err(e) => Err(opentelemetry_sdk::metrics::MetricError::Other(
e.to_string(),
)),
Ok(json) => Some((json.into(), "application/json")),
Err(e) => {
otel_debug!(name: "JsonSerializationFaied", error = e.to_string());
None
}
},
_ => Ok((req.encode_to_vec(), "application/x-protobuf")),
_ => Some((req.encode_to_vec(), "application/x-protobuf")),
}
}
}
@ -362,7 +365,7 @@ fn resolve_http_endpoint(
provided_endpoint: Option<&str>,
) -> Result<Uri, ExporterBuildError> {
// programmatic configuration overrides any value set via environment variables
if let Some(provider_endpoint) = provided_endpoint {
if let Some(provider_endpoint) = provided_endpoint.filter(|s| !s.is_empty()) {
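// An empty string is treated the same as unset, so the environment
// variable (or the built-in default) still applies.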
provider_endpoint
.parse()
.map_err(|er: http::uri::InvalidUri| {
@ -525,6 +528,15 @@ mod tests {
);
}
#[test]
fn test_use_default_when_empty_string_for_option() {
run_env_test(vec![], || {
let endpoint =
super::resolve_http_endpoint("non_existent_var", "/v1/traces", Some("")).unwrap();
assert_eq!(endpoint, "http://localhost:4318/v1/traces");
});
}
#[test]
fn test_use_default_when_others_missing() {
run_env_test(vec![], || {
@ -603,17 +615,14 @@ mod tests {
assert_eq!(
headers.len(),
expected_headers.len(),
"Failed on input: {}",
input_str
"Failed on input: {input_str}"
);
for (expected_key, expected_value) in expected_headers {
assert_eq!(
headers.get(&HeaderName::from_static(expected_key)),
Some(&HeaderValue::from_static(expected_value)),
"Failed on key: {} with input: {}",
expected_key,
input_str
"Failed on key: {expected_key} with input: {input_str}"
);
}
}
@ -653,17 +662,14 @@ mod tests {
assert_eq!(
headers.len(),
expected_headers.len(),
"Failed on input: {}",
input_str
"Failed on input: {input_str}"
);
for (expected_key, expected_value) in expected_headers {
assert_eq!(
headers.get(&HeaderName::from_static(expected_key)),
Some(&HeaderValue::from_static(expected_value)),
"Failed on key: {} with input: {}",
expected_key,
input_str
"Failed on key: {expected_key} with input: {input_str}"
);
}
}

View File

@ -13,7 +13,7 @@ impl SpanExporter for OtlpHttpClient {
let client = match self
.client
.lock()
.map_err(|e| OTelSdkError::InternalFailure(format!("Mutex lock failed: {}", e)))
.map_err(|e| OTelSdkError::InternalFailure(format!("Mutex lock failed: {e}")))
.and_then(|g| match &*g {
Some(client) => Ok(Arc::clone(client)),
_ => Err(OTelSdkError::AlreadyShutdown),
@ -42,7 +42,7 @@ impl SpanExporter for OtlpHttpClient {
}
let request_uri = request.uri().to_string();
otel_debug!(name: "HttpTracesClient.CallingExport");
otel_debug!(name: "HttpTracesClient.ExportStarted");
let response = client
.send_bytes(request)
.await
@ -51,19 +51,21 @@ impl SpanExporter for OtlpHttpClient {
if !response.status().is_success() {
let error = format!(
"OpenTelemetry trace export failed. Url: {}, Status Code: {}, Response: {:?}",
response.status().as_u16(),
request_uri,
response.status().as_u16(),
response.body()
);
otel_debug!(name: "HttpTracesClient.ExportFailed", error = &error);
return Err(OTelSdkError::InternalFailure(error));
}
otel_debug!(name: "HttpTracesClient.ExportSucceeded");
Ok(())
}
fn shutdown(&mut self) -> OTelSdkResult {
let mut client_guard = self.client.lock().map_err(|e| {
OTelSdkError::InternalFailure(format!("Failed to acquire client lock: {}", e))
OTelSdkError::InternalFailure(format!("Failed to acquire client lock: {e}"))
})?;
if client_guard.take().is_none() {

View File

@ -396,8 +396,7 @@ mod tests {
exporter_result,
Err(crate::exporter::ExporterBuildError::InvalidUri(_, _))
),
"Expected InvalidUri error, but got {:?}",
exporter_result
"Expected InvalidUri error, but got {exporter_result:?}"
);
}

View File

@ -5,6 +5,7 @@ use opentelemetry_proto::tonic::collector::logs::v1::{
};
use opentelemetry_sdk::error::{OTelSdkError, OTelSdkResult};
use opentelemetry_sdk::logs::{LogBatch, LogExporter};
use std::time;
use tokio::sync::Mutex;
use tonic::{codegen::CompressionEncoding, service::Interceptor, transport::Channel, Request};
@ -62,7 +63,7 @@ impl LogExporter for TonicLogsClient {
let (m, e, _) = inner
.interceptor
.call(Request::new(()))
.map_err(|e| OTelSdkError::InternalFailure(format!("error: {:?}", e)))?
.map_err(|e| OTelSdkError::InternalFailure(format!("error: {e:?}")))?
.into_parts();
(inner.client.clone(), m, e)
}
@ -71,20 +72,30 @@ impl LogExporter for TonicLogsClient {
let resource_logs = group_logs_by_resource_and_scope(batch, &self.resource);
otel_debug!(name: "TonicsLogsClient.CallingExport");
otel_debug!(name: "TonicLogsClient.ExportStarted");
client
let result = client
.export(Request::from_parts(
metadata,
extensions,
ExportLogsServiceRequest { resource_logs },
))
.await
.map_err(|e| OTelSdkError::InternalFailure(format!("export error: {:?}", e)))?;
Ok(())
.await;
match result {
Ok(_) => {
otel_debug!(name: "TonicLogsClient.ExportSucceeded");
Ok(())
}
Err(e) => {
let error = format!("export error: {e:?}");
otel_debug!(name: "TonicLogsClient.ExportFailed", error = &error);
Err(OTelSdkError::InternalFailure(error))
}
}
}
fn shutdown(&self) -> OTelSdkResult {
fn shutdown_with_timeout(&self, _timeout: time::Duration) -> OTelSdkResult {
// TODO: Implement actual shutdown
// Due to the use of tokio::sync::Mutex to guard
// the inner client, we need to await the call to lock the mutex

View File

@ -52,7 +52,7 @@ impl TonicMetricsClient {
}
impl MetricsClient for TonicMetricsClient {
async fn export(&self, metrics: &mut ResourceMetrics) -> OTelSdkResult {
async fn export(&self, metrics: &ResourceMetrics) -> OTelSdkResult {
let (mut client, metadata, extensions) = self
.inner
.lock()
@ -75,24 +75,33 @@ impl MetricsClient for TonicMetricsClient {
)),
})?;
otel_debug!(name: "TonicsMetricsClient.CallingExport");
otel_debug!(name: "TonicMetricsClient.ExportStarted");
client
let result = client
.export(Request::from_parts(
metadata,
extensions,
ExportMetricsServiceRequest::from(&*metrics),
ExportMetricsServiceRequest::from(metrics),
))
.await
.map_err(|e| OTelSdkError::InternalFailure(format!("{e:?}")))?;
.await;
Ok(())
match result {
Ok(_) => {
otel_debug!(name: "TonicMetricsClient.ExportSucceeded");
Ok(())
}
Err(e) => {
let error = format!("{e:?}");
otel_debug!(name: "TonicMetricsClient.ExportFailed", error = &error);
Err(OTelSdkError::InternalFailure(error))
}
}
}
fn shutdown(&self) -> OTelSdkResult {
self.inner
.lock()
.map_err(|e| OTelSdkError::InternalFailure(format!("Failed to acquire lock: {}", e)))?
.map_err(|e| OTelSdkError::InternalFailure(format!("Failed to acquire lock: {e}")))?
.take();
Ok(())

View File

@ -145,6 +145,8 @@ impl Default for TonicExporterBuilder {
}
impl TonicExporterBuilder {
// This is for clippy to work with only the grpc-tonic feature enabled
#[allow(unused)]
fn build_channel(
self,
signal_endpoint_var: &str,
@ -223,7 +225,7 @@ impl TonicExporterBuilder {
// If users for some reason want to use a custom path, they can use env var or builder to pass it
//
// programmatic configuration overrides any value set via environment variables
if let Some(endpoint) = provided_endpoint {
if let Some(endpoint) = provided_endpoint.filter(|s| !s.is_empty()) {
endpoint
} else if let Ok(endpoint) = env::var(default_endpoint_var) {
endpoint
@ -514,6 +516,7 @@ mod tests {
assert!(tonic::codec::CompressionEncoding::try_from(Compression::Zstd).is_err());
}
#[cfg(feature = "zstd-tonic")]
#[test]
fn test_priority_of_signal_env_over_generic_env_for_compression() {
run_env_test(
@ -532,6 +535,7 @@ mod tests {
);
}
#[cfg(feature = "zstd-tonic")]
#[test]
fn test_priority_of_code_based_config_over_envs_for_compression() {
run_env_test(
@ -662,4 +666,15 @@ mod tests {
assert_eq!(url, "http://localhost:4317");
});
}
#[test]
fn test_use_default_when_empty_string_for_option() {
run_env_test(vec![], || {
let url = TonicExporterBuilder::resolve_endpoint(
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
Some(String::new()),
);
assert_eq!(url, "http://localhost:4317");
});
}
}

View File

@ -67,7 +67,7 @@ impl SpanExporter for TonicTracesClient {
.lock()
.await // tokio::sync::Mutex doesn't return a poisoned error, so we can safely use the interceptor here
.call(Request::new(()))
.map_err(|e| OTelSdkError::InternalFailure(format!("error: {:?}", e)))?
.map_err(|e| OTelSdkError::InternalFailure(format!("error: {e:?}")))?
.into_parts();
(inner.client.clone(), m, e)
}
@ -76,17 +76,27 @@ impl SpanExporter for TonicTracesClient {
let resource_spans = group_spans_by_resource_and_scope(batch, &self.resource);
otel_debug!(name: "TonicsTracesClient.CallingExport");
otel_debug!(name: "TonicTracesClient.ExportStarted");
client
let result = client
.export(Request::from_parts(
metadata,
extensions,
ExportTraceServiceRequest { resource_spans },
))
.await
.map_err(|e| OTelSdkError::InternalFailure(e.to_string()))?;
Ok(())
.await;
match result {
Ok(_) => {
otel_debug!(name: "TonicTracesClient.ExportSucceeded");
Ok(())
}
Err(e) => {
let error = e.to_string();
otel_debug!(name: "TonicTracesClient.ExportFailed", error = &error);
Err(OTelSdkError::InternalFailure(error))
}
}
}
fn shutdown(&mut self) -> OTelSdkResult {

View File

@ -39,10 +39,13 @@
//! .build()?;
//!
//! // Create a tracer provider with the exporter
//! let _ = opentelemetry_sdk::trace::SdkTracerProvider::builder()
//! let tracer_provider = opentelemetry_sdk::trace::SdkTracerProvider::builder()
//! .with_simple_exporter(otlp_exporter)
//! .build();
//!
//! // Set it as the global provider
//! global::set_tracer_provider(tracer_provider);
//!
//! // Get a tracer and create spans
//! let tracer = global::tracer("my_tracer");
//! tracer.in_span("doing_work", |_cx| {
@ -62,25 +65,30 @@
//! $ docker run -p 4317:4317 otel/opentelemetry-collector:latest
//! ```
//!
//! Configure your application to export traces via gRPC:
//! Configure your application to export traces via gRPC (the tonic client requires a Tokio runtime):
//!
//! - With `#[tokio::main]`
//!
//! ```no_run
//! # #[cfg(all(feature = "trace", feature = "grpc-tonic"))]
//! # {
//! use opentelemetry::global;
//! use opentelemetry::trace::Tracer;
//! use opentelemetry::{global, trace::Tracer};
//!
//! fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
//! // Initialize OTLP exporter using gRPC (Tonic)
//! let otlp_exporter = opentelemetry_otlp::SpanExporter::builder()
//! .with_tonic()
//! .build()?;
//!
//! // Create a tracer provider with the exporter
//! let _ = opentelemetry_sdk::trace::SdkTracerProvider::builder()
//! let tracer_provider = opentelemetry_sdk::trace::SdkTracerProvider::builder()
//! .with_simple_exporter(otlp_exporter)
//! .build();
//!
//! // Set it as the global provider
//! global::set_tracer_provider(tracer_provider);
//!
//! // Get a tracer and create spans
//! let tracer = global::tracer("my_tracer");
//! tracer.in_span("doing_work", |_cx| {
@ -92,6 +100,41 @@
//! }
//! ```
//!
//! - Without `#[tokio::main]`
//!
//! ```no_run
//! # #[cfg(all(feature = "trace", feature = "grpc-tonic"))]
//! # {
//! use opentelemetry::{global, trace::Tracer};
//!
//! fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
//! // Initialize OTLP exporter using gRPC (Tonic)
//! let rt = tokio::runtime::Runtime::new()?;
//! let tracer_provider = rt.block_on(async {
//! let exporter = opentelemetry_otlp::SpanExporter::builder()
//! .with_tonic()
//! .build()
//! .expect("Failed to create span exporter");
//! opentelemetry_sdk::trace::SdkTracerProvider::builder()
//! .with_simple_exporter(exporter)
//! .build()
//! });
//!
//! // Set it as the global provider
//! global::set_tracer_provider(tracer_provider);
//!
//! // Get a tracer and create spans
//! let tracer = global::tracer("my_tracer");
//! tracer.in_span("doing_work", |_cx| {
//! // Your application logic here...
//! });
//!
//! // Ensure the runtime (`rt`) remains active until the program ends
//! Ok(())
//! # }
//! }
//! ```
//!
//! ## Using with Jaeger
//!
//! Jaeger natively supports the OTLP protocol, making it easy to send traces directly:
@ -331,22 +374,25 @@ pub use crate::exporter::ExporterBuildError;
#[cfg(feature = "trace")]
#[cfg(any(feature = "http-proto", feature = "http-json", feature = "grpc-tonic"))]
pub use crate::span::{
SpanExporter, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
OTEL_EXPORTER_OTLP_TRACES_HEADERS, OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
SpanExporter, SpanExporterBuilder, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION,
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_HEADERS,
OTEL_EXPORTER_OTLP_TRACES_TIMEOUT,
};
#[cfg(feature = "metrics")]
#[cfg(any(feature = "http-proto", feature = "http-json", feature = "grpc-tonic"))]
pub use crate::metric::{
MetricExporter, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT,
OTEL_EXPORTER_OTLP_METRICS_HEADERS, OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
MetricExporter, MetricExporterBuilder, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION,
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_HEADERS,
OTEL_EXPORTER_OTLP_METRICS_TIMEOUT,
};
#[cfg(feature = "logs")]
#[cfg(any(feature = "http-proto", feature = "http-json", feature = "grpc-tonic"))]
pub use crate::logs::{
LogExporter, OTEL_EXPORTER_OTLP_LOGS_COMPRESSION, OTEL_EXPORTER_OTLP_LOGS_ENDPOINT,
OTEL_EXPORTER_OTLP_LOGS_HEADERS, OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
LogExporter, LogExporterBuilder, OTEL_EXPORTER_OTLP_LOGS_COMPRESSION,
OTEL_EXPORTER_OTLP_LOGS_ENDPOINT, OTEL_EXPORTER_OTLP_LOGS_HEADERS,
OTEL_EXPORTER_OTLP_LOGS_TIMEOUT,
};
#[cfg(any(feature = "http-proto", feature = "http-json"))]
@ -370,6 +416,8 @@ pub struct NoExporterBuilderSet;
///
/// Allowing access to [TonicExporterBuilder] specific configuration methods.
#[cfg(feature = "grpc-tonic")]
// This is for clippy to work with only the grpc-tonic feature enabled
#[allow(unused)]
#[derive(Debug, Default)]
pub struct TonicExporterBuilderSet(TonicExporterBuilder);
@ -405,3 +453,20 @@ pub enum Protocol {
#[doc(hidden)]
/// Placeholder type when no exporter pipeline has been configured in telemetry pipeline.
pub struct NoExporterConfig(());
/// Re-exported types from the `tonic` crate.
#[cfg(feature = "grpc-tonic")]
pub mod tonic_types {
/// Re-exported types from `tonic::metadata`.
pub mod metadata {
#[doc(no_inline)]
pub use tonic::metadata::MetadataMap;
}
/// Re-exported types from `tonic::transport`.
#[cfg(feature = "tls")]
pub mod transport {
#[doc(no_inline)]
pub use tonic::transport::{Certificate, ClientTlsConfig, Identity};
}
}
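// A hypothetical use of the re-exports above: building gRPC metadata without a
// direct `tonic` dependency (sketch only; "x-api-key" is a made-up header).
use opentelemetry_otlp::tonic_types::metadata::MetadataMap;

let mut metadata = MetadataMap::new();
metadata.insert("x-api-key", "some-value".parse().unwrap());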

View File

@ -4,9 +4,9 @@
#[cfg(feature = "grpc-tonic")]
use opentelemetry::otel_debug;
use std::fmt::Debug;
use opentelemetry_sdk::{error::OTelSdkResult, logs::LogBatch};
use std::fmt::Debug;
use std::time;
use crate::{ExporterBuildError, HasExportConfig, NoExporterBuilderSet};
@ -31,6 +31,7 @@ pub const OTEL_EXPORTER_OTLP_LOGS_TIMEOUT: &str = "OTEL_EXPORTER_OTLP_LOGS_TIMEO
/// Note: this is only supported for HTTP.
pub const OTEL_EXPORTER_OTLP_LOGS_HEADERS: &str = "OTEL_EXPORTER_OTLP_LOGS_HEADERS";
/// Builder for creating a new [LogExporter].
#[derive(Debug, Default, Clone)]
pub struct LogExporterBuilder<C> {
client: C,
@ -38,10 +39,12 @@ pub struct LogExporterBuilder<C> {
}
impl LogExporterBuilder<NoExporterBuilderSet> {
/// Create a new [LogExporterBuilder] with default settings.
pub fn new() -> Self {
LogExporterBuilder::default()
}
/// With the gRPC Tonic transport.
#[cfg(feature = "grpc-tonic")]
pub fn with_tonic(self) -> LogExporterBuilder<TonicExporterBuilderSet> {
LogExporterBuilder {
@ -50,6 +53,7 @@ impl LogExporterBuilder<NoExporterBuilderSet> {
}
}
/// With the HTTP transport.
#[cfg(any(feature = "http-proto", feature = "http-json"))]
pub fn with_http(self) -> LogExporterBuilder<HttpExporterBuilderSet> {
LogExporterBuilder {
@ -61,6 +65,7 @@ impl LogExporterBuilder<NoExporterBuilderSet> {
#[cfg(feature = "grpc-tonic")]
impl LogExporterBuilder<TonicExporterBuilderSet> {
/// Build the [LogExporter] with the gRPC Tonic transport.
pub fn build(self) -> Result<LogExporter, ExporterBuildError> {
let result = self.client.0.build_log_exporter();
otel_debug!(name: "LogExporterBuilt", result = format!("{:?}", &result));
@ -70,6 +75,7 @@ impl LogExporterBuilder<TonicExporterBuilderSet> {
#[cfg(any(feature = "http-proto", feature = "http-json"))]
impl LogExporterBuilder<HttpExporterBuilderSet> {
/// Build the [LogExporter] with the HTTP transport.
pub fn build(self) -> Result<LogExporter, ExporterBuildError> {
self.client.0.build_log_exporter()
}
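// Sketch of the typestate flow these impls enable: `with_tonic()`/`with_http()`
// select the transport, then `build()` becomes available (grpc-tonic shown).
let exporter = opentelemetry_otlp::LogExporter::builder()
    .with_tonic()
    .build()
    .expect("Failed to create log exporter");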
@ -157,7 +163,7 @@ impl opentelemetry_sdk::logs::LogExporter for LogExporter {
}
}
fn shutdown(&self) -> OTelSdkResult {
fn shutdown_with_timeout(&self, _timeout: time::Duration) -> OTelSdkResult {
match &self.client {
#[cfg(feature = "grpc-tonic")]
SupportedTransportClient::Tonic(client) => client.shutdown(),

View File

@ -21,6 +21,7 @@ use opentelemetry_sdk::metrics::{
data::ResourceMetrics, exporter::PushMetricExporter, Temporality,
};
use std::fmt::{Debug, Formatter};
use std::time::Duration;
/// Target to which the exporter is going to send metrics. For OTLP/HTTP this
/// defaults to http://localhost:4318/v1/metrics (OTLP/gRPC uses http://localhost:4317).
/// Learn about the relationship between this constant and the spans/logs defaults at
@ -36,6 +37,7 @@ pub const OTEL_EXPORTER_OTLP_METRICS_COMPRESSION: &str = "OTEL_EXPORTER_OTLP_MET
/// Note: this is only supported for HTTP.
pub const OTEL_EXPORTER_OTLP_METRICS_HEADERS: &str = "OTEL_EXPORTER_OTLP_METRICS_HEADERS";
/// A builder for creating a new [MetricExporter].
#[derive(Debug, Default, Clone)]
pub struct MetricExporterBuilder<C> {
client: C,
@ -43,12 +45,14 @@ pub struct MetricExporterBuilder<C> {
}
impl MetricExporterBuilder<NoExporterBuilderSet> {
/// Create a new [MetricExporterBuilder] with default settings.
pub fn new() -> Self {
MetricExporterBuilder::default()
}
}
impl<C> MetricExporterBuilder<C> {
/// With the gRPC Tonic transport.
#[cfg(feature = "grpc-tonic")]
pub fn with_tonic(self) -> MetricExporterBuilder<TonicExporterBuilderSet> {
MetricExporterBuilder {
@ -57,6 +61,7 @@ impl<C> MetricExporterBuilder<C> {
}
}
/// With the HTTP transport.
#[cfg(any(feature = "http-proto", feature = "http-json"))]
pub fn with_http(self) -> MetricExporterBuilder<HttpExporterBuilderSet> {
MetricExporterBuilder {
@ -65,6 +70,7 @@ impl<C> MetricExporterBuilder<C> {
}
}
/// Set the temporality for the metrics.
pub fn with_temporality(self, temporality: Temporality) -> MetricExporterBuilder<C> {
MetricExporterBuilder {
client: self.client,
@ -75,6 +81,7 @@ impl<C> MetricExporterBuilder<C> {
#[cfg(feature = "grpc-tonic")]
impl MetricExporterBuilder<TonicExporterBuilderSet> {
/// Build the [MetricExporter] with the gRPC Tonic transport.
pub fn build(self) -> Result<MetricExporter, ExporterBuildError> {
let exporter = self.client.0.build_metrics_exporter(self.temporality)?;
opentelemetry::otel_debug!(name: "MetricExporterBuilt");
@ -84,6 +91,7 @@ impl MetricExporterBuilder<TonicExporterBuilderSet> {
#[cfg(any(feature = "http-proto", feature = "http-json"))]
impl MetricExporterBuilder<HttpExporterBuilderSet> {
/// Build the [MetricExporter] with the HTTP transport.
pub fn build(self) -> Result<MetricExporter, ExporterBuildError> {
let exporter = self.client.0.build_metrics_exporter(self.temporality)?;
Ok(exporter)
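// Sketch combining the transport and temporality setters (grpc-tonic assumed;
// Temporality::Delta is one option, Cumulative is the default):
let exporter = opentelemetry_otlp::MetricExporter::builder()
    .with_temporality(opentelemetry_sdk::metrics::Temporality::Delta)
    .with_tonic()
    .build()
    .expect("Failed to create metric exporter");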
@ -122,7 +130,7 @@ impl HasHttpConfig for MetricExporterBuilder<HttpExporterBuilderSet> {
pub(crate) trait MetricsClient: fmt::Debug + Send + Sync + 'static {
fn export(
&self,
metrics: &mut ResourceMetrics,
metrics: &ResourceMetrics,
) -> impl std::future::Future<Output = OTelSdkResult> + Send;
fn shutdown(&self) -> OTelSdkResult;
}
@ -148,7 +156,7 @@ impl Debug for MetricExporter {
}
impl PushMetricExporter for MetricExporter {
async fn export(&self, metrics: &mut ResourceMetrics) -> OTelSdkResult {
async fn export(&self, metrics: &ResourceMetrics) -> OTelSdkResult {
match &self.client {
#[cfg(feature = "grpc-tonic")]
SupportedTransportClient::Tonic(client) => client.export(metrics).await,
@ -163,6 +171,10 @@ impl PushMetricExporter for MetricExporter {
}
fn shutdown(&self) -> OTelSdkResult {
self.shutdown_with_timeout(Duration::from_secs(5))
}
fn shutdown_with_timeout(&self, _timeout: std::time::Duration) -> OTelSdkResult {
match &self.client {
#[cfg(feature = "grpc-tonic")]
SupportedTransportClient::Tonic(client) => client.shutdown(),

View File

@ -36,16 +36,19 @@ pub const OTEL_EXPORTER_OTLP_TRACES_COMPRESSION: &str = "OTEL_EXPORTER_OTLP_TRAC
/// Note: this is only supported for HTTP.
pub const OTEL_EXPORTER_OTLP_TRACES_HEADERS: &str = "OTEL_EXPORTER_OTLP_TRACES_HEADERS";
/// OTLP span exporter builder
#[derive(Debug, Default, Clone)]
pub struct SpanExporterBuilder<C> {
client: C,
}
impl SpanExporterBuilder<NoExporterBuilderSet> {
/// Create a new [SpanExporterBuilder] with default settings.
pub fn new() -> Self {
SpanExporterBuilder::default()
}
/// With the gRPC Tonic transport.
#[cfg(feature = "grpc-tonic")]
pub fn with_tonic(self) -> SpanExporterBuilder<TonicExporterBuilderSet> {
SpanExporterBuilder {
@ -53,6 +56,7 @@ impl SpanExporterBuilder<NoExporterBuilderSet> {
}
}
/// With the HTTP transport.
#[cfg(any(feature = "http-proto", feature = "http-json"))]
pub fn with_http(self) -> SpanExporterBuilder<HttpExporterBuilderSet> {
SpanExporterBuilder {
@ -63,6 +67,7 @@ impl SpanExporterBuilder<NoExporterBuilderSet> {
#[cfg(feature = "grpc-tonic")]
impl SpanExporterBuilder<TonicExporterBuilderSet> {
/// Build the [SpanExporter] with the gRPC Tonic transport.
pub fn build(self) -> Result<SpanExporter, ExporterBuildError> {
let span_exporter = self.client.0.build_span_exporter()?;
opentelemetry::otel_debug!(name: "SpanExporterBuilt");
@ -72,6 +77,7 @@ impl SpanExporterBuilder<TonicExporterBuilderSet> {
#[cfg(any(feature = "http-proto", feature = "http-json"))]
impl SpanExporterBuilder<HttpExporterBuilderSet> {
/// Build the [SpanExporter] with the HTTP transport.
pub fn build(self) -> Result<SpanExporter, ExporterBuildError> {
let span_exporter = self.client.0.build_span_exporter()?;
Ok(span_exporter)
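// Sketch of the equivalent HTTP path (one of the http-* features assumed):
let exporter = opentelemetry_otlp::SpanExporter::builder()
    .with_http()
    .build()
    .expect("Failed to create span exporter");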

View File

@ -2,7 +2,10 @@
name = "integration_test_runner"
version = "0.1.0"
edition = "2021"
license = "Apache-2.0"
rust-version = "1.75.0"
publish = false
autobenches = false
[dependencies]
opentelemetry = { path = "../../../opentelemetry", features = [] }
@ -11,7 +14,6 @@ opentelemetry-proto = { path = "../../../opentelemetry-proto", features = ["gen-
tokio = { workspace = true, features = ["full"] }
serde_json = { workspace = true }
testcontainers = { workspace = true, features = ["http_wait"] }
once_cell.workspace = true
anyhow = { workspace = true }
ctor = { workspace = true }
uuid = { workspace = true, features = ["v4"] }
@ -36,3 +38,6 @@ default = ["tonic-client", "internal-logs"]
[lints]
workspace = true
[lib]
bench = false

View File

@ -10,7 +10,8 @@
}
}
],
"droppedAttributesCount": 0
"droppedAttributesCount": 0,
"entityRefs": []
},
"scopeSpans": [
{

View File

@ -1,4 +1,10 @@
pub mod logs_asserter;
#[cfg(any(
feature = "hyper-client",
feature = "reqwest-client",
feature = "reqwest-blocking-client",
feature = "tonic-client"
))]
pub mod metric_helpers;
pub mod test_utils;
pub mod trace_asserter;

View File

@ -1,8 +1,7 @@
use anyhow::Result;
use opentelemetry_proto::tonic::{
common::v1::KeyValue,
logs::v1::{LogRecord, LogsData, ResourceLogs},
};
#[cfg(feature = "experimental_metadata_attributes")]
use opentelemetry_proto::tonic::common::v1::KeyValue;
use opentelemetry_proto::tonic::logs::v1::{LogRecord, LogsData, ResourceLogs};
use std::fs::File;
// Given two ResourceLogs, assert that they are equal except for the timestamps

View File

@ -101,9 +101,7 @@ pub fn assert_metrics_results_contains(expected_content: &str) -> Result<()> {
reader.read_to_string(&mut contents)?;
assert!(
contents.contains(expected_content),
"Expected content {} not found in actual content {}",
expected_content,
contents
"Expected content {expected_content} not found in actual content {contents}"
);
Ok(())
}
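// The changes in this file migrate to inline format arguments (stable since
// Rust 1.58); a minimal sketch of the equivalence:
let scope_name = "basic";
assert_eq!(
    format!("./expected/metrics/{scope_name}.json"),
    format!("./expected/metrics/{}.json", scope_name)
);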
@ -162,12 +160,7 @@ pub fn fetch_latest_metrics_for_scope(scope_name: &str) -> Result<Value> {
None
})
})
.with_context(|| {
format!(
"No valid JSON line containing scope `{}` found.",
scope_name
)
})?;
.with_context(|| format!("No valid JSON line containing scope `{scope_name}` found."))?;
Ok(json_line)
}
@ -178,18 +171,16 @@ pub fn fetch_latest_metrics_for_scope(scope_name: &str) -> Result<Value> {
///
pub fn validate_metrics_against_results(scope_name: &str) -> Result<()> {
// Define the results file path
let results_file_path = format!("./expected/metrics/{}.json", scope_name);
let results_file_path = format!("./expected/metrics/{scope_name}.json");
// Fetch the actual metrics for the given scope
let actual_metrics = fetch_latest_metrics_for_scope(scope_name)
.context(format!("Failed to fetch metrics for scope: {}", scope_name))?;
.context(format!("Failed to fetch metrics for scope: {scope_name}"))?;
// Read the expected metrics from the results file
let expected_metrics = {
let file = File::open(&results_file_path).context(format!(
"Failed to open results file: {}",
results_file_path
))?;
let file = File::open(&results_file_path)
.context(format!("Failed to open results file: {results_file_path}"))?;
read_metrics_from_json(file)
}?;

View File

@ -119,7 +119,7 @@ impl SpanForest {
}
if !spans.is_empty() {
panic!("found spans with invalid parent: {:?}", spans);
panic!("found spans with invalid parent: {spans:?}");
}
forest

View File

@ -38,7 +38,7 @@ mod metrictests_roundtrip {
let metrics: MetricsData = serde_json::from_str(metrics_in)?;
let metrics_out = serde_json::to_string(&metrics)?;
println!("{:}", metrics_out);
println!("{metrics_out:}");
let metrics_in_json: Value = serde_json::from_str(metrics_in)?;
let metrics_out_json: Value = serde_json::from_str(&metrics_out)?;

View File

@ -89,7 +89,7 @@ async fn smoke_tracer() {
opentelemetry_otlp::SpanExporter::builder()
.with_tonic()
.with_compression(opentelemetry_otlp::Compression::Gzip)
.with_endpoint(format!("http://{}", addr))
.with_endpoint(format!("http://{addr}"))
.with_metadata(metadata)
.build()
.expect("gzip-tonic SpanExporter failed to build"),

View File

@ -2,6 +2,19 @@
## vNext
## 0.29.1
Released 2025-April-11
- Update `prometheus` dependency version to 0.14
- Remove `protobuf` dependency
## v0.29.0
- Update `opentelemetry` dependency version to 0.29
- Update `opentelemetry_sdk` dependency version to 0.29
- Update `opentelemetry-semantic-conventions` dependency version to 0.29
## v0.28.0
- Update `opentelemetry` dependency version to 0.28

View File

@ -1,7 +1,7 @@
[package]
name = "opentelemetry-prometheus"
version = "0.28.0"
description = "Prometheus exporter for OpenTelemetry"
version = "0.29.1"
description = "Prometheus exporter for OpenTelemetry (This crate is discontinued and is no longer maintained)"
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-prometheus"
repository = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-prometheus"
readme = "README.md"
@ -21,14 +21,13 @@ rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
once_cell = { version = "1.13" }
opentelemetry = { version = "0.28", default-features = false, features = ["metrics"] }
opentelemetry_sdk = { version = "0.28", default-features = false, features = ["metrics"] }
prometheus = "0.13"
protobuf = "2.14"
tracing = {version = ">=0.1.40", default-features = false, optional = true} # optional for opentelemetry internal logging
opentelemetry = { version = "0.29", default-features = false, features = ["metrics"] }
opentelemetry_sdk = { version = "0.29", default-features = false, features = ["metrics"] }
prometheus = "0.14"
tracing = { version = ">=0.1.40", default-features = false, optional = true } # optional for opentelemetry internal logging
[dev-dependencies]
opentelemetry-semantic-conventions = { version = "0.28" }
opentelemetry-semantic-conventions = { version = "0.29" }
http-body-util = { version = "0.1" }
hyper = { version = "1.3", features = ["full"] }
hyper-util = { version = "0.1", features = ["full"] }
@ -38,3 +37,8 @@ tokio = { version = "1", features = ["full"] }
default = ["internal-logs"]
prometheus-encoding = []
internal-logs = ["tracing"]
[package.metadata.cargo-machete]
ignored = [
"tracing" # needed for `internal-logs`
]

View File

@ -439,13 +439,13 @@ fn validate_metrics(
);
return (true, None);
}
if existing.get_help() != description {
if existing.help() != description {
otel_warn!(
name: "MetricValidationFailed",
message = "Instrument description conflict, using existing",
metric_description = format!("Instrument {name}, Existing: {:?}, dropped: {:?}", existing.get_help().to_string(), description.to_string()).as_str(),
metric_description = format!("Instrument {name}, Existing: {:?}, dropped: {:?}", existing.help().to_string(), description.to_string()).as_str(),
);
return (false, Some(existing.get_help().to_string()));
return (false, Some(existing.help().to_string()));
}
(false, None)
} else {
@ -491,16 +491,16 @@ fn add_histogram_metric<T: Numeric>(
let mut h = prometheus::proto::Histogram::default();
h.set_sample_sum(dp.sum.as_f64());
h.set_sample_count(dp.count);
h.set_bucket(protobuf::RepeatedField::from_vec(bucket));
h.set_bucket(bucket);
let mut pm = prometheus::proto::Metric::default();
pm.set_label(protobuf::RepeatedField::from_vec(kvs));
pm.set_label(kvs);
pm.set_histogram(h);
let mut mf = prometheus::proto::MetricFamily::default();
mf.set_name(name.to_string());
mf.set_help(description.clone());
mf.set_field_type(prometheus::proto::MetricType::HISTOGRAM);
mf.set_metric(protobuf::RepeatedField::from_vec(vec![pm]));
mf.set_metric(vec![pm]);
res.push(mf);
}
}
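// With prometheus 0.14 the protobuf dependency is gone and repeated fields are
// plain vectors, so the generated setters take Vec<T> directly. Minimal sketch:
let pm = prometheus::proto::Metric::default();
let mut mf = prometheus::proto::MetricFamily::default();
mf.set_metric(vec![pm]); // previously: protobuf::RepeatedField::from_vec(vec![pm])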
@ -525,7 +525,7 @@ fn add_sum_metric<T: Numeric>(
);
let mut pm = prometheus::proto::Metric::default();
pm.set_label(protobuf::RepeatedField::from_vec(kvs));
pm.set_label(kvs);
if sum.is_monotonic {
let mut c = prometheus::proto::Counter::default();
@ -541,7 +541,7 @@ fn add_sum_metric<T: Numeric>(
mf.set_name(name.to_string());
mf.set_help(description.clone());
mf.set_field_type(metric_type);
mf.set_metric(protobuf::RepeatedField::from_vec(vec![pm]));
mf.set_metric(vec![pm]);
res.push(mf);
}
}
@ -562,14 +562,14 @@ fn add_gauge_metric<T: Numeric>(
let mut g = prometheus::proto::Gauge::default();
g.set_value(dp.value.as_f64());
let mut pm = prometheus::proto::Metric::default();
pm.set_label(protobuf::RepeatedField::from_vec(kvs));
pm.set_label(kvs);
pm.set_gauge(g);
let mut mf = prometheus::proto::MetricFamily::default();
mf.set_name(name.to_string());
mf.set_help(description.to_string());
mf.set_field_type(MetricType::GAUGE);
mf.set_metric(protobuf::RepeatedField::from_vec(vec![pm]));
mf.set_metric(vec![pm]);
res.push(mf);
}
}
@ -583,17 +583,17 @@ fn create_info_metric(
g.set_value(1.0);
let mut m = prometheus::proto::Metric::default();
m.set_label(protobuf::RepeatedField::from_vec(get_attrs(
m.set_label(get_attrs(
&mut resource.iter(),
&[],
)));
));
m.set_gauge(g);
let mut mf = MetricFamily::default();
mf.set_name(target_info_name.into());
mf.set_help(target_info_description.into());
mf.set_field_type(MetricType::GAUGE);
mf.set_metric(protobuf::RepeatedField::from_vec(vec![m]));
mf.set_metric(vec![m]);
mf
}
@ -614,14 +614,14 @@ fn create_scope_info_metric(scope: &InstrumentationScope) -> MetricFamily {
}
let mut m = prometheus::proto::Metric::default();
m.set_label(protobuf::RepeatedField::from_vec(labels));
m.set_label(labels);
m.set_gauge(g);
let mut mf = MetricFamily::default();
mf.set_name(SCOPE_INFO_METRIC_NAME.into());
mf.set_help(SCOPE_INFO_DESCRIPTION.into());
mf.set_field_type(MetricType::GAUGE);
mf.set_metric(protobuf::RepeatedField::from_vec(vec![m]));
mf.set_metric(vec![m]);
mf
}

View File

@ -26,10 +26,10 @@ pub(crate) fn get_unit_suffixes(unit: &str) -> Option<Cow<'static, str>> {
get_prom_per_unit(second),
) {
(true, _, Some(second_part)) | (false, None, Some(second_part)) => {
Some(Cow::Owned(format!("per_{}", second_part)))
Some(Cow::Owned(format!("per_{second_part}")))
}
(false, Some(first_part), Some(second_part)) => {
Some(Cow::Owned(format!("{}_per_{}", first_part, second_part)))
Some(Cow::Owned(format!("{first_part}_per_{second_part}")))
}
_ => None,
};

View File

@ -2,6 +2,18 @@
## vNext
- Update proto definitions to v1.7.0.
## 0.30.0
Released 2025-May-23
- Update `opentelemetry` dependency version to 0.30
- Updated `opentelemetry_sdk` dependency to version 0.30.0.
- **Feature**: Added Rust code generation for profiles protos. [#2979](https://github.com/open-telemetry/opentelemetry-rust/pull/2979)
- Update `tonic` dependency version to 0.13
- Update proto definitions to v1.6.0.
## 0.29.0
Released 2025-Mar-21

View File

@ -1,6 +1,6 @@
[package]
name = "opentelemetry-proto"
version = "0.29.0"
version = "0.30.0"
description = "Protobuf generated files and transformations."
homepage = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-proto"
repository = "https://github.com/open-telemetry/opentelemetry-rust/tree/main/opentelemetry-proto"
@ -15,9 +15,11 @@ license = "Apache-2.0"
edition = "2021"
rust-version = "1.75.0"
autotests = false
autobenches = false
[lib]
doctest = false
bench = false
[[test]]
name = "grpc_build"
@ -41,22 +43,22 @@ trace = ["opentelemetry/trace", "opentelemetry_sdk/trace"]
metrics = ["opentelemetry/metrics", "opentelemetry_sdk/metrics"]
logs = ["opentelemetry/logs", "opentelemetry_sdk/logs"]
zpages = ["trace"]
profiles = []
testing = ["opentelemetry/testing"]
# add ons
internal-logs = ["tracing"]
internal-logs = ["opentelemetry/internal-logs"]
with-schemars = ["schemars"]
with-serde = ["serde", "hex", "base64"]
with-serde = ["serde", "const-hex", "base64"]
[dependencies]
tonic = { workspace = true, optional = true, features = ["codegen", "prost"] }
prost = { workspace = true, optional = true }
opentelemetry = { version = "0.29", default-features = false, path = "../opentelemetry" }
opentelemetry_sdk = { version = "0.29", default-features = false, path = "../opentelemetry-sdk" }
opentelemetry = { version = "0.30", default-features = false, path = "../opentelemetry" }
opentelemetry_sdk = { version = "0.30", default-features = false, path = "../opentelemetry-sdk" }
schemars = { workspace = true, optional = true }
serde = { workspace = true, optional = true, features = ["serde_derive"] }
hex = { workspace = true, optional = true }
tracing = {workspace = true, optional = true} # optional for opentelemetry internal logging
const-hex = { workspace = true, optional = true }
base64 = { workspace = true, optional = true }
[dev-dependencies]

View File

@ -6,7 +6,7 @@ pub(crate) mod serializers {
use crate::tonic::common::v1::any_value::{self, Value};
use crate::tonic::common::v1::AnyValue;
use serde::de::{self, MapAccess, Visitor};
use serde::ser::{SerializeMap, SerializeStruct};
use serde::ser::{SerializeMap, SerializeSeq, SerializeStruct};
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use std::fmt;
@ -16,7 +16,7 @@ pub(crate) mod serializers {
where
S: Serializer,
{
let hex_string = hex::encode(bytes);
let hex_string = const_hex::encode(bytes);
serializer.serialize_str(&hex_string)
}
@ -37,7 +37,7 @@ pub(crate) mod serializers {
where
E: de::Error,
{
hex::decode(value).map_err(E::custom)
const_hex::decode(value).map_err(E::custom)
}
}
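// const-hex mirrors the hex crate's encode/decode API, so the swap above is
// mechanical. A small sketch:
let s = const_hex::encode([0xde_u8, 0xad, 0xbe, 0xef]);
assert_eq!(s, "deadbeef");
assert_eq!(const_hex::decode(&s).unwrap(), [0xde, 0xad, 0xbe, 0xef]);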
@ -174,6 +174,30 @@ pub(crate) mod serializers {
s.parse::<u64>().map_err(de::Error::custom)
}
pub fn serialize_vec_u64_to_string<S>(value: &[u64], serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
// Each u64 is emitted as its decimal string representation; value.len()
// provides the length hint directly, with no intermediate Vec<String>.
let mut sq = serializer.serialize_seq(Some(value.len()))?;
for v in value {
    sq.serialize_element(&v.to_string())?;
}
sq.end()
}
pub fn deserialize_vec_string_to_vec_u64<'de, D>(deserializer: D) -> Result<Vec<u64>, D::Error>
where
D: Deserializer<'de>,
{
let s: Vec<String> = Deserialize::deserialize(deserializer)?;
s.into_iter()
.map(|v| v.parse::<u64>().map_err(de::Error::custom))
.collect()
}
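// These helpers exist because JSON consumers commonly parse numbers as f64,
// which loses precision above 2^53, so u64 values travel as strings.
// A hypothetical round-trip (the struct and field attributes are illustrative):
#[derive(serde::Serialize, serde::Deserialize)]
struct Timestamps(
    #[serde(
        serialize_with = "serialize_vec_u64_to_string",
        deserialize_with = "deserialize_vec_string_to_vec_u64"
    )]
    Vec<u64>,
);

let json = serde_json::to_string(&Timestamps(vec![u64::MAX])).unwrap();
assert_eq!(json, r#"["18446744073709551615"]"#);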
pub fn serialize_i64_to_string<S>(value: &i64, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
@ -266,5 +290,13 @@ pub mod tonic {
pub mod v1;
}
/// Generated types used in profiles.
#[cfg(feature = "profiles")]
#[path = ""]
pub mod profiles {
#[path = "opentelemetry.proto.profiles.v1development.rs"]
pub mod v1;
}
pub use crate::transform::common::tonic::Attributes;
}

@ -1 +1 @@
Subproject commit 2bd940b2b77c1ab57c27166af21384906da7bb2b
Subproject commit 8654ab7a5a43ca25fe8046e59dcd6935c3f76de0

View File

@ -90,7 +90,7 @@ pub mod logs_service_client {
}
impl<T> LogsServiceClient<T>
where
T: tonic::client::GrpcService<tonic::body::BoxBody>,
T: tonic::client::GrpcService<tonic::body::Body>,
T::Error: Into<StdError>,
T::ResponseBody: Body<Data = Bytes> + std::marker::Send + 'static,
<T::ResponseBody as Body>::Error: Into<StdError> + std::marker::Send,
@ -111,13 +111,13 @@ pub mod logs_service_client {
F: tonic::service::Interceptor,
T::ResponseBody: Default,
T: tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
http::Request<tonic::body::Body>,
Response = http::Response<
<T as tonic::client::GrpcService<tonic::body::BoxBody>>::ResponseBody,
<T as tonic::client::GrpcService<tonic::body::Body>>::ResponseBody,
>,
>,
<T as tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
http::Request<tonic::body::Body>,
>>::Error: Into<StdError> + std::marker::Send + std::marker::Sync,
{
LogsServiceClient::new(InterceptedService::new(inner, interceptor))
@ -153,8 +153,6 @@ pub mod logs_service_client {
self.inner = self.inner.max_encoding_message_size(limit);
self
}
/// For performance reasons, it is recommended to keep this RPC
/// alive for the entire life of the application.
pub async fn export(
&mut self,
request: impl tonic::IntoRequest<super::ExportLogsServiceRequest>,
@ -200,8 +198,6 @@ pub mod logs_service_server {
/// Generated trait containing gRPC methods that should be implemented for use with LogsServiceServer.
#[async_trait]
pub trait LogsService: std::marker::Send + std::marker::Sync + 'static {
/// For performance reasons, it is recommended to keep this RPC
/// alive for the entire life of the application.
async fn export(
&self,
request: tonic::Request<super::ExportLogsServiceRequest>,
@ -278,7 +274,7 @@ pub mod logs_service_server {
B: Body + std::marker::Send + 'static,
B::Error: Into<StdError> + std::marker::Send + 'static,
{
type Response = http::Response<tonic::body::BoxBody>;
type Response = http::Response<tonic::body::Body>;
type Error = std::convert::Infallible;
type Future = BoxFuture<Self::Response, Self::Error>;
fn poll_ready(
@ -336,7 +332,9 @@ pub mod logs_service_server {
}
_ => {
Box::pin(async move {
let mut response = http::Response::new(empty_body());
let mut response = http::Response::new(
tonic::body::Body::default(),
);
let headers = response.headers_mut();
headers
.insert(

View File

@ -90,7 +90,7 @@ pub mod metrics_service_client {
}
impl<T> MetricsServiceClient<T>
where
T: tonic::client::GrpcService<tonic::body::BoxBody>,
T: tonic::client::GrpcService<tonic::body::Body>,
T::Error: Into<StdError>,
T::ResponseBody: Body<Data = Bytes> + std::marker::Send + 'static,
<T::ResponseBody as Body>::Error: Into<StdError> + std::marker::Send,
@ -111,13 +111,13 @@ pub mod metrics_service_client {
F: tonic::service::Interceptor,
T::ResponseBody: Default,
T: tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
http::Request<tonic::body::Body>,
Response = http::Response<
<T as tonic::client::GrpcService<tonic::body::BoxBody>>::ResponseBody,
<T as tonic::client::GrpcService<tonic::body::Body>>::ResponseBody,
>,
>,
<T as tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
http::Request<tonic::body::Body>,
>>::Error: Into<StdError> + std::marker::Send + std::marker::Sync,
{
MetricsServiceClient::new(InterceptedService::new(inner, interceptor))
@ -153,8 +153,6 @@ pub mod metrics_service_client {
self.inner = self.inner.max_encoding_message_size(limit);
self
}
/// For performance reasons, it is recommended to keep this RPC
/// alive for the entire life of the application.
pub async fn export(
&mut self,
request: impl tonic::IntoRequest<super::ExportMetricsServiceRequest>,
@ -200,8 +198,6 @@ pub mod metrics_service_server {
/// Generated trait containing gRPC methods that should be implemented for use with MetricsServiceServer.
#[async_trait]
pub trait MetricsService: std::marker::Send + std::marker::Sync + 'static {
/// For performance reasons, it is recommended to keep this RPC
/// alive for the entire life of the application.
async fn export(
&self,
request: tonic::Request<super::ExportMetricsServiceRequest>,
@ -278,7 +274,7 @@ pub mod metrics_service_server {
B: Body + std::marker::Send + 'static,
B::Error: Into<StdError> + std::marker::Send + 'static,
{
type Response = http::Response<tonic::body::BoxBody>;
type Response = http::Response<tonic::body::Body>;
type Error = std::convert::Infallible;
type Future = BoxFuture<Self::Response, Self::Error>;
fn poll_ready(
@ -336,7 +332,9 @@ pub mod metrics_service_server {
}
_ => {
Box::pin(async move {
let mut response = http::Response::new(empty_body());
let mut response = http::Response::new(
tonic::body::Body::default(),
);
let headers = response.headers_mut();
headers
.insert(

View File

@ -90,7 +90,7 @@ pub mod trace_service_client {
}
impl<T> TraceServiceClient<T>
where
T: tonic::client::GrpcService<tonic::body::BoxBody>,
T: tonic::client::GrpcService<tonic::body::Body>,
T::Error: Into<StdError>,
T::ResponseBody: Body<Data = Bytes> + std::marker::Send + 'static,
<T::ResponseBody as Body>::Error: Into<StdError> + std::marker::Send,
@ -111,13 +111,13 @@ pub mod trace_service_client {
F: tonic::service::Interceptor,
T::ResponseBody: Default,
T: tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
http::Request<tonic::body::Body>,
Response = http::Response<
<T as tonic::client::GrpcService<tonic::body::BoxBody>>::ResponseBody,
<T as tonic::client::GrpcService<tonic::body::Body>>::ResponseBody,
>,
>,
<T as tonic::codegen::Service<
http::Request<tonic::body::BoxBody>,
http::Request<tonic::body::Body>,
>>::Error: Into<StdError> + std::marker::Send + std::marker::Sync,
{
TraceServiceClient::new(InterceptedService::new(inner, interceptor))
@ -153,8 +153,6 @@ pub mod trace_service_client {
self.inner = self.inner.max_encoding_message_size(limit);
self
}
/// For performance reasons, it is recommended to keep this RPC
/// alive for the entire life of the application.
pub async fn export(
&mut self,
request: impl tonic::IntoRequest<super::ExportTraceServiceRequest>,
@ -200,8 +198,6 @@ pub mod trace_service_server {
/// Generated trait containing gRPC methods that should be implemented for use with TraceServiceServer.
#[async_trait]
pub trait TraceService: std::marker::Send + std::marker::Sync + 'static {
/// For performance reasons, it is recommended to keep this RPC
/// alive for the entire life of the application.
async fn export(
&self,
request: tonic::Request<super::ExportTraceServiceRequest>,
@ -278,7 +274,7 @@ pub mod trace_service_server {
B: Body + std::marker::Send + 'static,
B::Error: Into<StdError> + std::marker::Send + 'static,
{
type Response = http::Response<tonic::body::BoxBody>;
type Response = http::Response<tonic::body::Body>;
type Error = std::convert::Infallible;
type Future = BoxFuture<Self::Response, Self::Error>;
fn poll_ready(
@ -336,7 +332,9 @@ pub mod trace_service_server {
}
_ => {
Box::pin(async move {
let mut response = http::Response::new(empty_body());
let mut response = http::Response::new(
tonic::body::Body::default(),
);
let headers = response.headers_mut();
headers
.insert(

View File

@ -106,3 +106,41 @@ pub struct InstrumentationScope {
#[prost(uint32, tag = "4")]
pub dropped_attributes_count: u32,
}
/// A reference to an Entity.
/// Entity represents an object of interest associated with produced telemetry: e.g. spans, metrics, profiles, or logs.
///
/// Status: \[Development\]
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct EntityRef {
/// The Schema URL, if known. This is the identifier of the Schema that the entity data
/// is recorded in. To learn more about Schema URL see
/// <https://opentelemetry.io/docs/specs/otel/schemas/#schema-url>
///
/// This schema_url applies to the data in this message and to the Resource attributes
/// referenced by id_keys and description_keys.
/// TODO: discuss if we are happy with this somewhat complicated definition of what
/// the schema_url applies to.
///
/// This field obsoletes the schema_url field in ResourceMetrics/ResourceSpans/ResourceLogs.
#[prost(string, tag = "1")]
pub schema_url: ::prost::alloc::string::String,
/// Defines the type of the entity. MUST not change during the lifetime of the entity.
/// For example: "service" or "host". This field is required and MUST not be empty
/// for valid entities.
#[prost(string, tag = "2")]
pub r#type: ::prost::alloc::string::String,
/// Attribute Keys that identify the entity.
/// MUST not change during the lifetime of the entity. The Id must contain at least one attribute.
/// These keys MUST exist in the containing {message}.attributes.
#[prost(string, repeated, tag = "3")]
pub id_keys: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
/// Descriptive (non-identifying) attribute keys of the entity.
/// MAY change over the lifetime of the entity. MAY be empty.
/// These attribute keys are not part of entity's identity.
/// These keys MUST exist in the containing {message}.attributes.
#[prost(string, repeated, tag = "4")]
pub description_keys: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
}
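// A hypothetical EntityRef for a "service" entity. The values are illustrative
// (including the schema URL); only the field shapes come from the message above.
let service_entity = EntityRef {
    schema_url: "https://opentelemetry.io/schemas/1.36.0".to_string(),
    r#type: "service".to_string(),
    id_keys: vec!["service.name".to_string()],
    description_keys: vec!["service.version".to_string()],
};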

View File

@ -190,8 +190,6 @@ pub struct LogRecord {
/// as an event.
///
/// \[Optional\].
///
/// Status: \[Development\]
#[prost(string, tag = "12")]
pub event_name: ::prost::alloc::string::String,
}

View File

@ -183,7 +183,7 @@ pub struct Metric {
#[prost(string, tag = "2")]
pub description: ::prost::alloc::string::String,
/// unit in which the metric value is reported. Follows the format
/// described by <http://unitsofmeasure.org/ucum.html.>
/// described by <https://unitsofmeasure.org/ucum.html.>
#[prost(string, tag = "3")]
pub unit: ::prost::alloc::string::String,
/// Additional metadata attributes that describe the metric. \[Optional\].
@ -292,7 +292,7 @@ pub struct ExponentialHistogram {
}
/// Summary metric data are used to convey quantile summaries,
/// a Prometheus (see: <https://prometheus.io/docs/concepts/metric_types/#summary>)
/// and OpenMetrics (see: <https://github.com/OpenObservability/OpenMetrics/blob/4dbf6075567ab43296eed941037c12951faafb92/protos/prometheus.proto#L45>)
/// and OpenMetrics (see: <https://github.com/prometheus/OpenMetrics/blob/4dbf6075567ab43296eed941037c12951faafb92/protos/prometheus.proto#L45>)
/// data type. These data points cannot always be merged in a meaningful way.
/// While they can be useful in some applications, histogram data points are
/// recommended for new applications.
@ -448,7 +448,9 @@ pub struct HistogramDataPoint {
/// The sum of the bucket_counts must equal the value in the count field.
///
/// The number of elements in bucket_counts array must be by one greater than
/// the number of elements in explicit_bounds array.
/// the number of elements in explicit_bounds array. The exception to this rule
/// is when the length of bucket_counts is 0, then the length of explicit_bounds
/// must also be 0.
#[prost(fixed64, repeated, tag = "6")]
pub bucket_counts: ::prost::alloc::vec::Vec<u64>,
/// explicit_bounds specifies buckets with explicitly defined bounds for values.
@ -464,6 +466,9 @@ pub struct HistogramDataPoint {
/// Histogram buckets are inclusive of their upper boundary, except the last
/// bucket where the boundary is at infinity. This format is intentionally
/// compatible with the OpenMetrics histogram definition.
///
/// If bucket_counts length is 0 then explicit_bounds length must also be 0,
/// otherwise the data point is invalid.
#[prost(double, repeated, tag = "7")]
pub explicit_bounds: ::prost::alloc::vec::Vec<f64>,
/// (Optional) List of exemplars collected from
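// A minimal sketch of the invariant described above: N explicit bounds define
// N + 1 buckets, i.e. (-inf, b0], (b0, b1], ..., (b(N-1), +inf).
let explicit_bounds = [0.0_f64, 5.0, 10.0]; // 3 boundaries
let bucket_counts = [1_u64, 2, 3, 4]; // 4 buckets
assert_eq!(bucket_counts.len(), explicit_bounds.len() + 1);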

View File

@ -0,0 +1,498 @@
// This file is @generated by prost-build.
/// ProfilesDictionary represents the profiles data shared across the
/// entire message being sent.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ProfilesDictionary {
/// Mappings from address ranges to the image/binary/library mapped
/// into that address range referenced by locations via Location.mapping_index.
#[prost(message, repeated, tag = "1")]
pub mapping_table: ::prost::alloc::vec::Vec<Mapping>,
/// Locations referenced by samples via Profile.location_indices.
#[prost(message, repeated, tag = "2")]
pub location_table: ::prost::alloc::vec::Vec<Location>,
/// Functions referenced by locations via Line.function_index.
#[prost(message, repeated, tag = "3")]
pub function_table: ::prost::alloc::vec::Vec<Function>,
/// Links referenced by samples via Sample.link_index.
#[prost(message, repeated, tag = "4")]
pub link_table: ::prost::alloc::vec::Vec<Link>,
/// A common table for strings referenced by various messages.
/// string_table\[0\] must always be "".
#[prost(string, repeated, tag = "5")]
pub string_table: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
/// A common table for attributes referenced by various messages.
#[prost(message, repeated, tag = "6")]
pub attribute_table: ::prost::alloc::vec::Vec<super::super::common::v1::KeyValue>,
/// Represents a mapping between Attribute Keys and Units.
#[prost(message, repeated, tag = "7")]
pub attribute_units: ::prost::alloc::vec::Vec<AttributeUnit>,
}
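// The dictionary interns strings once and refers to them by index; by
// convention index 0 is the empty string. A minimal interning sketch:
fn intern(table: &mut Vec<String>, s: &str) -> i32 {
    if let Some(i) = table.iter().position(|t| t == s) {
        return i as i32; // already interned
    }
    table.push(s.to_string());
    (table.len() - 1) as i32
}

let mut string_table = vec![String::new()]; // string_table[0] must always be ""
let main_idx = intern(&mut string_table, "main");
assert_eq!(string_table[main_idx as usize], "main");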
/// ProfilesData represents the profiles data that can be stored in persistent storage,
/// OR can be embedded by other protocols that transfer OTLP profiles data but do not
/// implement the OTLP protocol.
///
/// The main difference between this message and collector protocol is that
/// in this message there will not be any "control" or "metadata" specific to
/// OTLP protocol.
///
/// When new fields are added into this message, the OTLP request MUST be updated
/// as well.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ProfilesData {
/// An array of ResourceProfiles.
/// For data coming from an SDK profiler, this array will typically contain one
/// element. Host-level profilers will usually create one ResourceProfile per
/// container, as well as one additional ResourceProfile grouping all samples
/// from non-containerized processes.
/// Other resource groupings are possible as well and clarified via
/// Resource.attributes and semantic conventions.
#[prost(message, repeated, tag = "1")]
pub resource_profiles: ::prost::alloc::vec::Vec<ResourceProfiles>,
/// One instance of ProfilesDictionary
#[prost(message, optional, tag = "2")]
pub dictionary: ::core::option::Option<ProfilesDictionary>,
}
/// A collection of ScopeProfiles from a Resource.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ResourceProfiles {
/// The resource for the profiles in this message.
/// If this field is not set then no resource info is known.
#[prost(message, optional, tag = "1")]
pub resource: ::core::option::Option<super::super::resource::v1::Resource>,
/// A list of ScopeProfiles that originate from a resource.
#[prost(message, repeated, tag = "2")]
pub scope_profiles: ::prost::alloc::vec::Vec<ScopeProfiles>,
/// The Schema URL, if known. This is the identifier of the Schema that the resource data
/// is recorded in. Notably, the last part of the URL path is the version number of the
/// schema: http\[s\]://server\[:port\]/path/<version>. To learn more about Schema URL see
/// <https://opentelemetry.io/docs/specs/otel/schemas/#schema-url>
/// This schema_url applies to the data in the "resource" field. It does not apply
/// to the data in the "scope_profiles" field which have their own schema_url field.
#[prost(string, tag = "3")]
pub schema_url: ::prost::alloc::string::String,
}
/// A collection of Profiles produced by an InstrumentationScope.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ScopeProfiles {
/// The instrumentation scope information for the profiles in this message.
/// Semantically when InstrumentationScope isn't set, it is equivalent with
/// an empty instrumentation scope name (unknown).
#[prost(message, optional, tag = "1")]
pub scope: ::core::option::Option<super::super::common::v1::InstrumentationScope>,
/// A list of Profiles that originate from an instrumentation scope.
#[prost(message, repeated, tag = "2")]
pub profiles: ::prost::alloc::vec::Vec<Profile>,
/// The Schema URL, if known. This is the identifier of the Schema that the profile data
/// is recorded in. Notably, the last part of the URL path is the version number of the
/// schema: http\[s\]://server\[:port\]/path/<version>. To learn more about Schema URL see
/// <https://opentelemetry.io/docs/specs/otel/schemas/#schema-url>
/// This schema_url applies to all profiles in the "profiles" field.
#[prost(string, tag = "3")]
pub schema_url: ::prost::alloc::string::String,
}
/// Represents a complete profile, including sample types, samples,
/// mappings to binaries, locations, functions, string table, and additional metadata.
/// It modifies and annotates pprof Profile with OpenTelemetry specific fields.
///
/// Note that whilst fields in this message retain the name and field id from pprof in most cases
/// for ease of understanding data migration, it is not intended that pprof:Profile and
/// OpenTelemetry:Profile encoding be wire compatible.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Profile {
/// A description of the samples associated with each Sample.value.
/// For a cpu profile this might be:
/// \[["cpu","nanoseconds"]\] or \[["wall","seconds"]\] or \[["syscall","count"]\]
/// For a heap profile, this might be:
/// \[["allocations","count"\], \["space","bytes"]\],
/// If one of the values represents the number of events represented
/// by the sample, by convention it should be at index 0 and use
/// sample_type.unit == "count".
#[prost(message, repeated, tag = "1")]
pub sample_type: ::prost::alloc::vec::Vec<ValueType>,
/// The set of samples recorded in this profile.
#[prost(message, repeated, tag = "2")]
pub sample: ::prost::alloc::vec::Vec<Sample>,
/// References to locations in ProfilesDictionary.location_table.
#[prost(int32, repeated, tag = "3")]
pub location_indices: ::prost::alloc::vec::Vec<i32>,
/// Time of collection (UTC) represented as nanoseconds past the epoch.
#[prost(int64, tag = "4")]
#[cfg_attr(
feature = "with-serde",
serde(
serialize_with = "crate::proto::serializers::serialize_i64_to_string",
deserialize_with = "crate::proto::serializers::deserialize_string_to_i64"
)
)]
pub time_nanos: i64,
/// Duration of the profile, if a duration makes sense.
#[prost(int64, tag = "5")]
pub duration_nanos: i64,
/// The kind of events between sampled occurrences.
/// e.g \[ "cpu","cycles" \] or \[ "heap","bytes" \]
#[prost(message, optional, tag = "6")]
pub period_type: ::core::option::Option<ValueType>,
/// The number of events between sampled occurrences.
#[prost(int64, tag = "7")]
pub period: i64,
/// Free-form text associated with the profile. The text is displayed as is
/// to the user by the tools that read profiles (e.g. by pprof). This field
/// should not be used to store any machine-readable information, it is only
/// for human-friendly content. The profile must stay functional if this field
/// is cleaned.
///
/// Indices into ProfilesDictionary.string_table.
#[prost(int32, repeated, tag = "8")]
pub comment_strindices: ::prost::alloc::vec::Vec<i32>,
/// Index into the sample_type array to the default sample type.
#[prost(int32, tag = "9")]
pub default_sample_type_index: i32,
/// A globally unique identifier for a profile. The ID is a 16-byte array. An ID with
/// all zeroes is considered invalid.
///
/// This field is required.
#[prost(bytes = "vec", tag = "10")]
#[cfg_attr(
feature = "with-serde",
serde(
serialize_with = "crate::proto::serializers::serialize_to_hex_string",
deserialize_with = "crate::proto::serializers::deserialize_from_hex_string"
)
)]
pub profile_id: ::prost::alloc::vec::Vec<u8>,
/// dropped_attributes_count is the number of attributes that were discarded. Attributes
/// can be discarded because their keys are too long or because there are too many
/// attributes. If this value is 0, then no attributes were dropped.
#[prost(uint32, tag = "11")]
pub dropped_attributes_count: u32,
/// Specifies format of the original payload. Common values are defined in semantic conventions. \[required if original_payload is present\]
#[prost(string, tag = "12")]
pub original_payload_format: ::prost::alloc::string::String,
/// Original payload can be stored in this field. This can be useful for users who want to get the original payload.
/// Formats such as JFR are highly extensible and can contain more information than what is defined in this spec.
/// Inclusion of original payload should be configurable by the user. Default behavior should be to not include the original payload.
/// If the original payload is in pprof format, it SHOULD not be included in this field.
/// The field is optional, however if it is present then equivalent converted data should be populated in other fields
/// of this message as far as is practicable.
#[prost(bytes = "vec", tag = "13")]
pub original_payload: ::prost::alloc::vec::Vec<u8>,
/// References to attributes in attribute_table. \[optional\]
/// It is a collection of key/value pairs. Note, global attributes
/// like server name can be set using the resource API. Examples of attributes:
///
/// "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
/// "/http/server_latency": 300
/// "abc.com/myattribute": true
/// "abc.com/score": 10.239
///
/// The OpenTelemetry API specification further restricts the allowed value types:
/// <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md#attribute>
/// Attribute keys MUST be unique (it is not allowed to have more than one
/// attribute with the same key).
#[prost(int32, repeated, tag = "14")]
pub attribute_indices: ::prost::alloc::vec::Vec<i32>,
}
/// Represents a mapping between Attribute Keys and Units.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct AttributeUnit {
/// Index into string table.
#[prost(int32, tag = "1")]
pub attribute_key_strindex: i32,
/// Index into string table.
#[prost(int32, tag = "2")]
pub unit_strindex: i32,
}
/// A pointer from a profile Sample to a trace Span.
/// Connects a profile sample to a trace span, identified by unique trace and span IDs.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Link {
/// A unique identifier of a trace that this linked span is part of. The ID is a
/// 16-byte array.
#[prost(bytes = "vec", tag = "1")]
pub trace_id: ::prost::alloc::vec::Vec<u8>,
/// A unique identifier for the linked span. The ID is an 8-byte array.
#[prost(bytes = "vec", tag = "2")]
pub span_id: ::prost::alloc::vec::Vec<u8>,
}
/// ValueType describes the type and units of a value, with an optional aggregation temporality.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct ValueType {
/// Index into ProfilesDictionary.string_table.
#[prost(int32, tag = "1")]
pub type_strindex: i32,
/// Index into ProfilesDictionary.string_table.
#[prost(int32, tag = "2")]
pub unit_strindex: i32,
#[prost(enumeration = "AggregationTemporality", tag = "3")]
pub aggregation_temporality: i32,
}
/// Each Sample records values encountered in some program
/// context. The program context is typically a stack trace, perhaps
/// augmented with auxiliary information like the thread-id, some
/// indicator of a higher level request being handled etc.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Sample {
/// locations_start_index along with locations_length refers to a slice of locations in Profile.location_indices.
#[prost(int32, tag = "1")]
pub locations_start_index: i32,
/// locations_length along with locations_start_index refers to a slice of locations in Profile.location_indices.
/// Supersedes location_index.
#[prost(int32, tag = "2")]
pub locations_length: i32,
/// The type and unit of each value is defined by the corresponding
/// entry in Profile.sample_type. All samples must have the same
/// number of values, the same as the length of Profile.sample_type.
/// When aggregating multiple samples into a single sample, the
/// result has a list of values that is the element-wise sum of the
/// lists of the originals.
#[prost(int64, repeated, tag = "3")]
pub value: ::prost::alloc::vec::Vec<i64>,
/// References to attributes in ProfilesDictionary.attribute_table. \[optional\]
#[prost(int32, repeated, tag = "4")]
pub attribute_indices: ::prost::alloc::vec::Vec<i32>,
/// Reference to link in ProfilesDictionary.link_table. \[optional\]
#[prost(int32, optional, tag = "5")]
pub link_index: ::core::option::Option<i32>,
/// Timestamps associated with Sample represented in nanoseconds. These timestamps are expected
/// to fall within the Profile's time range. \[optional\]
#[prost(uint64, repeated, tag = "6")]
#[cfg_attr(
feature = "with-serde",
serde(
serialize_with = "crate::proto::serializers::serialize_vec_u64_to_string",
deserialize_with = "crate::proto::serializers::deserialize_vec_string_to_vec_u64"
)
)]
pub timestamps_unix_nano: ::prost::alloc::vec::Vec<u64>,
}
/// Describes the mapping of a binary in memory, including its address range,
/// file offset, and metadata like build ID.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Mapping {
/// Address at which the binary (or DLL) is loaded into memory.
#[prost(uint64, tag = "1")]
pub memory_start: u64,
/// The limit of the address range occupied by this mapping.
#[prost(uint64, tag = "2")]
pub memory_limit: u64,
/// Offset in the binary that corresponds to the first mapped address.
#[prost(uint64, tag = "3")]
pub file_offset: u64,
/// The object this entry is loaded from. This can be a filename on
/// disk for the main binary and shared libraries, or virtual
/// abstractions like "\[vdso\]".
///
/// Index into ProfilesDictionary.string_table.
#[prost(int32, tag = "4")]
pub filename_strindex: i32,
/// References to attributes in ProfilesDictionary.attribute_table. \[optional\]
#[prost(int32, repeated, tag = "5")]
pub attribute_indices: ::prost::alloc::vec::Vec<i32>,
/// The following fields indicate the resolution of symbolic info.
#[prost(bool, tag = "6")]
pub has_functions: bool,
#[prost(bool, tag = "7")]
pub has_filenames: bool,
#[prost(bool, tag = "8")]
pub has_line_numbers: bool,
#[prost(bool, tag = "9")]
pub has_inline_frames: bool,
}
/// Describes function and line table debug information.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Location {
/// Reference to mapping in ProfilesDictionary.mapping_table.
/// It can be unset if the mapping is unknown or not applicable for
/// this profile type.
#[prost(int32, optional, tag = "1")]
pub mapping_index: ::core::option::Option<i32>,
/// The instruction address for this location, if available. It
/// should be within \[Mapping.memory_start...Mapping.memory_limit\]
/// for the corresponding mapping. A non-leaf address may be in the
/// middle of a call instruction. It is up to display tools to find
/// the beginning of the instruction if necessary.
#[prost(uint64, tag = "2")]
pub address: u64,
/// Multiple line indicates this location has inlined functions,
/// where the last entry represents the caller into which the
/// preceding entries were inlined.
///
/// E.g., if memcpy() is inlined into printf:
/// line\[0\].function_name == "memcpy"
/// line\[1\].function_name == "printf"
#[prost(message, repeated, tag = "3")]
pub line: ::prost::alloc::vec::Vec<Line>,
/// Provides an indication that multiple symbols map to this location's
/// address, for example due to identical code folding by the linker. In that
/// case the line information above represents one of the multiple
/// symbols. This field must be recomputed when the symbolization state of the
/// profile changes.
#[prost(bool, tag = "4")]
pub is_folded: bool,
/// References to attributes in ProfilesDictionary.attribute_table. \[optional\]
#[prost(int32, repeated, tag = "5")]
pub attribute_indices: ::prost::alloc::vec::Vec<i32>,
}
/// Details a specific line in source code, linked to a function.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct Line {
/// Reference to function in ProfilesDictionary.function_table.
#[prost(int32, tag = "1")]
pub function_index: i32,
/// Line number in source code. 0 means unset.
#[prost(int64, tag = "2")]
pub line: i64,
/// Column number in source code. 0 means unset.
#[prost(int64, tag = "3")]
pub column: i64,
}
/// Describes a function, including its human-readable name, system name,
/// source file, and starting line number in the source.
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[cfg_attr(feature = "with-serde", serde(default))]
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct Function {
/// Function name. Empty string if not available.
#[prost(int32, tag = "1")]
pub name_strindex: i32,
/// Function name, as identified by the system. For instance,
/// it can be a C++ mangled name. Empty string if not available.
#[prost(int32, tag = "2")]
pub system_name_strindex: i32,
/// Source file containing the function. Empty string if not available.
#[prost(int32, tag = "3")]
pub filename_strindex: i32,
/// Line number in source file. 0 means unset.
#[prost(int64, tag = "4")]
pub start_line: i64,
}
/// Specifies the method of aggregating metric values, either DELTA (change since last report)
/// or CUMULATIVE (total since a fixed start time).
#[cfg_attr(feature = "with-schemars", derive(schemars::JsonSchema))]
#[cfg_attr(feature = "with-serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "with-serde", serde(rename_all = "camelCase"))]
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, ::prost::Enumeration)]
#[repr(i32)]
pub enum AggregationTemporality {
/// UNSPECIFIED is the default AggregationTemporality, it MUST not be used.
Unspecified = 0,
/// * DELTA is an AggregationTemporality for a profiler which reports
/// changes since last report time. Successive metrics contain aggregation of
/// values from continuous and non-overlapping intervals.
///
/// The values for a DELTA metric are based only on the time interval
/// associated with one measurement cycle. There is no dependency on
/// previous measurements like is the case for CUMULATIVE metrics.
///
/// For example, consider a system measuring the number of requests that
/// it receives and reports the sum of these requests every second as a
/// DELTA metric:
///
/// 1. The system starts receiving at time=t_0.
/// 2. A request is received, the system measures 1 request.
/// 3. A request is received, the system measures 1 request.
/// 4. A request is received, the system measures 1 request.
/// 5. The 1 second collection cycle ends. A metric is exported for the
/// number of requests received over the interval of time t_0 to
/// t_0+1 with a value of 3.
/// 6. A request is received, the system measures 1 request.
/// 7. A request is received, the system measures 1 request.
/// 8. The 1 second collection cycle ends. A metric is exported for the
/// number of requests received over the interval of time t_0+1 to
/// t_0+2 with a value of 2.
Delta = 1,
/// * CUMULATIVE is an AggregationTemporality for a profiler which
/// reports changes since a fixed start time. This means that current values
/// of a CUMULATIVE metric depend on all previous measurements since the
/// start time. Because of this, the sender is required to retain this state
/// in some form. If this state is lost or invalidated, the CUMULATIVE metric
/// values MUST be reset and a new fixed start time following the last
/// reported measurement time sent MUST be used.
///
/// For example, consider a system measuring the number of requests that
/// it receives and reports the sum of these requests every second as a
/// CUMULATIVE metric:
///
/// 1. The system starts receiving at time=t_0.
/// 2. A request is received, the system measures 1 request.
/// 3. A request is received, the system measures 1 request.
/// 4. A request is received, the system measures 1 request.
/// 5. The 1 second collection cycle ends. A metric is exported for the
/// number of requests received over the interval of time t_0 to
/// t_0+1 with a value of 3.
/// 6. A request is received, the system measures 1 request.
/// 7. A request is received, the system measures 1 request.
/// 8. The 1 second collection cycle ends. A metric is exported for the
/// number of requests received over the interval of time t_0 to
/// t_0+2 with a value of 5.
/// 9. The system experiences a fault and loses state.
/// 10. The system recovers and resumes receiving at time=t_1.
/// 11. A request is received, the system measures 1 request.
/// 12. The 1 second collection cycle ends. A metric is exported for the
/// number of requests received over the interval of time t_1 to
/// t_1+1 with a value of 1.
///
/// Note: Even though it is valid to report changes since the last report
/// time using CUMULATIVE, doing so is not recommended.
Cumulative = 2,
}
impl AggregationTemporality {
/// String value of the enum field names used in the ProtoBuf definition.
///
/// The values are not transformed in any way and thus are considered stable
/// (if the ProtoBuf definition does not change) and safe for programmatic use.
pub fn as_str_name(&self) -> &'static str {
match self {
Self::Unspecified => "AGGREGATION_TEMPORALITY_UNSPECIFIED",
Self::Delta => "AGGREGATION_TEMPORALITY_DELTA",
Self::Cumulative => "AGGREGATION_TEMPORALITY_CUMULATIVE",
}
}
/// Creates an enum from field names used in the ProtoBuf definition.
pub fn from_str_name(value: &str) -> ::core::option::Option<Self> {
match value {
"AGGREGATION_TEMPORALITY_UNSPECIFIED" => Some(Self::Unspecified),
"AGGREGATION_TEMPORALITY_DELTA" => Some(Self::Delta),
"AGGREGATION_TEMPORALITY_CUMULATIVE" => Some(Self::Cumulative),
_ => None,
}
}
}
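// Illustrative only (not generated code): round-tripping the helpers above.
#[cfg(test)]
mod aggregation_temporality_name_tests {
    use super::AggregationTemporality;

    #[test]
    fn str_name_round_trip() {
        assert_eq!(
            AggregationTemporality::Delta.as_str_name(),
            "AGGREGATION_TEMPORALITY_DELTA"
        );
        assert_eq!(
            AggregationTemporality::from_str_name("AGGREGATION_TEMPORALITY_CUMULATIVE"),
            Some(AggregationTemporality::Cumulative)
        );
        assert_eq!(AggregationTemporality::from_str_name("bogus"), None);
    }
}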

View File

@ -15,4 +15,11 @@ pub struct Resource {
/// no attributes were dropped.
#[prost(uint32, tag = "2")]
pub dropped_attributes_count: u32,
/// Set of entities that participate in this Resource.
///
/// Note: keys in the references MUST exist in attributes of this message.
///
/// Status: \[Development\]
#[prost(message, repeated, tag = "3")]
pub entity_refs: ::prost::alloc::vec::Vec<super::super::common::v1::EntityRef>,
}
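// Hedged sketch (not part of the diff): populating the new field. The
// `EntityRef` field names used here (`schema_url`, `type`, `id_keys`,
// `description_keys`) are assumed from opentelemetry-proto's common.v1;
// per the note above, every key listed in `id_keys` must also appear in
// this Resource's `attributes`.
#[allow(dead_code)]
fn resource_with_entity_ref() -> Resource {
    Resource {
        attributes: vec![/* must include a "service.name" attribute */],
        dropped_attributes_count: 0,
        entity_refs: vec![super::super::common::v1::EntityRef {
            schema_url: "https://opentelemetry.io/schemas/1.36.0".into(),
            r#type: "service".into(),
            id_keys: vec!["service.name".into()],
            description_keys: vec![],
        }],
    }
}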

View File

@ -150,6 +150,7 @@ pub mod tonic {
resource: Some(Resource {
attributes: resource.attributes.0.clone(),
dropped_attributes_count: 0,
entity_refs: vec![],
}),
schema_url: resource.schema_url.clone().unwrap_or_default(),
scope_logs: vec![ScopeLogs {
@ -210,6 +211,7 @@ pub mod tonic {
resource: Some(Resource {
attributes: resource.attributes.0.clone(),
dropped_attributes_count: 0,
entity_refs: vec![],
}),
scope_logs,
schema_url: resource.schema_url.clone().unwrap_or_default(),
@ -239,10 +241,6 @@ mod tests {
fn force_flush(&self) -> OTelSdkResult {
Ok(())
}
fn shutdown(&self) -> OTelSdkResult {
Ok(())
}
}
fn create_test_log_data(

View File

@ -5,13 +5,13 @@
#[allow(deprecated)]
#[cfg(feature = "gen-tonic-messages")]
pub mod tonic {
use std::any::Any;
use std::fmt;
use std::fmt::Debug;
use opentelemetry::{otel_debug, Key, Value};
use opentelemetry_sdk::metrics::data::{
Exemplar as SdkExemplar, ExponentialHistogram as SdkExponentialHistogram,
Gauge as SdkGauge, Histogram as SdkHistogram, Metric as SdkMetric, ResourceMetrics,
AggregatedMetrics, Exemplar as SdkExemplar,
ExponentialHistogram as SdkExponentialHistogram, Gauge as SdkGauge,
Histogram as SdkHistogram, Metric as SdkMetric, MetricData, ResourceMetrics,
ScopeMetrics as SdkScopeMetrics, Sum as SdkSum,
};
use opentelemetry_sdk::metrics::Temporality;
@ -114,9 +114,13 @@ pub mod tonic {
fn from(rm: &ResourceMetrics) -> Self {
ExportMetricsServiceRequest {
resource_metrics: vec![TonicResourceMetrics {
resource: Some((&rm.resource).into()),
scope_metrics: rm.scope_metrics.iter().map(Into::into).collect(),
schema_url: rm.resource.schema_url().map(Into::into).unwrap_or_default(),
resource: Some((rm.resource()).into()),
scope_metrics: rm.scope_metrics().map(Into::into).collect(),
schema_url: rm
.resource()
.schema_url()
.map(Into::into)
.unwrap_or_default(),
}],
}
}
@ -127,6 +131,7 @@ pub mod tonic {
TonicResource {
attributes: resource.iter().map(Into::into).collect(),
dropped_attributes_count: 0,
entity_refs: vec![], // internal and currently unused
}
}
}
@ -134,10 +139,10 @@ pub mod tonic {
impl From<&SdkScopeMetrics> for TonicScopeMetrics {
fn from(sm: &SdkScopeMetrics) -> Self {
TonicScopeMetrics {
scope: Some((&sm.scope, None).into()),
metrics: sm.metrics.iter().map(Into::into).collect(),
scope: Some((sm.scope(), None).into()),
metrics: sm.metrics().map(Into::into).collect(),
schema_url: sm
.scope
.scope()
.schema_url()
.map(ToOwned::to_owned)
.unwrap_or_default(),
@ -148,50 +153,31 @@ pub mod tonic {
impl From<&SdkMetric> for TonicMetric {
fn from(metric: &SdkMetric) -> Self {
TonicMetric {
name: metric.name.to_string(),
description: metric.description.to_string(),
unit: metric.unit.to_string(),
name: metric.name().to_string(),
description: metric.description().to_string(),
unit: metric.unit().to_string(),
metadata: vec![], // internal and currently unused
data: metric.data.as_any().try_into().ok(),
data: Some(match metric.data() {
AggregatedMetrics::F64(data) => data.into(),
AggregatedMetrics::U64(data) => data.into(),
AggregatedMetrics::I64(data) => data.into(),
}),
}
}
}
impl TryFrom<&dyn Any> for TonicMetricData {
type Error = ();
fn try_from(data: &dyn Any) -> Result<Self, Self::Error> {
if let Some(hist) = data.downcast_ref::<SdkHistogram<i64>>() {
Ok(TonicMetricData::Histogram(hist.into()))
} else if let Some(hist) = data.downcast_ref::<SdkHistogram<u64>>() {
Ok(TonicMetricData::Histogram(hist.into()))
} else if let Some(hist) = data.downcast_ref::<SdkHistogram<f64>>() {
Ok(TonicMetricData::Histogram(hist.into()))
} else if let Some(hist) = data.downcast_ref::<SdkExponentialHistogram<i64>>() {
Ok(TonicMetricData::ExponentialHistogram(hist.into()))
} else if let Some(hist) = data.downcast_ref::<SdkExponentialHistogram<u64>>() {
Ok(TonicMetricData::ExponentialHistogram(hist.into()))
} else if let Some(hist) = data.downcast_ref::<SdkExponentialHistogram<f64>>() {
Ok(TonicMetricData::ExponentialHistogram(hist.into()))
} else if let Some(sum) = data.downcast_ref::<SdkSum<u64>>() {
Ok(TonicMetricData::Sum(sum.into()))
} else if let Some(sum) = data.downcast_ref::<SdkSum<i64>>() {
Ok(TonicMetricData::Sum(sum.into()))
} else if let Some(sum) = data.downcast_ref::<SdkSum<f64>>() {
Ok(TonicMetricData::Sum(sum.into()))
} else if let Some(gauge) = data.downcast_ref::<SdkGauge<u64>>() {
Ok(TonicMetricData::Gauge(gauge.into()))
} else if let Some(gauge) = data.downcast_ref::<SdkGauge<i64>>() {
Ok(TonicMetricData::Gauge(gauge.into()))
} else if let Some(gauge) = data.downcast_ref::<SdkGauge<f64>>() {
Ok(TonicMetricData::Gauge(gauge.into()))
} else {
otel_debug!(
name: "TonicMetricData::UnknownAggregator",
message= "Unknown aggregator type",
unknown_type= format!("{:?}", data),
);
Err(())
impl<T> From<&MetricData<T>> for TonicMetricData
where
T: Numeric + Debug,
{
fn from(data: &MetricData<T>) -> Self {
match data {
MetricData::Gauge(gauge) => TonicMetricData::Gauge(gauge.into()),
MetricData::Sum(sum) => TonicMetricData::Sum(sum.into()),
MetricData::Histogram(hist) => TonicMetricData::Histogram(hist.into()),
MetricData::ExponentialHistogram(hist) => {
TonicMetricData::ExponentialHistogram(hist.into())
}
}
}
}
@ -226,23 +212,22 @@ pub mod tonic {
fn from(hist: &SdkHistogram<T>) -> Self {
TonicHistogram {
data_points: hist
.data_points
.iter()
.data_points()
.map(|dp| TonicHistogramDataPoint {
attributes: dp.attributes.iter().map(Into::into).collect(),
start_time_unix_nano: to_nanos(hist.start_time),
time_unix_nano: to_nanos(hist.time),
count: dp.count,
sum: Some(dp.sum.into_f64()),
bucket_counts: dp.bucket_counts.clone(),
explicit_bounds: dp.bounds.clone(),
exemplars: dp.exemplars.iter().map(Into::into).collect(),
attributes: dp.attributes().map(Into::into).collect(),
start_time_unix_nano: to_nanos(hist.start_time()),
time_unix_nano: to_nanos(hist.time()),
count: dp.count(),
sum: Some(dp.sum().into_f64()),
bucket_counts: dp.bucket_counts().collect(),
explicit_bounds: dp.bounds().collect(),
exemplars: dp.exemplars().map(Into::into).collect(),
flags: TonicDataPointFlags::default() as u32,
min: dp.min.map(Numeric::into_f64),
max: dp.max.map(Numeric::into_f64),
min: dp.min().map(Numeric::into_f64),
max: dp.max().map(Numeric::into_f64),
})
.collect(),
aggregation_temporality: TonicTemporality::from(hist.temporality).into(),
aggregation_temporality: TonicTemporality::from(hist.temporality()).into(),
}
}
}
@ -254,76 +239,73 @@ pub mod tonic {
fn from(hist: &SdkExponentialHistogram<T>) -> Self {
TonicExponentialHistogram {
data_points: hist
.data_points
.iter()
.data_points()
.map(|dp| TonicExponentialHistogramDataPoint {
attributes: dp.attributes.iter().map(Into::into).collect(),
start_time_unix_nano: to_nanos(hist.start_time),
time_unix_nano: to_nanos(hist.time),
count: dp.count as u64,
sum: Some(dp.sum.into_f64()),
scale: dp.scale.into(),
zero_count: dp.zero_count,
attributes: dp.attributes().map(Into::into).collect(),
start_time_unix_nano: to_nanos(hist.start_time()),
time_unix_nano: to_nanos(hist.time()),
count: dp.count() as u64,
sum: Some(dp.sum().into_f64()),
scale: dp.scale().into(),
zero_count: dp.zero_count(),
positive: Some(TonicBuckets {
offset: dp.positive_bucket.offset,
bucket_counts: dp.positive_bucket.counts.clone(),
offset: dp.positive_bucket().offset(),
bucket_counts: dp.positive_bucket().counts().collect(),
}),
negative: Some(TonicBuckets {
offset: dp.negative_bucket.offset,
bucket_counts: dp.negative_bucket.counts.clone(),
offset: dp.negative_bucket().offset(),
bucket_counts: dp.negative_bucket().counts().collect(),
}),
flags: TonicDataPointFlags::default() as u32,
exemplars: dp.exemplars.iter().map(Into::into).collect(),
min: dp.min.map(Numeric::into_f64),
max: dp.max.map(Numeric::into_f64),
zero_threshold: dp.zero_threshold,
exemplars: dp.exemplars().map(Into::into).collect(),
min: dp.min().map(Numeric::into_f64),
max: dp.max().map(Numeric::into_f64),
zero_threshold: dp.zero_threshold(),
})
.collect(),
aggregation_temporality: TonicTemporality::from(hist.temporality).into(),
aggregation_temporality: TonicTemporality::from(hist.temporality()).into(),
}
}
}
impl<T> From<&SdkSum<T>> for TonicSum
where
T: fmt::Debug + Into<TonicExemplarValue> + Into<TonicDataPointValue> + Copy,
T: Debug + Into<TonicExemplarValue> + Into<TonicDataPointValue> + Copy,
{
fn from(sum: &SdkSum<T>) -> Self {
TonicSum {
data_points: sum
.data_points
.iter()
.data_points()
.map(|dp| TonicNumberDataPoint {
attributes: dp.attributes.iter().map(Into::into).collect(),
start_time_unix_nano: to_nanos(sum.start_time),
time_unix_nano: to_nanos(sum.time),
exemplars: dp.exemplars.iter().map(Into::into).collect(),
attributes: dp.attributes().map(Into::into).collect(),
start_time_unix_nano: to_nanos(sum.start_time()),
time_unix_nano: to_nanos(sum.time()),
exemplars: dp.exemplars().map(Into::into).collect(),
flags: TonicDataPointFlags::default() as u32,
value: Some(dp.value.into()),
value: Some(dp.value().into()),
})
.collect(),
aggregation_temporality: TonicTemporality::from(sum.temporality).into(),
is_monotonic: sum.is_monotonic,
aggregation_temporality: TonicTemporality::from(sum.temporality()).into(),
is_monotonic: sum.is_monotonic(),
}
}
}
impl<T> From<&SdkGauge<T>> for TonicGauge
where
T: fmt::Debug + Into<TonicExemplarValue> + Into<TonicDataPointValue> + Copy,
T: Debug + Into<TonicExemplarValue> + Into<TonicDataPointValue> + Copy,
{
fn from(gauge: &SdkGauge<T>) -> Self {
TonicGauge {
data_points: gauge
.data_points
.iter()
.data_points()
.map(|dp| TonicNumberDataPoint {
attributes: dp.attributes.iter().map(Into::into).collect(),
start_time_unix_nano: gauge.start_time.map(to_nanos).unwrap_or_default(),
time_unix_nano: to_nanos(gauge.time),
exemplars: dp.exemplars.iter().map(Into::into).collect(),
attributes: dp.attributes().map(Into::into).collect(),
start_time_unix_nano: gauge.start_time().map(to_nanos).unwrap_or_default(),
time_unix_nano: to_nanos(gauge.time()),
exemplars: dp.exemplars().map(Into::into).collect(),
flags: TonicDataPointFlags::default() as u32,
value: Some(dp.value.into()),
value: Some(dp.value().into()),
})
.collect(),
}
@ -337,13 +319,12 @@ pub mod tonic {
fn from(ex: &SdkExemplar<T>) -> Self {
TonicExemplar {
filtered_attributes: ex
.filtered_attributes
.iter()
.filtered_attributes()
.map(|kv| (&kv.key, &kv.value).into())
.collect(),
time_unix_nano: to_nanos(ex.time),
span_id: ex.span_id.into(),
trace_id: ex.trace_id.into(),
time_unix_nano: to_nanos(ex.time()),
span_id: ex.span_id().into(),
trace_id: ex.trace_id().into(),
value: Some(ex.value.into()),
}
}

View File

@ -11,3 +11,6 @@ pub mod logs;
#[cfg(feature = "zpages")]
pub mod tracez;
#[cfg(feature = "profiles")]
pub mod profiles;

View File

@ -0,0 +1 @@

View File

@ -97,6 +97,7 @@ pub mod tonic {
resource: Some(Resource {
attributes: resource.attributes.0.clone(),
dropped_attributes_count: 0,
entity_refs: vec![],
}),
schema_url: resource.schema_url.clone().unwrap_or_default(),
scope_spans: vec![ScopeSpans {
@ -182,6 +183,7 @@ pub mod tonic {
resource: Some(Resource {
attributes: resource.attributes.0.clone(),
dropped_attributes_count: 0,
entity_refs: vec![],
}),
scope_spans,
schema_url: resource.schema_url.clone().unwrap_or_default(),

View File

@ -12,6 +12,7 @@ const TONIC_PROTO_FILES: &[&str] = &[
"src/proto/opentelemetry-proto/opentelemetry/proto/collector/metrics/v1/metrics_service.proto",
"src/proto/opentelemetry-proto/opentelemetry/proto/logs/v1/logs.proto",
"src/proto/opentelemetry-proto/opentelemetry/proto/collector/logs/v1/logs_service.proto",
"src/proto/opentelemetry-proto/opentelemetry/proto/profiles/v1development/profiles.proto",
"src/proto/tracez.proto",
];
const TONIC_INCLUDES: &[&str] = &["src/proto/opentelemetry-proto", "src/proto"];
@ -66,6 +67,7 @@ fn build_tonic() {
"metrics.v1.Summary",
"metrics.v1.NumberDataPoint",
"metrics.v1.HistogramDataPoint",
"profiles.v1development.Function",
] {
builder = builder.type_attribute(
path,
@ -87,6 +89,7 @@ fn build_tonic() {
"logs.v1.LogRecord.trace_id",
"metrics.v1.Exemplar.span_id",
"metrics.v1.Exemplar.trace_id",
"profiles.v1development.Profile.profile_id",
] {
builder = builder
.field_attribute(path, "#[cfg_attr(feature = \"with-serde\", serde(serialize_with = \"crate::proto::serializers::serialize_to_hex_string\", deserialize_with = \"crate::proto::serializers::deserialize_from_hex_string\"))]")
@ -110,6 +113,14 @@ fn build_tonic() {
builder = builder
.field_attribute(path, "#[cfg_attr(feature = \"with-serde\", serde(serialize_with = \"crate::proto::serializers::serialize_u64_to_string\", deserialize_with = \"crate::proto::serializers::deserialize_string_to_u64\"))]")
}
for path in ["profiles.v1development.Profile.time_nanos"] {
builder = builder
.field_attribute(path, "#[cfg_attr(feature = \"with-serde\", serde(serialize_with = \"crate::proto::serializers::serialize_i64_to_string\", deserialize_with = \"crate::proto::serializers::deserialize_string_to_i64\"))]")
}
for path in ["profiles.v1development.Sample.timestamps_unix_nano"] {
builder = builder
.field_attribute(path, "#[cfg_attr(feature = \"with-serde\", serde(serialize_with = \"crate::proto::serializers::serialize_vec_u64_to_string\", deserialize_with = \"crate::proto::serializers::deserialize_vec_string_to_vec_u64\"))]")
}
// special serializer and deserializer for value
// The Value::value field must be hidden
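The `serialize_with`/`deserialize_with` attributes registered above exist because OTLP/JSON encodes 64-bit integers as decimal strings so they survive JavaScript consumers. A minimal, self-contained sketch of the pattern; the helper body and the `ProfileTime` type are illustrative stand-ins, while the crate's real implementations live in `crate::proto::serializers`:

```rust
use serde::{Serialize, Serializer};

// Assumed shape of a "serialize i64 as string" helper.
fn serialize_i64_to_string<S: Serializer>(v: &i64, s: S) -> Result<S::Ok, S::Error> {
    s.serialize_str(&v.to_string())
}

#[derive(Serialize)]
struct ProfileTime {
    #[serde(serialize_with = "serialize_i64_to_string")]
    time_nanos: i64,
}

fn main() {
    let json = serde_json::to_string(&ProfileTime {
        time_nanos: 1_700_000_000_000_000_000,
    })
    .unwrap();
    // The 64-bit value is emitted as a decimal string:
    assert_eq!(json, r#"{"time_nanos":"1700000000000000000"}"#);
}
```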

View File

@ -44,6 +44,7 @@ mod json_serde {
}),
}],
dropped_attributes_count: 0,
entity_refs: vec![],
}),
scope_spans: vec![ScopeSpans {
scope: Some(InstrumentationScope {
@ -60,10 +61,11 @@ mod json_serde {
dropped_attributes_count: 0,
}),
spans: vec![Span {
trace_id: hex::decode("5b8efff798038103d269b633813fc60c").unwrap(),
span_id: hex::decode("eee19b7ec3c1b174").unwrap(),
trace_id: const_hex::decode("5b8efff798038103d269b633813fc60c")
.unwrap(),
span_id: const_hex::decode("eee19b7ec3c1b174").unwrap(),
trace_state: String::new(),
parent_span_id: hex::decode("eee19b7ec3c1b173").unwrap(),
parent_span_id: const_hex::decode("eee19b7ec3c1b173").unwrap(),
flags: 0,
name: String::from("I'm a server span"),
kind: 2,
@ -249,6 +251,7 @@ mod json_serde {
}),
}],
dropped_attributes_count: 1,
entity_refs: vec![],
}),
scope_spans: vec![ScopeSpans {
scope: Some(InstrumentationScope {
@ -265,10 +268,11 @@ mod json_serde {
dropped_attributes_count: 1,
}),
spans: vec![Span {
trace_id: hex::decode("5b8efff798038103d269b633813fc60c").unwrap(),
span_id: hex::decode("eee19b7ec3c1b174").unwrap(),
trace_id: const_hex::decode("5b8efff798038103d269b633813fc60c")
.unwrap(),
span_id: const_hex::decode("eee19b7ec3c1b174").unwrap(),
trace_state: String::from("browser=firefox,os=linux"),
parent_span_id: hex::decode("eee19b7ec3c1b173").unwrap(),
parent_span_id: const_hex::decode("eee19b7ec3c1b173").unwrap(),
flags: 1,
name: String::from("I'm a server span"),
kind: 2,
@ -306,9 +310,9 @@ mod json_serde {
}],
dropped_events_count: 1,
links: vec![Link {
trace_id: hex::decode("5b8efff798038103d269b633813fc60b")
trace_id: const_hex::decode("5b8efff798038103d269b633813fc60b")
.unwrap(),
span_id: hex::decode("eee19b7ec3c1b172").unwrap(),
span_id: const_hex::decode("eee19b7ec3c1b172").unwrap(),
trace_state: String::from("food=pizza,color=red"),
attributes: vec![KeyValue {
key: String::from("my.link.attr"),
@ -792,6 +796,7 @@ mod json_serde {
}),
}],
dropped_attributes_count: 0,
entity_refs: vec![],
}),
scope_metrics: vec![ScopeMetrics {
scope: Some(InstrumentationScope {
@ -1178,6 +1183,7 @@ mod json_serde {
}),
}],
dropped_attributes_count: 0,
entity_refs: vec![],
}),
scope_logs: vec![ScopeLogs {
scope: Some(InstrumentationScope {
@ -1268,8 +1274,9 @@ mod json_serde {
],
dropped_attributes_count: 0,
flags: 0,
trace_id: hex::decode("5b8efff798038103d269b633813fc60c").unwrap(),
span_id: hex::decode("eee19b7ec3c1b174").unwrap(),
trace_id: const_hex::decode("5b8efff798038103d269b633813fc60c")
.unwrap(),
span_id: const_hex::decode("eee19b7ec3c1b174").unwrap(),
}],
schema_url: String::new(),
}],

View File

@ -2,6 +2,123 @@
## vNext
- TODO: Placeholder for Span processor related things
- *Fix* `SpanProcessor::on_start` is no longer called on non-recording spans
- **Fix**: Restore true parallel exports in the async-native `BatchSpanProcessor` by honoring `OTEL_BSP_MAX_CONCURRENT_EXPORTS` ([#3028](https://github.com/open-telemetry/opentelemetry-rust/pull/3028)). A regression in [#2685](https://github.com/open-telemetry/opentelemetry-rust/pull/2685) inadvertently awaited the `export()` future directly in `opentelemetry-sdk/src/trace/span_processor_with_async_runtime.rs` instead of spawning it on the runtime, forcing all exports to run sequentially.
## 0.30.0
Released 2025-May-23
- Updated `opentelemetry` and `opentelemetry-http` dependencies to version 0.30.0.
- It is now possible to add links to a `Span` via the `SpanRef` that you get from
a `Context`. [2959](https://github.com/open-telemetry/opentelemetry-rust/pull/2959)
- **Feature**: Added context based telemetry suppression. [#2868](https://github.com/open-telemetry/opentelemetry-rust/pull/2868)
  - `SdkLogger` and `SdkTracer` modified to respect telemetry suppression based
    on `Context`. In other words, if the current context has telemetry
    suppression enabled, then logs/spans will be ignored.
  - The flag is typically set by OTel components to prevent OTel's own
    telemetry from being fed back into OTel.
  - `BatchLogProcessor`, `BatchSpanProcessor`, and `PeriodicReader` modified to
    set the suppression flag in their dedicated thread, so that telemetry
    generated from those threads will not be fed back into OTel.
  - Similarly, `SimpleLogProcessor` also modified to suppress telemetry before
    invoking exporters.
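Example of suppressing telemetry around code that could otherwise feed back into the pipeline (a sketch assuming the `Context::enter_telemetry_suppressed_scope()` API shape from [#2868](https://github.com/open-telemetry/opentelemetry-rust/pull/2868)):
```rust
use opentelemetry::Context;

// While `_guard` is alive, `SdkLogger`/`SdkTracer` ignore anything emitted
// under the current context, preventing self-induced telemetry loops.
let _guard = Context::enter_telemetry_suppressed_scope();
// ... perform the export; instrumented HTTP/gRPC clients used here will
// not generate telemetry that re-enters the pipeline ...
```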
- **Feature**: Implemented and enabled cardinality capping for Metrics by
default. [#2901](https://github.com/open-telemetry/opentelemetry-rust/pull/2901)
  - The default cardinality limit is 2000 and can be customized using Views.
  - This feature was previously removed in version 0.28 due to the lack of
    configurability but has now been reintroduced with the ability to configure
    the limit.
  - Fixed the overflow attribute to correctly use the boolean value `true`
    instead of the string `"true"`.
    [#2878](https://github.com/open-telemetry/opentelemetry-rust/issues/2878)
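Example of customizing the limit for one instrument (a sketch assuming the `StreamBuilder::with_cardinality_limit` API from [#2901](https://github.com/open-telemetry/opentelemetry-rust/pull/2901); the metric name is illustrative):
```rust
use opentelemetry_sdk::metrics::{Instrument, SdkMeterProvider, Stream};

// Raise the limit from the default 2000 for one high-cardinality metric.
let view = |i: &Instrument| {
    if i.name() == "http.server.request.duration" {
        Some(
            Stream::builder()
                .with_cardinality_limit(3000)
                .build()
                .unwrap(),
        )
    } else {
        None
    }
};

let provider = SdkMeterProvider::builder()
    // add exporters, set resource etc.
    .with_view(view)
    .build();
```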
- The `shutdown_with_timeout` method is added to the `SpanProcessor` and
  `SpanExporter` traits and to `TracerProvider`.
- The `shutdown_with_timeout` method is added to the `LogExporter` trait.
- The `shutdown_with_timeout` method is added to `LoggerProvider` and the
  `LogProcessor` trait.
- *Breaking* `MetricError`, `MetricResult` no longer public (except when
`spec_unstable_metrics_views` feature flag is enabled). `OTelSdkResult` should
be used instead, wherever applicable. [#2906](https://github.com/open-telemetry/opentelemetry-rust/pull/2906)
- *Breaking* change, affecting custom `MetricReader` authors:
  - The `shutdown_with_timeout` method is added to the `MetricReader` trait.
  - The `collect` method on `MetricReader` is modified to return `OTelSdkResult`.
    [#2905](https://github.com/open-telemetry/opentelemetry-rust/pull/2905)
  - The `MetricReader` trait, `ManualReader` struct, `Pipeline` struct, and
    `InstrumentKind` enum moved behind the feature flag
    "experimental_metrics_custom_reader".
    [#2928](https://github.com/open-telemetry/opentelemetry-rust/pull/2928)
- **Views improvements**:
  - Core view functionality is now available by default: users can change the
    name, unit, description, and cardinality limit of a metric via views
    without enabling the `spec_unstable_metrics_views` feature flag. Advanced
    view features, such as custom aggregation or attribute filtering, still
    require the `spec_unstable_metrics_views` feature.
  - Removed `new_view()` method and `View` trait. Views can now be added by
    passing a function with signature `Fn(&Instrument) -> Option<Stream>` to
    the `with_view` method on `MeterProviderBuilder`.
  - Introduced a builder pattern for `Stream` creation to use with views:
    - Added `StreamBuilder` struct with methods to configure stream properties
    - Added `Stream::builder()` method that returns a new `StreamBuilder`
    - `StreamBuilder::build()` returns `Result<Stream, Box<dyn Error>>`,
      enabling proper validation.
Example of using views to rename a metric:
```rust
use opentelemetry_sdk::metrics::{Instrument, SdkMeterProvider, Stream};

let view_rename = |i: &Instrument| {
if i.name() == "my_histogram" {
Some(
Stream::builder()
.with_name("my_histogram_renamed")
.build()
.unwrap(),
)
} else {
None
}
};
let provider = SdkMeterProvider::builder()
// add exporters, set resource etc.
.with_view(view_rename)
.build();
```
- *Breaking* `Aggregation` enum moved behind feature flag
"spec_unstable_metrics_views". This was only required when using advanced view
capabilities.
[#2928](https://github.com/open-telemetry/opentelemetry-rust/pull/2928)
- *Breaking* change, affecting custom `PushMetricExporter` authors:
  - The `export` method on `PushMetricExporter` now accepts `&ResourceMetrics`
    instead of `&mut ResourceMetrics`.
  - `ResourceMetrics` no longer exposes the `scope_metrics` field; it instead
    offers a `scope_metrics()` method that returns an iterator over the same.
  - `ScopeMetrics` no longer exposes the `metrics` field; it instead offers a
    `metrics()` method that returns an iterator over the same.
  - `Sum`, `Gauge`, `Histogram`, and `ExponentialHistogram` no longer expose
    the `data_points` field; they instead offer a `data_points()` method that
    returns an iterator over the same.
  - `SumDataPoint`, `GaugeDataPoint`, `HistogramDataPoint`, and
    `ExponentialHistogramDataPoint` no longer expose the `attributes` and
    `exemplars` fields; they instead offer `attributes()` and `exemplars()`
    methods that return iterators over the same.
  - `Exemplar` no longer exposes the `filtered_attributes` field; it instead
    offers a `filtered_attributes()` method that returns an iterator over the
    same.
  - `HistogramDataPoint` no longer exposes `bounds` and `bucket_counts`; it
    instead offers `bounds()` and `bucket_counts()` methods that return
    iterators over the same.
  - `Metric` no longer exposes the `name`, `description`, `unit`, and `data`
    fields; it instead offers `name()`, `description()`, `unit()`, and `data()`
    accessor methods.
  - `ResourceMetrics` no longer exposes the `resource` field; it instead offers
    a `resource()` accessor method.
  - `ScopeMetrics` no longer exposes the `scope` field; it instead offers a
    `scope()` accessor method.
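For custom exporter authors migrating to the accessor-based API, a minimal sketch of walking the new shape (type paths taken from the transform diff above; error handling elided):
```rust
use opentelemetry_sdk::metrics::data::{AggregatedMetrics, MetricData, ResourceMetrics};

// Count u64 sum data points; `data()` now returns an enum rather than
// `&dyn Any`, so pattern matching replaces the old downcast chain.
fn count_u64_sum_points(rm: &ResourceMetrics) -> usize {
    let mut n = 0;
    for scope in rm.scope_metrics() {
        for metric in scope.metrics() {
            if let AggregatedMetrics::U64(MetricData::Sum(sum)) = metric.data() {
                n += sum.data_points().count();
            }
        }
    }
    n
}
```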
## 0.29.0
Released 2025-Mar-21
@ -58,6 +175,7 @@ Released 2025-Mar-21
Custom exporters will need to internally synchronize any mutable state, if applicable.
- **Breaking** The `shutdown_with_timeout` method is added to the `MetricExporter` trait. This is a breaking change for custom `MetricExporter` authors.
- Bug Fix: `BatchLogProcessor` now correctly calls `shutdown` on the exporter
when its `shutdown` is invoked.
@ -72,12 +190,11 @@ Released 2025-Mar-21
needs to mutate state, it should rely on interior mutability.
[2764](https://github.com/open-telemetry/opentelemetry-rust/pull/2764)
- *Breaking (Affects custom Exporter/Processor authors only)* Removed
`opentelelemetry_sdk::logs::error::{LogError, LogResult}`. These were not
`opentelemetry_sdk::logs::error::{LogError, LogResult}`. These were not
intended to be public. If you are authoring custom processor/exporters, use
`opentelemetry_sdk::error::OTelSdkError` and
`opentelemetry_sdk::error::OTelSdkResult`.
// PLACEHOLDER to fill in when the similar change is done for traces, metrics.
// PLACEHOLDER to put all the PR links together.
[2790](https://github.com/open-telemetry/opentelemetry-rust/pull/2790)
- **Breaking** for custom `LogProcessor` authors: Changed `set_resource`
to require mutable ref.
`fn set_resource(&mut self, _resource: &Resource) {}`
@ -556,8 +673,7 @@ Released 2024-Sep-30
Update `LogProcessor::emit()` method to take a mutable reference to LogData. This is a breaking
change for LogProcessor developers. If the processor needs to invoke the exporter
asynchronously, it should clone the data to ensure it can be safely processed without
lifetime issues. Any changes made to the log data before cloning in this method will be
reflected in the next log processor in the chain, as well as to the exporter.
lifetime issues. Any changes made to the log data before cloning in this method will be reflected in the next log processor in the chain, as well as to the exporter.
- *Breaking* [1726](https://github.com/open-telemetry/opentelemetry-rust/pull/1726)
Update `LogExporter::export()` method to accept a batch of log data, which can be either a
reference or owned `LogData`. If the exporter needs to process the log data

Some files were not shown because too many files have changed in this diff.