OpenTelemetry LangChain Zero-Code Instrumentation Example
==========================================================

This is an example of how to instrument LangChain with zero code changes,
using :code:`opentelemetry-instrument`.

When `main.py <main.py>`_ is run, it exports traces to an OTLP-compatible endpoint.
Traces include details such as the span name and other attributes.
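
For illustration, a minimal script of this kind might look like the sketch below.
This is illustrative only; the model name is a placeholder and the actual
`main.py <main.py>`_ may differ:

::

    # No OpenTelemetry code is needed here: opentelemetry-instrument
    # instruments the LangChain invocation automatically at startup.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name
    response = llm.invoke("What is the capital of France?")
    print(response.content)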

Note: the `.env <.env>`_ file configures additional environment variables:

- :code:`OTEL_LOGS_EXPORTER=otlp` to specify the logs exporter type.
- :code:`OPENAI_API_KEY` your OpenAI API key for accessing the OpenAI API.
- :code:`OTEL_EXPORTER_OTLP_ENDPOINT` to specify the endpoint for exporting traces (default: http://localhost:4317).
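
Assembled, the `.env <.env>`_ file might look like the following. The values
shown are placeholders; substitute your own key and endpoint:

::

    OTEL_LOGS_EXPORTER=otlp
    OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
    OPENAI_API_KEY=<your OpenAI API key>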

Setup
-----

Minimally, update the `.env <.env>`_ file with your :code:`OPENAI_API_KEY`.
An OTLP-compatible endpoint should be listening for traces at http://localhost:4317.
If not, update :code:`OTEL_EXPORTER_OTLP_ENDPOINT` as well.
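
If you do not have a collector running yet, one way to start one locally is
with the OpenTelemetry Collector Docker image (assuming Docker is available;
its default configuration accepts OTLP on port 4317):

::

    docker run --rm -p 4317:4317 otel/opentelemetry-collector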

Next, set up a virtual environment like this:

::

    python3 -m venv .venv
    source .venv/bin/activate
    pip install "python-dotenv[cli]"
    pip install -r requirements.txt

Run
---

Run the example like this:

::

    dotenv run -- opentelemetry-instrument python main.py

You should see the capital of France generated by LangChain's ChatOpenAI while traces are exported to your configured observability tool.
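
To inspect the generated spans without an OTLP backend, one option is to
override the exporter so spans are printed to stdout instead:

::

    OTEL_TRACES_EXPORTER=console dotenv run -- opentelemetry-instrument python main.py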