<!--- Hugo front matter used to generate the website version of this page:
linkTitle: Agent spans
--->

# Semantic Conventions for GenAI agent and framework spans

**Status**: [Development][DocumentStatus]

<!-- toc -->

- [Spans](#spans)
  - [Create agent span](#create-agent-span)
  - [Invoke agent span](#invoke-agent-span)
- [Execute tool span](#execute-tool-span)

<!-- tocstop -->

> [!Warning]
>
> Existing GenAI instrumentations that are using
> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
> (or prior):
>
> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
>   Conventions include, but are not limited to, attributes, metric, span and event names,
>   span kind and unit of measure.
> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
>   as a comma-separated list of category-specific values. The list of values
>   includes:
>   * `gen_ai_latest_experimental` - emit the latest experimental version of
>     GenAI conventions (supported by the instrumentation) and do not emit the
>     old one (v1.36.0 or prior).
>   * The default behavior is to continue emitting whatever version of the GenAI
>     conventions the instrumentation was emitting (v1.36.0 or prior).
>
> This transition plan will be updated to include a stable version before the
> GenAI conventions are marked as stable.
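
For instrumentation authors, handling the opt-in flag amounts to parsing the environment variable before deciding which convention version to emit. The following is a minimal sketch; the helper name and the surrounding instrumentation code are illustrative, not part of these conventions.

```python
import os


def use_latest_genai_conventions() -> bool:
    """Return True if the user opted in to the latest experimental GenAI conventions.

    OTEL_SEMCONV_STABILITY_OPT_IN is a comma-separated list of category-specific
    values; `gen_ai_latest_experimental` selects the latest experimental GenAI
    conventions instead of the version the instrumentation emitted previously.
    """
    opt_in = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    values = {value.strip() for value in opt_in.split(",") if value.strip()}
    return "gen_ai_latest_experimental" in values
```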

Generative AI models can be trained to use tools to access real-time information or to suggest real-world actions. For example, a model can leverage a database retrieval tool to access specific information, such as a customer's purchase history, and generate tailored shopping recommendations. Alternatively, based on a user's query, a model can make various API calls to send an email response to a colleague or to complete a financial transaction on your behalf. To do so, the model must not only have access to a set of external tools, but also the ability to plan and execute tasks in a self-directed fashion. This combination of reasoning, logic, and access to external information, all connected to a Generative AI model, invokes the concept of an agent.

This document defines semantic conventions for GenAI agent calls as described in this [whitepaper](https://www.kaggle.com/whitepaper-agents).

These conventions MAY also apply to agent operations that a GenAI framework performs locally.

The semantic conventions for GenAI agents extend and override the semantic conventions for [Gen AI Spans](gen-ai-spans.md).

## Spans

### Create agent span

<!-- semconv span.gen_ai.create_agent.client -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** Development

Describes GenAI agent creation and is usually applicable when working with remote agent services.

The `gen_ai.operation.name` SHOULD be `create_agent`.

**Span name** SHOULD be `create_agent {gen_ai.agent.name}`.
Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.

**Span kind** SHOULD be `CLIENT`.

**Span status** SHOULD follow the [Recording Errors](/docs/general/recording-errors.md) document.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.agent.description`](/docs/registry/attributes/gen-ai.md) | string | Free-form description of the GenAI agent provided by the application. | `Helps with math problems`; `Generates fiction stories` | `Conditionally Required` If provided by the application. |  |
| [`gen_ai.agent.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier of the GenAI agent. | `asst_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` if applicable. |  |
| [`gen_ai.agent.name`](/docs/registry/attributes/gen-ai.md) | string | Human-readable name of the GenAI agent provided by the application. | `Math Tutor`; `Fiction Writer` | `Conditionally Required` If provided by the application. |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. [4] | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [5] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [6] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |
| [`gen_ai.system_instructions`](/docs/registry/attributes/gen-ai.md) | any | The system message or instructions provided to the GenAI model separately from the chat history. | [<br> {<br> "type": "text",<br> "content": "You are an Agent that greet users, always use greetings tool to respond"<br> }<br>]; [<br> {<br> "type": "text",<br> "content": "You are a language translator."<br> },<br> {<br> "type": "text",<br> "content": "Your mission is to translate text in English to French."<br> }<br>] | `Opt-In` |  |

**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for that GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.

**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms
are accessible using the OpenAI REST API and corresponding client libraries,
but may proxy or host models from different providers.

The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
attributes may help identify the actual system in use.

The `gen_ai.provider.name` attribute acts as a discriminator that
identifies the GenAI telemetry format flavor specific to that provider
within GenAI semantic conventions.
It SHOULD be set consistently with provider-specific attributes and signals.
For example, GenAI spans, metrics, and events related to AWS Bedrock
should have the `gen_ai.provider.name` set to `aws.bedrock`, include
applicable `aws.bedrock.*` attributes, and are not expected to include
`openai.*` attributes.

**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of the exception that occurred, or another low-cardinality error identifier.
Instrumentations SHOULD document the list of errors they report.

**[4] `gen_ai.request.model`:** The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[5] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.

**[6] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.

---

`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. |  |

---

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `create_agent` | Create GenAI agent |  |
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
| `execute_tool` | Execute a tool |  |
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
| `invoke_agent` | Invoke GenAI agent |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |

---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [7] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [8] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [9] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[7]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[8]:** May be used when the specific backend is unknown.

**[9]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->
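
To illustrate how these conventions compose, the following is a minimal sketch of emitting a `create_agent` span with the OpenTelemetry Python API. The `create_remote_agent` call, its response fields, and the provider and server values are hypothetical placeholders for whatever agent-service client is being instrumented; only the span name, span kind, and `gen_ai.*`/`error.type` attributes shown come from this convention.

```python
from opentelemetry import trace
from opentelemetry.trace import SpanKind, StatusCode

tracer = trace.get_tracer("my-genai-instrumentation")


def instrumented_create_agent(client, name, description, model):
    # Span name follows "create_agent {gen_ai.agent.name}"; span kind is CLIENT.
    with tracer.start_as_current_span(
        f"create_agent {name}", kind=SpanKind.CLIENT
    ) as span:
        span.set_attribute("gen_ai.operation.name", "create_agent")
        span.set_attribute("gen_ai.provider.name", "openai")
        span.set_attribute("gen_ai.agent.name", name)
        span.set_attribute("gen_ai.agent.description", description)
        span.set_attribute("gen_ai.request.model", model)
        span.set_attribute("server.address", "api.openai.com")
        try:
            # `create_remote_agent` is a hypothetical client call being instrumented.
            agent = client.create_remote_agent(
                name=name, description=description, model=model
            )
        except Exception as exc:
            # Follow the Recording Errors document: record error.type and span status.
            span.set_attribute("error.type", type(exc).__qualname__)
            span.set_status(StatusCode.ERROR, str(exc))
            raise
        # The agent identifier is typically only known once the call succeeds.
        span.set_attribute("gen_ai.agent.id", agent.id)
        return agent
```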

### Invoke agent span

<!-- semconv span.gen_ai.invoke_agent.client -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** Development

Describes GenAI agent invocation and is usually applicable when working with remote agent services.

The `gen_ai.operation.name` SHOULD be `invoke_agent`.
The **span name** SHOULD be `invoke_agent {gen_ai.agent.name}` if `gen_ai.agent.name` is readily available.
When `gen_ai.agent.name` is not available, it SHOULD be `invoke_agent`.
Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.

**Span kind** SHOULD be `CLIENT`.

**Span status** SHOULD follow the [Recording Errors](/docs/general/recording-errors.md) document.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.agent.description`](/docs/registry/attributes/gen-ai.md) | string | Free-form description of the GenAI agent provided by the application. | `Helps with math problems`; `Generates fiction stories` | `Conditionally Required` when available |  |
| [`gen_ai.agent.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier of the GenAI agent. | `asst_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` if applicable. |  |
| [`gen_ai.agent.name`](/docs/registry/attributes/gen-ai.md) | string | Human-readable name of the GenAI agent provided by the application. | `Math Tutor`; `Fiction Writer` | `Conditionally Required` when available |  |
| [`gen_ai.conversation.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation. [4] | `conv_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` when available |  |
| [`gen_ai.data_source.id`](/docs/registry/attributes/gen-ai.md) | string | The data source identifier. [5] | `H7STPQYOND` | `Conditionally Required` if applicable. |  |
| [`gen_ai.output.type`](/docs/registry/attributes/gen-ai.md) | string | Represents the content type requested by the client. [6] | `text`; `json`; `image` | `Conditionally Required` [7] |  |
| [`gen_ai.request.choice.count`](/docs/registry/attributes/gen-ai.md) | int | The target number of candidate completions to return. | `3` | `Conditionally Required` if available, in the request, and !=1 |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. [8] | `gpt-4` | `Conditionally Required` If available. |  |
| [`gen_ai.request.seed`](/docs/registry/attributes/gen-ai.md) | int | Requests with same seed value more likely to return same result. | `100` | `Conditionally Required` if applicable and if the request includes a seed |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [9] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.request.frequency_penalty`](/docs/registry/attributes/gen-ai.md) | double | The frequency penalty setting for the GenAI request. | `0.1` | `Recommended` |  |
| [`gen_ai.request.max_tokens`](/docs/registry/attributes/gen-ai.md) | int | The maximum number of tokens the model generates for a request. | `100` | `Recommended` |  |
| [`gen_ai.request.presence_penalty`](/docs/registry/attributes/gen-ai.md) | double | The presence penalty setting for the GenAI request. | `0.1` | `Recommended` |  |
| [`gen_ai.request.stop_sequences`](/docs/registry/attributes/gen-ai.md) | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` | `Recommended` |  |
| [`gen_ai.request.temperature`](/docs/registry/attributes/gen-ai.md) | double | The temperature setting for the GenAI request. | `0.0` | `Recommended` |  |
| [`gen_ai.request.top_p`](/docs/registry/attributes/gen-ai.md) | double | The top_p sampling setting for the GenAI request. | `1.0` | `Recommended` |  |
| [`gen_ai.response.finish_reasons`](/docs/registry/attributes/gen-ai.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]`; `["stop", "length"]` | `Recommended` |  |
| [`gen_ai.response.id`](/docs/registry/attributes/gen-ai.md) | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. [10] | `gpt-4-0613` | `Recommended` |  |
| [`gen_ai.usage.input_tokens`](/docs/registry/attributes/gen-ai.md) | int | The number of tokens used in the GenAI input (prompt). | `100` | `Recommended` |  |
| [`gen_ai.usage.output_tokens`](/docs/registry/attributes/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [11] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |
| [`gen_ai.input.messages`](/docs/registry/attributes/gen-ai.md) | any | The chat history provided to the model as an input. [12] | [<br> {<br> "role": "user",<br> "parts": [<br> {<br> "type": "text",<br> "content": "Weather in Paris?"<br> }<br> ]<br> },<br> {<br> "role": "assistant",<br> "parts": [<br> {<br> "type": "tool_call",<br> "id": "call_VSPygqKTWdrhaFErNvMV18Yl",<br> "name": "get_weather",<br> "arguments": {<br> "location": "Paris"<br> }<br> }<br> ]<br> },<br> {<br> "role": "tool",<br> "parts": [<br> {<br> "type": "tool_call_response",<br> "id": " call_VSPygqKTWdrhaFErNvMV18Yl",<br> "result": "rainy, 57°F"<br> }<br> ]<br> }<br>] | `Opt-In` |  |
| [`gen_ai.output.messages`](/docs/registry/attributes/gen-ai.md) | any | Messages returned by the model where each message represents a specific model response (choice, candidate). [13] | [<br> {<br> "role": "assistant",<br> "parts": [<br> {<br> "type": "text",<br> "content": "The weather in Paris is currently rainy with a temperature of 57°F."<br> }<br> ],<br> "finish_reason": "stop"<br> }<br>] | `Opt-In` |  |
| [`gen_ai.system_instructions`](/docs/registry/attributes/gen-ai.md) | any | The system message or instructions provided to the GenAI model separately from the chat history. [14] | [<br> {<br> "type": "text",<br> "content": "You are an Agent that greet users, always use greetings tool to respond"<br> }<br>]; [<br> {<br> "type": "text",<br> "content": "You are a language translator."<br> },<br> {<br> "type": "text",<br> "content": "Your mission is to translate text in English to French."<br> }<br>] | `Opt-In` |  |
| [`gen_ai.tool.definitions`](/docs/registry/attributes/gen-ai.md) | any | The list of source system tool definitions available to the GenAI agent or model. [15] | [<br> {<br> "type": "function",<br> "name": "get_current_weather",<br> "description": "Get the current weather in a given location",<br> "parameters": {<br> "type": "object",<br> "properties": {<br> "location": {<br> "type": "string",<br> "description": "The city and state, e.g. San Francisco, CA"<br> },<br> "unit": {<br> "type": "string",<br> "enum": [<br> "celsius",<br> "fahrenheit"<br> ]<br> }<br> },<br> "required": [<br> "location",<br> "unit"<br> ]<br> }<br> }<br>] | `Opt-In` |  |

**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for that GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.

**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms
are accessible using the OpenAI REST API and corresponding client libraries,
but may proxy or host models from different providers.

The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
attributes may help identify the actual system in use.

The `gen_ai.provider.name` attribute acts as a discriminator that
identifies the GenAI telemetry format flavor specific to that provider
within GenAI semantic conventions.
It SHOULD be set consistently with provider-specific attributes and signals.
For example, GenAI spans, metrics, and events related to AWS Bedrock
should have the `gen_ai.provider.name` set to `aws.bedrock`, include
applicable `aws.bedrock.*` attributes, and are not expected to include
`openai.*` attributes.

**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of the exception that occurred, or another low-cardinality error identifier.
Instrumentations SHOULD document the list of errors they report.

**[4] `gen_ai.conversation.id`:** Instrumentations SHOULD populate the conversation id when they have it readily available
for a given operation, for example:

- when the client framework being instrumented manages conversation history
  (see [LlamaIndex chat store](https://docs.llamaindex.ai/en/stable/module_guides/storing/chat_stores/))

- when instrumenting GenAI client libraries that maintain the conversation on the backend side
  (see [AWS Bedrock agent sessions](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-session-state.html),
  [OpenAI Assistant threads](https://platform.openai.com/docs/api-reference/threads))

Application developers that manage conversation history MAY add the conversation id to GenAI and other
spans or logs using custom span or log record processors or hooks provided by instrumentation
libraries.
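
For application developers who manage conversation history themselves, one way to do this is a custom span processor that stamps `gen_ai.conversation.id` on every span it sees. This is a minimal sketch, assuming the application tracks the current conversation id in a context variable of its own; the variable and processor names are illustrative.

```python
import contextvars

from opentelemetry.sdk.trace import SpanProcessor

# Application-managed conversation id; the application sets it when a conversation starts.
current_conversation_id = contextvars.ContextVar("current_conversation_id", default=None)


class ConversationIdSpanProcessor(SpanProcessor):
    """Copies the application's conversation id onto every span that is started."""

    def on_start(self, span, parent_context=None):
        conversation_id = current_conversation_id.get()
        if conversation_id:
            span.set_attribute("gen_ai.conversation.id", conversation_id)
```

The processor would be registered on the SDK `TracerProvider` with `add_span_processor()`, alongside the exporter's processor.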

**[5] `gen_ai.data_source.id`:** Data sources are used by AI agents and RAG applications to store grounding data. A data source may be an external database, object store, document collection, website, or any other storage system used by the GenAI agent or application. The `gen_ai.data_source.id` SHOULD match the identifier used by the GenAI system rather than a name specific to the external storage, such as a database or object store. Semantic conventions referencing `gen_ai.data_source.id` MAY also leverage additional attributes, such as `db.*`, to further identify and describe the data source.

**[6] `gen_ai.output.type`:** This attribute SHOULD be used when the client requests output of a specific type. The model may return zero or more outputs of this type.
This attribute specifies the output modality and not the actual output format. For example, if an image is requested, the actual output could be a URL pointing to an image file.
Additional output format details may be recorded in the future in the `gen_ai.output.{type}.*` attributes.

**[7] `gen_ai.output.type`:** when applicable and if the request includes an output format.

**[8] `gen_ai.request.model`:** The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[9] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.

**[10] `gen_ai.response.model`:** If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor, then the value must be the exact name of the model actually used. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[11] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.

**[12] `gen_ai.input.messages`:** Instrumentations MUST follow the [Input messages JSON schema](/docs/gen-ai/gen-ai-input-messages.json).
When the attribute is recorded on events, it MUST be recorded in structured
form. When recorded on spans, it MAY be recorded as a JSON string if structured
format is not supported and SHOULD be recorded in structured form otherwise.

Messages MUST be provided in the order they were sent to the model.
Instrumentations MAY provide a way for users to filter or truncate
input messages.

> [!Warning]
> This attribute is likely to contain sensitive information including user/PII data.

See the [Recording content on attributes](/docs/gen-ai/gen-ai-spans.md#recording-content-on-attributes)
section for more details.
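
As a sketch of the span case: when the telemetry SDK cannot record structured values as attributes, the message list can be serialized to a JSON string before being set on the span. The message shape below mirrors the example in the table above; in practice the instrumentation builds the list from the request it is wrapping.

```python
import json


def record_input_messages(span, messages):
    """Record gen_ai.input.messages on a span, falling back to a JSON string.

    `messages` is a list of dicts following the Input messages JSON schema, e.g.
    [{"role": "user", "parts": [{"type": "text", "content": "Weather in Paris?"}]}].
    Span attribute values are limited to primitives and homogeneous sequences,
    so the structured value is serialized to JSON here.
    """
    span.set_attribute("gen_ai.input.messages", json.dumps(messages, ensure_ascii=False))
```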

**[13] `gen_ai.output.messages`:** Instrumentations MUST follow the [Output messages JSON schema](/docs/gen-ai/gen-ai-output-messages.json).

Each message represents a single output choice/candidate generated by
the model. Each message corresponds to exactly one generation
(choice/candidate) and vice versa: one choice cannot be split across
multiple messages, nor can one message contain parts from multiple choices.

When the attribute is recorded on events, it MUST be recorded in structured
form. When recorded on spans, it MAY be recorded as a JSON string if structured
format is not supported and SHOULD be recorded in structured form otherwise.

Instrumentations MAY provide a way for users to filter or truncate
output messages.

> [!Warning]
> This attribute is likely to contain sensitive information including user/PII data.

See the [Recording content on attributes](/docs/gen-ai/gen-ai-spans.md#recording-content-on-attributes)
section for more details.

**[14] `gen_ai.system_instructions`:** This attribute SHOULD be used when the corresponding provider or API
allows providing system instructions or messages separately from the
chat history.

Instructions that are part of the chat history SHOULD be recorded in the
`gen_ai.input.messages` attribute instead.

Instrumentations MUST follow the [System instructions JSON schema](/docs/gen-ai/gen-ai-system-instructions.json).

When recorded on spans, it MAY be recorded as a JSON string if structured
format is not supported and SHOULD be recorded in structured form otherwise.

Instrumentations MAY provide a way for users to filter or truncate
system instructions.

> [!Warning]
> This attribute may contain sensitive information.

See the [Recording content on attributes](/docs/gen-ai/gen-ai-spans.md#recording-content-on-attributes)
section for more details.

**[15] `gen_ai.tool.definitions`:** The value of this attribute matches the source system tool definition format.

It's expected to be an array of objects where each object represents a tool definition. If a serialized string is available
to the instrumentation, the instrumentation SHOULD make a best effort to
deserialize it to an array. When recorded on spans, it MAY be recorded as a JSON string if structured format is not supported and SHOULD be recorded in structured form otherwise.

Since this attribute could be large, it's NOT RECOMMENDED to populate
it by default. Instrumentations MAY provide a way to enable
populating this attribute.

---

`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. |  |

---

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `create_agent` | Create GenAI agent |  |
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
| `execute_tool` | Execute a tool |  |
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
| `invoke_agent` | Invoke GenAI agent |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |

---

`gen_ai.output.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `image` | Image |  |
| `json` | JSON object with known or unknown schema |  |
| `speech` | Speech |  |
| `text` | Plain text |  |

---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [16] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [17] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [18] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[16]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[17]:** May be used when the specific backend is unknown.

**[18]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->
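
Putting the invoke-agent attributes together, the following is a minimal sketch of wrapping a remote agent invocation with the OpenTelemetry Python API. The `invoke_remote_agent` call, its response fields, and the provider value are hypothetical stand-ins for the client being instrumented; opt-in content attributes such as `gen_ai.input.messages` are intentionally omitted.

```python
from opentelemetry import trace
from opentelemetry.trace import SpanKind, StatusCode

tracer = trace.get_tracer("my-genai-instrumentation")


def instrumented_invoke_agent(client, agent, conversation_id, request_model):
    # Span name follows "invoke_agent {gen_ai.agent.name}"; span kind is CLIENT.
    with tracer.start_as_current_span(
        f"invoke_agent {agent.name}", kind=SpanKind.CLIENT
    ) as span:
        span.set_attribute("gen_ai.operation.name", "invoke_agent")
        span.set_attribute("gen_ai.provider.name", "aws.bedrock")
        span.set_attribute("gen_ai.agent.id", agent.id)
        span.set_attribute("gen_ai.agent.name", agent.name)
        span.set_attribute("gen_ai.conversation.id", conversation_id)
        span.set_attribute("gen_ai.request.model", request_model)
        try:
            # `invoke_remote_agent` is a hypothetical client call being instrumented.
            response = client.invoke_remote_agent(
                agent_id=agent.id, session_id=conversation_id
            )
        except Exception as exc:
            span.set_attribute("error.type", type(exc).__qualname__)
            span.set_status(StatusCode.ERROR, str(exc))
            raise
        # Response-side attributes, when the service reports them.
        span.set_attribute("gen_ai.response.id", response.id)
        span.set_attribute("gen_ai.response.model", response.model)
        span.set_attribute("gen_ai.response.finish_reasons", response.finish_reasons)
        span.set_attribute("gen_ai.usage.input_tokens", response.usage.input_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.usage.output_tokens)
        return response
```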
## Execute tool span

If your agent uses tools, refer to the [Execute Tool Span](./gen-ai-spans.md#execute-tool-span) definition in the GenAI spans conventions.
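
For convenience, a minimal sketch of such a tool-execution span follows, assuming the `execute_tool` operation and the `gen_ai.tool.*` attributes defined in the GenAI spans document; `run_tool` and the tool-call object are illustrative.

```python
from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer("my-genai-instrumentation")


def instrumented_execute_tool(tool_call, run_tool):
    # Span name follows "execute_tool {gen_ai.tool.name}"; an in-process tool
    # execution is typically recorded with an INTERNAL span.
    with tracer.start_as_current_span(
        f"execute_tool {tool_call.name}", kind=SpanKind.INTERNAL
    ) as span:
        span.set_attribute("gen_ai.operation.name", "execute_tool")
        span.set_attribute("gen_ai.tool.name", tool_call.name)
        span.set_attribute("gen_ai.tool.call.id", tool_call.id)
        # `run_tool` is a placeholder for however the framework dispatches the tool.
        return run_tool(tool_call)
```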
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status