Use GenAI instead of LLM and other clean ups (#1087)

Liudmila Molkova 2024-06-13 04:57:32 +01:00 committed by GitHub
parent 611c8d705c
commit 8388bfb608
7 changed files with 71 additions and 85 deletions

.chloggen/1087.yaml Normal file

@@ -0,0 +1,4 @@
change_type: enhancement
component: gen-ai
note: Use GenAI instead of LLM on GenAI trace semantic conventions and minor cleanups.
issues: [1087]

@@ -8,30 +8,31 @@
## Gen AI Attributes
This document defines the attributes used to describe telemetry in the context of LLM (Large Language Models) requests and responses.
This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) model requests and responses.
| Attribute | Type | Description | Examples | Stability |
| -------------------------------- | -------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------- | ---------------------------------------------------------------- |
| `gen_ai.completion` | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.completion` | string | The full response received from the GenAI model. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.operation.name` | string | The name of the operation being performed. | `chat`; `completion` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.prompt` | string | The full prompt sent to an LLM. [2] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.model` | string | The name of the LLM a request is being made to. | `gpt-4` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.temperature` | double | The temperature setting for the LLM request. | `0.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.prompt` | string | The full prompt sent to the GenAI model. [2] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. | `gpt-4` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.model` | string | The name of the LLM a response was generated from. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.system` | string | The Generative AI product as identified by the client instrumentation. [3] | `openai` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.completion_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the GenAI input or prompt. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
**[2]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
**[3]:** The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge.
For a custom model, a custom friendly name SHOULD be used. If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
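
To make the content attributes concrete, here is a minimal sketch (plain Python; the payload values are hypothetical and mirror the examples in the table) of building an attribute set, with prompts and completions serialized as JSON strings per notes [1] and [2]:

```python
import json

# Hypothetical chat exchange, shaped like the OpenAI messages format
# referenced in notes [1] and [2].
prompt = [{"role": "user", "content": "What is the capital of France?"}]
completion = [{"role": "assistant", "content": "The capital of France is Paris."}]

attributes = {
    "gen_ai.system": "openai",
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "gpt-4",
    "gen_ai.response.model": "gpt-4-0613",
    "gen_ai.response.finish_reasons": ["stop"],
    "gen_ai.usage.prompt_tokens": 100,
    "gen_ai.usage.completion_tokens": 180,
    # Content attributes are JSON strings, not structured objects.
    "gen_ai.prompt": json.dumps(prompt),
    "gen_ai.completion": json.dumps(completion),
}
```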

@@ -14,16 +14,9 @@ The semantic conventions for GenAI and LLM are currently in development.
We encourage instrumentation library and telemetry consumer developers to
use the conventions in limited non-critical workloads and share their feedback
This document defines semantic conventions for the following kinds of Generative AI systems:
* LLMs
Semantic conventions for Generative AI operations are defined for the following signals:
* [Metrics](gen-ai-metrics.md): Semantic Conventions for Generative AI operations - *metrics*.
Semantic conventions for LLM operations are defined for the following signals:
* [LLM Spans](llm-spans.md): Semantic Conventions for LLM requests - *spans*.
* [Spans](gen-ai-spans.md): Semantic Conventions for Generative AI requests - *spans*.
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status

@@ -68,14 +68,15 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of [1, 4, 16, 64
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/attributes-registry/gen-ai.md) | string | The name of the operation being performed. | `chat`; `completion` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the LLM a request is being made to. | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client instrumentation. [1] | `openai` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.token.type`](/docs/attributes-registry/gen-ai.md) | string | The type of token being counted. | `input`; `output` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`server.port`](/docs/attributes-registry/server.md) | int | Server port number. [2] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. | ![Stable](https://img.shields.io/badge/-stable-lightgreen) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the LLM a response was generated from. | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`server.address`](/docs/attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [3] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` | ![Stable](https://img.shields.io/badge/-stable-lightgreen) |
**[1]:** The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge.
For a custom model, a custom friendly name SHOULD be used. If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
**[2]:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.
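
A sketch of recording this metric with the OpenTelemetry Python API; the `gen_ai.client.token.usage` metric name and the instrumentation scope name are assumptions of this sketch, and the advisory bucket boundaries would be applied by the SDK (for example via a View) rather than in this code:

```python
from opentelemetry import metrics

meter = metrics.get_meter("example.genai.instrumentation")  # hypothetical scope name

token_usage = meter.create_histogram(
    name="gen_ai.client.token.usage",  # assumed metric name from this convention
    unit="{token}",
    description="Measures the number of input and output tokens used",
)

# One measurement per token type, distinguished by gen_ai.token.type.
token_usage.record(
    100,
    attributes={
        "gen_ai.operation.name": "chat",
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
        "gen_ai.response.model": "gpt-4-0613",
        "gen_ai.token.type": "input",
    },
)
```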
@@ -140,14 +141,15 @@ This metric SHOULD be specified with [ExplicitBucketBoundaries] of [ 0.01, 0.02,
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/attributes-registry/gen-ai.md) | string | The name of the operation being performed. | `chat`; `completion` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the LLM a request is being made to. | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client instrumentation. [1] | `openai` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`error.type`](/docs/attributes-registry/error.md) | string | Describes a class of error the operation ended with. [2] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error | ![Stable](https://img.shields.io/badge/-stable-lightgreen) |
| [`server.port`](/docs/attributes-registry/server.md) | int | Server port number. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. | ![Stable](https://img.shields.io/badge/-stable-lightgreen) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the LLM a response was generated from. | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`server.address`](/docs/attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [4] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` | ![Stable](https://img.shields.io/badge/-stable-lightgreen) |
**[1]:** The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge.
For a custom model, a custom friendly name SHOULD be used. If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
**[2]:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
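
Similarly, a hedged sketch of the duration metric, assuming the `gen_ai.client.operation.duration` name; `call_model` is a hypothetical client call, and `error.type` is only attached when the operation fails:

```python
import time

from opentelemetry import metrics

meter = metrics.get_meter("example.genai.instrumentation")  # hypothetical scope name

operation_duration = meter.create_histogram(
    name="gen_ai.client.operation.duration",  # assumed metric name from this convention
    unit="s",
    description="GenAI operation duration",
)


def timed_chat(call_model):
    """Run a hypothetical model call, recording its duration either way."""
    attributes = {
        "gen_ai.operation.name": "chat",
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
    }
    start = time.monotonic()
    try:
        return call_model()
    except Exception as exc:
        # error.type is Conditionally Required: set only when the operation fails.
        attributes["error.type"] = type(exc).__qualname__
        raise
    finally:
        operation_duration.record(time.monotonic() - start, attributes=attributes)
```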
@@ -185,4 +187,4 @@ Instrumentations SHOULD document the list of errors they report.
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
[MetricRequired]: /docs/general/metric-requirement-level.md#required
[MetricRecommended]: /docs/general/metric-requirement-level.md#recommended
[ExplicitBucketBoundaries]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.31.0/specification/metrics/api.md#instrument-advisory-parameters
[ExplicitBucketBoundaries]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.33.0/specification/metrics/api.md#instrument-advisory-parameters

@@ -1,8 +1,8 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: LLM requests
linkTitle: Generative AI traces
--->
# Semantic Conventions for LLM requests
# Semantic Conventions for GenAI operations
**Status**: [Experimental][DocumentStatus]
@@ -11,32 +11,32 @@ linkTitle: LLM requests
<!-- toc -->
- [Configuration](#configuration)
- [LLM Request attributes](#llm-request-attributes)
- [GenAI attributes](#genai-attributes)
- [Events](#events)
<!-- tocstop -->
A request to an LLM is modeled as a span in a trace.
A request to a Generative AI system is modeled as a span in a trace.
**Span kind:** MUST always be `CLIENT`.
The **span name** SHOULD be set to a low cardinality value describing an operation made to an LLM.
For example, the API name such as [Create chat completion](https://platform.openai.com/docs/api-reference/chat/create) could be represented as `ChatCompletions gpt-4` to include the API and the LLM.
The **span name** SHOULD be set to a low cardinality value describing an operation made to a Generative AI system.
For example, the API name such as [Create chat completion](https://platform.openai.com/docs/api-reference/chat/create) could be represented as `completion gpt-4` to include the API and the model name.
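
A minimal sketch of such a span, assuming the OpenTelemetry Python API; the instrumentation scope name and attribute values are illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")  # hypothetical scope name

# Span name "{operation} {model}" keeps cardinality low.
with tracer.start_as_current_span(
    "completion gpt-4",
    kind=trace.SpanKind.CLIENT,  # span kind MUST be CLIENT
) as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.operation.name", "completion")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    # ... perform the model call, then record response attributes ...
    span.set_attribute("gen_ai.response.model", "gpt-4-0613")
    span.set_attribute("gen_ai.response.id", "chatcmpl-123")
```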
## Configuration
Instrumentations for LLMs MAY capture prompts and completions.
Instrumentations for Generative AI clients MAY capture prompts and completions.
Instrumentations that support it MUST offer the ability to turn off capture of prompts and completions; a minimal toggle sketch follows the list below. This is for three primary reasons:
1. Data privacy concerns. End users of LLM applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some LLMs allow for extremely large context windows that end users may take full advantage of.
1. Data privacy concerns. End users of GenAI applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some GenAI systems allow for extremely large context windows that end users may take full advantage of.
3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.
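
A minimal toggle sketch, assuming a hypothetical environment variable; the convention requires an off switch but does not mandate a specific configuration mechanism:

```python
import json
import os

# Hypothetical opt-in switch; the convention requires that capture of prompts
# and completions can be disabled, but does not name a specific mechanism.
CAPTURE_CONTENT = (
    os.environ.get("EXAMPLE_GENAI_CAPTURE_CONTENT", "false").lower() == "true"
)


def set_content_attributes(span, prompt, completion):
    """Record prompt/completion content only when capture is enabled."""
    if not CAPTURE_CONTENT:
        return
    span.set_attribute("gen_ai.prompt", json.dumps(prompt))
    span.set_attribute("gen_ai.completion", json.dumps(completion))
```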
## LLM Request attributes
## GenAI attributes
These attributes track input data and metadata for a request to an LLM. Each attribute represents a concept that is common to most LLMs.
These attributes track input data and metadata for a request to a GenAI model. Each attribute represents a concept that is common to most Generative AI clients.
<!-- semconv gen_ai.request -->
<!-- semconv trace.gen_ai.client -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
@@ -45,22 +45,23 @@ These attributes track input data and metadata for a request to an LLM. Each att
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the LLM a request is being made to. [1] | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the GenAI model a request is being made to. [1] | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client instrumentation. [2] | `openai` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.max_tokens`](/docs/attributes-registry/gen-ai.md) | int | The maximum number of tokens the LLM generates for a request. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.temperature`](/docs/attributes-registry/gen-ai.md) | double | The temperature setting for the LLM request. | `0.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.top_p`](/docs/attributes-registry/gen-ai.md) | double | The top_p sampling setting for the LLM request. | `1.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.max_tokens`](/docs/attributes-registry/gen-ai.md) | int | The maximum number of tokens the model generates for a request. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.temperature`](/docs/attributes-registry/gen-ai.md) | double | The temperature setting for the GenAI request. | `0.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.top_p`](/docs/attributes-registry/gen-ai.md) | double | The top_p sampling setting for the GenAI request. | `1.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.finish_reasons`](/docs/attributes-registry/gen-ai.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.id`](/docs/attributes-registry/gen-ai.md) | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the LLM a response was generated from. [3] | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.completion_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the LLM response (completion). | `180` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.prompt_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the LLM prompt. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the model that generated the response. [3] | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.completion_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.prompt_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI input or prompt. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
**[1]:** The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
**[1]:** The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
**[2]:** If not using a vendor-supplied model, provide a custom friendly name, such as a name of the company or project. If the instrumentation reports any attributes specific to a custom model, the value provided in the `gen_ai.system` SHOULD match the custom attribute namespace segment. For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace. If none of the above options apply, the instrumentation should set `_OTHER`.
**[2]:** The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge.
For a custom model, a custom friendly name SHOULD be used. If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
**[3]:** If available. The name of the LLM serving a response. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
**[3]:** If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor, then the value must be the exact name of the model actually used. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
@@ -82,7 +83,7 @@ These attributes track input data and metadata for a request to an LLM. Each att
## Events
In the lifetime of an LLM span, an event for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation.
In the lifetime of a GenAI span, an event for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation.
<!-- semconv gen_ai.content.prompt -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
@@ -95,7 +96,7 @@ The event name MUST be `gen_ai.content.prompt`.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.prompt`](/docs/attributes-registry/gen-ai.md) | string | The full prompt sent to an LLM. [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.prompt`](/docs/attributes-registry/gen-ai.md) | string | The full prompt sent to the GenAI model. [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
**[1]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
@@ -118,7 +119,7 @@ The event name MUST be `gen_ai.content.completion`.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.completion`](/docs/attributes-registry/gen-ai.md) | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.completion`](/docs/attributes-registry/gen-ai.md) | string | The full response received from the GenAI model. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
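
A sketch of emitting both events as span events with the OpenTelemetry Python API, assuming content capture is enabled; the scope name and payloads are hypothetical, JSON-serialized as recommended:

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")  # hypothetical scope name

prompt = [{"role": "user", "content": "What is the capital of France?"}]
completion = [{"role": "assistant", "content": "The capital of France is Paris."}]

with tracer.start_as_current_span("chat gpt-4", kind=trace.SpanKind.CLIENT) as span:
    # Emitted only when content capture is enabled (see Configuration above).
    span.add_event("gen_ai.content.prompt",
                   attributes={"gen_ai.prompt": json.dumps(prompt)})
    span.add_event("gen_ai.content.completion",
                   attributes={"gen_ai.completion": json.dumps(completion)})
```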
@@ -130,4 +131,4 @@ The event name MUST be `gen_ai.content.completion`.
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md

@@ -3,12 +3,11 @@ groups:
prefix: gen_ai
type: attribute_group
brief: >
This document defines the attributes used to describe telemetry in the context of LLM (Large Language Models) requests and responses.
This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) model requests and responses.
attributes:
- id: system
stability: experimental
type:
allow_custom_values: true
members:
- id: openai
stability: experimental
@@ -31,62 +30,55 @@ groups:
The actual GenAI product may differ from the one identified by the client.
For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
is set to `openai` based on the instrumentation's best knowledge.
For a custom model, a custom friendly name SHOULD be used.
If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
examples: 'openai'
tag: llm-generic-request
- id: request.model
stability: experimental
type: string
brief: The name of the LLM a request is being made to.
brief: The name of the GenAI model a request is being made to.
examples: 'gpt-4'
tag: llm-generic-request
- id: request.max_tokens
stability: experimental
type: int
brief: The maximum number of tokens the LLM generates for a request.
brief: The maximum number of tokens the model generates for a request.
examples: [100]
tag: llm-generic-request
- id: request.temperature
stability: experimental
type: double
brief: The temperature setting for the LLM request.
brief: The temperature setting for the GenAI request.
examples: [0.0]
tag: llm-generic-request
- id: request.top_p
stability: experimental
type: double
brief: The top_p sampling setting for the LLM request.
brief: The top_p sampling setting for the GenAI request.
examples: [1.0]
tag: llm-generic-request
- id: response.id
stability: experimental
type: string
brief: The unique identifier for the completion.
examples: ['chatcmpl-123']
tag: llm-generic-response
- id: response.model
stability: experimental
type: string
brief: The name of the LLM a response was generated from.
brief: The name of the model that generated the response.
examples: ['gpt-4-0613']
tag: llm-generic-response
- id: response.finish_reasons
stability: experimental
type: string[]
brief: Array of reasons the model stopped generating tokens, corresponding to each generation received.
examples: ['stop']
tag: llm-generic-response
- id: usage.prompt_tokens
stability: experimental
type: int
brief: The number of tokens used in the LLM prompt.
brief: The number of tokens used in the GenAI input or prompt.
examples: [100]
tag: llm-generic-response
- id: usage.completion_tokens
stability: experimental
type: int
brief: The number of tokens used in the LLM response (completion).
brief: The number of tokens used in the GenAI response (completion).
examples: [180]
tag: llm-generic-response
- id: token.type
stability: experimental
type:
@@ -104,17 +96,15 @@ groups:
- id: prompt
stability: experimental
type: string
brief: The full prompt sent to an LLM.
brief: The full prompt sent to the GenAI model.
note: It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
examples: ["[{'role': 'user', 'content': 'What is the capital of France?'}]"]
tag: llm-generic-events
- id: completion
stability: experimental
type: string
brief: The full response received from the LLM.
brief: The full response received from the GenAI model.
note: It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
examples: ["[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]"]
tag: llm-generic-events
- id: operation.name
stability: experimental
type: string

@@ -1,21 +1,16 @@
groups:
- id: gen_ai.request
- id: trace.gen_ai.client
type: span
brief: >
A request to an LLM is modeled as a span in a trace. The span name should be a low cardinality value representing the request made to an LLM, like the name of the API endpoint being called.
Describes a GenAI operation span.
attributes:
- ref: gen_ai.system
requirement_level: required
note: >
If not using a vendor-supplied model, provide a custom friendly name, such as a name of the company or project.
If the instrumentation reports any attributes specific to a custom model, the value provided in the `gen_ai.system` SHOULD match the custom attribute namespace segment.
For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace.
If none of the above options apply, the instrumentation should set `_OTHER`.
- ref: gen_ai.request.model
requirement_level: required
note: >
The name of the LLM a request is being made to. If the LLM is supplied by a vendor,
then the value must be the exact name of the model requested. If the LLM is a fine-tuned
The name of the GenAI model a request is being made to. If the model is supplied by a vendor,
then the value must be the exact name of the model requested. If the model is a fine-tuned
custom model, the value should have a more specific name than the base model that's been fine-tuned.
- ref: gen_ai.request.max_tokens
requirement_level: recommended
@@ -28,8 +23,8 @@ groups:
- ref: gen_ai.response.model
requirement_level: recommended
note: >
If available. The name of the LLM serving a response. If the LLM is supplied by a vendor,
then the value must be the exact name of the model actually used. If the LLM is a
If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor,
then the value must be the exact name of the model actually used. If the model is a
fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
- ref: gen_ai.response.finish_reasons
requirement_level: recommended
@@ -45,7 +40,7 @@ groups:
name: gen_ai.content.prompt
type: event
brief: >
In the lifetime of an LLM span, events for prompts sent and completions received
In the lifetime of a GenAI span, events for prompts sent and completions received
may be created, depending on the configuration of the instrumentation.
attributes:
- ref: gen_ai.prompt
@@ -58,7 +53,7 @@ groups:
name: gen_ai.content.completion
type: event
brief: >
In the lifetime of an LLM span, events for prompts sent and completions received
In the lifetime of a GenAI span, events for prompts sent and completions received
may be created, depending on the configuration of the instrumentation.
attributes:
- ref: gen_ai.completion