GenAI: Rename prompt and completion tokens attributes to input and output (#1200)
Parent: 0f067bb98f
Commit: 5971366ae2
@ -0,0 +1,6 @@
+change_type: enhancement
+component: gen_ai
+note: >
+  Rename `gen_ai.usage.prompt_tokens` to `gen_ai.usage.input_tokens` and `gen_ai.usage.completion_tokens` to `gen_ai.usage.output_tokens`
+  to align terminology between spans and metrics.
+issues: [1200]
@ -6,6 +6,9 @@

# Gen AI

- [Gen AI](#gen-ai-attributes)
- [Gen AI Deprecated](#gen-ai-deprecated-attributes)

## Gen AI Attributes

This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) model requests and responses.
@ -28,8 +31,8 @@ This document defines the attributes used to describe telemetry in the context o
| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` |  |
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [3] | `openai` |  |
| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` |  |
- | `gen_ai.usage.completion_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` |  |
- | `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the GenAI input or prompt. | `100` |  |
+ | `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` |  |
+ | `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` |  |

**[1]:** It's RECOMMENDED to format completions as JSON string matching [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
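The renamed span attributes pair with the `gen_ai.token.type` metric attribute above. A minimal sketch in plain Python (the helper names are hypothetical, not part of the conventions) of how an instrumentation might derive both from a single usage report:

```python
def span_usage_attributes(usage):
    """Span attributes use the renamed gen_ai.usage.* names."""
    return {
        "gen_ai.usage.input_tokens": usage["input"],
        "gen_ai.usage.output_tokens": usage["output"],
    }

def metric_data_points(usage):
    """Token-usage metric data points carry gen_ai.token.type instead."""
    return [
        {"value": count, "attributes": {"gen_ai.token.type": token_type}}
        for token_type, count in usage.items()
    ]

# Hypothetical usage report from a GenAI client response.
usage = {"input": 100, "output": 180}
print(span_usage_attributes(usage))
print(metric_data_points(usage))
```

The same `input`/`output` terminology now appears in both signals, which is the alignment the changelog entry describes.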
@ -53,3 +56,12 @@ For custom model, a custom friendly name SHOULD be used. If none of these option
| -------- | ------------------------------------------ | ---------------------------------------------------------------- |
| `input` | Input tokens (prompt, input, etc.) |  |
| `output` | Output tokens (completion, response, etc.) |  |
+
+ ## Gen AI Deprecated Attributes
+
+ Describes deprecated `gen_ai` attributes.
+
+ | Attribute | Type | Description | Examples | Stability |
+ | -------------------------------- | ---- | ----------------------------------------------------- | -------- | ------------------------------------------------------- |
+ | `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | Replaced by `gen_ai.usage.output_tokens` attribute. |
+ | `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | Replaced by `gen_ai.usage.input_tokens` attribute. |
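During migration, a consumer may still encounter the deprecated names alongside the new ones. A small fallback sketch in plain Python (the helper is hypothetical, not part of any OpenTelemetry API) that prefers the new attribute but accepts its deprecated alias:

```python
# Deprecated-to-current name mapping from the table above.
RENAMES = {
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
}

def get_token_count(attributes, name):
    """Read `name`, falling back to its deprecated alias if present."""
    if name in attributes:
        return attributes[name]
    for old, new in RENAMES.items():
        if new == name and old in attributes:
            return attributes[old]
    return None

# Old-style attributes still resolve under the new names.
legacy = {"gen_ai.usage.prompt_tokens": 100}
print(get_token_count(legacy, "gen_ai.usage.input_tokens"))  # prints 100
```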
@ -57,8 +57,8 @@ These attributes track input data and metadata for a request to a GenAI model.
| [`gen_ai.response.finish_reasons`](/docs/attributes-registry/gen-ai.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]` | `Recommended` |  |
| [`gen_ai.response.id`](/docs/attributes-registry/gen-ai.md) | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` |  |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the model that generated the response. [3] | `gpt-4-0613` | `Recommended` |  |
- | [`gen_ai.usage.completion_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` |  |
- | [`gen_ai.usage.prompt_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI input or prompt. | `100` | `Recommended` |  |
+ | [`gen_ai.usage.input_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI input (prompt). | `100` | `Recommended` |  |
+ | [`gen_ai.usage.output_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` |  |

**[1]:** The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
@ -0,0 +1,17 @@
+groups:
+  - id: registry.gen_ai.deprecated
+    type: attribute_group
+    brief: Describes deprecated `gen_ai` attributes.
+    attributes:
+      - id: gen_ai.usage.prompt_tokens
+        type: int
+        stability: experimental
+        deprecated: Replaced by `gen_ai.usage.input_tokens` attribute.
+        brief: "Deprecated, use `gen_ai.usage.input_tokens` instead."
+        examples: [42]
+      - id: gen_ai.usage.completion_tokens
+        type: int
+        stability: experimental
+        deprecated: Replaced by `gen_ai.usage.output_tokens` attribute.
+        brief: "Deprecated, use `gen_ai.usage.output_tokens` instead."
+        examples: [42]
@ -90,12 +90,12 @@ groups:
      type: string[]
      brief: Array of reasons the model stopped generating tokens, corresponding to each generation received.
      examples: ['stop']
-    - id: usage.prompt_tokens
+    - id: usage.input_tokens
      stability: experimental
      type: int
-      brief: The number of tokens used in the GenAI input or prompt.
+      brief: The number of tokens used in the GenAI input (prompt).
      examples: [100]
-    - id: usage.completion_tokens
+    - id: usage.output_tokens
      stability: experimental
      type: int
      brief: The number of tokens used in the GenAI response (completion).
@ -36,9 +36,9 @@ groups:
        fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
      - ref: gen_ai.response.finish_reasons
        requirement_level: recommended
-      - ref: gen_ai.usage.prompt_tokens
+      - ref: gen_ai.usage.input_tokens
        requirement_level: recommended
-      - ref: gen_ai.usage.completion_tokens
+      - ref: gen_ai.usage.output_tokens
        requirement_level: recommended
    events:
      - gen_ai.content.prompt
@ -11,6 +11,11 @@ versions:
        messaging.rocketmq.client_group: messaging.consumer.group.name
        messaging.eventhubs.consumer.group: messaging.consumer.group.name
        messaging.servicebus.destination.subscription_name: messaging.destination.subscription.name
+  # https://github.com/open-telemetry/semantic-conventions/pull/1200
+  - rename_attributes:
+      attribute_map:
+        gen_ai.usage.completion_tokens: gen_ai.usage.output_tokens
+        gen_ai.usage.prompt_tokens: gen_ai.usage.input_tokens
    spans:
      changes:
      # https://github.com/open-telemetry/semantic-conventions/pull/1002
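The schema entry above tells a schema-aware consumer how to translate attributes recorded under the old names. A minimal sketch in plain Python (not the official schema-translation tooling) of applying this `rename_attributes` change to a span's attributes:

```python
# attribute_map from the schema change above.
ATTRIBUTE_MAP = {
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
}

def rename_attributes(attributes, attribute_map):
    """Return a copy with mapped keys renamed; unmapped keys pass through."""
    return {attribute_map.get(key, key): value for key, value in attributes.items()}

old = {"gen_ai.usage.prompt_tokens": 100, "gen_ai.system": "openai"}
print(rename_attributes(old, ATTRIBUTE_MAP))
```

Unlisted attributes such as `gen_ai.system` are left untouched, which is how schema version migrations stay incremental.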