# Gen AI
- [GenAI Attributes](#genai-attributes)
- [Deprecated GenAI Attributes](#deprecated-genai-attributes)
## GenAI Attributes
This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) model requests and responses.
| Attribute | Type | Description | Examples | Stability |
| ---------------------------------- | -------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------- | ---------------------------------------------------------------- |
| `gen_ai.completion` | string | The full response received from the GenAI model. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` |  |
| `gen_ai.operation.name` | string | The name of the operation being performed. [2] | `chat`; `text_completion` |  |
| `gen_ai.prompt` | string | The full prompt sent to the GenAI model. [3] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` |  |
| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` |  |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` |  |
| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. | `gpt-4` |  |
| `gen_ai.request.presence_penalty` | double | The presence penalty setting for the GenAI request. | `0.1` |  |
| `gen_ai.request.stop_sequences` | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` |  |
| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` |  |
| `gen_ai.request.top_k` | double | The top_k sampling setting for the GenAI request. | `1.0` |  |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` |  |
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]` |  |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` |  |
| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` |  |
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [4] | `openai` |  |
| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` |  |
| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` |  |
| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` |  |
**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
**[2]:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for that specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
**[3]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
**[4]:** The `gen_ai.system` describes a family of GenAI models, with the specific model identified
by the `gen_ai.request.model` and `gen_ai.response.model` attributes.
The actual GenAI product may differ from the one identified by the client.
For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
is set to `openai` based on the instrumentation's best knowledge.
For custom models, a custom friendly name SHOULD be used.
If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
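
As an illustration of how these attributes fit together, here is a minimal, hypothetical Python sketch of a client-side instrumentation recording a chat request and its response on a span, serializing the prompt and completion as JSON strings per notes [1] and [3]. The span name, tracer name, and the OpenAI-style client object are illustrative assumptions, not requirements defined by this table.

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")  # illustrative tracer name


def record_chat_call(client, messages, model="gpt-4"):
    # The span name "chat {model}" is an illustrative choice, not mandated here.
    with tracer.start_as_current_span(f"chat {model}") as span:
        span.set_attribute("gen_ai.system", "openai")
        span.set_attribute("gen_ai.operation.name", "chat")
        span.set_attribute("gen_ai.request.model", model)
        span.set_attribute("gen_ai.request.max_tokens", 100)
        span.set_attribute("gen_ai.request.temperature", 0.0)
        # Notes [1] and [3]: prompt and completion are JSON strings in the
        # OpenAI messages format.
        span.set_attribute("gen_ai.prompt", json.dumps(messages))

        # Assumes an OpenAI-style Python client; any GenAI client would work similarly.
        response = client.chat.completions.create(model=model, messages=messages)

        span.set_attribute("gen_ai.response.id", response.id)
        span.set_attribute("gen_ai.response.model", response.model)
        span.set_attribute(
            "gen_ai.response.finish_reasons",
            [choice.finish_reason for choice in response.choices],
        )
        span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
        span.set_attribute(
            "gen_ai.completion",
            json.dumps(
                [{"role": c.message.role, "content": c.message.content} for c in response.choices]
            ),
        )
        return response
```
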
`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |
`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
| ----------- | ----------- | ---------------------------------------------------------------- |
| `anthropic` | Anthropic |  |
| `cohere` | Cohere |  |
| `openai` | OpenAI |  |
| `vertex_ai` | Vertex AI |  |
`gen_ai.token.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
| Value | Description | Stability |
| -------- | ------------------------------------------ | ---------------------------------------------------------------- |
| `input` | Input tokens (prompt, input, etc.) |  |
| `output` | Output tokens (completion, response, etc.) |  |
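
`gen_ai.token.type` is typically used to distinguish input from output tokens on token-usage measurements rather than on spans. The sketch below uses the OpenTelemetry metrics API; the instrument name, unit, and description are illustrative assumptions, since this document only defines the attribute values.

```python
from opentelemetry import metrics

meter = metrics.get_meter("example.genai.instrumentation")  # illustrative meter name

# Hypothetical histogram; only the `gen_ai.*` attribute keys and values below
# come from the tables in this document.
token_usage = meter.create_histogram(
    name="gen_ai.client.token.usage",
    unit="{token}",
    description="Number of input and output tokens used per GenAI request",
)


def record_token_usage(input_tokens, output_tokens, model="gpt-4"):
    common = {
        "gen_ai.system": "openai",
        "gen_ai.operation.name": "chat",
        "gen_ai.request.model": model,
    }
    # One measurement per token type, distinguished by `gen_ai.token.type`.
    token_usage.record(input_tokens, attributes={**common, "gen_ai.token.type": "input"})
    token_usage.record(output_tokens, attributes={**common, "gen_ai.token.type": "output"})
```
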
## Deprecated GenAI Attributes
Describes deprecated `gen_ai` attributes.
| Attribute | Type | Description | Examples | Stability |
| -------------------------------- | ---- | ----------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------ |
| `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | Replaced by `gen_ai.usage.output_tokens` attribute. |
| `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | Replaced by `gen_ai.usage.input_tokens` attribute. |
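
Migrating from the deprecated attributes is a direct rename of the keys. A minimal sketch of such a rename over a plain attribute dictionary (the helper name is hypothetical):

```python
# Deprecated attribute keys map one-to-one onto their replacements.
DEPRECATED_TO_CURRENT = {
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
}


def migrate_attributes(attributes: dict) -> dict:
    """Return a copy of `attributes` with deprecated gen_ai keys renamed."""
    return {DEPRECATED_TO_CURRENT.get(key, key): value for key, value in attributes.items()}


# Example:
# migrate_attributes({"gen_ai.usage.prompt_tokens": 42})
# -> {"gen_ai.usage.input_tokens": 42}
```
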