GenAI events - yamlify body definitions and cosmetic examples improvements (#1469)
This commit is contained in:
parent
f0974a892c
commit
abd92c153b

@ -0,0 +1,5 @@
change_type: 'enhancement'
component: gen_ai
note: Yamlify gen_ai events and clean up examples.
issues: [1469]

@ -10,15 +10,11 @@ linkTitle: Generative AI events

<!-- toc -->

- [Event: `gen_ai.system.message`](#event-gen_aisystemmessage)
- [Event: `gen_ai.user.message`](#event-gen_aiusermessage)
- [Event: `gen_ai.assistant.message`](#event-gen_aiassistantmessage)
- [Event: `gen_ai.tool.message`](#event-gen_aitoolmessage)
- [Event: `gen_ai.choice`](#event-gen_aichoice)
- [Custom events](#custom-events)
- [Examples](#examples)
  - [Chat completion](#chat-completion)

@ -29,7 +25,7 @@ linkTitle: Generative AI events

GenAI instrumentations MAY capture user inputs sent to the model and responses received from it as [events](https://github.com/open-telemetry/opentelemetry-specification/tree/v1.39.0/specification/logs/event-api.md).

> [!NOTE]
> The Event API is experimental and not yet available in some languages. Check the [spec-compliance matrix](https://github.com/open-telemetry/opentelemetry-specification/blob/main/spec-compliance-matrix.md#events) to see the implementation status in the corresponding language.

Instrumentations MAY capture inputs and outputs if and only if the application has enabled the collection of this data.

@ -50,17 +46,21 @@ Telemetry consumers SHOULD expect to receive unknown body fields.

Instrumentations SHOULD NOT capture undocumented body fields and MUST follow the documented defaults for known fields.
Instrumentations MAY offer configuration options that allow disabling events or capturing all fields.

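As a non-normative illustration of such a configuration option, some instrumentations gate content capture behind an environment variable. The variable name and helper below are illustrative assumptions, not requirements of this document:

```python
import os

def content_capture_enabled() -> bool:
    """Return True when the application opted into capturing message content.

    The variable name is an illustrative convention used by some
    instrumentations, not something mandated by this document.
    """
    value = os.environ.get("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", "false")
    return value.strip().lower() == "true"

def event_body(content: str) -> dict:
    """Include the Opt-In `content` field only when capture is enabled."""
    return {"content": content} if content_capture_enabled() else {}
```

With the variable unset, `event_body("hi")` yields an empty body, so no prompt text leaves the process.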
## Event: `gen_ai.system.message`

<!-- semconv event.gen_ai.system.message -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** 

The event name MUST be `gen_ai.system.message`.

This event describes the system instructions passed to the GenAI model.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |
@ -89,97 +89,281 @@ If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.

| `openai` | OpenAI |  |
| `vertex_ai` | Vertex AI |  |

**Body fields:**

| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the system message. | `You're a helpful bot` | `Opt-In` |  |
| `role` | string | The actual role of the message author as passed in the message. | `system`; `instruction` | `Conditionally Required` if available and not equal to `system`. |  |

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

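As a non-normative sketch of the table above, an instrumentation might assemble the `gen_ai.system.message` body as a plain map before handing it to the Event API; the helper name is illustrative:

```python
import json

def system_message_body(content=None, role="system"):
    """Build a gen_ai.system.message event body.

    - `content` is Opt-In: include it only when content capture is enabled.
    - `role` is Conditionally Required: record it only when it differs
      from the default "system".
    """
    body = {}
    if content is not None:
        body["content"] = content
    if role != "system":
        body["role"] = role
    return body

# With content capture disabled the body is empty:
print(json.dumps(system_message_body()))  # {}
# With content enabled and a non-default role:
print(json.dumps(system_message_body("You're a helpful bot", role="instruction")))
```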
## Event: `gen_ai.user.message`

<!-- semconv event.gen_ai.user.message -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** 

The event name MUST be `gen_ai.user.message`.

This event describes the user message passed to the GenAI model.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |

**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with a specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client.
For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
is set to `openai` based on the instrumentation's best knowledge.

For a custom model, a custom friendly name SHOULD be used.
If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.

---

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic |  |
| `aws.bedrock` | AWS Bedrock |  |
| `az.ai.inference` | Azure AI Inference |  |
| `cohere` | Cohere |  |
| `ibm.watsonx.ai` | IBM Watsonx AI |  |
| `openai` | OpenAI |  |
| `vertex_ai` | Vertex AI |  |

**Body fields:**

| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the user message. | `What's the weather in Paris?` | `Opt-In` |  |
| `role` | string | The actual role of the message author as passed in the message. | `user`; `customer` | `Conditionally Required` if available and not equal to `user`. |  |

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

## Event: `gen_ai.assistant.message`

<!-- semconv event.gen_ai.assistant.message -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** 

The event name MUST be `gen_ai.assistant.message`.

This event describes the assistant message passed to the GenAI system.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |

**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with a specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client.
For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
is set to `openai` based on the instrumentation's best knowledge.

For a custom model, a custom friendly name SHOULD be used.
If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.

---

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic |  |
| `aws.bedrock` | AWS Bedrock |  |
| `az.ai.inference` | Azure AI Inference |  |
| `cohere` | Cohere |  |
| `ibm.watsonx.ai` | IBM Watsonx AI |  |
| `openai` | OpenAI |  |
| `vertex_ai` | Vertex AI |  |

**Body fields:**

| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the assistant message. | `The weather in Paris is rainy and overcast, with temperatures around 57°F` | `Opt-In` |  |
| `role` | string | The actual role of the message author as passed in the message. | `assistant`; `bot` | `Conditionally Required` if available and not equal to `assistant`. |  |
| `tool_calls`: | map[] | The tool calls generated by the model, such as function calls. | | `Conditionally Required` if available |  |
| `function`: | map | The function call. | | `Required` |  |
| `arguments` | undefined | The arguments of the function as provided in the LLM response. [1] | `{\"location\": \"Paris\"}` | `Opt-In` |  |
| `name` | string | The name of the function. | `get_weather` | `Required` |  |
| `id` | string | The id of the tool call. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` |  |
| `type` | enum | The type of the tool. | `function` | `Required` |  |

**[1]:** Models usually return arguments as a JSON string. In this case, it's RECOMMENDED to provide arguments as-is without attempting to deserialize them.
Semantic conventions for individual systems MAY specify a different type for the arguments field.

`type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `function` | Function |  |

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

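The nested body structure above can be sketched as follows. This is a non-normative illustration (the helper name is assumed); note that the tool call `arguments` stay a JSON string, per the note above:

```python
def assistant_message_body(content=None, tool_calls=None):
    """Build a gen_ai.assistant.message event body.

    Per the note above, tool call arguments that the model returned as a
    JSON string are recorded as-is, without deserializing them.
    """
    body = {}
    if content is not None:
        body["content"] = content
    if tool_calls:
        body["tool_calls"] = tool_calls
    return body

# The model returned arguments as a JSON string; keep it verbatim.
raw_arguments = '{"location": "Paris"}'
body = assistant_message_body(
    tool_calls=[{
        "id": "call_mszuSIzqtI65i1wAUOE8w5H4",
        "type": "function",
        "function": {"name": "get_weather", "arguments": raw_arguments},
    }]
)
assert isinstance(body["tool_calls"][0]["function"]["arguments"], str)
```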
## Event: `gen_ai.tool.message`

<!-- semconv event.gen_ai.tool.message -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** 

The event name MUST be `gen_ai.tool.message`.

This event describes the response from a tool or function call passed to the GenAI model.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |

**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with a specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client.
For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
is set to `openai` based on the instrumentation's best knowledge.

For a custom model, a custom friendly name SHOULD be used.
If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.

---

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic |  |
| `aws.bedrock` | AWS Bedrock |  |
| `az.ai.inference` | Azure AI Inference |  |
| `cohere` | Cohere |  |
| `ibm.watsonx.ai` | IBM Watsonx AI |  |
| `openai` | OpenAI |  |
| `vertex_ai` | Vertex AI |  |

**Body fields:**

| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| `content` | undefined | The contents of the tool message. | `rainy, 57°F` | `Opt-In` |  |
| `id` | string | Tool call id that this message is responding to. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` |  |
| `role` | string | The actual role of the message author as passed in the message. | `tool`; `function` | `Conditionally Required` if available and not equal to `tool`. |  |

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

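A non-normative sketch of the `gen_ai.tool.message` body, with the helper name assumed for illustration:

```python
def tool_message_body(tool_call_id, content=None, role="tool"):
    """Build a gen_ai.tool.message event body.

    `id` is Required and echoes the tool call this message responds to;
    `content` is Opt-In; `role` is recorded only when it is not "tool".
    """
    body = {"id": tool_call_id}
    if content is not None:
        body["content"] = content
    if role != "tool":
        body["role"] = role
    return body

print(tool_message_body("call_mszuSIzqtI65i1wAUOE8w5H4", content="rainy, 57°F"))
```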
## Event: `gen_ai.choice`

This event describes the model-generated individual chat response (choice).
If the GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.

When the response is streamed, instrumentations that report response events MUST reconstruct and report the full message and MUST NOT report individual chunks as events.
If the request to the GenAI model fails with an error before content is received, the instrumentation SHOULD report an event with truncated content (if enabled). If `finish_reason` was not received, it MUST be set to `error`.

<!-- semconv event.gen_ai.choice -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

**Status:** 

The event name MUST be `gen_ai.choice`.

This event describes the Gen AI response message.

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` |  |

**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with a specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client.
For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
is set to `openai` based on the instrumentation's best knowledge.

For a custom model, a custom friendly name SHOULD be used.
If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.

---

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic |  |
| `aws.bedrock` | AWS Bedrock |  |
| `az.ai.inference` | Azure AI Inference |  |
| `cohere` | Cohere |  |
| `ibm.watsonx.ai` | IBM Watsonx AI |  |
| `openai` | OpenAI |  |
| `vertex_ai` | Vertex AI |  |

**Body fields:**

| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| `finish_reason` | enum | The reason the model stopped generating tokens. | `stop`; `tool_calls`; `content_filter` | `Required` |  |
| `index` | int | The index of the choice in the list of choices. | `0`; `1` | `Required` |  |
| `message`: | map | GenAI response message. | | `Recommended` |  |
| `content` | undefined | The contents of the assistant message. | `The weather in Paris is rainy and overcast, with temperatures around 57°F` | `Opt-In` |  |
| `role` | string | The actual role of the message author as passed in the message. | `assistant`; `bot` | `Conditionally Required` if available and not equal to `assistant`. |  |
| `tool_calls`: | map[] | The tool calls generated by the model, such as function calls. | | `Conditionally Required` if available |  |
| `function`: | map | The function that the model called. | | `Required` |  |
| `arguments` | undefined | The arguments of the function as provided in the LLM response. [1] | `{\"location\": \"Paris\"}` | `Opt-In` |  |
| `name` | string | The name of the function. | `get_weather` | `Required` |  |
| `id` | string | The id of the tool call. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` |  |
| `type` | enum | The type of the tool. | `function` | `Required` |  |

**[1]:** Models usually return arguments as a JSON string. In this case, it's RECOMMENDED to provide arguments as-is without attempting to deserialize them.
Semantic conventions for individual systems MAY specify a different type for the arguments field.

`finish_reason` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `content_filter` | Content Filter |  |
| `error` | Error |  |
| `length` | Length |  |
| `stop` | Stop |  |
| `tool_calls` | Tool Calls |  |

`type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `function` | Function |  |

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

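A non-normative sketch of the `gen_ai.choice` body that also applies the `error` fallback described above (the helper name is illustrative):

```python
def choice_body(index, finish_reason=None, message=None):
    """Build a gen_ai.choice event body.

    If the request failed before a finish_reason was received, fall back
    to "error" as required above; `message` may carry the truncated
    content accumulated so far (when content capture is enabled).
    """
    return {
        "index": index,
        "finish_reason": finish_reason if finish_reason is not None else "error",
        "message": message or {},
    }

# A successful completion:
print(choice_body(0, "stop", {"content": "rainy, 57°F"}))
# The stream failed before finish_reason arrived:
print(choice_body(0, None, {"content": "The weather in Pa"}))
```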
## Custom events
@ -190,15 +374,27 @@ SHOULD follow `gen_ai.{gen_ai.system}.*` naming pattern for system-specific even

### Chat completion

This is an example of telemetry generated for a chat completion call with system and user messages.

```mermaid
%%{init:
{
"sequence": { "messageAlign": "left", "htmlLabels":true },
"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
    participant A as Application
    participant I as Instrumented Client
    participant M as Model
    A->>+I: #U+200D
    I->>M: gen_ai.system.message: You are a helpful bot<br/>gen_ai.user.message: Tell me a joke about OpenTelemetry
    Note left of I: GenAI Client span
    I-->M: gen_ai.choice: Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!
    I-->>-A: #U+200D
```

**GenAI Client span:**

| Attribute name | Value |
|---------------------------------|--------------------------------------------|
@ -213,79 +409,97 @@ Span:
|
|||
| `gen_ai.usage.input_tokens` | `52` |
|
||||
| `gen_ai.response.finish_reasons`| `["stop"]` |
|
||||
|
||||
Events:
|
||||
**Events:**
|
||||
|
||||
1. `gen_ai.system.message`.
|
||||
1. `gen_ai.system.message`
|
||||
|
||||
| Property | Value |
|
||||
|---------------------|-------------------------------------------------------|
|
||||
| `gen_ai.system` | `"openai"` |
|
||||
| Event body | `{"content": "You're a friendly bot that answers questions about OpenTelemetry."}` |
|
||||
| Event body (with content enabled) | `{"content": "You're a helpful bot"}` |
|
||||
|
||||
2. `gen_ai.user.message`
|
||||
|
||||
| Property | Value |
|
||||
|---------------------|-------------------------------------------------------|
|
||||
| `gen_ai.system` | `"openai"` |
|
||||
| Event body | `{"content":"How to instrument GenAI library with OTel?"}` |
|
||||
| Event body (with content enabled) | `{"content":"Tell me a joke about OpenTelemetry"}` |
|
||||
|
||||
3. `gen_ai.choice`
|
||||
|
||||
| Property | Value |
|
||||
|---------------------|-------------------------------------------------------|
|
||||
| `gen_ai.system` | `"openai"` |
|
||||
| Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}}` |
|
||||
| Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
|
||||
| Event body (without content) | `{"index":0,"finish_reason":"stop","message":{}}` |
|
||||
|
||||
### Tools
|
||||
|
||||
This example covers the following scenario:
|
||||
This is an example of telemetry generated for a chat completion call with user message and function definition
|
||||
that results in a model requesting application to call provided function. Application executes a function and
|
||||
requests another completion now with the tool response.
|
||||
|
||||
1. Application requests chat completion from OpenAI GPT-4 model and provides a function definition.
|
||||
|
||||
- Application provides the following prompt:
|
||||
- User message: `How to instrument GenAI library with OTel?`
|
||||
- Application defines a tool (a function) names `get_link_to_otel_semconv` with single string argument named `semconv`
|
||||
|
||||
2. The model responds with a tool call request which application executes
|
||||
3. The application requests chat completion again now with the tool execution result
|
||||
```mermaid
|
||||
%%{init:
|
||||
{
|
||||
"sequence": { "messageAlign": "left", "htmlLabels":true },
|
||||
"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
|
||||
}
|
||||
}%%
|
||||
sequenceDiagram
|
||||
participant A as Application
|
||||
participant I as Instrumented Client
|
||||
participant M as Model
|
||||
A->>+I: #U+200D
|
||||
I->>M: gen_ai.user.message: What's the weather in Paris?
|
||||
Note left of I: GenAI Client span 1
|
||||
I-->M: gen_ai.choice: Call to the get_weather tool with Paris as the location argument.
|
||||
I-->>-A: #U+200D
|
||||
A -->> A: parse tool parameters<br/>execute tool<br/>update chat history
|
||||
A->>+I: #U+200D
|
||||
I->>M: gen_ai.user.message: What's the weather in Paris?<br/>gen_ai.assistant.message: get_weather tool call<br/>gen_ai.tool.message: rainy, 57°F
|
||||
Note left of I: GenAI Client span 2
|
||||
I-->M: gen_ai.choice: The weather in Paris is rainy and overcast, with temperatures around 57°F
|
||||
I-->>-A: #U+200D
|
||||
```
|
||||
|
||||
Here's the telemetry generated for each step in this scenario:
|
||||
|
||||
1. Chat completion resulting in a tool call.
|
||||
**GenAI Client span 1:**
|
||||
|
||||
| Attribute name | Value |
|---------------------|-------------------------------------------------------|
| Span name | `"chat gpt-4"` |
| `gen_ai.system` | `"openai"` |
| `gen_ai.request.model`| `"gpt-4"` |
| `gen_ai.request.max_tokens`| `200` |
| `gen_ai.request.top_p`| `1.0` |
| `gen_ai.response.id`| `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
| `gen_ai.response.model`| `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens`| `17` |
| `gen_ai.usage.input_tokens`| `47` |
| `gen_ai.response.finish_reasons`| `["tool_calls"]` |

**Events**:

All the following events are parented to the **GenAI chat span 1**.

1. `gen_ai.user.message` (not reported when capturing content is disabled)

| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body | `{"content":"What's the weather in Paris?"}` |

2. `gen_ai.choice`

| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}}` |
| Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}}` |
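The only difference between the two body variants is the opt-in `function.arguments` field. The following sketch shows one way to produce both serialized variants from a single parsed choice (plain Python; the helper name and dict shapes are illustrative assumptions, not part of the convention):

```python
import json

def choice_event_body(choice: dict, capture_content: bool) -> str:
    """Build a gen_ai.choice event body for a tool-call choice.

    When content capture is disabled, the opt-in `function.arguments`
    field is dropped; structural fields (ids, names, types) are kept.
    """
    tool_calls = []
    for call in choice["message"]["tool_calls"]:
        function = {"name": call["function"]["name"]}
        if capture_content and "arguments" in call["function"]:
            function["arguments"] = call["function"]["arguments"]
        tool_calls.append({"id": call["id"], "function": function, "type": call["type"]})
    body = {
        "index": choice["index"],
        "finish_reason": choice["finish_reason"],
        "message": {"tool_calls": tool_calls},
    }
    return json.dumps(body, separators=(",", ":"))

# The choice as returned by the model in this example.
choice = {
    "index": 0,
    "finish_reason": "tool_calls",
    "message": {
        "tool_calls": [
            {
                "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
                "function": {"name": "get_weather", "arguments": '{"location":"Paris"}'},
                "type": "function",
            }
        ]
    },
}

with_content = choice_event_body(choice, capture_content=True)
without_content = choice_event_body(choice, capture_content=False)
```

Note that the arguments string is passed through verbatim rather than parsed, matching the recommendation to record arguments as provided in the response.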

2. The application executes the tool call. The application may create a span for this call, which is not covered by this semantic convention.
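This step typically looks like the following sketch (plain Python; `get_weather`, the `TOOLS` registry, and the message dict shapes are illustrative assumptions mirroring the OpenAI chat format, not something this convention prescribes):

```python
import json

# Hypothetical local tool; a real application would call a weather API.
def get_weather(location: str) -> str:
    return "rainy, 57°F"

TOOLS = {"get_weather": get_weather}

# The choice returned by the first completion call (GenAI Client span 1).
choice = {
    "finish_reason": "tool_calls",
    "message": {
        "tool_calls": [
            {
                "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
                "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'},
                "type": "function",
            }
        ]
    },
}

chat_history = [{"role": "user", "content": "What's the weather in Paris?"}]

if choice["finish_reason"] == "tool_calls":
    # Keep the assistant message (with its tool calls) in the history ...
    chat_history.append({"role": "assistant", **choice["message"]})
    # ... then execute each tool and append one tool message per call.
    for call in choice["message"]["tool_calls"]:
        args = json.loads(call["function"]["arguments"])
        result = TOOLS[call["function"]["name"]](**args)
        chat_history.append({"role": "tool", "tool_call_id": call["id"], "content": result})

# chat_history is now sent back to the model (GenAI Client span 2).
```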

3. Final chat completion call.

**GenAI Client span 2:**

| Attribute name | Value |
|---------------------------------|-------------------------------------------------------|

@@ -300,55 +514,66 @@ Here's the telemetry generated for each step in this scenario:

| `gen_ai.usage.input_tokens` | `47` |
| `gen_ai.response.finish_reasons`| `["stop"]` |

**Events**:

All the following events are parented to the **GenAI chat span 2**.

In this example, the event content matches the original messages, but applications may also drop messages or change their content.

1. `gen_ai.user.message`

| Property | Value |
|----------------------------------|------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body | `{"content":"What's the weather in Paris?"}` |

2. `gen_ai.assistant.message`

| Property | Value |
|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
| Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |

3. `gen_ai.tool.message`

| Property | Value |
|----------------------------------|------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body (content enabled) | `{"content":"rainy, 57°F","id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
| Event body (content not enabled) | `{"id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |

4. `gen_ai.choice`

| Property | Value |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F"}}` |
| Event body (content not enabled) | `{"index":0,"finish_reason":"stop","message":{}}` |

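Across all of the event bodies above, disabling content capture removes only the opt-in fields (`content`, and `arguments` inside `function`) while structural fields survive. One way an instrumentation might implement that redaction generically (plain Python sketch; the helper name is illustrative):

```python
def redact_body(body: dict) -> dict:
    """Return a copy of an event body without opt-in content fields.

    Drops `content` everywhere and `arguments` inside function objects,
    keeping ids, names, roles, indices and finish reasons intact.
    """
    def walk(value, parent_key=None):
        if isinstance(value, dict):
            return {
                k: walk(v, k)
                for k, v in value.items()
                if k != "content" and not (k == "arguments" and parent_key == "function")
            }
        if isinstance(value, list):
            return [walk(v, parent_key) for v in value]
        return value

    return walk(body)

# Bodies from the example above.
tool_message = {"content": "rainy, 57°F", "id": "call_VSPygqKTWdrhaFErNvMV18Yl"}
assistant_message = {
    "tool_calls": [
        {
            "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
            "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'},
            "type": "function",
        }
    ]
}
```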
### Chat completion with multiple choices

This example covers the scenario when the user requests the model to generate two completions for the same prompt:

```mermaid
%%{init:
{
  "sequence": { "messageAlign": "left", "htmlLabels":true },
  "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
    participant A as Application
    participant I as Instrumented Client
    participant M as Model
    A->>+I: #U+200D
    I->>M: gen_ai.system.message - "You are a helpful bot"<br/>gen_ai.user.message - "Tell me a joke about OpenTelemetry"
    Note left of I: GenAI Client span
    I-->M: gen_ai.choice - Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!<br/>gen_ai.choice - Why did OpenTelemetry get promoted? It had great span of control!
    I-->>-A: #U+200D
```
**GenAI Client Span**:

| Attribute name | Value |
|---------------------|--------------------------------------------|

@@ -361,24 +586,26 @@ Span:
| `gen_ai.response.model`| `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens`| `77` |
| `gen_ai.usage.input_tokens`| `52` |
| `gen_ai.response.finish_reasons`| `["stop", "stop"]` |

**Events**:

All events are parented to the GenAI chat span above.

1. `gen_ai.system.message`: the same as in the [Chat Completion](#chat-completion) example
2. `gen_ai.user.message`: the same as in the [Chat Completion](#chat-completion) example
3. `gen_ai.choice`

| Property | Value |
|------------------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |

4. `gen_ai.choice`

| Property | Value |
|------------------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
| Event body (content enabled) | `{"index":1,"finish_reason":"stop","message":{"content":"Why did OpenTelemetry get promoted? It had great span of control!"}}` |
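Since each choice is recorded as its own event, an instrumentation typically loops over the returned choices and emits one body per choice, preserving the original `index`. A sketch (plain Python; the response shape mirrors the OpenAI chat completion format and is an illustrative assumption):

```python
# The two choices returned by the model in this example.
response_choices = [
    {"index": 0, "finish_reason": "stop",
     "message": {"content": "Why did the developer bring OpenTelemetry to the party? "
                            "Because it always knows how to trace the fun!"}},
    {"index": 1, "finish_reason": "stop",
     "message": {"content": "Why did OpenTelemetry get promoted? It had great span of control!"}},
]

def choice_events(choices, capture_content: bool):
    """Yield one gen_ai.choice event body per returned choice."""
    for choice in choices:
        # Message content is opt-in; drop it when capture is disabled.
        message = dict(choice["message"]) if capture_content else {}
        yield {"index": choice["index"],
               "finish_reason": choice["finish_reason"],
               "message": message}

events = list(choice_events(response_choices, capture_content=True))
```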

[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status

@@ -12,32 +12,172 @@ groups:
    type: event
    stability: experimental
    brief: >
      This event describes the system instructions passed to the GenAI model.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.system.message
      requirement_level: opt_in
      type: map
      fields:
        - id: content
          type: undefined
          stability: experimental
          brief: >
            The contents of the system message.
          examples: ["You're a helpful bot"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: experimental
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["system", "instruction"]
          requirement_level:
            conditionally_required: if available and not equal to `system`.

  - id: event.gen_ai.user.message
    name: gen_ai.user.message
    type: event
    stability: experimental
    brief: >
      This event describes the user message passed to the GenAI model.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.user.message
      requirement_level: opt_in
      type: map
      fields:
        - id: content
          type: undefined
          stability: experimental
          brief: >
            The contents of the user message.
          examples: ["What's the weather in Paris?"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: experimental
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["user", "customer"]
          requirement_level:
            conditionally_required: if available and not equal to `user`.

  - id: event.gen_ai.assistant.message
    name: gen_ai.assistant.message
    type: event
    stability: experimental
    brief: >
      This event describes the assistant message passed to the GenAI system.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.assistant.message
      requirement_level: opt_in
      type: map
      fields:
        - id: content
          type: undefined
          stability: experimental
          brief: >
            The contents of the assistant message.
          examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: experimental
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["assistant", "bot"]
          requirement_level:
            conditionally_required: if available and not equal to `assistant`.
        - id: tool_calls
          type: map[]
          stability: experimental
          brief: >
            The tool calls generated by the model, such as function calls.
          requirement_level:
            conditionally_required: if available
          fields:
            - id: id
              type: string
              stability: experimental
              brief: >
                The id of the tool call.
              examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
              requirement_level: required
            - id: type
              type: enum
              members:
                - id: function
                  value: 'function'
                  brief: Function
                  stability: experimental
              stability: experimental
              brief: >
                The type of the tool.
              examples: ["function"]
              requirement_level: required
            - id: function
              type: map
              stability: experimental
              brief: >
                The function call.
              requirement_level: required
              fields:
                - id: name
                  type: string
                  stability: experimental
                  brief: >
                    The name of the function.
                  examples: ["get_weather"]
                  requirement_level: required
                - id: arguments
                  type: undefined
                  stability: experimental
                  brief: >
                    The arguments of the function as provided in the LLM response.
                  note: >
                    Models usually return arguments as a JSON string. In this case, it's
                    RECOMMENDED to provide arguments as is without attempting to deserialize them.

                    Semantic conventions for individual systems MAY specify a different type for
                    the arguments field.
                  examples: ['{\"location\": \"Paris\"}']
                  requirement_level: opt_in

  - id: event.gen_ai.tool.message
    name: gen_ai.tool.message
    type: event
    stability: experimental
    brief: >
      This event describes the response from a tool or function call passed to the GenAI model.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.tool.message
      requirement_level: opt_in
      type: map
      fields:
        - id: content
          type: undefined
          stability: experimental
          brief: >
            The contents of the tool message.
          examples: ["rainy, 57°F"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: experimental
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["tool", "function"]
          requirement_level:
            conditionally_required: if available and not equal to `tool`.
        - id: id
          type: string
          stability: experimental
          brief: >
            Tool call id that this message is responding to.
          examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
          requirement_level: required

  - id: event.gen_ai.choice
    name: gen_ai.choice

@@ -46,3 +186,123 @@ groups:
    brief: >
      This event describes the Gen AI response message.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.choice
      requirement_level: opt_in
      type: map
      note: >
        If the GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.
        When the response is streamed, instrumentations that report response events MUST reconstruct and report
        the full message and MUST NOT report individual chunks as events.
        If the request to the GenAI model fails with an error before content is received,
        instrumentation SHOULD report an event with truncated content (if enabled).
        If `finish_reason` was not received, it MUST be set to `error`.
      fields:
        - id: index
          type: int
          stability: experimental
          brief: >
            The index of the choice in the list of choices.
          examples: [0, 1]
          requirement_level: required
        - id: finish_reason
          type: enum
          members:
            - id: stop
              value: 'stop'
              stability: experimental
              brief: Stop
            - id: tool_calls
              value: 'tool_calls'
              stability: experimental
              brief: Tool Calls
            - id: content_filter
              value: 'content_filter'
              stability: experimental
              brief: Content Filter
            - id: length
              value: 'length'
              stability: experimental
              brief: Length
            - id: error
              value: 'error'
              stability: experimental
              brief: Error
          stability: experimental
          brief: >
            The reason the model stopped generating tokens.
          requirement_level: required
        - id: message
          type: map
          stability: experimental
          brief: >
            GenAI response message.
          requirement_level: recommended
          fields:
            - id: content
              type: undefined
              stability: experimental
              brief: >
                The contents of the assistant message.
              examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
              requirement_level: opt_in
            - id: role
              type: string
              stability: experimental
              brief: >
                The actual role of the message author as passed in the message.
              examples: ["assistant", "bot"]
              requirement_level:
                conditionally_required: if available and not equal to `assistant`.
            - id: tool_calls
              type: map[]
              stability: experimental
              brief: >
                The tool calls generated by the model, such as function calls.
              requirement_level:
                conditionally_required: if available
              fields:
                - id: id
                  type: string
                  stability: experimental
                  brief: >
                    The id of the tool call.
                  examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
                  requirement_level: required
                - id: type
                  type: enum
                  members:
                    - id: function
                      value: 'function'
                      brief: Function
                      stability: experimental
                  stability: experimental
                  brief: >
                    The type of the tool.
                  requirement_level: required
                - id: function
                  type: map
                  stability: experimental
                  brief: >
                    The function that the model called.
                  requirement_level: required
                  fields:
                    - id: name
                      type: string
                      stability: experimental
                      brief: >
                        The name of the function.
                      examples: ["get_weather"]
                      requirement_level: required
                    - id: arguments
                      type: undefined
                      stability: experimental
                      brief: >
                        The arguments of the function as provided in the LLM response.
                      note: >
                        Models usually return arguments as a JSON string. In this case, it's
                        RECOMMENDED to provide arguments as is without attempting to deserialize them.

                        Semantic conventions for individual systems MAY specify a different type for
                        the arguments field.
                      examples: ['{\"location\": \"Paris\"}']
                      requirement_level: opt_in

@@ -6,7 +6,10 @@
{% macro flatten(fields, ns, depth) %}{% if fields %}{% for f in fields | sort(attribute="id") %}
{% set ns.flat = [ns.flat, [{'field':f,'depth':depth}]] | flatten %}{% if f.fields %}{% set _= flatten(f.fields, ns, depth + 1) %}{% endif %}
{% endfor %}{% endif %}{% endmacro %}
{% macro field_name(field, depth) -%}
{%- set name= " " * 2 * depth ~ '`' ~ field.id ~ '`' -%}
{%- if (field.type == "map") or (field.type == "map[]") %}{{ name ~ ":"}}{% else -%}
{{ name }}{% endif %}{% endmacro %}
{#- Macro for creating body table -#}
{% macro generate(fields) %}{% if (fields | length > 0) %}{% set ns = namespace(flat=[])%}{% set _ = flatten(fields, ns, 0) %}| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|