# semantic-conventions/model/gen-ai/events.yaml
groups:
  - id: gen_ai.common.event.attributes
    type: attribute_group
    stability: development
    brief: >
      Describes common Gen AI event attributes.
    attributes:
      - ref: gen_ai.system
  - id: event.gen_ai.system.message
    name: gen_ai.system.message
    type: event
    stability: development
    brief: >
      This event describes the system instructions passed to the GenAI model.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.system.message
      requirement_level: opt_in
      stability: development
      type: map
      fields:
        - id: content
          type: undefined
          stability: development
          brief: >
            The contents of the system message.
          examples: ["You're a helpful bot"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: development
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["system", "instruction"]
          requirement_level:
            conditionally_required: if available and not equal to `system`.
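# Illustrative `gen_ai.system.message` event body (a sketch, not part of the
# schema; the `role` field is assumed to be omitted when it equals `system`):
#   {"content": "You're a helpful bot", "role": "instruction"}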
  - id: event.gen_ai.user.message
    name: gen_ai.user.message
    type: event
    stability: development
    brief: >
      This event describes the user message passed to the GenAI model.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.user.message
      requirement_level: opt_in
      stability: development
      type: map
      fields:
        - id: content
          type: undefined
          stability: development
          brief: >
            The contents of the user message.
          examples: ["What's the weather in Paris?"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: development
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["user", "customer"]
          requirement_level:
            conditionally_required: if available and not equal to `user`.
  - id: event.gen_ai.assistant.message
    name: gen_ai.assistant.message
    type: event
    stability: development
    brief: >
      This event describes the assistant message passed to the GenAI system.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.assistant.message
      requirement_level: opt_in
      stability: development
      type: map
      fields:
        - id: content
          type: undefined
          stability: development
          brief: >
            The contents of the assistant message.
          examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: development
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["assistant", "bot"]
          requirement_level:
            conditionally_required: if available and not equal to `assistant`.
        - id: tool_calls
          type: map[]
          stability: development
          brief: >
            The tool calls generated by the model, such as function calls.
          requirement_level:
            conditionally_required: if available
          fields:
            - id: id
              type: string
              stability: development
              brief: >
                The id of the tool call.
              examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
              requirement_level: required
            - id: type
              type: enum
              members:
                - id: function
                  value: 'function'
                  brief: Function
                  stability: development
              stability: development
              brief: >
                The type of the tool.
              examples: ["function"]
              requirement_level: required
            - id: function
              type: map
              stability: development
              brief: >
                The function call.
              requirement_level: required
              fields:
                - id: name
                  type: string
                  stability: development
                  brief: >
                    The name of the function.
                  examples: ["get_weather"]
                  requirement_level: required
                - id: arguments
                  type: undefined
                  stability: development
                  brief: >
                    The arguments of the function as provided in the LLM response.
                  note: >
                    Models usually return arguments as a JSON string. In this case, it's
                    RECOMMENDED to provide arguments as-is without attempting to deserialize them.
                    Semantic conventions for individual systems MAY specify a different type for
                    the arguments field.
                  examples: ['{"location": "Paris"}']
                  requirement_level: opt_in
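# Illustrative `gen_ai.assistant.message` event body requesting a tool call
# (a sketch built from the fields above; `arguments` is kept as the raw JSON
# string the model returned):
#   {"role": "assistant",
#    "tool_calls": [{"id": "call_mszuSIzqtI65i1wAUOE8w5H4",
#                    "type": "function",
#                    "function": {"name": "get_weather",
#                                 "arguments": "{\"location\": \"Paris\"}"}}]}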
  - id: event.gen_ai.tool.message
    name: gen_ai.tool.message
    type: event
    stability: development
    brief: >
      This event describes the response from a tool or function call passed to the GenAI model.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.tool.message
      requirement_level: opt_in
      stability: development
      type: map
      fields:
        - id: content
          type: undefined
          stability: development
          brief: >
            The contents of the tool message.
          examples: ["rainy, 57°F"]
          requirement_level: opt_in
        - id: role
          type: string
          stability: development
          brief: >
            The actual role of the message author as passed in the message.
          examples: ["tool", "function"]
          requirement_level:
            conditionally_required: if available and not equal to `tool`.
        - id: id
          type: string
          stability: development
          brief: >
            The id of the tool call that this message is responding to.
          examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
          requirement_level: required
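# Illustrative `gen_ai.tool.message` event body (a sketch; the required `id`
# links this response back to the originating tool call's id):
#   {"content": "rainy, 57°F", "id": "call_mszuSIzqtI65i1wAUOE8w5H4"}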
  - id: event.gen_ai.choice
    name: gen_ai.choice
    type: event
    stability: development
    brief: >
      This event describes the Gen AI response message.
    extends: gen_ai.common.event.attributes
    body:
      id: gen_ai.choice
      requirement_level: opt_in
      stability: development
      type: map
      note: >
        If the GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.
        When the response is streamed, instrumentations that report response events MUST reconstruct and report
        the full message and MUST NOT report individual chunks as events.
        If the request to the GenAI model fails with an error before content is received,
        instrumentation SHOULD report an event with truncated content (if enabled).
        If `finish_reason` was not received, it MUST be set to `error`.
      fields:
        - id: index
          type: int
          stability: development
          brief: >
            The index of the choice in the list of choices.
          examples: [0, 1]
          requirement_level: required
        - id: finish_reason
          type: enum
          members:
            - id: stop
              value: 'stop'
              stability: development
              brief: Stop
            - id: tool_calls
              value: 'tool_calls'
              stability: development
              brief: Tool Calls
            - id: content_filter
              value: 'content_filter'
              stability: development
              brief: Content Filter
            - id: length
              value: 'length'
              stability: development
              brief: Length
            - id: error
              value: 'error'
              stability: development
              brief: Error
          stability: development
          brief: >
            The reason the model stopped generating tokens.
          requirement_level: required
        - id: message
          type: map
          stability: development
          brief: >
            GenAI response message.
          requirement_level: recommended
          fields:
            - id: content
              type: undefined
              stability: development
              brief: >
                The contents of the assistant message.
              examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
              requirement_level: opt_in
            - id: role
              type: string
              stability: development
              brief: >
                The actual role of the message author as passed in the message.
              examples: ["assistant", "bot"]
              requirement_level:
                conditionally_required: if available and not equal to `assistant`.
            - id: tool_calls
              type: map[]
              stability: development
              brief: >
                The tool calls generated by the model, such as function calls.
              requirement_level:
                conditionally_required: if available
              fields:
                - id: id
                  type: string
                  stability: development
                  brief: >
                    The id of the tool call.
                  examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
                  requirement_level: required
                - id: type
                  type: enum
                  members:
                    - id: function
                      value: 'function'
                      brief: Function
                      stability: development
                  stability: development
                  brief: >
                    The type of the tool.
                  requirement_level: required
                - id: function
                  type: map
                  stability: development
                  brief: >
                    The function that the model called.
                  requirement_level: required
                  fields:
                    - id: name
                      type: string
                      stability: development
                      brief: >
                        The name of the function.
                      examples: ["get_weather"]
                      requirement_level: required
                    - id: arguments
                      type: undefined
                      stability: development
                      brief: >
                        The arguments of the function as provided in the LLM response.
                      note: >
                        Models usually return arguments as a JSON string. In this case, it's
                        RECOMMENDED to provide arguments as-is without attempting to deserialize them.
                        Semantic conventions for individual systems MAY specify a different type for
                        the arguments field.
                      examples: ['{"location": "Paris"}']
                      requirement_level: opt_in
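# Illustrative `gen_ai.choice` event body (a sketch; per the note on this
# event, a streamed response is reconstructed into one full message first):
#   {"index": 0, "finish_reason": "stop",
#    "message": {"content": "The weather in Paris is rainy and overcast,
#                            with temperatures around 57°F"}}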