groups:
  - id: gen_ai.request
    type: span
    brief: >
      A request to an LLM is modeled as a span in a trace. The span name
      should be a low-cardinality value representing the request made to an
      LLM, such as the name of the API endpoint being called.
    attributes:
      - ref: gen_ai.system
        requirement_level: required
        note: >
          If not using a vendor-supplied model, provide a custom friendly
          name, such as the name of the company or project.

          If the instrumentation reports any attributes specific to a custom
          model, the value provided in `gen_ai.system` SHOULD match the
          custom attribute namespace segment. For example, if
          `gen_ai.system` is set to `the_best_llm`, custom attributes should
          be added in the `gen_ai.the_best_llm.*` namespace.

          If none of the above options apply, the instrumentation should set
          `_OTHER`.
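        # A hedged illustration (not part of the convention; the system name
        # and the custom attribute are hypothetical): a span for a custom
        # system named `the_best_llm` might carry
        #   gen_ai.system: "the_best_llm"
        #   gen_ai.the_best_llm.server_region: "eu-west-1"  # custom attribute in the matching namespace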
      - ref: gen_ai.request.model
        requirement_level: required
        note: >
          The name of the LLM a request is being made to. If the LLM is
          supplied by a vendor, then the value must be the exact name of the
          model requested. If the LLM is a fine-tuned custom model, the value
          should have a more specific name than the base model that's been
          fine-tuned.
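        # Hedged examples (model names are illustrative assumptions, not
        # prescribed by this convention):
        #   gen_ai.request.model: "gpt-4"              # exact vendor model name
        #   gen_ai.request.model: "gpt-4-acme-support" # hypothetical fine-tuned model, named more specifically than its base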
      - ref: gen_ai.request.max_tokens
        requirement_level: recommended
      - ref: gen_ai.request.temperature
        requirement_level: recommended
      - ref: gen_ai.request.top_p
        requirement_level: recommended
      - ref: gen_ai.response.id
        requirement_level: recommended
      - ref: gen_ai.response.model
        requirement_level: recommended
        note: >
          The name of the LLM serving the response, if available. If the LLM
          is supplied by a vendor, then the value must be the exact name of
          the model actually used. If the LLM is a fine-tuned custom model,
          the value should have a more specific name than the base model
          that's been fine-tuned.
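        # A hedged note on why the request and response model names can
        # differ (the snapshot name below is an assumption): a vendor may
        # resolve a requested alias to a pinned snapshot, e.g.
        #   gen_ai.request.model:  "gpt-4"
        #   gen_ai.response.model: "gpt-4-0613"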
      - ref: gen_ai.response.finish_reasons
        requirement_level: recommended
      - ref: gen_ai.usage.prompt_tokens
        requirement_level: recommended
      - ref: gen_ai.usage.completion_tokens
        requirement_level: recommended
    events:
      - gen_ai.content.prompt
      - gen_ai.content.completion
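  # A hedged end-to-end sketch (not part of the convention; every value below
  # is an illustrative assumption) of the attributes one `gen_ai.request`
  # span could carry under this group:
  #   span name: "chat_completions"
  #   gen_ai.system: "openai"
  #   gen_ai.request.model: "gpt-4"
  #   gen_ai.request.max_tokens: 200
  #   gen_ai.request.temperature: 0.7
  #   gen_ai.request.top_p: 1.0
  #   gen_ai.response.id: "chatcmpl-123"
  #   gen_ai.response.model: "gpt-4-0613"
  #   gen_ai.response.finish_reasons: ["stop"]
  #   gen_ai.usage.prompt_tokens: 42
  #   gen_ai.usage.completion_tokens: 17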
  - id: gen_ai.content.prompt
    name: gen_ai.content.prompt
    type: event
    brief: >
      In the lifetime of an LLM span, events for prompts sent and completions
      received may be created, depending on the configuration of the
      instrumentation.
    attributes:
      - ref: gen_ai.prompt
        requirement_level:
          conditionally_required: if and only if corresponding event is enabled
        note: >
          It's RECOMMENDED to format prompts as a JSON string matching the
          [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
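        # A hedged example of an event body under that recommendation (the
        # message content is illustrative):
        #   gen_ai.prompt: '[{"role": "system", "content": "You are a helpful assistant."},
        #                    {"role": "user", "content": "What is OpenTelemetry?"}]'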
  - id: gen_ai.content.completion
    name: gen_ai.content.completion
    type: event
    brief: >
      In the lifetime of an LLM span, events for prompts sent and completions
      received may be created, depending on the configuration of the
      instrumentation.
    attributes:
      - ref: gen_ai.completion
        requirement_level:
          conditionally_required: if and only if corresponding event is enabled
        note: >
          It's RECOMMENDED to format completions as a JSON string matching the
          [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
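        # A hedged example of a completion event body (the content is
        # illustrative):
        #   gen_ai.completion: '{"role": "assistant", "content": "OpenTelemetry is an observability framework for traces, metrics, and logs."}'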