<!--- Hugo front matter used to generate the website version of this page:
linkTitle: Metrics
--->

# Semantic conventions for generative AI metrics

**Status**: [Development][DocumentStatus]

<!-- toc -->

- [Generative AI client metrics](#generative-ai-client-metrics)
  - [Metric: `gen_ai.client.token.usage`](#metric-gen_aiclienttokenusage)
  - [Metric: `gen_ai.client.operation.duration`](#metric-gen_aiclientoperationduration)
- [Generative AI model server metrics](#generative-ai-model-server-metrics)
  - [Metric: `gen_ai.server.request.duration`](#metric-gen_aiserverrequestduration)
  - [Metric: `gen_ai.server.time_per_output_token`](#metric-gen_aiservertime_per_output_token)
  - [Metric: `gen_ai.server.time_to_first_token`](#metric-gen_aiservertime_to_first_token)

<!-- tocstop -->

> [!Warning]
>
> Existing GenAI instrumentations that are using
> [v1.36.0 of this document](https://github.com/open-telemetry/semantic-conventions/blob/v1.36.0/docs/gen-ai/README.md)
> (or prior):
>
> * SHOULD NOT change the version of the GenAI conventions that they emit by default.
>   Conventions include, but are not limited to, attributes, metric, span and event names,
>   span kind and unit of measure.
> * SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN`
>   as a comma-separated list of category-specific values. The list of values
>   includes:
>   * `gen_ai_latest_experimental` - emit the latest experimental version of
>     GenAI conventions (supported by the instrumentation) and do not emit the
>     old one (v1.36.0 or prior).
>   * The default behavior is to continue emitting whatever version of the GenAI
>     conventions the instrumentation was emitting (1.36.0 or prior).
>
> This transition plan will be updated to include a stable version before the
> GenAI conventions are marked as stable.

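The opt-in check described above can be sketched as follows. This is a minimal illustration, not part of the conventions; the helper name is hypothetical, and real instrumentations may cache the result or integrate it with their configuration layer.

```python
import os

def use_latest_genai_conventions():
    """Return True when the user opted in to the latest experimental
    GenAI conventions via OTEL_SEMCONV_STABILITY_OPT_IN."""
    opt_in = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    # The variable holds a comma-separated list of category-specific values;
    # other categories (e.g. for HTTP conventions) may appear alongside this one.
    values = {value.strip() for value in opt_in.split(",")}
    return "gen_ai_latest_experimental" in values

os.environ["OTEL_SEMCONV_STABILITY_OPT_IN"] = "http,gen_ai_latest_experimental"
latest = use_latest_genai_conventions()
```

When the variable is unset or does not contain `gen_ai_latest_experimental`, the helper returns `False`, matching the default behavior of continuing to emit the old conventions.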
## Generative AI client metrics

The conventions described in this section are specific to Generative AI client
applications.

**Disclaimer:** These are initial Generative AI client metric instruments
and attributes, but more may be added in the future.

The following metric instruments describe Generative AI operations. An
operation may be a request to an LLM, a function call, or some other
distinct action within a larger Generative AI workflow.

Individual systems may include additional system-specific attributes.
It is recommended to check system-specific documentation, if available.

### Metric: `gen_ai.client.token.usage`

This metric is [recommended][MetricRecommended] when an operation involves the usage
of tokens and the count is readily available.

For example, if the GenAI system returns usage information in the streaming response, it SHOULD be used. Or, if the GenAI system returns each token independently, the instrumentation SHOULD count the number of output tokens and record the result.

If the instrumentation cannot efficiently obtain the number of input and/or output tokens, it MAY allow users to enable offline token counting. Otherwise, it MUST NOT report the usage metric.

When systems report both used tokens and billable tokens, instrumentation MUST report billable tokens.

This metric SHOULD be specified with [ExplicitBucketBoundaries] of [1, 4, 16, 64, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864].

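A minimal sketch of recording this metric, using a stub object in place of an OpenTelemetry histogram so the example stays self-contained. The helper name and the `input_tokens`/`output_tokens` keys of the `usage` dictionary are hypothetical; attribute names come from the table below. Note that one measurement is recorded per token type.

```python
from dataclasses import dataclass, field

@dataclass
class StubHistogram:
    """Stand-in for an OpenTelemetry Histogram; collects (value, attributes) pairs."""
    records: list = field(default_factory=list)

    def record(self, value, attributes):
        self.records.append((value, dict(attributes)))

def record_token_usage(histogram, usage, *, provider, operation, request_model):
    """Record gen_ai.client.token.usage once per token type."""
    common = {
        "gen_ai.provider.name": provider,
        "gen_ai.operation.name": operation,
        "gen_ai.request.model": request_model,
    }
    # Input and output counts are separate data points, distinguished
    # by the required gen_ai.token.type attribute.
    histogram.record(usage["input_tokens"], {**common, "gen_ai.token.type": "input"})
    histogram.record(usage["output_tokens"], {**common, "gen_ai.token.type": "output"})

h = StubHistogram()
record_token_usage(h, {"input_tokens": 120, "output_tokens": 42},
                   provider="openai", operation="chat", request_model="gpt-4")
```

In real instrumentation, `StubHistogram` would be a histogram created from a `Meter` with the unit `{token}` and the bucket boundaries listed above.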
<!-- semconv metric.gen_ai.client.token.usage -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

| Name | Instrument Type | Unit (UCUM) | Description | Stability | Entity Associations |
| -------- | --------------- | ----------- | -------------- | --------- | ------ |
| `gen_ai.client.token.usage` | Histogram | `{token}` | Number of input and output tokens used. |  | |

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`gen_ai.token.type`](/docs/registry/attributes/gen-ai.md) | string | The type of token being counted. | `input`; `output` | `Required` |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [4] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |

**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.

**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms
are accessible using the OpenAI REST API and corresponding client libraries,
but may proxy or host models from different providers.

The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
attributes may help identify the actual system in use.

The `gen_ai.provider.name` attribute acts as a discriminator that
identifies the GenAI telemetry format flavor specific to that provider
within GenAI semantic conventions.
It SHOULD be set consistently with provider-specific attributes and signals.
For example, GenAI spans, metrics, and events related to AWS Bedrock
should have the `gen_ai.provider.name` set to `aws.bedrock` and include
applicable `aws.bedrock.*` attributes and are not expected to include
`openai.*` attributes.

**[3] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.

**[4] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.

---

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `create_agent` | Create GenAI agent |  |
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
| `execute_tool` | Execute a tool |  |
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
| `invoke_agent` | Invoke GenAI agent |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |

---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [5] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [6] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [7] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[5]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[6]:** May be used when specific backend is unknown.

**[7]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

---

`gen_ai.token.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `input` | Input tokens (prompt, input, etc.) |  |
| `output` | Output tokens (completion, response, etc.) |  |

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `gen_ai.client.operation.duration`

This metric is [required][MetricRequired].

This metric SHOULD be specified with [ExplicitBucketBoundaries] of [0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24, 20.48, 40.96, 81.92].

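A sketch of how an instrumentation might measure this duration, including the `error.type` attribute on failure. The helper and stub are hypothetical (a real instrumentation records to an OpenTelemetry histogram with unit `s`); the attribute names come from the table below.

```python
import time

def timed_operation(histogram, fn, *, provider, operation, request_model):
    """Run fn(), recording gen_ai.client.operation.duration in seconds.

    `histogram` is any object with a .record(value, attributes) method
    (an OpenTelemetry Histogram in real instrumentation; a stub here).
    """
    attrs = {
        "gen_ai.provider.name": provider,
        "gen_ai.operation.name": operation,
        "gen_ai.request.model": request_model,
    }
    start = time.monotonic()
    try:
        return fn()
    except Exception as exc:
        # error.type is conditionally required when the operation fails;
        # the exception's type name is one allowed low-cardinality choice.
        attrs["error.type"] = type(exc).__qualname__
        raise
    finally:
        # Duration is recorded for both successful and failed operations.
        histogram.record(time.monotonic() - start, attrs)

class _Stub:
    def __init__(self):
        self.records = []

    def record(self, value, attributes):
        self.records.append((value, attributes))

h = _Stub()
timed_operation(h, lambda: "ok",
                provider="openai", operation="chat", request_model="gpt-4")
try:
    timed_operation(h, lambda: 1 / 0,
                    provider="openai", operation="chat", request_model="gpt-4")
except ZeroDivisionError:
    pass
```

Using a monotonic clock avoids skew from wall-clock adjustments during long-running requests.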
<!-- semconv metric.gen_ai.client.operation.duration -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

| Name | Instrument Type | Unit (UCUM) | Description | Stability | Entity Associations |
| -------- | --------------- | ----------- | -------------- | --------- | ------ |
| `gen_ai.client.operation.duration` | Histogram | `s` | GenAI operation duration. |  | |

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [4] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [5] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |

**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.

**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms
are accessible using the OpenAI REST API and corresponding client libraries,
but may proxy or host models from different providers.

The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
attributes may help identify the actual system in use.

The `gen_ai.provider.name` attribute acts as a discriminator that
identifies the GenAI telemetry format flavor specific to that provider
within GenAI semantic conventions.
It SHOULD be set consistently with provider-specific attributes and signals.
For example, GenAI spans, metrics, and events related to AWS Bedrock
should have the `gen_ai.provider.name` set to `aws.bedrock` and include
applicable `aws.bedrock.*` attributes and are not expected to include
`openai.*` attributes.

**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library,
the canonical name of exception that occurred, or another low-cardinality error identifier.
Instrumentations SHOULD document the list of errors they report.

**[4] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.

**[5] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.

---

`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. |  |

---

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `create_agent` | Create GenAI agent |  |
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
| `execute_tool` | Execute a tool |  |
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
| `invoke_agent` | Invoke GenAI agent |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |

---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [6] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [7] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [8] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[6]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[7]:** May be used when specific backend is unknown.

**[8]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

## Generative AI model server metrics

The following metric instruments describe the operational metrics of
Generative AI model servers. These include both functional and performance
metrics.

### Metric: `gen_ai.server.request.duration`

This metric is [recommended][MetricRecommended] to report the model server
latency in terms of time spent per request.

This metric SHOULD be specified with [ExplicitBucketBoundaries] of
[0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24, 20.48, 40.96, 81.92].

<!-- semconv metric.gen_ai.server.request.duration -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

| Name | Instrument Type | Unit (UCUM) | Description | Stability | Entity Associations |
| -------- | --------------- | ----------- | -------------- | --------- | ------ |
| `gen_ai.server.request.duration` | Histogram | `s` | Generative AI server request duration such as time-to-last byte or last output token. |  | |

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`error.type`](/docs/registry/attributes/error.md) | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [4] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [5] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |

**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.

**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms
are accessible using the OpenAI REST API and corresponding client libraries,
but may proxy or host models from different providers.

The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
attributes may help identify the actual system in use.

The `gen_ai.provider.name` attribute acts as a discriminator that
identifies the GenAI telemetry format flavor specific to that provider
within GenAI semantic conventions.
It SHOULD be set consistently with provider-specific attributes and signals.
For example, GenAI spans, metrics, and events related to AWS Bedrock
should have the `gen_ai.provider.name` set to `aws.bedrock` and include
applicable `aws.bedrock.*` attributes and are not expected to include
`openai.*` attributes.

**[3] `error.type`:** The `error.type` SHOULD match the error code returned by the Generative AI service,
the canonical name of exception that occurred, or another low-cardinality error identifier.
Instrumentations SHOULD document the list of errors they report.

**[4] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.

**[5] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.

---

`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. |  |

---

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `create_agent` | Create GenAI agent |  |
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
| `execute_tool` | Execute a tool |  |
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
| `invoke_agent` | Invoke GenAI agent |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |

---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [6] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [7] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [8] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[6]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[7]:** May be used when specific backend is unknown.

**[8]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `gen_ai.server.time_per_output_token`

This metric is [recommended][MetricRecommended] to report the model server
latency in terms of time per token generated after the first token, for any
model server that supports serving LLMs. It is measured by subtracting the
time taken to generate the first output token from the request duration and
dividing the rest of the duration by the number of output tokens generated
after the first token. This is important in measuring the performance of the
decode phase of LLM inference.

This metric SHOULD be specified with [ExplicitBucketBoundaries] of
[0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.75, 1.0, 2.5].

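The calculation described above can be written out directly. This is an illustrative helper (the function name and parameters are not part of the conventions), showing the per-request arithmetic behind the metric.

```python
def time_per_output_token(request_duration_s, time_to_first_token_s, output_tokens):
    """Compute gen_ai.server.time_per_output_token for one request.

    Subtract the time to first token from the total request duration,
    then divide by the number of output tokens generated after the first
    one. Only defined when at least two output tokens were produced.
    """
    if output_tokens < 2:
        raise ValueError("metric requires at least two output tokens")
    return (request_duration_s - time_to_first_token_s) / (output_tokens - 1)

# A request that took 2.5 s overall, with the first token arriving after
# 0.5 s and 101 output tokens total: 2.0 s spread over 100 subsequent tokens.
tpot = time_per_output_token(2.5, 0.5, 101)
```

The model server records this value into the histogram once per completed request, using the buckets listed above.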
<!-- semconv metric.gen_ai.server.time_per_output_token -->
|
|
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
|
|
<!-- see templates/registry/markdown/snippet.md.j2 -->
|
|
<!-- prettier-ignore-start -->
|
|
<!-- markdownlint-capture -->
|
|
<!-- markdownlint-disable -->
|
|
|
|
| Name | Instrument Type | Unit (UCUM) | Description | Stability | Entity Associations |
|
|
| -------- | --------------- | ----------- | -------------- | --------- | ------ |
|
|
| `gen_ai.server.time_per_output_token` | Histogram | `s` | Time per output token generated after the first token for successful responses. |  | |
|
|
|
|
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|
|
|---|---|---|---|---|---|
|
|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
|
|
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
|
|
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
|
|
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
|
|
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
|
|
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [4] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |
|
|
|
|
**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but specific system uses a different name it's RECOMMENDED to document it in the semantic conventions for specific GenAI system and use system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use applicable predefined value.
|
|
|
|
**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
|
|
knowledge and may differ from the actual model provider.
|
|
|
|
Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms
|
|
are accessible using the OpenAI REST API and corresponding client libraries,
|
|
but may proxy or host models from different providers.
|
|
|
|
The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
|
|
attributes may help identify the actual system in use.
|
|
|
|
The `gen_ai.provider.name` attribute acts as a discriminator that
|
|
identifies the GenAI telemetry format flavor specific to that provider
|
|
within GenAI semantic conventions.
|
|
It SHOULD be set consistently with provider-specific attributes and signals.
|
|
For example, GenAI spans, metrics, and events related to AWS Bedrock
|
|
should have the `gen_ai.provider.name` set to `aws.bedrock` and include
|
|
applicable `aws.bedrock.*` attributes and are not expected to include
|
|
`openai.*` attributes.
|
|
|
|
**[3] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.
|
|
|
|
**[4] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.
|
|
|
|
---
|
|
|
|
`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
|
|
|
|
| Value | Description | Stability |
|
|
|---|---|---|
|
|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
|
|
| `create_agent` | Create GenAI agent |  |
|
|
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
|
|
| `execute_tool` | Execute a tool |  |
|
|
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
|
|
| `invoke_agent` | Invoke GenAI agent |  |
|
|
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |
|
|
|
|
---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [5] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [6] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [7] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[5]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[6]:** May be used when the specific backend is unknown.

**[7]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

### Metric: `gen_ai.server.time_to_first_token`

This metric is [recommended][MetricRecommended] to report the model server
latency in terms of time spent to generate the first token of the response for
any model servers which support serving LLMs. It helps measure the time spent
in the queue and the prefill phase. It is especially important for streaming
requests. It is calculated at a request level and is reported as a histogram
using the buckets mentioned below.

This metric SHOULD be specified with [ExplicitBucketBoundaries] of
[0.001, 0.005, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0].

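As a sketch of how these boundaries behave: an OpenTelemetry explicit-bucket histogram uses inclusive upper bounds, so a measurement is counted in the first bucket whose boundary is not below it. The helper below is a stdlib-only illustration of that bucketing, not instrumentation code; the function name is hypothetical.

```python
import bisect

# Advised explicit bucket boundaries for gen_ai.server.time_to_first_token (seconds).
TTFT_BOUNDARIES = [0.001, 0.005, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1,
                   0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0]

def bucket_index(seconds: float) -> int:
    """Return the index of the histogram bucket a measurement falls into.

    OpenTelemetry explicit-bucket histograms use inclusive upper bounds, so a
    value equal to a boundary is counted in that boundary's bucket; values
    above the last boundary land in the overflow bucket (index len(boundaries)).
    """
    return bisect.bisect_left(TTFT_BOUNDARIES, seconds)

# e.g. a 30 ms time-to-first-token lands in the (0.02, 0.04] bucket
```

An SDK that honors this bucket advice would produce counts with exactly these cut points; exporters without the advice fall back to their default boundaries.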
<!-- semconv metric.gen_ai.server.time_to_first_token -->
<!-- NOTE: THIS TEXT IS AUTOGENERATED. DO NOT EDIT BY HAND. -->
<!-- see templates/registry/markdown/snippet.md.j2 -->
<!-- prettier-ignore-start -->
<!-- markdownlint-capture -->
<!-- markdownlint-disable -->

| Name | Instrument Type | Unit (UCUM) | Description | Stability | Entity Associations |
| -------- | --------------- | ----------- | -------------- | --------- | ------ |
| `gen_ai.server.time_to_first_token` | Histogram | `s` | Time to generate first token for successful responses. |  | |

| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.operation.name`](/docs/registry/attributes/gen-ai.md) | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` |  |
| [`gen_ai.provider.name`](/docs/registry/attributes/gen-ai.md) | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` |  |
| [`gen_ai.request.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the GenAI model a request is being made to. | `gpt-4` | `Conditionally Required` If available. |  |
| [`server.port`](/docs/registry/attributes/server.md) | int | GenAI server port. [3] | `80`; `8080`; `443` | `Conditionally Required` If `server.address` is set. |  |
| [`gen_ai.response.model`](/docs/registry/attributes/gen-ai.md) | string | The name of the model that generated the response. | `gpt-4-0613` | `Recommended` |  |
| [`server.address`](/docs/registry/attributes/server.md) | string | GenAI server address. [4] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` |  |


**[1] `gen_ai.operation.name`:** If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.

**[2] `gen_ai.provider.name`:** The attribute SHOULD be set based on the instrumentation's best
knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms,
are accessible using the OpenAI REST API and corresponding client libraries,
but may proxy or host models from different providers.

The `gen_ai.request.model`, `gen_ai.response.model`, and `server.address`
attributes may help identify the actual system in use.

The `gen_ai.provider.name` attribute acts as a discriminator that
identifies the GenAI telemetry format flavor specific to that provider
within GenAI semantic conventions.
It SHOULD be set consistently with provider-specific attributes and signals.
For example, GenAI spans, metrics, and events related to AWS Bedrock
should have `gen_ai.provider.name` set to `aws.bedrock` and include
applicable `aws.bedrock.*` attributes; they are not expected to include
`openai.*` attributes.

**[3] `server.port`:** When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries, for example proxies, if it's available.

**[4] `server.address`:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available.

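The requirement levels above can be illustrated with a small helper that assembles the attribute set for a recording: the two `Required` attributes are always present, `gen_ai.request.model` only if available, and `server.port` only when `server.address` is set. The function name and structure are illustrative, not part of the conventions or of any SDK API.

```python
def build_ttft_attributes(operation_name, provider_name, request_model=None,
                          response_model=None, server_address=None,
                          server_port=None):
    """Assemble attributes for gen_ai.server.time_to_first_token per the
    requirement levels above (illustrative helper, not a semconv API)."""
    # Required attributes.
    attrs = {
        "gen_ai.operation.name": operation_name,
        "gen_ai.provider.name": provider_name,
    }
    # Conditionally required: set only if available.
    if request_model is not None:
        attrs["gen_ai.request.model"] = request_model
    # Recommended attributes.
    if response_model is not None:
        attrs["gen_ai.response.model"] = response_model
    if server_address is not None:
        attrs["server.address"] = server_address
        # server.port is conditionally required only when server.address is set.
        if server_port is not None:
            attrs["server.port"] = server_port
    return attrs
```

The resulting dict would be passed as the attribute set when recording the histogram measurement.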
---

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as [OpenAI Chat API](https://platform.openai.com/docs/api-reference/chat) |  |
| `create_agent` | Create GenAI agent |  |
| `embeddings` | Embeddings operation such as [OpenAI Create embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create) |  |
| `execute_tool` | Execute a tool |  |
| `generate_content` | Multimodal content generation operation such as [Gemini Generate Content](https://ai.google.dev/api/generate-content) |  |
| `invoke_agent` | Invoke GenAI agent |  |
| `text_completion` | Text completions operation such as [OpenAI Completions API (Legacy)](https://platform.openai.com/docs/api-reference/completions) |  |

---

`gen_ai.provider.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | [Anthropic](https://www.anthropic.com/) |  |
| `aws.bedrock` | [AWS Bedrock](https://aws.amazon.com/bedrock) |  |
| `azure.ai.inference` | Azure AI Inference |  |
| `azure.ai.openai` | [Azure OpenAI](https://azure.microsoft.com/products/ai-services/openai-service/) |  |
| `cohere` | [Cohere](https://cohere.com/) |  |
| `deepseek` | [DeepSeek](https://www.deepseek.com/) |  |
| `gcp.gemini` | [Gemini](https://cloud.google.com/products/gemini) [5] |  |
| `gcp.gen_ai` | Any Google generative AI endpoint [6] |  |
| `gcp.vertex_ai` | [Vertex AI](https://cloud.google.com/vertex-ai) [7] |  |
| `groq` | [Groq](https://groq.com/) |  |
| `ibm.watsonx.ai` | [IBM Watsonx AI](https://www.ibm.com/products/watsonx-ai) |  |
| `mistral_ai` | [Mistral AI](https://mistral.ai/) |  |
| `openai` | [OpenAI](https://openai.com/) |  |
| `perplexity` | [Perplexity](https://www.perplexity.ai/) |  |
| `x_ai` | [xAI](https://x.ai/) |  |

**[5]:** Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

**[6]:** May be used when the specific backend is unknown.

**[7]:** Used when accessing the 'aiplatform.googleapis.com' endpoint.

<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- END AUTOGENERATED TEXT -->
<!-- endsemconv -->

[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
[MetricRequired]: /docs/general/metric-requirement-level.md#required
[MetricRecommended]: /docs/general/metric-requirement-level.md#recommended
[ExplicitBucketBoundaries]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.47.0/specification/metrics/api.md#instrument-advisory-parameters