Class GenAiIncubatingAttributes
Nested Class Summary
Nested classes (all static final):
- Values for GEN_AI_OPENAI_REQUEST_RESPONSE_FORMAT.
- Values for GEN_AI_OPENAI_REQUEST_SERVICE_TIER.
- Values for GEN_AI_OPERATION_NAME.
- Values for GEN_AI_SYSTEM.
- Values for GEN_AI_TOKEN_TYPE.
Field Summary
Fields (all public static final io.opentelemetry.api.common.AttributeKey constants; names matched from the Field Details below):
- AttributeKey<String> GEN_AI_COMPLETION: Deprecated. Removed, no replacement at this time.
- AttributeKey<String> GEN_AI_OPENAI_REQUEST_RESPONSE_FORMAT: The response format that is requested.
- AttributeKey<Long> GEN_AI_OPENAI_REQUEST_SEED: Requests with the same seed value are more likely to return the same result.
- AttributeKey<String> GEN_AI_OPENAI_REQUEST_SERVICE_TIER: The service tier requested.
- AttributeKey<String> GEN_AI_OPENAI_RESPONSE_SERVICE_TIER: The service tier used for the response.
- AttributeKey<String> GEN_AI_OPENAI_RESPONSE_SYSTEM_FINGERPRINT: A fingerprint to track any eventual change in the Generative AI environment.
- AttributeKey<String> GEN_AI_OPERATION_NAME: The name of the operation being performed.
- AttributeKey<String> GEN_AI_PROMPT: Deprecated. Removed, no replacement at this time.
- AttributeKey<List<String>> GEN_AI_REQUEST_ENCODING_FORMATS: The encoding formats requested in an embeddings operation, if specified.
- AttributeKey<Double> GEN_AI_REQUEST_FREQUENCY_PENALTY: The frequency penalty setting for the GenAI request.
- AttributeKey<Long> GEN_AI_REQUEST_MAX_TOKENS: The maximum number of tokens the model generates for a request.
- AttributeKey<String> GEN_AI_REQUEST_MODEL: The name of the GenAI model a request is being made to.
- AttributeKey<Double> GEN_AI_REQUEST_PRESENCE_PENALTY: The presence penalty setting for the GenAI request.
- AttributeKey<List<String>> GEN_AI_REQUEST_STOP_SEQUENCES: List of sequences that the model will use to stop generating further tokens.
- AttributeKey<Double> GEN_AI_REQUEST_TEMPERATURE: The temperature setting for the GenAI request.
- AttributeKey<Double> GEN_AI_REQUEST_TOP_K: The top_k sampling setting for the GenAI request.
- AttributeKey<Double> GEN_AI_REQUEST_TOP_P: The top_p sampling setting for the GenAI request.
- AttributeKey<List<String>> GEN_AI_RESPONSE_FINISH_REASONS: Array of reasons the model stopped generating tokens, one for each generation received.
- AttributeKey<String> GEN_AI_RESPONSE_ID: The unique identifier for the completion.
- AttributeKey<String> GEN_AI_RESPONSE_MODEL: The name of the model that generated the response.
- AttributeKey<String> GEN_AI_SYSTEM: The Generative AI product as identified by the client or server instrumentation.
- AttributeKey<String> GEN_AI_TOKEN_TYPE: The type of token being counted.
- AttributeKey<Long> GEN_AI_USAGE_COMPLETION_TOKENS: Deprecated. Replaced by the gen_ai.usage.output_tokens attribute.
- AttributeKey<Long> GEN_AI_USAGE_INPUT_TOKENS: The number of tokens used in the GenAI input (prompt).
- AttributeKey<Long> GEN_AI_USAGE_OUTPUT_TOKENS: The number of tokens used in the GenAI response (completion).
- AttributeKey<Long> GEN_AI_USAGE_PROMPT_TOKENS: Deprecated. Replaced by the gen_ai.usage.input_tokens attribute.
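As a usage sketch (not part of this class): each constant resolves to a dotted attribute name from the GenAI semantic conventions. The snippet below assembles a plain map of those names for a hypothetical chat-completion span; real instrumentation would attach the AttributeKey constants via io.opentelemetry.api.common.Attributes rather than raw strings, and the model and token values here are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GenAiSpanAttributesSketch {
    // Attribute names mirrored from the GenAI semantic conventions;
    // in real code, use the GenAiIncubatingAttributes constants instead.
    static Map<String, Object> chatSpanAttributes(String model, long maxTokens,
                                                  long inputTokens, long outputTokens) {
        Map<String, Object> attrs = new LinkedHashMap<>();
        attrs.put("gen_ai.system", "openai");          // illustrative value
        attrs.put("gen_ai.operation.name", "chat");
        attrs.put("gen_ai.request.model", model);
        attrs.put("gen_ai.request.max_tokens", maxTokens);
        attrs.put("gen_ai.usage.input_tokens", inputTokens);
        attrs.put("gen_ai.usage.output_tokens", outputTokens);
        return attrs;
    }

    public static void main(String[] args) {
        System.out.println(chatSpanAttributes("gpt-4o-mini", 256L, 120L, 42L));
    }
}
```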
Method Summary
Field Details
GEN_AI_COMPLETION
@Deprecated public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_COMPLETION
Deprecated. Removed, no replacement at this time; use the Event API to report completion contents.
GEN_AI_OPENAI_REQUEST_RESPONSE_FORMAT
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_OPENAI_REQUEST_RESPONSE_FORMAT
The response format that is requested.
GEN_AI_OPENAI_REQUEST_SEED
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_OPENAI_REQUEST_SEED
Requests with the same seed value are more likely to return the same result.
GEN_AI_OPENAI_REQUEST_SERVICE_TIER
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_OPENAI_REQUEST_SERVICE_TIER
The service tier requested. May be a specific tier, default, or auto.
GEN_AI_OPENAI_RESPONSE_SERVICE_TIER
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_OPENAI_RESPONSE_SERVICE_TIER
The service tier used for the response.
GEN_AI_OPENAI_RESPONSE_SYSTEM_FINGERPRINT
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_OPENAI_RESPONSE_SYSTEM_FINGERPRINT
A fingerprint to track any eventual change in the Generative AI environment.
GEN_AI_OPERATION_NAME
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_OPERATION_NAME
The name of the operation being performed.
Notes: If one of the predefined values applies but a specific system uses a different name, it is RECOMMENDED to document that name in the semantic conventions for the specific GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
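The note above can be sketched as a small lookup helper: prefer a documented system-specific operation name when one exists, otherwise fall back to the predefined value. The predefined values used here ("chat", "text_completion", "embeddings") come from GenAiOperationNameIncubatingValues; the override registry and its key format are hypothetical.

```java
import java.util.Map;

public class OperationNameSketch {
    // Hypothetical registry of system-specific operation names documented in
    // the semantic conventions, keyed by "<system>:<predefined name>".
    static final Map<String, String> DOCUMENTED_OVERRIDES = Map.of();

    // Returns the documented system-specific name if present, else the
    // predefined value (e.g. "chat", "text_completion", "embeddings").
    static String operationName(String system, String predefined) {
        return DOCUMENTED_OVERRIDES.getOrDefault(system + ":" + predefined, predefined);
    }

    public static void main(String[] args) {
        System.out.println(operationName("openai", "chat"));
    }
}
```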
GEN_AI_PROMPT
@Deprecated public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_PROMPT
Deprecated. Removed, no replacement at this time; use the Event API to report prompt contents.
GEN_AI_REQUEST_ENCODING_FORMATS
public static final io.opentelemetry.api.common.AttributeKey<List<String>> GEN_AI_REQUEST_ENCODING_FORMATS
The encoding formats requested in an embeddings operation, if specified.
Notes: In some GenAI systems the encoding formats are called embedding types. Also, some GenAI systems only accept a single format per request.
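Because some systems accept only one encoding format per request while the attribute is list-valued, instrumentation normalizes a single requested format into a list. A minimal sketch of that normalization (the helper name is illustrative):

```java
import java.util.List;

public class EncodingFormatsSketch {
    // Normalize a possibly-absent single format into the list shape of the
    // gen_ai.request.encoding_formats attribute.
    static List<String> encodingFormats(String singleFormat) {
        return singleFormat == null ? List.of() : List.of(singleFormat);
    }

    public static void main(String[] args) {
        System.out.println(encodingFormats("float"));
    }
}
```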
GEN_AI_REQUEST_FREQUENCY_PENALTY
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_FREQUENCY_PENALTY
The frequency penalty setting for the GenAI request.
GEN_AI_REQUEST_MAX_TOKENS
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_REQUEST_MAX_TOKENS
The maximum number of tokens the model generates for a request.
GEN_AI_REQUEST_MODEL
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_REQUEST_MODEL
The name of the GenAI model a request is being made to.
GEN_AI_REQUEST_PRESENCE_PENALTY
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_PRESENCE_PENALTY
The presence penalty setting for the GenAI request.
GEN_AI_REQUEST_STOP_SEQUENCES
public static final io.opentelemetry.api.common.AttributeKey<List<String>> GEN_AI_REQUEST_STOP_SEQUENCES
List of sequences that the model will use to stop generating further tokens.
GEN_AI_REQUEST_TEMPERATURE
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_TEMPERATURE
The temperature setting for the GenAI request.
GEN_AI_REQUEST_TOP_K
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_TOP_K
The top_k sampling setting for the GenAI request.
GEN_AI_REQUEST_TOP_P
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_TOP_P
The top_p sampling setting for the GenAI request.
GEN_AI_RESPONSE_FINISH_REASONS
public static final io.opentelemetry.api.common.AttributeKey<List<String>> GEN_AI_RESPONSE_FINISH_REASONS
Array of reasons the model stopped generating tokens, one for each generation received.
GEN_AI_RESPONSE_ID
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_RESPONSE_ID
The unique identifier for the completion.
GEN_AI_RESPONSE_MODEL
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_RESPONSE_MODEL
The name of the model that generated the response.
GEN_AI_SYSTEM
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_SYSTEM
The Generative AI product as identified by the client or server instrumentation.
Notes: The gen_ai.system attribute describes a family of GenAI models, with the specific model identified by the gen_ai.request.model and gen_ai.response.model attributes. The actual GenAI product may differ from the one identified by the client. For example, when OpenAI client libraries are used to communicate with Mistral, gen_ai.system is set to openai based on the instrumentation's best knowledge. For a custom model, a custom friendly name SHOULD be used. If none of these options apply, gen_ai.system SHOULD be set to _OTHER.
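The fallback rule above can be sketched as follows. The set of well-known values here is a small illustrative subset of GenAiSystemIncubatingValues; real instrumentation would use that nested class rather than string literals.

```java
import java.util.Set;

public class GenAiSystemSketch {
    // A few well-known gen_ai.system values; the full list lives in
    // GenAiIncubatingAttributes.GenAiSystemIncubatingValues.
    static final Set<String> KNOWN = Set.of("openai", "anthropic", "cohere", "mistral_ai");

    // Resolution order: well-known value, then a custom friendly name for a
    // custom model, then the _OTHER fallback.
    static String genAiSystem(String identified, String customFriendlyName) {
        if (identified != null && KNOWN.contains(identified)) {
            return identified;
        }
        if (customFriendlyName != null) {
            return customFriendlyName;
        }
        return "_OTHER";
    }

    public static void main(String[] args) {
        System.out.println(genAiSystem(null, null));
    }
}
```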
GEN_AI_TOKEN_TYPE
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_TOKEN_TYPE
The type of token being counted.
GEN_AI_USAGE_COMPLETION_TOKENS
@Deprecated public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_COMPLETION_TOKENS
Deprecated. Replaced by the gen_ai.usage.output_tokens attribute; use gen_ai.usage.output_tokens instead.
GEN_AI_USAGE_INPUT_TOKENS
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_INPUT_TOKENS
The number of tokens used in the GenAI input (prompt).
GEN_AI_USAGE_OUTPUT_TOKENS
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_OUTPUT_TOKENS
The number of tokens used in the GenAI response (completion).
GEN_AI_USAGE_PROMPT_TOKENS
@Deprecated public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_PROMPT_TOKENS
Deprecated. Replaced by the gen_ai.usage.input_tokens attribute; use gen_ai.usage.input_tokens instead.
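When migrating off the deprecated usage attributes, the old names map one-to-one onto their replacements: gen_ai.usage.prompt_tokens becomes gen_ai.usage.input_tokens, and gen_ai.usage.completion_tokens becomes gen_ai.usage.output_tokens. A minimal renaming sketch over plain attribute-name maps (the helper is illustrative, not part of this class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UsageTokenMigrationSketch {
    // Rename deprecated GenAI usage attribute names to their replacements,
    // leaving all other attributes untouched.
    static Map<String, Object> migrate(Map<String, Object> attrs) {
        Map<String, Object> out = new LinkedHashMap<>();
        attrs.forEach((k, v) -> {
            switch (k) {
                case "gen_ai.usage.prompt_tokens" -> out.put("gen_ai.usage.input_tokens", v);
                case "gen_ai.usage.completion_tokens" -> out.put("gen_ai.usage.output_tokens", v);
                default -> out.put(k, v);
            }
        });
        return out;
    }

    public static void main(String[] args) {
        System.out.println(migrate(Map.of("gen_ai.usage.prompt_tokens", 120L)));
    }
}
```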