feat(vercel): add gen.ai.input.messages + gen.ai.output.messages #711
Conversation
Walkthrough

Transforms internal ai.* prompt/response attributes into standardized GenAI message attributes (SemanticAttributes.GEN_AI_INPUT_MESSAGES / GEN_AI_OUTPUT_MESSAGES), populates LLM_PROMPTS/LLM_COMPLETIONS and tool_call fields, removes legacy ai.* attributes where applicable, updates OpenAI instrumentation tests and recordings, removes some Traceloop AI integration tests, and bumps @opentelemetry/semantic-conventions to ^1.37.0 across packages.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant App
    participant SDK as Traceloop SDK
    participant Span as Span Attributes
    App->>SDK: provide AI_PROMPT or AI_PROMPT_MESSAGES
    SDK->>SDK: parse messages -> inputMessages = [{role, parts:[{type:TYPE_TEXT, content}]}]
    SDK->>Span: set SemanticAttributes.GEN_AI_INPUT_MESSAGES = JSON.stringify(inputMessages)
    SDK->>Span: set LLM_PROMPTS[0] = { role:ROLE_USER, content }
    App->>SDK: receive AI_RESPONSE_TEXT / AI_RESPONSE_OBJECT
    SDK->>SDK: build outputMessages = [{ role:ROLE_ASSISTANT, parts:[{type:TYPE_TEXT, content}] }]
    SDK->>Span: set SemanticAttributes.GEN_AI_OUTPUT_MESSAGES = JSON.stringify(outputMessages)
    SDK->>Span: set LLM_COMPLETIONS[0].content and .role = ROLE_ASSISTANT
    App->>SDK: receive AI_RESPONSE_TOOL_CALLS
    SDK->>SDK: build toolCallParts -> parts with type:TYPE_TOOL_CALL
    SDK->>Span: set SemanticAttributes.GEN_AI_OUTPUT_MESSAGES = JSON.stringify([{role:ROLE_ASSISTANT, parts:toolCallParts}])
    SDK->>Span: set LLM_COMPLETIONS[0].tool_calls[...] = {name, arguments}
    note over SDK,Span: legacy ai.* attributes are deleted after transformation
```
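The input-message leg of the flow above can be sketched as a small standalone function. This is illustrative only: the attribute keys mirror the PR's description, but `transformPrompt` itself and the flat `Record<string, string>` attribute map are assumptions, not the SDK's actual implementation.

```typescript
// Illustrative mapping from the Vercel AI SDK's "ai.prompt" attribute to the
// GenAI "gen_ai.input.messages" JSON shape described in the walkthrough.
type MessagePart = { type: "text"; content: string };
type GenAiMessage = { role: string; parts: MessagePart[] };

function transformPrompt(
  attributes: Record<string, string>,
): Record<string, string> {
  const prompt = attributes["ai.prompt"];
  if (prompt === undefined) return attributes;

  const inputMessages: GenAiMessage[] = [
    { role: "user", parts: [{ type: "text", content: prompt }] },
  ];
  const out = { ...attributes };
  out["gen_ai.input.messages"] = JSON.stringify(inputMessages);
  delete out["ai.prompt"]; // legacy attribute removed after transformation
  return out;
}

const result = transformPrompt({ "ai.prompt": "What is the weather in SF?" });
console.log(result["gen_ai.input.messages"]);
```

The key points the diagram encodes are visible here: the GenAI attribute holds a JSON-serialized array of `{role, parts}` messages, and the legacy `ai.*` key is deleted once transformed.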
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Important
Looks good to me! 👍
Reviewed everything up to 92e145e in 1 minute and 29 seconds. Click for details.

- Reviewed 154 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/ai-semantic-conventions/src/SemanticAttributes.ts:25
   - Draft comment: Consider adding inline documentation for the new attributes (LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES) to clarify their purpose and mapping to OpenTelemetry standards.
   - Reason this comment was not posted: Confidence changes required (33%) <= threshold (50%)
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:60
   - Draft comment: Both transformResponseText and transformResponseToolCalls assign to LLM_OUTPUT_MESSAGES. If a response contains both text and tool call data, one may overwrite the other. Confirm if this is intentional or consider merging the outputs.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence 20% vs. threshold 50%). The comment raises a valid point about potential data loss. Looking at the code structure, these transformations are called sequentially and could overwrite each other. However, looking at the input data structure (AI_RESPONSE_TEXT and AI_RESPONSE_TOOL_CALLS), it seems these are mutually exclusive — a response would either be text OR tool calls, not both. The code appears to be designed with this assumption. I could be wrong about the mutual exclusivity assumption. Without seeing the actual data structures and how they're used, I can't be 100% certain that a response never contains both text and tool calls. While there's some uncertainty, the code structure strongly suggests these are meant to be separate response types. If they weren't mutually exclusive, the code would likely have been structured differently. The comment should be deleted. While it raises an interesting edge case, the code structure suggests this is working as intended with mutually exclusive response types.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:261
   - Draft comment: The logic for setting LLM_INPUT_MESSAGES is duplicated in two branches (handling AI_PROMPT_MESSAGES and AI_PROMPT). This might lead to one branch overwriting the other if both keys exist. Consider unifying the handling of input messages.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_KtG5WusWJZXeLfLg
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 1
🧹 Nitpick comments (7)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (7)
60-73: Emit gen_ai.output.messages for text — consider size cap to prevent exporter/back-end drops

JSON blobs can get large; cap or truncate before setting the attribute.

```diff
-    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-      outputMessage,
-    ]);
+    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = safeStringify([
+      outputMessage,
+    ]);
```

Helper to add near the top of the file:

```ts
function safeStringify(v: unknown, max = 64_000): string {
  const s = typeof v === "string" ? v : JSON.stringify(v);
  return s.length > max ? s.slice(0, max) + "…[truncated]" : s;
}
```
84-97: Object response encoded as "text" — ensure it's a string before serializing to messages

If AI_RESPONSE_OBJECT is an object, stringify it so the part content is valid and consistent.

```diff
-    const outputMessage = {
+    const contentString =
+      typeof attributes[AI_RESPONSE_OBJECT] === "string"
+        ? attributes[AI_RESPONSE_OBJECT]
+        : JSON.stringify(attributes[AI_RESPONSE_OBJECT]);
+    const outputMessage = {
       role: "assistant",
       parts: [
         {
           type: "text",
-          content: attributes[AI_RESPONSE_OBJECT],
+          content: contentString,
         },
       ],
     };
-    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-      outputMessage,
-    ]);
+    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = safeStringify([outputMessage]);
```

Also confirm whether a non-"text" part type (e.g., structured/json) is recommended in the latest OTel GenAI messages schema.
121-129: Tool-call part: normalize args to string and include optional id if present

Ensures consistent shape and avoids accidental object embedding.

```diff
-      toolCallParts.push({
+      const call: any = {
         type: "tool_call",
         tool_call: {
           name: toolCall.toolName,
-          arguments: toolCall.args,
+          arguments:
+            typeof toolCall.args === "string"
+              ? toolCall.args
+              : JSON.stringify(toolCall.args),
         },
-      });
+      };
+      if (toolCall.id) call.tool_call.id = toolCall.id;
+      toolCallParts.push(call);
```

Please also confirm the exact field names in the OTel GenAI messages spec ("tool_call" type, and whether an "id" is expected).
132-142: Potential overwrite if both text and tool-call outputs exist

Current order means the later handler wins. If both are present in one span, consider merging parts into a single assistant message rather than overwriting.
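To make the "merge rather than overwrite" suggestion concrete, here is a hedged standalone sketch. The function name and its input shape are illustrative, not the PR's actual helpers; only the part shapes (`text` and `tool_call`) follow the schema discussed in this review.

```typescript
// Combine a text part and tool-call parts into one assistant message so
// neither the text nor the tool-call transformation clobbers the other.
type Part =
  | { type: "text"; content: string }
  | { type: "tool_call"; tool_call: { name: string; arguments: string } };

function buildAssistantParts(
  text?: string,
  toolCalls?: { toolName: string; args: string }[],
): Part[] {
  const parts: Part[] = [];
  if (text !== undefined) {
    parts.push({ type: "text", content: text });
  }
  for (const tc of toolCalls ?? []) {
    parts.push({
      type: "tool_call",
      tool_call: { name: tc.toolName, arguments: tc.args },
    });
  }
  return parts;
}

const merged = buildAssistantParts("Checking the weather…", [
  { toolName: "getWeather", args: '{"location":"SF"}' },
]);
console.log(JSON.stringify([{ role: "assistant", parts: merged }]));
```

With a builder like this, the span attribute is written once from the merged parts, so handler ordering no longer matters.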
259-276: Input messages: preserve non-text parts when provided by the SDK

When msg.content is an array of parts, reuse it instead of collapsing to a single text part to avoid losing modality/tool info.

```diff
-      inputMessages.push({
-        role: msg.role,
-        parts: [
-          {
-            type: "text",
-            content: processedContent,
-          },
-        ],
-      });
+      const parts =
+        Array.isArray(msg.content) && msg.content.length > 0
+          ? msg.content
+          : [{ type: "text", content: processedContent }];
+      inputMessages.push({ role: msg.role, parts });
```

Verify the allowed part types per the latest OTel GenAI convention (text, image, tool_call, tool_result, etc.).
279-284: Cap serialized input messages to avoid attribute bloat

Use the same safeStringify helper for consistency.

```diff
-    attributes[SpanAttributes.LLM_INPUT_MESSAGES] =
-      JSON.stringify(inputMessages);
+    attributes[SpanAttributes.LLM_INPUT_MESSAGES] =
+      safeStringify(inputMessages);
```
299-312: Single prompt path duplicates logic — reuse the same serialization + size cap

Keeps behavior uniform with the multi-message path and avoids divergence.

```diff
-    attributes[SpanAttributes.LLM_INPUT_MESSAGES] = JSON.stringify([
-      inputMessage,
-    ]);
+    attributes[SpanAttributes.LLM_INPUT_MESSAGES] =
+      safeStringify([inputMessage]);
```
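Several of the comments above propose the same `safeStringify` helper. For reference, here it is as a runnable standalone; the 64 kB default cap is the reviewer's suggestion, not a limit from any spec.

```typescript
// Truncating serializer for large attribute payloads: strings pass through,
// everything else is JSON-stringified, and anything over `max` characters
// is cut with a visible "…[truncated]" marker.
function safeStringify(v: unknown, max = 64_000): string {
  const s = typeof v === "string" ? v : JSON.stringify(v);
  return s.length > max ? s.slice(0, max) + "…[truncated]" : s;
}

console.log(safeStringify({ a: 1 }));          // short values serialize unchanged
console.log(safeStringify("x".repeat(10), 5)); // long values are truncated
```

Note that truncation can leave the attribute value as invalid JSON, so consumers that parse `gen_ai.*.messages` would need to tolerate the marker.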
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)

- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (6 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
packages/ai-semantic-conventions/src/SemanticAttributes.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (3)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/ai-semantic-conventions/src/SemanticAttributes.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)

- SpanAttributes (17-61)
add tests please - with tool usage and passing a conversation similar to the example we're trying to solve...
```ts
  LLM_INPUT_MESSAGES: "gen_ai.input.messages",
  LLM_OUTPUT_MESSAGES: "gen_ai.output.messages",
```
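For context, a sketch of how constants like these are consumed under the repo's "import constants rather than hardcoding strings" guideline. The `SpanAttributes` object is redeclared locally so the snippet is self-contained; in the repo it would come from `@traceloop/ai-semantic-conventions`.

```typescript
// Centralized attribute keys (values taken from the diff above) instead of
// string literals scattered across call sites.
const SpanAttributes = {
  LLM_INPUT_MESSAGES: "gen_ai.input.messages",
  LLM_OUTPUT_MESSAGES: "gen_ai.output.messages",
} as const;

const attributes: Record<string, string> = {};
attributes[SpanAttributes.LLM_INPUT_MESSAGES] = JSON.stringify([
  { role: "user", parts: [{ type: "text", content: "Hello" }] },
]);
console.log(attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
```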
we gotta use it from the otel semconv package rather than copying it.
ChatGPT:
The attributes gen_ai.input.messages and gen_ai.output.messages do exist in the OpenTelemetry GenAI semantic conventions spec.
However, they are not currently exposed in the stable @opentelemetry/semantic-conventions package.
update to v1.37.0
```ts
  role: msg.role,
  parts: [
    {
      type: "text",
```
what about tools? how will they be represented here?
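One plausible answer to the question above, sketched under the OTel GenAI draft message schema: tool interactions appear as their own parts and messages alongside text. The part type names `tool_call` and `tool_response` here are assumptions to be checked against the spec, as is the whole conversation shape.

```typescript
// Hypothetical conversation including a tool round-trip: the assistant
// requests a tool, the result comes back as a "tool" role message, and the
// final assistant text closes the exchange.
const inputMessages = [
  { role: "user", parts: [{ type: "text", content: "Weather in SF?" }] },
  {
    role: "assistant",
    parts: [
      {
        type: "tool_call",
        tool_call: { name: "getWeather", arguments: '{"location":"SF"}' },
      },
    ],
  },
  { role: "tool", parts: [{ type: "tool_response", content: '{"tempC":18}' }] },
  { role: "assistant", parts: [{ type: "text", content: "About 18°C in SF." }] },
];

console.log(JSON.stringify(inputMessages));
```

Under this shape, the text-only branch quoted above would only be the fallback for plain string content, not the representation of tool turns.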
Important
Looks good to me! 👍
Reviewed 8e007ec in 2 minutes and 14 seconds. Click for details.
- Reviewed 365 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1183
   - Draft comment: Great comprehensive tests for the new gen_ai input/output messages covering text, tool calls, and mixed content scenarios.
   - Reason this comment was not posted: Confidence changes required (0%) <= threshold (50%)
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1189
   - Draft comment: For consistency with legacy transformation tests, consider asserting that the original legacy keys (e.g. 'ai.prompt.messages' or 'ai.response.text') are removed after transformation.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence 20% vs. threshold 50%). The suggestion is technically valid — it would make the test more complete to verify cleanup. However, this behavior is already well-tested throughout the file in other test cases. For example, line 217 shows `assert.strictEqual(attributes["ai.prompt.messages"], undefined);` in a similar test. Adding redundant assertions here would not meaningfully improve test coverage, and would increase test maintenance burden without providing meaningful additional coverage. The comment should be deleted since the cleanup behavior is already thoroughly tested elsewhere in the file.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1472
   - Draft comment: The test for complex multi-turn conversation correctly flattens mixed content to text parts. It would help to document this behavior (i.e. non-text types like images are dropped) for future maintainers.
   - Reason this comment was not posted: Comment did not seem useful (usefulness confidence 0% <= threshold 50%). The comment is purely informative and suggests documenting behavior, which is not allowed by the rules. It doesn't provide a specific code suggestion or ask for a test to be written.
4. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1321
   - Draft comment: Consider adding inline comments in tests (e.g. for tool call aggregation into a single output message) to clarify design decisions in these transformations.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence 20% vs. threshold 50%). The tests are already well-structured with clear sections and descriptive test names, and the assertions have clear comments explaining what they're checking. The suggestion to add more inline comments about design decisions is vague and doesn't point to specific unclear areas; the test code itself serves as documentation of the expected behavior. The comment should be deleted as it's too vague and the tests are already well-documented.
Workflow ID: wflow_YGDyB2OWD39gv8Fa
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed 224f548 in 2 minutes and 0 seconds. Click for details.
- Reviewed 40 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:255
   - Draft comment: Refactor the manual JSON unescaping (e.g. replacing /\'/g and /\\\\/g) into a helper for clarity and robustness.
   - Reason this comment was not posted: Confidence changes required (33%) <= threshold (50%)
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:138
   - Draft comment: If both ai.response.text and ai.response.toolCalls are present, the latter overwrites the output messages. Confirm this is intended or merge parts instead.
   - Reason this comment was not posted: Confidence changes required (33%) <= threshold (50%)
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:108
   - Draft comment: Consider logging or handling JSON parse errors in catch blocks instead of silently ignoring them.
   - Reason this comment was not posted: Confidence changes required (33%) <= threshold (50%)
4. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1183
   - Draft comment: The new tests for gen_ai.input.messages and gen_ai.output.messages are very comprehensive. Great work!
   - Reason this comment was not posted: Confidence changes required (0%) <= threshold (50%)
Workflow ID: wflow_H7OMOvKWUlS01oBL
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 3
🧹 Nitpick comments (7)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (4)

22-25: Good move centralizing magic strings; consider adding missing constants.

Add ROLE_SYSTEM and maybe TOOL_CALL_TYPE_FUNCTION to avoid reintroducing literals elsewhere.

```diff
+const ROLE_SYSTEM = "system";
+const TOOL_CALL_TYPE_FUNCTION = "function";
```
62-76: DRY up assistant text output construction.

transformResponseText and transformResponseObject duplicate message-building; extract a helper.

```diff
-    const outputMessage = {
-      role: ROLE_ASSISTANT,
-      parts: [
-        {
-          type: TYPE_TEXT,
-          content: attributes[AI_RESPONSE_TEXT],
-        },
-      ],
-    };
-    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-      outputMessage,
-    ]);
+    setAssistantTextOutput(attributes, attributes[AI_RESPONSE_TEXT]);
```

Add this helper near the top of the file:

```ts
function setAssistantTextOutput(
  attributes: Record<string, any>,
  content: string,
) {
  const outputMessage = {
    role: ROLE_ASSISTANT,
    parts: [{ type: TYPE_TEXT, content }],
  };
  attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
    outputMessage,
  ]);
}
```
85-99: Apply the same helper for object responses.

```diff
-    const outputMessage = {
-      role: ROLE_ASSISTANT,
-      parts: [
-        {
-          type: TYPE_TEXT,
-          content: attributes[AI_RESPONSE_OBJECT],
-        },
-      ],
-    };
-    attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-      outputMessage,
-    ]);
+    setAssistantTextOutput(attributes, attributes[AI_RESPONSE_OBJECT]);
```
111-141: Include tool call IDs and guard existing output messages

- Add `id: toolCall.toolCallId` to each `tool_call` part, per OpenTelemetry GenAI spec, to enable request-to-response correlation.
- Only set `SpanAttributes.LLM_OUTPUT_MESSAGES` when not already present, avoiding clobbering existing messages.

```diff
       toolCallParts.push({
         type: TYPE_TOOL_CALL,
         tool_call: {
           name: toolCall.toolName,
-          arguments: toolCall.args,
+          arguments: toolCall.args,
+          id: toolCall.toolCallId,
         },
       });
@@
     if (toolCallParts.length > 0) {
       const outputMessage = {
         role: ROLE_ASSISTANT,
         parts: toolCallParts,
       };
-      attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
-        outputMessage,
-      ]);
+      if (!attributes[SpanAttributes.LLM_OUTPUT_MESSAGES]) {
+        attributes[SpanAttributes.LLM_OUTPUT_MESSAGES] = JSON.stringify([
+          outputMessage,
+        ]);
+      }
     }
```

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (3)
1263-1319: Also assert legacy attribute removal for completeness.

Add an assertion that `ai.response.toolCalls` is deleted after transformation to mirror earlier suites.

```diff
   transformAiSdkAttributes(attributes);
@@
   const outputMessages = JSON.parse(
     attributes[SpanAttributes.LLM_OUTPUT_MESSAGES],
   );
@@
   assert.strictEqual(
     outputMessages[0].parts[1].tool_call.arguments,
     '{"location": "San Francisco", "cuisine": "italian"}',
   );
+  // Original attribute should be removed
+  assert.strictEqual(attributes["ai.response.toolCalls"], undefined);
```
1350-1437: Great end-to-end test; add one check for tools cleanup.

Verify `ai.prompt.tools` was removed.

```diff
   assert.strictEqual(
     attributes[`${SpanAttributes.LLM_REQUEST_FUNCTIONS}.1.name`],
     "searchRestaurants",
   );
+  // Original tools should be removed
+  assert.strictEqual(attributes["ai.prompt.tools"], undefined);
```
1183-1535: Add a regression test for prompt override behavior.

Cover the case where both `ai.prompt.messages` and `ai.prompt` are set to ensure messages are merged (not overwritten).

```ts
it("should not overwrite gen_ai.input.messages when ai.prompt and ai.prompt.messages both exist", () => {
  const attributes: Record<string, any> = {
    "ai.prompt.messages": JSON.stringify([
      { role: "system", content: "You are helpful" },
      { role: "user", content: "Hi" },
    ]),
    "ai.prompt": JSON.stringify({ prompt: "Extra single prompt" }),
  };

  transformAiSdkAttributes(attributes);

  const parsed = JSON.parse(attributes[SpanAttributes.LLM_INPUT_MESSAGES]);
  assert.strictEqual(parsed.length, 3);
  assert.strictEqual(parsed[0].role, "system");
  assert.strictEqual(parsed[1].role, "user");
  assert.strictEqual(parsed[2].role, "user");
  assert.strictEqual(parsed[2].parts[0].content, "Extra single prompt");
});
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)

- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (10 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (2)
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes
(17-61)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)

- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1): transformAiSdkAttributes (366-378)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1): SpanAttributes (17-61)
🪛 GitHub Actions: CI
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
[error] 1-1: Prettier check failed for command 'pnpm prettier --check .'. Run 'pnpm prettier --write' to fix code style issues in this file.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
[error] 1-1: Prettier check failed for command 'pnpm prettier --check .'. Run 'pnpm prettier --write' to fix code style issues in this file.
🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- 1183-1235: LGTM: solid coverage for gen_ai.input.messages with multi-turn text.
- 1473-1534: LGTM: complex mixed-content case correctly flattened into text parts.
Important
Looks good to me! 👍
Reviewed f98bdf5 in 47 seconds. Click for details.
- Reviewed 347 lines of code in 3 files
- Skipped 1 file when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/package.json:4
   - Draft comment: Typo in the package description: 'Instrumentaion' should be 'Instrumentation'.
   - Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/instrumentation-openai/package.json:55
   - Draft comment: New dependency '@traceloop/node-server-sdk' is added. Ensure that the workspace version ('workspace:*') resolves correctly in all environments. Consider pinning a version or using a version range if stability is needed.
   - Reason this comment was not posted: Comment did not seem useful (usefulness confidence 0% <= threshold 50%). This comment is about a new dependency being added and suggests ensuring that the workspace version resolves correctly. It also suggests pinning a version or using a version range for stability. This falls under the category of dependency management, which the rules specify should not be commented on unless it's about specific version mismatches or similar issues. Therefore, this comment should be removed.
3. packages/instrumentation-openai/test/instrumentation.test.ts:889
   - Draft comment: The new test case verifying LLM_INPUT_MESSAGES and LLM_OUTPUT_MESSAGES attributes is comprehensive. Consider adding additional scenarios with multiple input or output messages if the feature supports more complex interactions.
   - Reason this comment was not posted: Confidence changes required (50%) <= threshold (50%)
4. packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har:1
   - Draft comment: The recorded HAR file appears to be generated correctly. Verify that no sensitive production data (e.g. API keys or personal data) is being captured in these recordings.
   - Reason this comment was not posted: Comment did not seem useful (usefulness confidence 0% <= threshold 50%). The comment is asking the PR author to verify that no sensitive data is being captured in the HAR file recordings. This falls under the category of asking the author to double-check something, which is against the rules.
Workflow ID: wflow_2rvkFPXupErm96o4
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)

- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml

📒 Files selected for processing (3)

- packages/instrumentation-openai/package.json (1 hunks)
- packages/instrumentation-openai/test/instrumentation.test.ts (3 hunks)
- packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
🧰 Additional context used
📓 Path-based instructions (4)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-openai/package.json
packages/instrumentation-openai/test/instrumentation.test.ts
packages/*/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use workspace:* for intra-repo package dependencies in package.json
Files:
packages/instrumentation-openai/package.json
packages/instrumentation-*/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/instrumentation-*/**/*.{ts,tsx}
: Instrumentation classes must extend InstrumentationBase and register hooks using InstrumentationModuleDefinition
Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Instrumentations must extract request/response data and token usage from wrapped calls
Instrumentations must capture and record errors appropriately
Do not implement anonymous telemetry collection in instrumentation packages; telemetry is collected only in the SDK
Files:
packages/instrumentation-openai/test/instrumentation.test.ts
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/instrumentation-openai/test/instrumentation.test.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies
Applied to files:
packages/instrumentation-openai/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/src/lib/node-server-sdk.ts : Register new instrumentation packages during SDK initialization
Applied to files:
packages/instrumentation-openai/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/instrumentation-openai/package.json
packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/*/package.json : Use workspace:* for intra-repo package dependencies in package.json
Applied to files:
packages/instrumentation-openai/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/instrumentation-openai/package.json
packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : Use the provided decorators (workflow, task, agent) for workflow/task/agent spans instead of re-implementing them
Applied to files:
packages/instrumentation-openai/package.json
packages/instrumentation-openai/test/instrumentation.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/instrumentation-openai/test/instrumentation.test.ts
🧬 Code graph analysis (1)
packages/instrumentation-openai/test/instrumentation.test.ts (2)
- packages/traceloop-sdk/src/lib/tracing/span-processor.ts (1): createSpanProcessor (68-109)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1): SpanAttributes (17-61)
🔇 Additional comments (3)
packages/instrumentation-openai/package.json (1)
- 55-55: Good addition: local SDK span processor for tests. Adding @traceloop/node-server-sdk as a devDependency with workspace:* is appropriate for test-only span processing and keeps runtime deps clean.
packages/instrumentation-openai/test/instrumentation.test.ts (2)
- 27-27: Import of Traceloop span processor — OK. Using createSpanProcessor from @traceloop/node-server-sdk aligns with the new gen_ai.* message transformations.
- 889-931: New gen_ai.input/output.messages assertions — LGTM. The JSON structure checks (role, parts, type, content) are clear and validate the new attributes correctly.
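For reference, a sketch of the parts-based message shape those assertions validate. This is not the PR's actual test code; the example conversation strings are invented, and only the role/parts/type/content structure reflects the attributes described in this PR.

```typescript
// Hypothetical types mirroring the gen_ai.input.messages / gen_ai.output.messages shape.
type MessagePart = { type: "text"; content: string };
type GenAiMessage = {
  role: "user" | "assistant" | "system";
  parts: MessagePart[];
};

const inputMessages: GenAiMessage[] = [
  { role: "user", parts: [{ type: "text", content: "What is OpenTelemetry?" }] },
];
const outputMessages: GenAiMessage[] = [
  { role: "assistant", parts: [{ type: "text", content: "A telemetry framework." }] },
];

// Span attributes carry the message arrays as JSON strings.
const attributes: Record<string, string> = {
  "gen_ai.input.messages": JSON.stringify(inputMessages),
  "gen_ai.output.messages": JSON.stringify(outputMessages),
};

// The kind of structural check the new assertions perform:
const parsedInput = JSON.parse(attributes["gen_ai.input.messages"]);
if (parsedInput[0].role !== "user") throw new Error("unexpected role");
if (parsedInput[0].parts[0].type !== "text") throw new Error("unexpected part type");
```

The tests in this PR parse the attribute back out of the span and assert on role, parts, type, and content in the same way.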
Important
Looks good to me! 👍
Reviewed 259191f in 40 seconds. Click for details.
- Reviewed 87 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/package.json:52
- Draft comment: Removed the dependency on @traceloop/node-server-sdk. Ensure that no other part of the codebase relies on it.
- Reason this comment was not posted: Confidence changes required = 33%, below the 50% threshold.
2. packages/instrumentation-openai/test/instrumentation.test.ts:28
- Draft comment: Consider adding a proper type (e.g. a Record or interface) for the 'attributes' parameter in transformToStandardFormat instead of using 'any' to improve type safety.
- Reason this comment was not posted: Confidence changes required = 33%, below the 50% threshold.
3. packages/instrumentation-openai/test/instrumentation.test.ts:45
- Draft comment: The removal of createSpanProcessor in favor of a simpler SimpleSpanProcessor is fine for tests. Verify that this change aligns with production behavior, especially regarding batching.
- Reason this comment was not posted: Confidence changes required = 33%, below the 50% threshold.
4. packages/instrumentation-openai/test/instrumentation.test.ts:935
- Draft comment: If transformToStandardFormat is needed in multiple tests, consider extracting it to a shared utility to reduce duplication.
- Reason this comment was not posted: Confidence changes required = 33%, below the 50% threshold.
Workflow ID: wflow_x042p15lZ48d69B2
Skipped PR review on 2264bf2 because no changed files had a supported extension. If you think this was in error, please contact us and we'll fix it right away.
Important
Looks good to me! 👍
Reviewed ae63671 in 37 seconds. Click for details.
- Reviewed 56 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/test/instrumentation.test.ts:37
- Draft comment: The added trailing commas in the object literals (e.g. in the `parts` array) and the multi-line assignment for JSON.stringify improve code consistency and readability. No changes required.
- Reason this comment was not posted: Confidence changes required = 0%, below the 50% threshold.
2. packages/instrumentation-openai/test/instrumentation.test.ts:946
- Draft comment: Reformatting the assert statement for checking message content improves readability without changing functionality. Looks good.
- Reason this comment was not posted: Confidence changes required = 0%, below the 50% threshold.
Workflow ID: wflow_K2paxJsmGjtyusWO
LLM_INPUT_MESSAGES: "gen_ai.input.messages",
LLM_OUTPUT_MESSAGES: "gen_ai.output.messages",
update to v1.37.0
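A sketch of what switching to the v1.37.0 constants looks like. The ATTR_GEN_AI_INPUT_MESSAGES / ATTR_GEN_AI_OUTPUT_MESSAGES names and "gen_ai.*" values are taken from the imports and attribute strings appearing elsewhere in this PR; they are inlined here (rather than imported from @opentelemetry/semantic-conventions) so the snippet is self-contained.

```typescript
// Assumed to match the exports of @opentelemetry/semantic-conventions ^1.37.0,
// inlined for illustration; prefer importing the real constants over hardcoding.
const ATTR_GEN_AI_INPUT_MESSAGES = "gen_ai.input.messages";
const ATTR_GEN_AI_OUTPUT_MESSAGES = "gen_ai.output.messages";

// Setting the attributes via the constants instead of string literals:
const attrs: Record<string, string> = {};
attrs[ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify([
  { role: "user", parts: [{ type: "text", content: "hi" }] },
]);
attrs[ATTR_GEN_AI_OUTPUT_MESSAGES] = JSON.stringify([
  { role: "assistant", parts: [{ type: "text", content: "hello" }] },
]);
```

Using the upstream constants keeps the local LLM_INPUT_MESSAGES / LLM_OUTPUT_MESSAGES mapping in sync with the spec if the attribute names ever change.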
Important
Looks good to me! 👍
Reviewed e9268a0 in 1 minute and 2 seconds. Click for details.
- Reviewed 220 lines of code in 6 files
- Skipped 1 file when reviewing
- Skipped posting 3 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1186
- Draft comment: Comprehensive tests for gen_ai.input/output.messages have been added and clearly check both structure and removal of original keys.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:518
- Draft comment: Consider using deepStrictEqual on parsed tool parameters instead of comparing JSON.stringify output to guard against potential key order issues.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1541
- Draft comment: Span transformation tests effectively verify both name rewriting and removal of original AI SDK attributes. Good coverage.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_tocV607bT3NnCwVH
Actionable comments posted: 1
♻️ Duplicate comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
- 1-1: Fix CI: Prettier is failing on this file. Run pnpm prettier --write.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
- 296-316: Don't overwrite existing gen_ai.input.messages when ai.prompt is present. When both ai.prompt.messages and ai.prompt exist, this block clobbers previously built GEN_AI_INPUT_MESSAGES.

- attributes[SemanticAttributes.GEN_AI_INPUT_MESSAGES] = JSON.stringify([
-   inputMessage,
- ]);
+ const existing = attributes[SemanticAttributes.GEN_AI_INPUT_MESSAGES];
+ if (existing) {
+   try {
+     const arr = JSON.parse(existing);
+     arr.push(inputMessage);
+     attributes[SemanticAttributes.GEN_AI_INPUT_MESSAGES] = JSON.stringify(arr);
+   } catch {
+     attributes[SemanticAttributes.GEN_AI_INPUT_MESSAGES] = JSON.stringify([inputMessage]);
+   }
+ } else {
+   attributes[SemanticAttributes.GEN_AI_INPUT_MESSAGES] = JSON.stringify([inputMessage]);
+ }
🧹 Nitpick comments (3)
packages/ai-semantic-conventions/package.json (1)
- 37-39: Align version range with the rest of the repo. You pinned "@opentelemetry/semantic-conventions" to 1.37.0 here while other packages use ^1.37.0. Recommend using a caret for consistency and easier upgrades.

- "@opentelemetry/semantic-conventions": "1.37.0"
+ "@opentelemetry/semantic-conventions": "^1.37.0"

packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
- 342-349: Total tokens should allow zero values. Truthiness check skips valid 0 counts. Use null/undefined checks.

- if (promptTokens && completionTokens) {
-   attributes[`${SpanAttributes.LLM_USAGE_TOTAL_TOKENS}`] =
-     Number(promptTokens) + Number(completionTokens);
- }
+ if (promptTokens != null && completionTokens != null) {
+   attributes[`${SpanAttributes.LLM_USAGE_TOTAL_TOKENS}`] =
+     Number(promptTokens) + Number(completionTokens);
+ }

packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1)
- 123-131: Redact Set-Cookie headers in HAR fixtures. Multiple .har recordings under test/recordings contain Set-Cookie headers (e.g. packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har, lines ~123-131 and ~197-205). Remove those headers or replace their values with "" (or strip headers) in committed .har fixtures.
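The suggested fixture cleanup can be sketched as a small redaction pass over the parsed HAR object. This is an illustration, not the repo's tooling; file I/O is omitted (apply it to the result of JSON.parse on the .har file), and the header names in the sample entry are invented.

```typescript
// Minimal HAR shapes needed for redaction (HAR 1.2 stores response headers
// as { name, value } pairs under log.entries[].response.headers).
type Header = { name: string; value: string };
type HarEntry = { response: { headers: Header[] } };
type Har = { log: { entries: HarEntry[] } };

// Drop every Set-Cookie header before the fixture is committed.
function redactSetCookie(har: Har): Har {
  for (const entry of har.log.entries) {
    entry.response.headers = entry.response.headers.filter(
      (h) => h.name.toLowerCase() !== "set-cookie",
    );
  }
  return har;
}

// Hypothetical sample entry mimicking a recorded response:
const har: Har = {
  log: {
    entries: [
      {
        response: {
          headers: [
            { name: "Content-Type", value: "application/json" },
            { name: "Set-Cookie", value: "session=secret" },
          ],
        },
      },
    ],
  },
};

redactSetCookie(har);
```

Running a pass like this (or the equivalent Polly.js filter hook) before committing keeps session cookies out of version control.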
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (5)
- packages/ai-semantic-conventions/package.json (1 hunks)
- packages/instrumentation-openai/package.json (1 hunks)
- packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (11 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-openai/package.json
packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
packages/*/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use workspace:* for intra-repo package dependencies in package.json
Files:
packages/instrumentation-openai/package.json
packages/ai-semantic-conventions/package.json
**/recordings/**
📄 CodeRabbit inference engine (CLAUDE.md)
Store HTTP interaction recordings for tests under recordings/ directories for Polly.js replay
Files:
packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-LLM_INPUT_MESSAGES-and-LLM_OUTPUT_MESSAGES-attributes-for-chat-completions_99541399/recording.har
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/instrumentation-openai/package.json
packages/ai-semantic-conventions/package.json
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/ai-semantic-conventions/package.json
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
🧬 Code graph analysis (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1): transformAiSdkAttributes (372-384)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1): SpanAttributes (17-59)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1): SpanAttributes (17-59)
🪛 GitHub Actions: CI
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix formatting.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
[warning] 1-1: Prettier formatting issues detected in this file. Run 'pnpm prettier --write' to fix formatting.
🔇 Additional comments (4)
packages/instrumentation-openai/package.json (1)
- 43-45: Dependency bump LGTM. Upgrading to ^1.37.0 aligns with new GEN_AI_* attributes usage.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
- 1186-1539: GEN_AI input/output messages tests LGTM. Good coverage for text, tool calls, and object responses using SemanticAttributes.
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (2)
- 1-4: Imports follow guideline. Using SpanAttributes from @traceloop/ai-semantic-conventions and SemanticAttributes for GEN_AI_* is correct.
- 1-1: Fix CI: Prettier is failing on this file. Run pnpm prettier --write.
attributes[`${SpanAttributes.LLM_PROMPTS}.0.content`] =
  promptData.prompt;
- attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`] = "user";
+ attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`] = ROLE_USER;
Avoid overwriting existing LLM_PROMPTS indices.
Writing to .0.* here can override prompts created from ai.prompt.messages. Append to the next index instead.
- attributes[`${SpanAttributes.LLM_PROMPTS}.0.content`] =
- promptData.prompt;
- attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`] = ROLE_USER;
+ const prefix = `${SpanAttributes.LLM_PROMPTS}.`;
+ const nextIndex = Math.max(
+ -1,
+ ...Object.keys(attributes)
+ .filter((k) => k.startsWith(prefix))
+ .map((k) => parseInt(k.slice(prefix.length).split(".")[0], 10))
+ .filter((n) => Number.isFinite(n)),
+ ) + 1;
+ attributes[`${SpanAttributes.LLM_PROMPTS}.${nextIndex}.content`] = promptData.prompt;
+ attributes[`${SpanAttributes.LLM_PROMPTS}.${nextIndex}.role`] = ROLE_USER;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

const prefix = `${SpanAttributes.LLM_PROMPTS}.`;
const nextIndex = Math.max(
  -1,
  ...Object.keys(attributes)
    .filter((k) => k.startsWith(prefix))
    .map((k) => parseInt(k.slice(prefix.length).split(".")[0], 10))
    .filter((n) => Number.isFinite(n)),
) + 1;
attributes[`${SpanAttributes.LLM_PROMPTS}.${nextIndex}.content`] = promptData.prompt;
attributes[`${SpanAttributes.LLM_PROMPTS}.${nextIndex}.role`] = ROLE_USER;
🤖 Prompt for AI Agents
In packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts around lines
300-303, the code unconditionally writes to
attributes[`${SpanAttributes.LLM_PROMPTS}.0.*`] which can overwrite existing
prompts; instead compute the next free numeric index by scanning existing
attribute keys that match the SpanAttributes.LLM_PROMPTS prefix, parse their
indices, pick max+1 (or 0 if none), then set
attributes[`${SpanAttributes.LLM_PROMPTS}.${idx}.content`] = promptData.prompt
and attributes[`${SpanAttributes.LLM_PROMPTS}.${idx}.role`] = ROLE_USER so the
new prompt is appended rather than overwriting index 0.
Important
Looks good to me! 👍
Reviewed 50c87c1 in 57 seconds. Click for details.
- Reviewed 398 lines of code in 14 files
- Skipped 1 file when reviewing
- Skipped posting 5 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:147
- Draft comment: Consider logging error details in the catch block of transformResponseToolCalls instead of silently swallowing errors.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:257
- Draft comment: The unescaping logic using string replace in transformPrompts may be fragile; consider a more robust JSON unescaping approach.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:359
- Draft comment: Consider making vendor prefix matching in transformVendor case-insensitive to handle variations in vendor name casing.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:410
- Draft comment: Consider adding documentation/comments to clarify the expected behavior for JSON escape sequences in prompt messages, as behavior appears to differ across tests.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:12
- Draft comment: Optionally, enhance createMockSpan with additional ReadableSpan properties for improved test realism in future tests.
- Reason this comment was not posted: Confidence changes required = 40%, below the 50% threshold.
Workflow ID: wflow_NL1si4X9yUHHRmSu
Actionable comments posted: 1
♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
- 1186-1539: Fix CI: Prettier still failing on this file. Run: pnpm prettier --write packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (or pnpm prettier --write .)
🧹 Nitpick comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- 4-6: Consolidate imports and let Prettier format. Minor style nit; combine into one import and keep spacing consistent.

-import {ATTR_GEN_AI_INPUT_MESSAGES} from "@opentelemetry/semantic-conventions";
-import {ATTR_GEN_AI_OUTPUT_MESSAGES} from "@opentelemetry/semantic-conventions";
+import {
+  ATTR_GEN_AI_INPUT_MESSAGES,
+  ATTR_GEN_AI_OUTPUT_MESSAGES,
+} from "@opentelemetry/semantic-conventions";
- 1186-1539: Also assert legacy ai.* attrs are cleared in these new tests. You already check removal in earlier suites; mirroring a few assertions here will catch regressions where GEN_AI_* attributes are added but ai.* attributes aren't removed.

@@ it("should create gen_ai.input.messages for conversation with text", () => {
@@ transformAiSdkAttributes(attributes);
+ // Original attribute should be removed
+ assert.strictEqual(attributes["ai.prompt.messages"], undefined);
});
@@ it("should create gen_ai.output.messages for text response", () => {
@@ transformAiSdkAttributes(attributes);
+ // Original attribute should be removed
+ assert.strictEqual(attributes["ai.response.text"], undefined);
});
@@ it("should create gen_ai.output.messages for tool calls", () => {
@@ transformAiSdkAttributes(attributes);
+ // Original attribute should be removed
+ assert.strictEqual(attributes["ai.response.toolCalls"], undefined);
});
@@ it("should create gen_ai.output.messages for object response", () => {
@@ transformAiSdkAttributes(attributes);
+ // Original attribute should be removed
+ assert.strictEqual(attributes["ai.response.object"], undefined);
});
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (14)
- packages/ai-semantic-conventions/package.json (1 hunks)
- packages/instrumentation-anthropic/package.json (1 hunks)
- packages/instrumentation-bedrock/package.json (1 hunks)
- packages/instrumentation-chromadb/package.json (1 hunks)
- packages/instrumentation-cohere/package.json (1 hunks)
- packages/instrumentation-langchain/package.json (1 hunks)
- packages/instrumentation-llamaindex/package.json (1 hunks)
- packages/instrumentation-openai/test/instrumentation.test.ts (2 hunks)
- packages/instrumentation-pinecone/package.json (1 hunks)
- packages/instrumentation-together/package.json (1 hunks)
- packages/instrumentation-vertexai/package.json (1 hunks)
- packages/traceloop-sdk/package.json (1 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (11 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/instrumentation-pinecone/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/ai-semantic-conventions/package.json
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/instrumentation-openai/test/instrumentation.test.ts
🧰 Additional context used
📓 Path-based instructions (5)
packages/instrumentation-*/**
📄 CodeRabbit inference engine (CLAUDE.md)
Place each provider integration in its own package under packages/instrumentation-[provider]/
Files:
packages/instrumentation-bedrock/package.json
packages/instrumentation-anthropic/package.json
packages/instrumentation-langchain/package.json
packages/instrumentation-llamaindex/package.json
packages/instrumentation-chromadb/package.json
packages/instrumentation-cohere/package.json
packages/instrumentation-together/package.json
packages/instrumentation-vertexai/package.json
packages/*/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Use workspace:* for intra-repo package dependencies in package.json
Files:
packages/instrumentation-bedrock/package.json
packages/instrumentation-anthropic/package.json
packages/instrumentation-langchain/package.json
packages/traceloop-sdk/package.json
packages/instrumentation-llamaindex/package.json
packages/instrumentation-chromadb/package.json
packages/instrumentation-cohere/package.json
packages/instrumentation-together/package.json
packages/instrumentation-vertexai/package.json
packages/traceloop-sdk/package.json
📄 CodeRabbit inference engine (CLAUDE.md)
When adding a new instrumentation package, add it to the main SDK dependencies
Files:
packages/traceloop-sdk/package.json
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (6)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/instrumentation-bedrock/package.json
packages/instrumentation-anthropic/package.json
packages/instrumentation-langchain/package.json
packages/traceloop-sdk/package.json
packages/instrumentation-llamaindex/package.json
packages/instrumentation-chromadb/package.json
packages/instrumentation-cohere/package.json
packages/instrumentation-together/package.json
packages/instrumentation-vertexai/package.json
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/package.json : When adding a new instrumentation package, add it to the main SDK dependencies
Applied to files:
packages/traceloop-sdk/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/traceloop-sdk/**/*.{ts,tsx} : For manual LLM operations, use trace.withLLMSpan from traceloop/node-server-sdk
Applied to files:
packages/traceloop-sdk/package.json
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
transformAiSdkAttributes (373-385)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes (17-59)
🪛 GitHub Actions: CI
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
[warning] 1-1: Prettier formatting issues detected in this file. Run 'prettier --write' to fix code style.
🔇 Additional comments (10)
packages/instrumentation-bedrock/package.json (1)
44-44: Version align: OK to ^1.37.0. Ensure all instrumentation packages are on the same @opentelemetry/semantic-conventions version to avoid duplication.
Use the monorepo check script from the SDK comment to confirm no leftovers on ^1.36.x.
packages/instrumentation-cohere/package.json (1)
44-44: Upgrade approved; aligns with repo-wide ^1.37.0. No other changes needed here; just verify consistency across packages.
Reuse the provided script to confirm all packages declare ^1.37.0 only.
packages/instrumentation-llamaindex/package.json (1)
43-43: Looks good: bumped to ^1.37.0. Keep all instrumentation packages on the same semantic-conventions version to prevent multiple copies.
Run the consistency script from the SDK comment.
packages/instrumentation-chromadb/package.json (1)
44-44: Approved: dependency now ^1.37.0. Nothing else to change here; just confirm repo-wide alignment.
Use the earlier script to ensure no mismatches remain.
packages/instrumentation-vertexai/package.json (1)
43-43: Approved bump to ^1.37.0. Recommend verifying that all packages (deps/devDeps/peerDeps) reference the same version.
Run the monorepo check script from the SDK comment.
packages/instrumentation-together/package.json (1)
43-43: Good to go: ^1.37.0 across instrumentation. No action beyond confirming monorepo consistency.
Execute the shared script to detect any outliers.
packages/instrumentation-anthropic/package.json (1)
44-44: Approved: semantic-conventions upgraded to ^1.37.0. Keep versions uniform to avoid duplicate installs at runtime.
Please run the consistency script noted in the SDK comment.
packages/traceloop-sdk/package.json (1)
66-66: Approve bump to @opentelemetry/semantic-conventions ^1.37.0; verify repo-wide. Bump looks good; keep versions consistent repo-wide (matches workspace:*). Automated sandbox scan failed; run locally to confirm no ^1.36.x remains:

# list declarations (run from repo root)
find . -type f -name package.json -not -path "*/node_modules/*" -not -path "./.git/*" -print0 \
  | while IFS= read -r -d '' f; do
      name=$(jq -r '.name // empty' "$f")
      for scope in dependencies devDependencies peerDependencies; do
        ver=$(jq -r --arg sc "$scope" '.[$sc]?["@opentelemetry/semantic-conventions"] // empty' "$f")
        [ -n "$ver" ] && printf '%s\t%s\t%s\t%s\n' "$name" "$scope" "$ver" "$f"
      done
    done | sort -u
# then check for stragglers:
# (pipe the output of the previous command into the grep below)
grep -E '\^1\.36\.' || echo "no ^1.36.x usages found"

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
1186-1539: LGTM: strong coverage for gen_ai.input.messages and gen_ai.output.messages paths. The scenarios (text, tools, object, mixed content, end-to-end) exercise the new transformations well.
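The gen_ai message shapes these scenarios assert can be sketched as follows. This is a minimal TypeScript sketch; the type and helper names here are hypothetical, not part of the SDK's API:

```typescript
// Sketch of the gen_ai.input.messages / gen_ai.output.messages shapes
// (names GenAiMessage, buildInputMessages, buildOutputMessages are hypothetical).
type MessagePart =
  | { type: "text"; content: string }
  | { type: "tool_call"; tool_call: { name: string; arguments: string } };

interface GenAiMessage {
  role: "user" | "assistant" | "system" | "tool";
  parts: MessagePart[];
}

// Serialize a plain user prompt the way the transformation does for ai.prompt.
function buildInputMessages(prompt: string): string {
  const message: GenAiMessage = {
    role: "user",
    parts: [{ type: "text", content: prompt }],
  };
  return JSON.stringify([message]);
}

// Serialize an assistant text response for gen_ai.output.messages.
function buildOutputMessages(text: string): string {
  const message: GenAiMessage = {
    role: "assistant",
    parts: [{ type: "text", content: text }],
  };
  return JSON.stringify([message]);
}
```

Both attributes carry JSON-stringified arrays of role/parts objects, which is what the tests decode and assert against.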
4-6: No change required; @opentelemetry/semantic-conventions is aligned.
packages/traceloop-sdk/package.json declares "@opentelemetry/semantic-conventions": "^1.37.0" and other packages use "^1.37.0".
"@opentelemetry/semantic-conventions": "^1.37.0", | ||
"@traceloop/ai-semantic-conventions": "workspace:*", | ||
"tslib": "^2.8.1" |
🧩 Analysis chain
Bump to @opentelemetry/semantic-conventions ^1.37.0 looks good; verify Node engines vs OTel 2.x.
Ensure the "engines": { "node": ">=14" } requirement is compatible with your OTel 2.x deps at runtime; if OTel requires a higher Node version in practice, bump the engines field accordingly.
Run the checks below to confirm workspace consistency and engine alignment:
🏁 Script executed:
#!/bin/bash
# Show all packages depending on @opentelemetry/semantic-conventions and their requested ranges
rg -n --glob "packages/**/package.json" '"@opentelemetry/semantic-conventions":' -C2
# Print this package's Node engine
jq -r '.engines' packages/instrumentation-langchain/package.json
# (Optional) Quick peek at OTel SDK packages requested here to cross-check Node support
rg -n --glob "packages/**/package.json" '"@opentelemetry/(sdk|sdk-trace-node|core|api)":' -C2
Length of output: 17312
Bump Node engine to >=18.19 to match OpenTelemetry JS SDK 2.x.
OpenTelemetry JS SDK 2.x raised its minimum Node requirement to ^18.19.0 || >=20.6.0; this repo has @opentelemetry/*@2.x in multiple packages while packages/instrumentation-langchain/package.json still declares "engines": { "node": ">=14" } — update engines to at least "^18.19.0 || >=20.6.0" (or ">=18.19.0") to avoid install/runtime breakage. (opentelemetry.io)
🤖 Prompt for AI Agents
In packages/instrumentation-langchain/package.json around lines 45 to 47, the
package.json still declares "engines": { "node": ">=14" } while OpenTelemetry JS
SDK 2.x requires Node >=18.19 (or ^18.19.0 || >=20.6.0); update the
"engines.node" field to at least ">=18.19.0" (or use "^18.19.0 || >=20.6.0") to
match the OpenTelemetry dependency requirement and prevent install/runtime
breakage, then run an install/test to ensure compatibility across the monorepo.
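For illustration, the suggested engines change in packages/instrumentation-langchain/package.json would look roughly like this (a sketch; choose one of the two ranges mentioned above):

```json
{
  "engines": {
    "node": "^18.19.0 || >=20.6.0"
  }
}
```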
Important
Looks good to me! 👍
Reviewed 1ec2dfa in 1 minute and 12 seconds. Click for details.
- Reviewed 43 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 6 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:144
- Draft comment:
Empty catch block in transformResponseToolCalls may hide JSON parsing errors. Consider logging error details to aid debugging. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:150
- Draft comment:
processMessageContent uses a filter for objects with a 'text' property. Ensure that all expected message content formats use 'text' (and not for example 'content') for consistency. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:256
- Draft comment:
The manual unescaping in transformPrompts (using .replace(/'/g, "'") and similar) might not handle all escape cases. Consider using a more robust JSON unescaping strategy or documenting the expected input format. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:55
- Draft comment:
Overriding span.name via (span as any).name is a known workaround for readonly properties. It would help to add a comment explaining why this approach is acceptable. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:525
- Draft comment:
Tests comparing tool parameters use JSON string equality. Consider using deepStrictEqual on the parsed objects to avoid failures due to key ordering. - Reason this comment was not posted:
Confidence changes required: 50% <= threshold 50%. None.
6. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1186
- Draft comment:
The test file is very comprehensive. For maintainability, consider splitting these tests into multiple files (e.g. separate tests for prompts, tools, tokens, and gen_ai messages). - Reason this comment was not posted:
Confidence changes required: 40% <= threshold 50%. None.
Workflow ID: wflow_qFkoNSTQPWWPaN4t
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
♻️ Duplicate comments (3)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (3)
295-323: Potential data loss: don't overwrite existing input messages. The transformation unconditionally overwrites ATTR_GEN_AI_INPUT_MESSAGES when processing AI_PROMPT, which could discard messages already set from AI_PROMPT_MESSAGES. Consider merging instead of replacing:

if (AI_PROMPT in attributes) {
  try {
    const promptData = JSON.parse(attributes[AI_PROMPT] as string);
    if (promptData.prompt && typeof promptData.prompt === "string") {
      attributes[`${SpanAttributes.LLM_PROMPTS}.0.content`] = promptData.prompt;
      attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`] = ROLE_USER;
      const inputMessage = {
        role: ROLE_USER,
        parts: [
          {
            type: TYPE_TEXT,
            content: promptData.prompt,
          },
        ],
      };
-     attributes[ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify([
-       inputMessage,
-     ]);
+     // Merge with existing messages if present
+     const existing = attributes[ATTR_GEN_AI_INPUT_MESSAGES];
+     if (existing) {
+       try {
+         const existingMessages = JSON.parse(existing);
+         existingMessages.push(inputMessage);
+         attributes[ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify(existingMessages);
+       } catch {
+         // Fallback to single message if parse fails
+         attributes[ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify([inputMessage]);
+       }
+     } else {
+       attributes[ATTR_GEN_AI_INPUT_MESSAGES] = JSON.stringify([inputMessage]);
+     }
      delete attributes[AI_PROMPT];
    }
  } catch {
    // Ignore parsing errors
  }
}
24-27: Extract magic strings as constants. Following the reviewer's feedback about avoiding magic strings that repeat more than once, these role and type strings should be extracted as constants.
The constants are already defined, which is good. Consider also defining constants for any other repeated strings if they appear elsewhere in the codebase.
300-303: Potential index collision in LLM_PROMPTS. Writing to index 0 could overwrite existing prompts from ai.prompt.messages. Calculate the next available index instead:

- attributes[`${SpanAttributes.LLM_PROMPTS}.0.content`] = promptData.prompt;
- attributes[`${SpanAttributes.LLM_PROMPTS}.0.role`] = ROLE_USER;
+ // Find next available index to avoid overwriting
+ const prefix = `${SpanAttributes.LLM_PROMPTS}.`;
+ const existingIndices = Object.keys(attributes)
+   .filter(k => k.startsWith(prefix))
+   .map(k => parseInt(k.slice(prefix.length).split('.')[0], 10))
+   .filter(n => !isNaN(n));
+ const nextIndex = existingIndices.length > 0 ? Math.max(...existingIndices) + 1 : 0;
+
+ attributes[`${prefix}${nextIndex}.content`] = promptData.prompt;
+ attributes[`${prefix}${nextIndex}.role`] = ROLE_USER;
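The next-index computation suggested here can also be written as a small standalone helper. This is a sketch under the assumption that prompt attributes use dotted keys like gen_ai.prompt.0.content; the helper name is hypothetical:

```typescript
// Hypothetical helper: find the next free index under a dotted attribute
// prefix such as "gen_ai.prompt.<n>.content", so new prompts never
// overwrite prompts already written from ai.prompt.messages.
function nextPromptIndex(
  attributes: Record<string, unknown>,
  prefix: string, // e.g. "gen_ai.prompt."
): number {
  const indices = Object.keys(attributes)
    .filter((key) => key.startsWith(prefix))
    .map((key) => parseInt(key.slice(prefix.length).split(".")[0], 10))
    .filter((n) => !Number.isNaN(n));
  return indices.length > 0 ? Math.max(...indices) + 1 : 0;
}
```

Using Math.max over parsed indices (rather than counting keys) keeps the helper correct even when each index contributes multiple keys (.content, .role).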
🧹 Nitpick comments (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
1-4: Fix formatting: run Prettier to resolve the CI failure. The CI pipeline is failing due to formatting issues. Please run pnpm prettier --write . to fix the formatting.

-import {ATTR_GEN_AI_INPUT_MESSAGES, ATTR_GEN_AI_OUTPUT_MESSAGES} from "@opentelemetry/semantic-conventions/experimental";
+import {
+  ATTR_GEN_AI_INPUT_MESSAGES,
+  ATTR_GEN_AI_OUTPUT_MESSAGES,
+} from "@opentelemetry/semantic-conventions/experimental";

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
1-5: Fix formatting: run Prettier to resolve the CI failure. The CI pipeline is failing due to formatting issues in this test file. Please run the formatter.

-import {ATTR_GEN_AI_INPUT_MESSAGES, ATTR_GEN_AI_OUTPUT_MESSAGES} from "@opentelemetry/semantic-conventions/experimental";
+import {
+  ATTR_GEN_AI_INPUT_MESSAGES,
+  ATTR_GEN_AI_OUTPUT_MESSAGES,
+} from "@opentelemetry/semantic-conventions/experimental";
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
packages/instrumentation-openai/test/instrumentation.test.ts
(2 hunks)packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
(11 hunks)packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/instrumentation-openai/test/instrumentation.test.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}
: Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes
(17-59)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1)
transformAiSdkAttributes
(372-384)packages/ai-semantic-conventions/src/SemanticAttributes.ts (1)
SpanAttributes
(17-59)
🪛 GitHub Actions: CI
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
[warning] 1-1: Prettier formatting issue detected in this file. Run 'pnpm prettier --write .' to fix.
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
[warning] 1-1: Prettier formatting issue detected in this file. Run 'pnpm prettier --write .' to fix.
🔇 Additional comments (5)
packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (3)
60-81: LGTM! Text response transformation looks good. The implementation correctly transforms text responses to both the legacy format and the new gen_ai.output.messages format with proper role assignment and cleanup.
106-150: LGTM! Tool call transformation is well-implemented. The tool call transformation correctly handles multiple function calls, properly structures them in the new format with tool_call parts, and includes error handling for malformed JSON.
152-203: LGTM! Content processing handles various formats well. The processMessageContent function robustly handles different content formats including arrays, objects, and JSON strings, with proper text extraction and fallback strategies.

packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
1186-1539: LGTM! Comprehensive test coverage for new message attributes. The test suite thoroughly covers various scenarios including:
- Multi-turn conversations with different roles
- Text and tool call responses
- Mixed content types
- Object responses preserved as JSON text
The tests properly validate the structure and content of both input and output messages in the new OpenTelemetry format.
1324-1440: Excellent test for complete workflow with tools. This test effectively validates the end-to-end transformation of a complex conversation involving tools, ensuring both input messages and tool call outputs are properly formatted.
Important
Looks good to me! 👍
Reviewed d47215d in 1 minute and 47 seconds. Click for details.
- Reviewed 85 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 4 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/test/instrumentation.test.ts:27
- Draft comment:
Avoid using an internal build path for experimental attributes. Consider using a stable export if available to prevent breakage when the library's internal structure changes. - Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The comment raises a valid concern about importing from an internal build path which could break if the library structure changes. However, I need to consider: 1) Are these attributes actually available in the suggested path? 2) Since these are experimental attributes, they may only be available in the experimental path. 3) Without being able to verify the library structure, I can't be 100% sure the suggested change would work. I don't have direct evidence that the suggested import path would actually work - these are experimental attributes that may only be available through the build path. The comment could be leading the author down the wrong path. While using build paths is generally not ideal, for experimental features it may be necessary. Without being able to verify the library structure, suggesting a change could cause more problems. The comment should be deleted since we don't have strong evidence that the suggested change would work, and experimental features may require using internal paths.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:68
- Draft comment:
The transformation logic for text responses in transformResponseText and transformResponseObject is almost identical. Consider refactoring to a shared helper to reduce duplication and ease maintenance. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:104
- Draft comment:
In transformResponseToolCalls, catching and ignoring JSON parsing errors might hide issues with malformed tool call data. Consider logging the error or at least adding a comment for future maintainers. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1187
- Draft comment:
Extensive tests for gen.ai.input.messages and gen.ai.output.messages provide great coverage of multiple scenarios including complex multi-turn conversations and tool calls. Nice work! - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None.
Workflow ID: wflow_icE3o6RFpyScILdM
Important
Looks good to me! 👍
Reviewed baeafd4 in 1 minute and 9 seconds. Click for details.
- Reviewed 41 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 6 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/test/instrumentation.test.ts:27
- Draft comment:
Good update – the import now uses the 'incubating' path. Ensure all parts of the project use the same dependency version. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%.
The comment starts with a purely informative statement about the update, which is not allowed. The second part of the comment asks the author to ensure consistency in dependency versions across the project, which is not specific enough and violates the rule against asking the author to ensure things. Therefore, this comment should be removed.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:144
- Draft comment:
In transformResponseToolCalls, consider logging the JSON parsing errors instead of silently ignoring them for easier debugging. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:288
- Draft comment:
In transformPrompts, the catch blocks silently fail. Consider adding logging or comments to document why errors are ignored. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
4. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:348
- Draft comment:
When matching the vendor prefix in transformVendor, consider normalizing the vendor string (e.g. to lower-case) to handle case variations. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:56
- Draft comment:
Overriding span.name via a type assertion is a workaround. Consider updating span types or adding a comment noting the intent. - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
6. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1188
- Draft comment:
The tests are very comprehensive. Consider adding additional edge case tests for non‐standard message content (e.g. when messages contain mixed types beyond text). - Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_JxTr0v5VG4S3aIsp
Important
Looks good to me! 👍
Reviewed 1fc523a in 1 minute and 5 seconds. Click for details.
- Reviewed 64 lines of code in 4 files
- Skipped 0 files when reviewing
- Skipped posting 5 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/rollup.config.js:17
- Draft comment:
Good: Explicit tsconfig path in typescript plugin improves build consistency. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None.
2. packages/traceloop-sdk/rollup.config.js:38
- Draft comment:
Good: Explicit tsconfig in dts plugin ensures proper type definitions. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None.
3. packages/traceloop-sdk/src/lib/tracing/tracing.ts:1
- Draft comment:
Nice: Consolidated Context import from '@opentelemetry/api' avoids deep path dependency. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None.
4. packages/traceloop-sdk/tsconfig.build.json:12
- Draft comment:
Minor: Consider adding a newline at EOF for style consistency. - Reason this comment was not posted:
Confidence changes required: 33% <= threshold 50%. None.
5. packages/traceloop-sdk/tsconfig.test.json:7
- Draft comment:
Good: Inclusion of 'mocha' and 'node' types is appropriate for test support. - Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%. None.
Workflow ID: wflow_4lfhYfPYTMam8C1z
Important
Looks good to me! 👍
Reviewed 9b536b3 in 1 minute and 7 seconds. Click for details.
- Reviewed 259 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-ATTR_GEN_AI_INPUT_MESSAGES-and-ATTR_GEN_AI_OUTPUT_MESSAGES-attributes-for-chat_1049053971/recording.har:103
- Draft comment:
This new HAR recording fixture is comprehensive. Please ensure that cookie values (e.g., __cf_bm, _cfuvid) and other header data are either sanitized or confirmed as safe for inclusion in tests. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%.
The comment is asking the PR author to ensure that cookie values and header data are sanitized or confirmed as safe. This falls under asking the author to ensure something is done, which is against the rules.
2. packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-ATTR_GEN_AI_INPUT_MESSAGES-and-ATTR_GEN_AI_OUTPUT_MESSAGES-attributes-for-chat_1049053971/recording.har:87
- Draft comment:
The request's postData 'text' field correctly includes a JSON payload with a chat message. Verify that this fixture fully exercises the mapping to gen.ai.input/messages as intended by the new feature. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%.
The comment is asking the PR author to verify that a fixture fully exercises a mapping, which is a form of asking for confirmation or verification. This violates the rule against asking the PR author to confirm their intention or ensure behavior is intended.
3. packages/instrumentation-openai/test/recordings/Test-OpenAI-instrumentation_1770406427/should-set-ATTR_GEN_AI_INPUT_MESSAGES-and-ATTR_GEN_AI_OUTPUT_MESSAGES-attributes-for-chat_1049053971/recording.har:96
- Draft comment:
The response content 'text' field is encoded in base64 and appears as a JSON array. Confirm that the instrumentation correctly handles this format when mapping to gen.ai.output/messages. - Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%.
The comment is asking the author to confirm that the instrumentation correctly handles a specific format. This falls under asking the author to ensure behavior is intended, which is against the rules. Therefore, this comment should be removed.
Workflow ID: wflow_yku5VBBxwyB6JEVV
This reverts commit 1fc523a.
Important
Looks good to me! 👍
Reviewed be4725a in 1 minute and 34 seconds. Click for details.
- Reviewed 91 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:341
- Draft comment: Potential bug: the condition 'if (promptTokens && completionTokens)' will fail if either token value is 0. Use an explicit check (e.g. promptTokens !== undefined && completionTokens !== undefined) to correctly sum zeros.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:144
- Draft comment: Consider logging errors in the catch block of transformResponseToolCalls to aid debugging rather than silently ignoring parse errors.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts:257
- Draft comment: The regex used to unescape JSON strings in transformPrompts is complex. Consider adding inline documentation or simplifying it to improve clarity.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_pj0NthdEdZG718VG
Actionable comments posted: 1
🧹 Nitpick comments (3)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (3)
4-8: Avoid importing experimental attrs via an internal path; re-export from ai-semantic-conventions

@opentelemetry/semantic-conventions/build/src/experimental_attributes is not a public API and can change. Re-export these in @traceloop/ai-semantic-conventions and import from there to keep call sites stable. Apply this diff (after adding a re-export as shown below):

-import {
-  ATTR_GEN_AI_INPUT_MESSAGES,
-  ATTR_GEN_AI_OUTPUT_MESSAGES,
-} from "@opentelemetry/semantic-conventions/build/src/experimental_attributes";
+import {
+  ATTR_GEN_AI_INPUT_MESSAGES,
+  ATTR_GEN_AI_OUTPUT_MESSAGES,
+} from "@traceloop/ai-semantic-conventions/experimental";

Add this re-export (outside this file):

// packages/ai-semantic-conventions/src/experimental.ts
export {
  ATTR_GEN_AI_INPUT_MESSAGES,
  ATTR_GEN_AI_OUTPUT_MESSAGES,
} from "@opentelemetry/semantic-conventions/build/src/experimental_attributes";
1324-1441: Add test: output with both text and tool_call parts

Models can return text plus tool-calls in the same turn. Add a test asserting both parts are emitted in order within a single assistant message.
1475-1536: Mixed content: preserve non-text parts or explicitly redact

OTEL semantic-conventions v1.37.0 does not define an "image" message part for gen_ai.*.messages, so flattening mixed content to text-only is spec-compatible — still choose one:
- If omission is intentional: add a short comment in the implementation/tests stating that non-text parts (images) are intentionally omitted/redacted for privacy/size. Location: packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1475-1536.
- If not: emit a documented custom "image" part and update the tests (suggested example fields: type: 'image', url, mime_type, alt_text/caption, content_hash).
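If the custom-part route is taken, a minimal TypeScript sketch could look like the following. The "image" part shape and every field name here follow the reviewer's suggestion; none of them are defined by the OTEL GenAI conventions.

```typescript
// Hypothetical custom "image" message part. These field names are the
// reviewer's suggestion, not part of OTEL semantic conventions v1.37.0.
type ImagePart = {
  type: "image";
  url?: string; // only kept when the image is a remote reference
  mime_type?: string;
  alt_text?: string;
  content_hash?: string;
};

// Build an image part from AI SDK-style image content, dropping raw
// bytes/data URLs so span attributes stay small and privacy-safe.
function toImagePart(content: { image: string; mimeType?: string }): ImagePart {
  const isRemote = content.image.startsWith("http");
  return {
    type: "image",
    url: isRemote ? content.image : undefined,
    mime_type: content.mimeType,
  };
}

const part = toImagePart({
  image: "https://example.com/cat.png",
  mimeType: "image/png",
});
console.log(JSON.stringify(part));
```

Inline or base64 images yield a part with no url, which makes the redaction explicit rather than silent.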
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/instrumentation-openai/test/instrumentation.test.ts (2 hunks)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (11 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts
- packages/instrumentation-openai/test/instrumentation.test.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}:
- Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
- For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1): transformAiSdkAttributes (367-379)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1): SpanAttributes (17-59)
🔇 Additional comments (2)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
1188-1441: Solid coverage for gen_ai input/output messages

New tests exercise text, tool-calls, objects, and end-to-end scenarios. Looks good.

1247-1264: No change needed — use role="assistant". OpenTelemetry Semantic Conventions v1.37.0 use "assistant" for model responses; the test is correct.
// Check first tool call
assert.strictEqual(outputMessages[0].parts[0].type, "tool_call");
assert.strictEqual(
  outputMessages[0].parts[0].tool_call.name,
  "getWeather",
);
assert.strictEqual(
  outputMessages[0].parts[0].tool_call.arguments,
  '{"location": "San Francisco", "unit": "celsius"}',
);

// Check second tool call
assert.strictEqual(outputMessages[0].parts[1].type, "tool_call");
assert.strictEqual(
  outputMessages[0].parts[1].tool_call.name,
  "findRestaurants",
);
assert.strictEqual(
  outputMessages[0].parts[1].tool_call.arguments,
  '{"location": "San Francisco", "cuisine": "italian"}',
);
🧩 Analysis chain
Validate schema for tool_call parts
Confirm that the spec uses parts[].type === "tool_call" with nested fields named tool_call.name and tool_call.arguments (stringified JSON), and whether id is required.
🌐 Web query:
In OTEL Semantic Conventions v1.37.0, what is the exact schema for gen_ai.*.messages[].parts[] of type "tool_call"? Field names ("name", "arguments"), types, and required keys?
💡 Result:
Summary (OTEL Semantic Conventions v1.37.0)
- Part object type: "tool_call"
- JSON fields (exact names and types)
- type — string. The part type (value "tool_call"). (required)
- id — string. The tool-call identifier (e.g. "call_..."). (present in examples; treated as the tool call id)
- name — string. The tool/function name to call (e.g. "get_weather"). (required)
- arguments — AnyValue (JSON object / OTLP AnyValue). The function/tool arguments (e.g. {"location":"Paris"}). (opt‑in / may be omitted)
Sources: OpenTelemetry gen-ai examples & attribute registry (shows tool_call parts with id, name, arguments and that arguments are AnyValue). [1][2][3]
References
[1] OpenTelemetry — Gen AI semantic conventions (examples showing "tool_call" with id, name, arguments).
[2] OpenTelemetry — Gen AI events / gen_ai.input.messages and gen_ai.output.messages pages (example tool_call part).
[3] microsoft/opentelemetry-semantic-conventions — gen-ai events doc (Function object: name string required; arguments AnyValue).
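Based on that summary, the part shape and a tolerant reader can be sketched in TypeScript. This illustrates the schema as summarized above, not spec text; the type and helper names are ours, not from the spec or the SDK.

```typescript
// Sketch of a gen_ai.*.messages[].parts[] entry of type "tool_call",
// following the v1.37.0 field summary above. Names are illustrative.
type ToolCallPart = {
  type: "tool_call";
  id?: string; // tool-call identifier; present in spec examples
  name: string; // tool/function name (required)
  arguments?: unknown; // AnyValue: an object, or stringified JSON in practice
};

// Normalize arguments to an object so tests can deep-compare either form.
function normalizeArguments(
  part: ToolCallPart,
): Record<string, unknown> | undefined {
  const { arguments: args } = part;
  if (args === undefined) return undefined;
  return typeof args === "string"
    ? (JSON.parse(args) as Record<string, unknown>)
    : (args as Record<string, unknown>);
}

const fromString: ToolCallPart = {
  type: "tool_call",
  id: "call_123",
  name: "get_weather",
  arguments: '{"location":"Paris"}',
};
const fromObject: ToolCallPart = {
  type: "tool_call",
  name: "get_weather",
  arguments: { location: "Paris" },
};

console.log(JSON.stringify(normalizeArguments(fromString)));
console.log(JSON.stringify(normalizeArguments(fromObject)));
```

Either representation normalizes to the same object, which is the object-level comparison the review recommends for test assertions.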
Align tests with OTEL gen_ai v1.37.0 tool_call schema
- Change the assertions in packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (lines 1301–1321): tool_call.arguments is an OTLP AnyValue (structured JSON/object) and may be omitted — do not assert that it is stringified JSON; compare it as an object (or accept both object and string by parsing before the assert). tool_call.name (string) is required; tool_call.id is optional and should not be assumed required.
🤖 Prompt for AI Agents
In packages/traceloop-sdk/test/ai-sdk-transformations.test.ts around lines 1301
to 1321, the test currently asserts tool_call.arguments is a stringified JSON
and assumes fields that may be optional; update the assertions to only require
tool_call.name (string) and to handle tool_call.arguments as either an object
(OTLP AnyValue) or a string by: if arguments exists, parse/stringify as needed
and compare as an object (deep-equal), or accept both formats; do not assert
tool_call.id (treat it as optional) and allow arguments to be absent instead of
asserting a string value.
Skipped PR review on 6fcf577 because no changed files had a supported extension. If you think this was in error, please contact us and we'll fix it right away.
Important
Looks good to me! 👍
Reviewed 94e732b in 1 minute and 3 seconds. Click for details.
- Reviewed 1665 lines of code in 1 file.
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1
- Draft comment: The entire test file has been removed. Ensure that the test coverage for the transformation functions, including the new gen.ai.input and gen.ai.output messages, is maintained (either by migrating these tests or adding equivalent tests elsewhere).
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_6RDQO0Qi9sEWAWiI
Important
Looks good to me! 👍
Reviewed 2d76807 in 52 seconds. Click for details.
- Reviewed 979 lines of code in 1 file.
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/instrumentation-openai/test/instrumentation.test.ts:1
- Draft comment: The entire legacy test suite has been removed. Ensure that new tests sufficiently cover the functionality for gen.ai.input.messages and gen.ai.output.messages now that this file is gone.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/instrumentation-openai/test/instrumentation.test.ts:15
- Draft comment: The helper function 'transformToStandardFormat' that standardized messages into ATTR_GEN_AI_INPUT_MESSAGES and ATTR_GEN_AI_OUTPUT_MESSAGES is removed. Verify that its functionality is now covered either in production or via alternative tests.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_dISzfsxsWmQwU78z
Important
Looks good to me! 👍
Reviewed 6ea013e in 59 seconds. Click for details.
- Reviewed 223 lines of code in 1 file.
- Skipped 0 files when reviewing.
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:1
- Draft comment: Legacy integration test suite removed. Confirm that new tests cover the new standardized gen.ai.input.messages and gen.ai.output.messages functionality, ensuring regression coverage for telemetry attributes.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. packages/traceloop-sdk/test/ai-sdk-integration.test.ts:40
- Draft comment: The file also removed the custom PollyJS recording setup and span validations for both OpenAI and Google providers. Ensure that equivalent tests exist to validate the new telemetry fields and instrumentations.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_kdRaM6sLfnLoeMbc
Important
Looks good to me! 👍
Reviewed 43de098 in 52 seconds. Click for details.
- Reviewed 1665 lines of code in 1 file.
- Skipped 0 files when reviewing.
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/test/ai-sdk-transformations.test.ts:1
- Draft comment: Entire test file removed. Ensure that all transformation cases—including the new gen.ai input/output messages—are still covered by alternate tests to prevent regression.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_Ieexc5Nwpyp6bhHE
Actionable comments posted: 0
♻️ Duplicate comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
1266-1322: Fix assertion: tool_call.arguments should be AnyValue (object), not strictly a JSON string.

OTel gen_ai v1.37.0 models tool_call.arguments as AnyValue; implementations may emit an object (preferred) or stringified JSON. The current test enforces a string and will be brittle.

Apply this diff to accept both formats:

  // Check first tool call
  assert.strictEqual(outputMessages[0].parts[0].type, "tool_call");
  assert.strictEqual(
    outputMessages[0].parts[0].tool_call.name,
    "getWeather",
  );
- assert.strictEqual(
-   outputMessages[0].parts[0].tool_call.arguments,
-   '{"location": "San Francisco", "unit": "celsius"}',
- );
+ {
+   const args0 = outputMessages[0].parts[0].tool_call.arguments;
+   const obj0 = typeof args0 === "string" ? JSON.parse(args0) : args0;
+   assert.deepStrictEqual(obj0, {
+     location: "San Francisco",
+     unit: "celsius",
+   });
+ }

  // Check second tool call
  assert.strictEqual(outputMessages[0].parts[1].type, "tool_call");
  assert.strictEqual(
    outputMessages[0].parts[1].tool_call.name,
    "findRestaurants",
  );
- assert.strictEqual(
-   outputMessages[0].parts[1].tool_call.arguments,
-   '{"location": "San Francisco", "cuisine": "italian"}',
- );
+ {
+   const args1 = outputMessages[0].parts[1].tool_call.arguments;
+   const obj1 = typeof args1 === "string" ? JSON.parse(args1) : args1;
+   assert.deepStrictEqual(obj1, {
+     location: "San Francisco",
+     cuisine: "italian",
+   });
+ }
In OTEL Semantic Conventions v1.37.0, what is the exact schema for gen_ai.*.messages parts of type "tool_call"? Is "arguments" specified as AnyValue/object? Is "id" required?
🧹 Nitpick comments (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (1)
1324-1440: Nice end-to-end test; consider DRYing argument checks.

Optional: extract a tiny helper to coerce string-or-object arguments to an object to reuse across tests.
+const toObj = (v: unknown) => (typeof v === "string" ? JSON.parse(v) : v);
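A runnable sketch of that helper, with a usage example (the helper name comes from the suggestion above; the sample data is ours):

```typescript
// Coerce string-or-object tool_call arguments to an object so tests can
// deep-compare either encoding. Same shape as the one-liner suggested above.
const toObj = (v: unknown): unknown =>
  typeof v === "string" ? JSON.parse(v) : v;

// Both encodings of the same arguments compare equal after coercion.
const asString = '{"location": "San Francisco", "unit": "celsius"}';
const asObject = { location: "San Francisco", unit: "celsius" };

console.log(JSON.stringify(toObj(asString)));
console.log(JSON.stringify(toObj(asObject)));
```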
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/instrumentation-openai/test/instrumentation.test.ts (2 hunks)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts (0 hunks)
- packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2 hunks)
💤 Files with no reviewable changes (1)
- packages/traceloop-sdk/test/ai-sdk-integration.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/instrumentation-openai/test/instrumentation.test.ts
🧰 Additional context used
📓 Path-based instructions (2)
packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Import AI/LLM semantic attribute constants from @traceloop/ai-semantic-conventions rather than hardcoding strings
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
packages/traceloop-sdk/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
packages/traceloop-sdk/**/*.{ts,tsx}:
- Use the provided decorators (@workflow, @task, @agent) for workflow/task/agent spans instead of re-implementing them
- For manual LLM operations, use trace.withLLMSpan from @traceloop/node-server-sdk
Files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/{instrumentation-*,traceloop-sdk}/**/*.{ts,tsx} : Import AI/LLM semantic attribute constants from traceloop/ai-semantic-conventions rather than hardcoding strings
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/ai-semantic-conventions/src/SemanticAttributes.ts : Define all AI/LLM span attribute constants in packages/ai-semantic-conventions/src/SemanticAttributes.ts
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
📚 Learning: 2025-08-24T22:08:07.023Z
Learnt from: CR
PR: traceloop/openllmetry-js#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T22:08:07.023Z
Learning: Applies to packages/instrumentation-*/**/*.{ts,tsx} : Instrumentations must create spans with appropriate AI/LLM semantic attributes for calls they wrap
Applied to files:
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts
🧬 Code graph analysis (1)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (2)
- packages/traceloop-sdk/src/lib/tracing/ai-sdk-transformations.ts (1): transformAiSdkAttributes (367-379)
- packages/ai-semantic-conventions/src/SemanticAttributes.ts (1): SpanAttributes (17-59)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build and test
🔇 Additional comments (5)
packages/traceloop-sdk/test/ai-sdk-transformations.test.ts (5)
1188-1238: LGTM: solid coverage for gen_ai.input.messages with text parts.

1240-1264: LGTM: gen_ai.output.messages for assistant text response.

1442-1473: LGTM: object response serialized as text part; consistent with current transform.

1475-1536: LGTM: complex multi-turn content flattened to text parts.

4-7: Avoid deep import of OTel experimental constants; re-export via @traceloop/ai-semantic-conventions.

Deep-importing "@opentelemetry/semantic-conventions/build/src/experimental_attributes" is brittle — in OTel v1.37.0 these GenAI attributes are exposed via the incubating entry point (e.g. @opentelemetry/semantic-conventions/incubating). Re-export ATTR_GEN_AI_INPUT_MESSAGES and ATTR_GEN_AI_OUTPUT_MESSAGES from @traceloop/ai-semantic-conventions and import them here.

-import {
-  ATTR_GEN_AI_INPUT_MESSAGES,
-  ATTR_GEN_AI_OUTPUT_MESSAGES,
-} from "@opentelemetry/semantic-conventions/build/src/experimental_attributes";
+import {
+  ATTR_GEN_AI_INPUT_MESSAGES,
+  ATTR_GEN_AI_OUTPUT_MESSAGES,
+} from "@traceloop/ai-semantic-conventions";

If the re-exports are missing, add re-exports in packages/ai-semantic-conventions that forward these symbols from @opentelemetry/semantic-conventions/incubating.
Important
Add standardized GenAI input/output message attributes and update tests and dependencies for improved AI SDK functionality.
- Adds gen.ai.input.messages and gen.ai.output.messages transformations in ai-sdk-transformations.ts.
- Updates tests in instrumentation.test.ts.
- Bumps @opentelemetry/semantic-conventions to ^1.37.0 in multiple package.json files.
- Updates test recordings for instrumentation.test.ts.

This description was created by Ellipsis for 43de098. You can customize this summary. It will automatically update as commits are pushed.
Summary by CodeRabbit
New Features
Tests
Chores