fix(langchain): incorrect role on generations #3465
Conversation
Walkthrough: derives the chat-response role from the message type rather than the generation type.

Sequence diagram:

```mermaid
sequenceDiagram
participant Caller as Instrumentation code
participant set_chat_response as set_chat_response
participant RoleMap as _message_type_to_role
participant ToolHandler as _set_chat_tool_calls
    Caller->>set_chat_response: call with generation (may include message, tool_calls, additional_kwargs)
    alt message exists
        set_chat_response->>RoleMap: _message_type_to_role(generation.message.type)
        RoleMap-->>set_chat_response: role
        set_chat_response->>set_chat_response: set prefix.role from role
    else no message
        set_chat_response-->>set_chat_response: skip message-derived role
    end
    alt tool_calls or function_call present
        set_chat_response->>ToolHandler: _set_chat_tool_calls(generation.tool_calls / function_call)
        ToolHandler-->>set_chat_response: tool call attrs set on span
    end
    set_chat_response-->>Caller: return (span attrs include role and any tool call attrs)
```
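The flow in the diagram can be sketched roughly as follows. This is a hypothetical illustration: the function names follow the diagram, but the attribute prefix and the exact role mapping are assumptions, not the actual `span_utils.py` implementation.

```python
# Hypothetical sketch of the role-derivation flow shown in the diagram.

def _message_type_to_role(message_type: str) -> str:
    # Map LangChain message types onto OpenAI-style roles.
    mapping = {"human": "user", "ai": "assistant", "system": "system", "tool": "tool"}
    return mapping.get(message_type, message_type)

def set_chat_response(span_attributes: dict, generation) -> None:
    prefix = "gen_ai.completion.0"  # illustrative attribute prefix
    message = getattr(generation, "message", None)
    if message is not None:
        # Derive the role from the message type, not the generation type.
        span_attributes[prefix + ".role"] = _message_type_to_role(message.type)
        span_attributes[prefix + ".content"] = message.content
    # When no message is present, no message-derived role is set.
```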
Important

Looks good to me! 👍

Reviewed everything up to 2a6c479 in 1 minute and 13 seconds.

- Reviewed 41 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
1. `packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/span_utils.py:211`
   - Draft comment: Good removal of the incorrect use of `generation.type`. The role should be derived from `generation.message.type`, which avoids misclassification.
   - Reason this comment was not posted: comment looked like it was already resolved.
2. `packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/span_utils.py:263`
   - Draft comment: The new block sets the role attribute using `generation.message.type` after handling tool calls, with a fallback to "assistant". This looks correct and aligns with the intended design.
   - Reason this comment was not posted: comment did not seem useful (confidence that it is useful: 0%, below the 50% threshold). It is purely informative and does not provide actionable feedback.
nirga
left a comment
Thanks @ianchi - can you add a test for this?
...pentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/span_utils.py
Hi @nirga, I added a test and aligned the "completions" case to OpenAI (no role at all).
@ianchi it should always be with a role. Can you walk me through what you are looking to fix?
I added a detailed explanation of the error, with a screenshot and code snippets, in this comment. Please let me know if any additional info is required. Regarding this:

As far as I understand the OpenAI implementation, a plain "completion" (not a chat completion) does NOT add a "role" to the span; in the case of "chat completions" it does. From your previous comment I understood that you wanted to mirror this behaviour in the LangChain instrumentation as well, and thus I also modified the "textual response" case.
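The distinction described above can be sketched as follows. The attribute keys and the shape of `choice` are illustrative assumptions, not the exact OpenAI instrumentation code.

```python
# Illustrative sketch: plain completions get no role attribute,
# chat completions take the role from the choice's message.

def set_completion_attrs(span_attributes: dict, choice: dict, is_chat: bool) -> None:
    prefix = "gen_ai.completion.0"
    if is_chat:
        # Chat completions carry an explicit role on each choice.
        span_attributes[prefix + ".role"] = choice["message"]["role"]
        span_attributes[prefix + ".content"] = choice["message"]["content"]
    else:
        # Plain text completions have no role; only the text is recorded.
        span_attributes[prefix + ".content"] = choice["text"]
```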
Hi @nirga, any comments on this?
@nirga Friendly ping. We are still getting all responses with the wrong role. Thanks
```python
set_chat_response(span, result)

prefix = f"{GenAIAttributes.GEN_AI_COMPLETION}.0"
assert prefix + ".role" not in span.attributes
```
it should always have a role - it should be "assistant" in this case
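A minimal sketch of the behaviour suggested here, using a hypothetical helper (not the actual library function): every completion gets a role, with "assistant" as the default when no message is present.

```python
def completion_role(generation) -> str:
    # Hypothetical helper: always return a role, defaulting to "assistant"
    # for plain text completions that carry no message.
    message = getattr(generation, "message", None)
    if message is not None:
        return {"human": "user", "ai": "assistant"}.get(message.type, message.type)
    return "assistant"
```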
This fixes an error introduced by #3138 that records the wrong role: it uses the generation type instead of the message type. The generation type is only for serialization and is NOT "ai"/"human".
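The mix-up can be illustrated with stand-in classes that mimic the relevant LangChain fields: `ChatGeneration.type` is a serialization literal (`"ChatGeneration"`), while the message's `type` (e.g. `"ai"` for an AI message) is what the role should be derived from. These are simplified stand-ins, not the real `langchain_core` classes.

```python
from dataclasses import dataclass

# Stand-ins mimicking the relevant langchain_core fields.

@dataclass
class AIMessage:
    content: str
    type: str = "ai"  # message type: "ai", "human", "system", ...

@dataclass
class ChatGeneration:
    message: AIMessage
    type: str = "ChatGeneration"  # serialization tag, NOT a role source

gen = ChatGeneration(message=AIMessage(content="hi"))
# Bug from #3138: deriving the role from gen.type yields "ChatGeneration",
# which maps to no valid role. The fix uses gen.message.type instead.
assert gen.type != "ai"
assert gen.message.type == "ai"
```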
Important
Fixes incorrect role assignment in `set_chat_response` by using the message type instead of the generation type in `span_utils.py`.

- `set_chat_response` in `span_utils.py` now derives the role from `generation.message.type` instead of `generation.type`.
- The message-derived role is skipped when `generation.message` is not present.