[Bugfix] Multiple fixes for gpt-oss Chat Completion prompting #28729
Conversation
Code Review
This pull request introduces a comprehensive set of fixes for prompting gpt-oss models for Chat Completion, addressing several issues related to tool calls, reasoning, and multi-turn conversations. The changes significantly improve the model's performance on function calling benchmarks. The addition of a thorough test suite to validate these fixes is a great contribution. The code is well-structured, and the fixes in vllm/entrypoints/harmony_utils.py are robust. I have one suggestion to improve the code by removing a side effect, which will enhance maintainability.
Before this fix, when running the BFCL multi_turn and single_turn suites (4,441 tests) I received 8 HarmonyErrors. So, I can also confidently say that by fixing these issues in how we were prompting the gpt-oss-120b model, we are able to reduce how often it fails to follow its Harmony format in its responses.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
Are we able to get this merged before v0.11.1? Looks very critical.
Can this be merged please 🙏?
I'm going to make some minor changes to address the review feedback from the two code review bots and to isolate this change entirely from the Responses API path, so that we can merge this quicker without worrying about it impacting Responses handling in any way. I'll get that done, re-test, and push the updates today.
Note that even without this change, the gpt-oss support on the latest merged main branch is substantially better than what v0.11.0 or v0.10.2 shipped with. I don't know the timing or cutoff for fixes in v0.11.1, but even if this does not make that release, there will still be measurably better gpt-oss support in v0.11.1. With that said, I support getting this in if the window for additional fixes is still open for v0.11.1.
Ok, from my perspective I've addressed the review bot comments, entirely isolated these changes to only impact the Chat Completions path, and expanded the existing test_harmony_utils.py test suite so that it tests both the Chat Completions and Responses paths with a common set of tests while moving logic specific to each into its own test class. I also re-ran my BFCL evaluation on gpt-oss-120b with the latest changes to ensure the numbers are approximately equal to before (approximately, because there is a bit of run-to-run variance in this test suite: even though the inputs are identical in every run, the outputs are not entirely deterministic). I'll keep an eye out for any other review feedback or comments required to get this in.
While I agree that tool calling and various other GPT-OSS quirks had already improved from basically unusable to a usable state before this PR, this should still be designated as a milestone item for v0.11.1.
/cc @yeqcharlotte PTAL.
alecsolder
left a comment
This is great! I think we're finally landing on an implementation that takes the learnings from all of us over the past few months :)
```python
    for item in content
    if isinstance(item, dict) and item.get("type") == "text"
)
if content:
```
what is this case for? If tool calls is populated, I can understand reasoning being populated, but not this (which would imply it would be more like a "final" message)
I think you're right that this content more closely matches the final content, but you made me double-check how we generate the outputs, and we have some issues in the mapping of reasoning and tool call outputs when generating Chat Completion responses. In my head, the content here matched commentary channel responses from tool calls, while the reasoning key matched analysis channel responses. However, that's not necessarily the case: it looks like there are some edge cases in our reasoning and tool call parsers for gpt-oss models that can sometimes end up with this, sometimes miss analysis or commentary messages entirely, or most often end up with analysis messages as reasoning and final messages as the tool call content.
In practice, this still seems to work quite well as-is. But, we can likely pick up a bit more accuracy by doing a once-over of Chat Completions reasoning/tool parsers and chat_completion_*_generator paths for harmony models to ensure streaming/non-streaming cases are all aligned with exactly how we parse input. I probably need a new test suite here that starts with Harmony messages, goes through our response parsing to Chat Completion responses, sends those back in as Chat Completion requests, and ensures the Harmony messages generated from that match what we originally sent.
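To make the kind of round-trip test described here concrete, here is a minimal pytest-style sketch. The helpers `harmony_output_to_chat_message` and `chat_request_to_harmony` are hypothetical stand-ins for the response-generation and request-parsing paths being discussed, not vLLM's actual function names.

```python
# Minimal round-trip sketch; `harmony_output_to_chat_message` and
# `chat_request_to_harmony` are hypothetical placeholders for the real
# conversion paths and would need to be wired to the actual code under test.
from openai_harmony import Message, Role


def test_round_trip_preserves_channels_and_recipients():
    # Model output for a tool-call turn: analysis reasoning plus a
    # commentary-channel tool call addressed to functions.get_weather.
    original = [
        Message.from_role_and_content(
            Role.ASSISTANT, "Need the weather before answering."
        ).with_channel("analysis"),
        Message.from_role_and_content(Role.ASSISTANT, '{"city": "Tokyo"}')
        .with_channel("commentary")
        .with_recipient("functions.get_weather")
        .with_content_type("json"),
    ]

    # Harmony output -> Chat Completion assistant message -> Harmony input.
    chat_msg = harmony_output_to_chat_message(original)
    restored = chat_request_to_harmony([chat_msg])

    # The restored prompt should put the same text on the same channels and
    # recipients, so the model sees the structure it originally produced.
    assert [m.channel for m in restored] == [m.channel for m in original]
    assert [m.recipient for m in restored] == [m.recipient for m in original]
```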
Part of what worries me here is on the response generation side, where it makes assumptions about which messages to map as reasoning vs content solely based on the order of the output messages, without checking any channels:
vllm/vllm/entrypoints/harmony_utils.py
Lines 522 to 534 in 63fed55
```python
if len(output_msgs) == 0:
    # The generation has stopped during reasoning.
    reasoning = parser.current_content
    final_content = None
elif len(output_msgs) == 1:
    # The generation has stopped during final message.
    reasoning = output_msgs[0].content[0].text
    final_content = parser.current_content
else:
    reasoning_msg = output_msgs[:-1]
    final_msg = output_msgs[-1]
    reasoning = "\n".join([msg.content[0].text for msg in reasoning_msg])
    final_content = final_msg.content[0].text
```
Then, we overwrite the content from that if using a tool call parser at
vllm/vllm/entrypoints/openai/serving_chat.py
Lines 1340 to 1351 in 63fed55
```python
tool_call_info = tool_parser.extract_tool_calls(
    "",
    request=request,
    token_ids=token_ids,  # type: ignore
)
content = tool_call_info.content
message = ChatMessage(
    role=role,
    reasoning=reasoning,
    content=content,
    tool_calls=tool_call_info.tool_calls,
)
```
The tool call parser at https://github.com/vllm-project/vllm/blob/63fed5550609b96b578d2512aefced09efe76e1e/vllm/entrypoints/openai/tool_parsers/openai_tool_parser.py only looks at Harmony messages with function recipients or messages sent to the final channel.
So, we may end up mixing analysis and commentary channel messages in reasoning or even dropping one of those entirely when making the Chat Completion response. To get this request mapping to Harmony messages exactly right, I'll need to clean up and pair this with some adjustments to the Harmony message to response mapping path so I can do this deterministically.
That will take a bit of time to match up, add some round-trip tests to ensure it doesn't deviate in the future, and re-run the eval. I can tackle this next week, either as part of this or a follow-up based on timing preferences.
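For illustration, a minimal sketch of what a channel-based split could look like instead of the order-based mapping quoted above; it assumes parsed Harmony `Message` objects exposing `channel`, `recipient`, and `content`, and is not the actual vLLM implementation.

```python
def split_harmony_output(output_msgs):
    """Illustrative sketch: classify parsed Harmony messages by channel
    rather than by position in the output list."""
    reasoning_parts: list[str] = []
    final_parts: list[str] = []
    tool_call_msgs = []

    for msg in output_msgs:
        text = msg.content[0].text
        if msg.channel == "analysis":
            reasoning_parts.append(text)
        elif msg.channel == "final":
            final_parts.append(text)
        elif msg.channel == "commentary" and (msg.recipient or "").startswith(
            "functions."
        ):
            tool_call_msgs.append(msg)
        else:
            # Plain commentary (e.g. a tool-call preamble) is bucketed as
            # reasoning here; deciding its proper home is the open question.
            reasoning_parts.append(text)

    reasoning = "\n".join(reasoning_parts) or None
    final_content = "\n".join(final_parts) or None
    return reasoning, final_content, tool_call_msgs
```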
So, there were some bugs on the Chat Completion response generation side around what I outlined above. I vastly expanded the new tests in test_serving_chat.py to verify various round-trips of Chat Completion requests to Harmony messages and Harmony outputs to Chat Completion responses including multi-turn variants that then send those Chat Completion response messages back in to future Chat Completion requests.
```python
analysis_msg = Message.from_role_and_content(
    Role.ASSISTANT, reasoning_content
)
analysis_msg = analysis_msg.with_channel("analysis")
```
This runs counter to how I feel it should work, but I think it is correct.
Trying to understand OpenAI's implementation: when reasoning is output to the commentary channel, that signals the reasoning should be shown to the user as-is (versus needing a summarization), i.e., they want to provide the actual reasoning for the tool call so the user can approve it or not, rather than only a summary. But when it is provided as input again, that distinction doesn't matter, so they just always change it back to analysis so that it can be dropped.
```python
msg = Message.from_role_and_content(Role.ASSISTANT, arguments)
msg = msg.with_channel("commentary")
msg = msg.with_recipient(f"functions.{name}")
msg = msg.with_content_type("json")
```
I think this could use a comment, as it differs from the OpenAI implementation. However, there has been enough proof at this point that it increases benchmark scores, so it should be kept.
I added a comment to this effect, but also indicating it may need further evaluation.
```python
if not msg.channel:
    msg = msg.with_channel("final")
msgs.append(msg)
# For user/system/developer messages, add them directly even if no content.
```
I think system and developer cases here need more complex logic.
I think from a high level:
- We should never have more than 1 system message
- We should never have more than 1 developer message
(Can this validation be added to a test?)
Since we create a system message and a developer message every time in _make_request_with_harmony, I'm worried about what happens with system messages here.
Since there is no "instructions" field like in Responses, we need to merge the developer message created with the included tools with the developer message here, without duplicating the tools, for example. Complicated, and definitely worth the tests I think.
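To make that invariant testable, a sketch of the kind of assertion that could be added; `chat_request_to_harmony` is again a hypothetical stand-in for the Chat Completions request-parsing path, and the attribute access assumes `openai_harmony` messages expose `author.role`.

```python
from openai_harmony import Role


def assert_single_system_and_developer(harmony_msgs):
    # Sketch of the invariant: at most one system and one developer message
    # should survive conversion, regardless of how many turns came in.
    roles = [m.author.role for m in harmony_msgs]
    assert roles.count(Role.SYSTEM) <= 1, "duplicate system message"
    assert roles.count(Role.DEVELOPER) <= 1, "duplicate developer message"


def test_system_prompt_not_duplicated():
    # `chat_request_to_harmony` is a hypothetical placeholder for the real
    # Chat Completions -> Harmony conversion path.
    msgs = chat_request_to_harmony(
        [
            {"role": "system", "content": "You are terse."},
            {"role": "user", "content": "Hi"},
        ]
    )
    assert_single_system_and_developer(msgs)
```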
I have not addressed this yet. This behavior should be unchanged from how it was before my PR, and I'm tempted to defer addressing this specific system/developer concern as this one is already large.
Some >30% of the time, the model stops generating right after "reasoning_content"/"reasoning" finishes outputting, with stop_reason 200012 (200012 corresponds to the <|call|> token). No final answer in the "content" field is generated. This behavior is incompatible with most clients, e.g. Cline. Out of curiosity, will this PR fix said issue? This is prevalent in vLLM 0.11.1 and 0.11.0 (from my observation, 0.11.1 may have made it occur more frequently).
@bbRLdev Yes, I think that it is plausible that the discussions and fixes here would help. It seems that very few self-hosted LLM runtimes have actually mastered OpenAI's Harmony format correctly, yet.
Some more manual testing and close examination of the exact Harmony inputs and outputs from a few failed BFCL multi_turn test cases helped me squash some additional small bugs in how we converted Harmony outputs to Chat Completion responses, specifically around some invalid assumptions about which messages are reasoning and which are not. I feel this is in a good place and will kick off another full BFCL multi_turn run with gpt-oss-120b based on the latest commit here on two different generations of hardware (L40S and DGX Spark) to ensure everything looks good. I've run various individual tests while working, so I don't expect any regressions there.
So, it is expected for generation to stop after the <|call|> token when the model is making a tool call. If you are using the Chat Completions API (as opposed to the Responses API) to query gpt-oss models, then this could fix your issue. I'm happy to test directly against what you're seeing if you have a semi-reliable way to reproduce it. Or, I encourage you to test against this PR if you're able, to see how it impacts what you're seeing.
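For reference, a minimal sketch of how a Chat Completions client is expected to handle a turn that stops at a tool call, rather than treating the missing content as an error. It assumes a local vLLM server and a hypothetical get_weather tool; the model name and port are placeholders.

```python
# Sketch only: assumes vLLM serving gpt-oss at localhost:8000 and a
# hypothetical get_weather tool; error handling omitted for brevity.
import json

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Sapporo?"}]
resp = client.chat.completions.create(
    model="openai/gpt-oss-120b", messages=messages, tools=tools
)
choice = resp.choices[0]

if choice.finish_reason == "tool_calls":
    # The turn ended at <|call|>: run the tool, echo the assistant message
    # (tool calls plus reasoning) back, and ask for the next turn.
    messages.append(choice.message.model_dump(exclude_none=True))
    for call in choice.message.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append(
            {
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps({"city": args["city"], "temp_c": 3}),
            }
        )
    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b", messages=messages, tools=tools
    )

print(resp.choices[0].message.content)
```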
I reran the same BFCL multi_turn suite on the same machine I reported these results on earlier (4xL40S) and things are holding steady. This latest round of fixes didn't meaningfully improve the results in that particular eval suite, but that suite doesn't use streaming requests, so the new parity between non-streaming and streaming in how they convert to/from Chat Completion and Harmony messages would not be reflected there. It does give me confidence that the latest changes don't regress anything in multi_turn scenarios compared to my previously tested changes.
@bbrowning So I was able to pull this change and test it myself with GPT OSS 120b + Cline. The model no longer skips the final answer and no longer generates tool calls right after reasoning. This PR took GPT OSS 120b from being nearly unusable with Cline to being very much usable. So thank you for your work.
If you copy and paste the prompt Cline produces for your initial task and feed it directly to vLLM v0.11.1 in a simple request body only containing model and messages, you can reproduce it.
Hi @bbrowning, any idea when your PR and #29236 can be merged? I tried your PR (great work btw), but I still reproduce the issue on this test payload (which was shared here: #22519 (comment)):
@bbrowning thanks for the PR! Is there a way to make it work in Codex? Currently, it stops after the first tool execution. My config.toml for Codex includes a vllm profile.
I found one bug in vLLM: the content was in reasoning, and Codex doesn't handle that.
vLLM_serve-1 | (APIServer pid=1) INFO: 192.168.100.17:64030 - "POST /v1/chat/completions HTTP/1.1" 200 OK
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] Error in chat completion stream generator.
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] Traceback (most recent call last):
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] File "/root/vllm_package/vllm/vllm/entrypoints/openai/serving_chat.py", line 725, in chat_completion_stream_generator
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] harmony_parser.process(token_id)
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] File "/usr/local/lib/python3.12/dist-packages/openai_harmony/__init__.py", line 637, in process
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] self._inner.process(token)
vLLM_serve-1 | (APIServer pid=1) ERROR 11-27 21:29:25 [serving_chat.py:1281] openai_harmony.HarmonyError: unexpected tokens remaining in message header: Some("need to get coordinates for Moscow, Shanghai, Sapporo, Los Angeles. Use get_direct_geocoding. For Moscow, maybe \"Moscow, Russia\". It failed due to 'ko'? maybe language? Let's try \"Moscow, Russia\" again.<|end|><|start|>assistant<|channel|>commentary")

Thanks for your previous response. I followed the link you provided, switched to the chat-harmony-tests branch, built it, and tried the process again, but the same error as before is still occurring.
This isn't a bug though, is it? The model generated reasoning text, but hit its max_tokens, so it had to stop before it had a chance to generate the user-visible content. So, all you got as output was reasoning text. I see the same locally, but this would be expected behavior because generation stopped before any non-reasoning content was ever generated. If you'd like to see the content as well, you'll need to raise max_tokens.
This fixes multiple issues with how we were prompting gpt-oss models for Chat Completion requests. A summary of the fixes:
- We were not setting the recipient to "assistant" on tool responses.
- We were not mapping the tool_call_id to the proper function name on tool responses, resulting in the model getting confused about which tool response matched which tool call.
- When the model generates tool calls during a turn, we return any model output to the commentary channel as the `content` of that tool call response. We were not properly restoring this content as a commentary message when those tool calls were passed back in via Chat Completion messages.
- When the model generates reasoning messages during a tool call turn, we return that as a `reasoning` property on the Chat Completion response. We were not properly restoring that reasoning content as a message to the analysis channel when subsequent turns passed that back in.
- When non-tool reasoning responses from the model were passed back in via the `reasoning` extra_body in subsequent turns, we were not properly turning those back into analysis messages. This is the same problem we had with reasoning in tool calls, but a slightly different fix for the non-tool call path.
- We were not properly marking the channel as `final` for non-tool, non-reasoning model output when generating the Harmony messages for subsequent turns.
- We were not dropping analysis messages that precede a model output to the `final` channel in all cases, resulting in us often showing the model analysis messages from previous turns where it had already generated a final output, leading to increased token usage, context pollution, and not following the best practices for handling raw chain of thought outlined by OpenAI.
I added a series of tests that pass in Chat Completion requests and thoroughly validate the exact Harmony messages generated from those requests to test for all of the cases above. Signed-off-by: Ben Browning <bbrownin@redhat.com>
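As an illustration of the first two fixes above (tool responses addressed to the assistant and keyed to the right function name), a rough sketch of the mapping; the loop and the `tool_call_names` lookup are illustrative, not the exact code in `harmony_utils.py`.

```python
# Illustrative sketch of mapping Chat Completion tool messages back to
# Harmony tool messages; not the exact vLLM implementation.
from openai_harmony import Author, Message, Role


def tool_responses_to_harmony(chat_messages: list[dict]) -> list[Message]:
    # Resolve each tool_call_id back to the function name the assistant used,
    # so the tool response is attributed to functions.<name>.
    tool_call_names: dict[str, str] = {}
    harmony_msgs: list[Message] = []

    for msg in chat_messages:
        if msg["role"] == "assistant":
            for call in msg.get("tool_calls") or []:
                tool_call_names[call["id"]] = call["function"]["name"]
        elif msg["role"] == "tool":
            name = tool_call_names.get(msg["tool_call_id"], "unknown")
            harmony_msgs.append(
                Message.from_author_and_content(
                    Author.new(Role.TOOL, f"functions.{name}"),
                    msg["content"],
                )
                .with_channel("commentary")
                # The fix: tool output is addressed back to the assistant.
                .with_recipient("assistant")
            )
    return harmony_msgs
```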
Instead of doing my own workaround for Pydantic's poor handling of Iterable fields, re-use the existing maybe_serialize_tool_calls method that the MistralTokenizer work added to turn these ValidatorIterator instances into actual lists. Signed-off-by: Ben Browning <bbrownin@redhat.com>
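Roughly what that existing helper accomplishes, shown as a generic sketch (not the actual `maybe_serialize_tool_calls` code): Pydantic can validate an `Iterable` field lazily into a one-shot ValidatorIterator, so the tool calls are materialized into plain lists before anything iterates them more than once.

```python
def materialize_tool_calls(request) -> None:
    # Generic sketch: force lazily-validated tool_calls iterables into lists
    # so later code can iterate them repeatedly. Not vLLM's implementation.
    for message in request.messages:
        tool_calls = getattr(message, "tool_calls", None)
        if tool_calls is not None and not isinstance(tool_calls, list):
            # A pydantic ValidatorIterator is consumed on first iteration;
            # converting it to a list preserves the data for later passes.
            message.tool_calls = list(tool_calls)
```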
This isolates the fixes made for turning Chat Completion Requests into Harmony Messages from the code path that Responses uses for previous_input_messages so that Chat Completions support can be iterated and improved without worrying about regressing Responses previous input handling. Signed-off-by: Ben Browning <bbrownin@redhat.com>
This addresses a few minor points of review feedback from my larger set of fixes for Chat Completions and gpt-oss models:
- In `auto_drop_analysis_messages`, drop any analysis messages before the latest final message instead of just ones from the assistant.
- Cleaned up `parse_chat_output` so that it checks the message channel before assigning messages as reasoning or final content, instead of relying solely on the order of messages and incorrect assumptions about the number of messages present in each response.
- Some misc. typos, extracting copy/paste code into methods, etc.
This also adds many tests, as I found and fixed issues raised during review feedback and manual testing. There are now round-trip tests for converting Chat Completion requests to Harmony messages and for converting Harmony output to Chat Completion responses, all with mocked generation so they're part of the unit test suite. These tests run in both streaming and non-streaming mode to ensure parity there. Signed-off-by: Ben Browning <bbrownin@redhat.com>
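A simplified sketch of the first bullet's dropping rule, assuming messages that expose a `channel` attribute as in the excerpts above; the real `auto_drop_analysis_messages` may differ in detail.

```python
def drop_stale_analysis(messages):
    # Sketch: remove analysis-channel messages that precede the latest
    # final-channel message, mirroring OpenAI's guidance to drop raw chain of
    # thought once a turn has produced a final answer.
    last_final = -1
    for i, msg in enumerate(messages):
        if msg.channel == "final":
            last_final = i
    return [
        msg
        for i, msg in enumerate(messages)
        if not (msg.channel == "analysis" and i < last_final)
    ]
```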
Signed-off-by: Ben Browning <bbrownin@redhat.com>
I rebased this to fix conflicts and adapt to the processor -> input_processor change in serving_engine.py. All the tests continue to pass, and this still meaningfully improves gpt-oss model results on multi-turn benchmarks. From my perspective, this is ready to merge and provides much-needed fixes for running these models via Chat Completions.
This pull request has merge conflicts that must be resolved before it can be merged.
Purpose
This fixes multiple issues with how we were prompting gpt-oss models for Chat Completion requests. A summary of the fixes:
- We were not setting the recipient to "assistant" on tool responses.
- We were not mapping the tool_call_id to the proper function name on tool responses, resulting in the model getting confused about which tool response matched which tool call.
- When the model generates tool calls during a turn, we return any model output to the commentary channel as the `content` of that tool call response. We were not properly restoring this content as a commentary message when those tool calls were passed back in via Chat Completion messages.
- When the model generates reasoning messages during a tool call turn, we return that as a `reasoning` property on the Chat Completion response. We were not properly restoring that reasoning content as a message to the analysis channel when subsequent turns passed that back in.
- When non-tool reasoning responses from the model were passed back in via the `reasoning` extra_body in subsequent turns, we were not properly turning those back into analysis messages. This is the same problem we had with reasoning in tool calls, but a slightly different fix for the non-tool call path.
- We were not properly marking the channel as `final` for non-tool, non-reasoning model output when generating the Harmony messages for subsequent turns.
- We were not dropping analysis messages that precede a model output to the `final` channel in all cases, resulting in us often showing the model analysis messages from previous turns where it had already generated a final output, leading to increased token usage, context pollution, and not following the best practices for handling raw chain of thought outlined by OpenAI.

I added a series of tests that pass in Chat Completion requests and thoroughly validate the exact Harmony messages generated from those requests to test for all of the cases above.
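As a rough illustration of the shape of those tests (not the actual code in the test suite), a sketch that feeds a multi-turn Chat Completion request through a hypothetical `chat_request_to_harmony` helper and asserts on the channels of the resulting Harmony messages.

```python
def test_prior_final_answer_keeps_final_channel_and_old_reasoning_is_dropped():
    # `chat_request_to_harmony` is a hypothetical placeholder for the request
    # parsing path exercised by the real tests.
    msgs = chat_request_to_harmony(
        [
            {"role": "user", "content": "What is 2 + 2?"},
            {"role": "assistant", "reasoning": "Simple arithmetic.", "content": "4"},
            {"role": "user", "content": "And 3 + 3?"},
        ]
    )
    channels = [m.channel for m in msgs if m.channel]
    # Prior-turn reasoning should be dropped once a final answer exists, and
    # the prior final answer should sit on the final channel.
    assert "analysis" not in channels
    assert "final" in channels
```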
Test Plan
Unit Tests
Berkeley Function Calling Leaderboard v4 Results
Run vLLM built with this fix:
Run bfcl generate against that vLLM:
Test Result
Unit Tests
Berkeley Function Calling Leaderboard v4 Results
Baseline, from vLLM main as of Nov 12, 2025
commit 3044195
45.0% overall accuracy
With this fix on top of same commit from main