common : implement parser combinators for chat parsing [WIP] #17136
Conversation
Yes! This is exactly what I was thinking about :) Can you give me push rights to your repo so I can contribute without doing PRs to PRs?
Sure. I've never managed permissions on a GitHub repo, but let me know if you can't push. The interface isn't solidified, so hammer away. I do want to clean up the header and move stuff into the source file. Figured I'd handle that as I get further along.

The partial parsing works, but it does require careful attention if editing. The idea is to "succeed" if the parse tree is partially traversed and the input is marked as incomplete. With some caveats: if a literal is partially matched, it will propagate a result indicating we need more input. I intend to add a

I need to clean up the caching. Initially I thought maybe we could reuse the cache as we get more and more input, but I'm finding it very difficult to find the correct time to cache. So I'm thinking about nixing that idea and just providing a cache per parsing run, as the packrat algorithm originally intended. Then we can profile whether caching is beneficial on a real example. I suspect there shouldn't be a whole lot of backtracking, so the memory cost might not be worth it if the gains are minuscule.
Aight, let me bounce my original idea: what if we just created a GBNF parser builder and used that to parse the messages? Then we have both problems (tool call / reasoning and compatibility with normal parsing) done in one go. Unless (haven't looked into it) it would just be too inefficient for normal content parsing? Because right now it feels like we're adding another intermediate abstraction while GBNF is already implemented in GGML, so maybe just use a builder as an abstraction layer to create all the needed objects and add any missing partial parse support? This is just an idea, not very fixated on it, just thought I'd share it.

Regarding memory costs and the packrat parser, I think O(n) with typical LLM inputs is negligible; even with super long contexts we're looking at a few MB of overhead at most.
Sounds like you're thinking of a parser generator, something like yacc, bison, or ANTLR. The problem I see with those solutions is that they require building a parse table upfront, which is less intuitive than building a parse tree as in this PR. You could create a recursive descent parser, but that would have to be done at compile time. If you did it at runtime, I think the solution would look a lot like this!

I haven't examined the GBNF code with a scalpel, but taking a brief look, it seems to use a pushdown automaton, and extracting content from it may be challenging. Not that we would want to, since it is part of the core and not common. I believe there is a desire to keep the chat parsing isolated in common. I also think you lose the expressiveness of being able to define the grammar in C++. For example, with this solution we could add a

The solutions I mentioned above do this by defining their own language to insert code, which is not pretty in my experience. That said, I am open to ideas. If you have a clearer picture of what that looks like, I'm happy to review. I understand inserting a new abstraction is a tough ask. I wanted to roll out a PoC to hopefully show value.
@aldehir Nah, you're probably right. I looked at the GBNF code and in fact it would take too much effort to extract the parsed content from there. We're better off just doing it your way. I'll try to code some of the missing pieces. |
|
@pwilkin great! If you have any questions, feel free to ask. |
Putting this out there as a proof-of-concept and to gather feedback. It is still a WIP.
cc @pwilkin
Problem
Each model currently requires a custom parser to handle reasoning and tool calls. XML-based models are particularly challenging to parse. For example, Qwen3-Coder outputs:
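Qwen3-Coder's tool-call format is XML-like, along these lines (the function and parameter names here are invented for illustration):

```xml
<tool_call>
<function=get_weather>
<parameter=location>
San Francisco
</parameter>
<parameter=unit>
celsius
</parameter>
</function>
</tool_call>
```

Note that parameter values are raw text with no quoting or type markers, which is why the parser needs the provided schema to decide whether a value like `celsius` is a string, number, or enum.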
Supporting this format requires the parser to know the type of each argument based on the provided schema.
Proposal
I propose using parser combinators to simplify parsing. We can compose parsers suitable for PEG grammars, which should handle model output effectively. This PR implements a proof-of-concept.
Here's an example from `test/test-chat-parser-combinator.cpp`:

The parser supports partial parsing for streaming output:
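As a rough illustration of the approach, here is a minimal sketch of streaming-aware combinators. This is not the PR's actual interface; the types, names, and operator semantics are assumptions made for the example. A parser maps (input, position) to a status plus the position it consumed up to, and a `need_more` status lets a streaming caller retry once more model output has arrived:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <string>

enum class status { fail, ok, need_more };

struct result {
    status st;
    size_t pos;   // position after the matched text when st == ok
};

struct parser {
    std::function<result(const std::string &, size_t)> fn;
    result operator()(const std::string & in, size_t pos) const { return fn(in, pos); }
};

// Match a fixed string. A prefix match that runs off the end of the input
// is not a failure; it is a request for more input.
parser literal(std::string lit) {
    return {[lit](const std::string & in, size_t pos) -> result {
        size_t n = std::min(lit.size(), in.size() - pos);
        if (in.compare(pos, n, lit, 0, n) != 0) return {status::fail, pos};
        if (n < lit.size())                     return {status::need_more, pos};
        return {status::ok, pos + lit.size()};
    }};
}

// a + b : sequence -- run b where a left off
parser operator+(parser a, parser b) {
    return {[a, b](const std::string & in, size_t pos) -> result {
        result ra = a(in, pos);
        if (ra.st != status::ok) return ra;
        return b(in, ra.pos);
    }};
}

// a | b : ordered choice (PEG style: first success wins); a partial match
// of `a` propagates need_more rather than falling through to `b`
parser operator|(parser a, parser b) {
    return {[a, b](const std::string & in, size_t pos) -> result {
        result ra = a(in, pos);
        return ra.st == status::fail ? b(in, pos) : ra;
    }};
}
```

The need_more propagation mirrors the caveat discussed above: a partially matched literal reports that it needs more input instead of failing outright.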
The generated parse tree can be used to produce a GBNF grammar. The plan is to build the parser during chat param initialization and derive grammar rules with support for lazy triggers. This should support both
`tool_choice = auto` and `tool_choice = required`.

Specifics
This PR implements parser combinators for PEG grammars. It uses caching to implement packrat parsing. The following are implemented:
The operators `+`, `|`, and `~` construct `sequence`, `choice`, and `negate` parsers respectively. The `<<` operator includes a space rule between parsers.

Drawbacks
- Parsers that match content while excluding certain patterns, such as end tags, have a less obvious syntax. For example, `p.zero_or_more(~(space + p.literal("</think>")) + p.any())` matches any character that isn't followed by `</think>`. The `p.until("</think>")` parser is intended to simplify this.
- Packrat parsing requires caching all intermediate parse results, which introduces memory overhead proportional to input size and grammar complexity.
- Each model still requires a custom parser, though they share a common framework that simplifies implementation.
- Parser combinators may offer less flexibility for handling malformed model output compared to hand-written parsers, though constrained decoding should prevent malformed tool calls.
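To make the first drawback concrete, here is a simplified, non-streaming sketch of the negative-lookahead pattern; the names mirror the PR's (`any`, `negate`, `zero_or_more`, `until`) but the implementations are assumptions for illustration:

```cpp
#include <cassert>
#include <functional>
#include <string>

struct result { bool ok; size_t pos; };
using parser = std::function<result(const std::string &, size_t)>;

parser literal(std::string lit) {
    return [lit](const std::string & in, size_t pos) -> result {
        bool ok = in.compare(pos, lit.size(), lit) == 0;
        return {ok, ok ? pos + lit.size() : pos};
    };
}

// any(): match exactly one character
parser any() {
    return [](const std::string & in, size_t pos) -> result {
        return {pos < in.size(), pos < in.size() ? pos + 1 : pos};
    };
}

// ~p : negative lookahead -- succeed, consuming nothing, iff p fails here
parser negate(parser p) {
    return [p](const std::string & in, size_t pos) -> result {
        return {!p(in, pos).ok, pos};
    };
}

parser seq(parser a, parser b) {
    return [a, b](const std::string & in, size_t pos) -> result {
        result ra = a(in, pos);
        return ra.ok ? b(in, ra.pos) : ra;
    };
}

// zero_or_more never fails; it stops when p stops matching or stops consuming.
parser zero_or_more(parser p) {
    return [p](const std::string & in, size_t pos) -> result {
        for (result r = p(in, pos); r.ok && r.pos > pos; r = p(in, pos)) {
            pos = r.pos;
        }
        return {true, pos};
    };
}

// until(tag) then just names the verbose composition:
// consume any character, as long as `tag` does not match at that position.
parser until(std::string tag) {
    return zero_or_more(seq(negate(literal(tag)), any()));
}
```

The composed form and the `until` helper recognize the same language; the helper simply hides the lookahead plumbing.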
To do
- `content()` and `reasoning()` parsers to populate content/reasoning fields.
- `tool()`, `tool_name()`, `tool_args()`, as well as `tool_arg_name()` and `tool_arg_value()` for models such as Qwen3-Coder.
- `json-schema-to-grammar` support. The JSON parser will parse any JSON, but the generated GBNF grammar should still be constructed from the user-provided schema.