Add support for gemma 3n #1248
base: master
Conversation
Introduces LLamaFlashAttentionType enum and integrates flash attention configuration into LLamaContextParams. Adds support for diffusion-based models in SafeLlamaModelHandle. Updates NativeApi and SafeLLamaContextHandle with new adapter metadata and sequence state methods. Syncs llama.cpp submodule.
Thanks for putting this together. I've started a build here to produce new binaries; once those are ready we can update the csproj and run the tests.
Hi @martindevans, I guess I need to hardcode the binary file URL for now. Once the changes are tested and ready to merge we can restore the original URL, right?
I've changed it assuming I had to hardcode it. If anything else is required, do let me know.
Sorry for the delay, I'll create a new release on https://github.com/SciSharp/LLamaSharpBinaries shortly, then you can just put the ID of that release into the csproj file.
Ok, I've created https://github.com/SciSharp/LLamaSharpBinaries/releases/tag/86587da; you can put that release ID into the csproj here and it should auto-download on build (probably best to do a clean and rebuild to be sure).
Will do it in some time.
Hi @martindevans, a few tests are failing, but I checked out the last commit before my contribution and the tests fail there too... am I missing something?
martindevans left a comment
For reference, here's the llama.h diff I'm working from for this review: ggml-org/llama.cpp@11dd5a4...86587da#diff-dd7c3bc82b7d2728f1fe661e7820aebbfd53f558a5fdd85bbd6dd618213a118d
There are a few bits missing from this PR that are in the diff. The most important is the flash_attn parameter, which has been removed on the C++ side but not the C# side; that will cause all the other parameters to be misaligned. Hopefully fixing this might resolve some of the test failures 🤞
There are a few functions that are missing:
- llama_adapter_get_alora_n_invocation_tokens
- llama_adapter_get_alora_invocation_tokens
- llama_flash_attn_type_name
- llama_state_seq_flags (3 related functions)
If you're not interested in implementing those right now please just dump their signatures into a comment in NativeApi.cs, that way we don't get out of sync.
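As a side note on the misalignment point above: here's a minimal sketch, assuming the C# struct is a plain sequential-layout mirror of the native llama_context_params. The type and field names below are illustrative and the enum's numeric values are an assumption; the authoritative layout is whatever the llama.h pinned by the submodule declares.

using System.Runtime.InteropServices;

// Enum members as discussed in this thread; numeric values are an assumption, check llama.h.
public enum LLamaFlashAttentionType
{
    LLAMA_FLASH_ATTENTION_TYPE_AUTO = -1,
    LLAMA_FLASH_ATTENTION_TYPE_DISABLED = 0,
    LLAMA_FLASH_ATTENTION_TYPE_ENABLED = 1,
}

// Illustrative mirror of llama_context_params: with a sequential layout, every field
// must match llama.h in order and size, or everything after the mismatch is read
// from the wrong offset.
[StructLayout(LayoutKind.Sequential)]
public struct LLamaContextParamsSketch
{
    // ... fields preceding the attention settings, copied verbatim from llama.h ...

    // Old (now wrong): the C++ side removed this bool, so keeping it here shifts
    // every later field and corrupts those parameters.
    // [MarshalAs(UnmanagedType.U1)] public bool flash_attn;

    // New: the enum field llama.h declares instead, at the position llama.h puts it.
    public LLamaFlashAttentionType flash_attn_type;

    // ... remaining fields, still copied verbatim from llama.h ...
}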
I tried fixing these but the tests still failed. I will try to dig deeper this weekend.
I'm on holiday this week, so I won't be around much to help out. I'll try to keep tabs on this thread though. A lot of the test failures seem to be similar, something to do with invalid batches in the Reranker.
That seems to be coming from here in llama.cpp; that code looks like it checks that none of the tokens in the batch are being assigned to an invalid sequence. The LLamaReranker is relatively new, so it's very possible there's a bug of some kind in it.
Sure, will go through this.
I just opened a PR on your fork into this branch that fixes 2 of the failing tests. The other seems to be a genuine failure, something wrong with the way batching works. I tried out the BatchedExecutor forking demo as another check and that fails.
Will check that, sorry for the delay, I've been busy with other tasks lately.
No worries, I was meaning to check this out myself ages ago too! I'm hoping to get some time to properly look through this and other PRs this weekend 🤞
…which is why these were failing now. (#1)
result.type_k = @params.TypeK ?? GGMLType.GGML_TYPE_F16;
result.type_v = @params.TypeV ?? GGMLType.GGML_TYPE_F16;
result.offload_kqv = !@params.NoKqvOffload;
result.flash_attention = @params.FlashAttention;
Instead of completely removing the option to use flash attention, can you pass it through to llama_flash_attn_type?
I would suggest keeping the previous FlashAttention bool as it was -- but turning it nullable, so null == Auto.
result.llama_flash_attn_type = @params.FlashAttention switch
{
    true => LLamaFlashAttentionType.LLAMA_FLASH_ATTENTION_TYPE_ENABLED,
    false => LLamaFlashAttentionType.LLAMA_FLASH_ATTENTION_TYPE_DISABLED,
    null => LLamaFlashAttentionType.LLAMA_FLASH_ATTENTION_TYPE_AUTO
};
result.kv_unified = true; // if we wanna hardcode it here instead of in `Default()`.
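For illustration, a hedged usage sketch of the suggestion above, assuming the nullable bool? FlashAttention property ships on ModelParams as proposed (the property shape is an assumption, not this PR's final API):

using LLama.Common;

// Hypothetical usage of the proposed nullable flag; null lets llama.cpp decide.
var parameters = new ModelParams("path/to/model.gguf")
{
    FlashAttention = null,   // null  -> LLAMA_FLASH_ATTENTION_TYPE_AUTO
    // FlashAttention = true,  // true  -> LLAMA_FLASH_ATTENTION_TYPE_ENABLED
    // FlashAttention = false, // false -> LLAMA_FLASH_ATTENTION_TYPE_DISABLED
};

Mapping null to the AUTO value would keep existing callers working while still letting llama.cpp pick a sensible default per backend.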
Well, I tried doing this but it didn't seem to make a difference.
I took a look at the batched sampling test failure — it appears to be caused by a regression in the …. As a solution I passed the …; let me know if I got this right.
Hi @martindevans, @Lyrcaxis, have you got some time to take a look at this?
Set FlashAttention to true in BuilderExtensions for improved performance. Refactored NativeApi P/Invoke method signatures to single-line format for better readability and consistency.
FlashAttention is now enabled by default in model parameter initialization for embedding and text generation. The unused SeqMax parameter has been removed from unit tests to simplify configuration. Minor formatting improvements were made in IContextParamsExtensions and NativeApi for consistency.
@krisbiradar @martindevans alright, I figured some way out. So, if … On my local integration tests, I had solved everything on the constructor level -- with basically just: … This effectively maintains previous LlamaSharp behaviour, but not sure if you'll like that, Martin. Let me know your thoughts. Also, since the binaries for gemma 3n were built, new noteworthy models came out (e.g. LFM2-MOE). Might be worth an update.
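A hedged guess at the shape of a constructor-level default along those lines, written against the n_seq_max field of the native context params; the helper name, the field spelling, and the fallback value are assumptions, not code from this PR or the fork:

using LLama.Native;

// Hypothetical sketch only: default the native sequence limit when the caller doesn't
// specify one, so llama.cpp's batch validation (every seq_id must be below n_seq_max)
// still accepts plain single-sequence usage.
internal static class ContextParamsDefaultsSketch
{
    public static LLamaContextParams WithSequenceLimit(LLamaContextParams native, uint? requestedSeqMax)
    {
        native.n_seq_max = requestedSeqMax is > 0 ? requestedSeqMax.Value : 1u;
        return native;
    }
}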
Thanks, will push this right away!
Any luck, @Lyrcaxis, @martindevans?
Are we good to go, or are there more changes? Do let me know, @martindevans.
Sorry I haven't kept up with this PR! If it's functional now, I'm happy to merge it as-is. We'll probably need to address this new n_seq_max thing in a more complete way in future PRs, but let's get this merged first, it's long overdue! Thanks so much for your work on this and keeping it active ❤️
Sorry, I forgot to run all the tests when fixing the max_seq issue; I had tested only the failing ones. Now that I've fixed the max_seq issue it is causing the context size tests to fail, and I guess the fix worked because we just happened to hardcode the correct value of the context size. Hope this time I get the PR merged!!!
Am I required to change anything here again? The tests do pass in a Windows environment. @Lyrcaxis @martindevans?
Not sure what's going on there, it looks like it failed to find some files before it even started the tests. I've restarted the failed tests to see if it was a one-off.

No description provided.