Commit 7b30a36
Bump base image and dependencies for KDA support
Update to nvcr.io/nvidia/pytorch:25.11-py3, which includes:
- PyTorch 2.10
- CUDA 13.0
- flash-attn 2.7.4.post1 (pre-installed, no compilation needed)
Dependency updates:
- causal-conv1d: v1.5.4 (was pinned to commit 2a288a1)
- mamba-ssm: 2.2.6.post3 (was pinned to commit 4a8a2a2)
- flash-linear-attention: pin to commit 67eee20 (was @main)
- flash-attn: 2.7.4.post1 to match base image (was 2.7.3)
- triton: 3.5.1 in Dockerfile (was 3.1.0)
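The pin list above could look roughly like this in the Dockerfile. This is a sketch only: the actual layer ordering, pip flags, and the flash-linear-attention repository URL are assumptions; the image tag and version numbers come from the commit message.

```dockerfile
# Sketch, not the actual Dockerfile from this commit.
FROM nvcr.io/nvidia/pytorch:25.11-py3

# Base image already ships flash-attn 2.7.4.post1; pin triton explicitly.
RUN pip install --no-cache-dir triton==3.5.1

# Tagged releases instead of the previous commit pins.
RUN pip install --no-cache-dir \
    causal-conv1d==1.5.4 \
    mamba-ssm==2.2.6.post3

# flash-linear-attention pinned to a known-good commit for KDA support
# (repo URL assumed).
RUN pip install --no-cache-dir \
    "flash-linear-attention @ git+https://github.com/fla-org/flash-linear-attention@67eee20"
```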
These updates enable Kimi Delta Attention (KDA) support via the
flash-linear-attention library. The pinned versions are tested and
working, unlike the nightly/unpinned approach in #395.
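A quick way to confirm the pinned stack actually landed in an image is an import smoke check. The import names below are assumptions inferred from the package names in the commit (e.g. flash-linear-attention is assumed to install as `fla`); adjust if they differ.

```python
import importlib.util

def check_kda_deps(packages=("fla", "mamba_ssm", "causal_conv1d",
                             "flash_attn", "triton")):
    """Report which of the expected import names resolve in this environment.

    Returns a dict mapping package name -> bool (importable or not),
    without actually importing anything.
    """
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, found in check_kda_deps().items():
        print(f"{pkg}: {'ok' if found else 'MISSING'}")
```

Running this inside the built container should print `ok` for every entry; a `MISSING` line points at a pin that failed to install.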
Note: the dropless MoE kernel remains broken with triton >= 3.2.0 and
needs a complete rewrite (it is also limited to 32 experts). This is
tracked separately and doesn't block KDA work.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>

1 parent cc009a4 · commit 7b30a36
2 files changed, +11 −10 lines