Apriel 2 #397
Open: tscholak wants to merge 17 commits into main from tscholak/apriel2-conversion
+12,351 −605
Conversation
- Rename Apriel2CheckpointFormat to Apriel2TextCheckpointFormat for text-only models
- Add new Apriel2CheckpointFormat for multimodal models (tabled for now)
- Replace num_hidden_layers with num_blocks in decoder config (Fast-LLM convention)
- Update test fixtures to use num_blocks in decoder configs
- Fix stochastic mixer preprocess() to collect attention_mask from nested mixers
- Add cache initialization to Apriel2GatedDeltaNet for lazy allocation
- Use past_key_values (plural) consistently per HuggingFace convention
- Update test code to use model.model.decoder.blocks[idx] accessor

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
…aches

- Test 1: Empty cache vs filled cache - verifies cache is being used at all
- Test 2: Corrupted cache (zeros) vs correct cache - verifies cache VALUES matter
- Derive cache dimensions from actual forward pass (handles different attention configs)
- Fix: original test used wrong attribute names (key_cache/value_cache instead of key/value)
- Update modeling_apriel2.py to use direct dict access instead of helper methods (config.embeddings["max_position_embeddings"] instead of config.get_max_position_embeddings())
- Fix activation export in vision adapter converter to use .hf_name instead of .value for proper round-trip conversion
- Fix MultiModalInferenceRunner naming in multimodal/config.py
- Raise NotImplementedError for multimodal HF wrapper (not implemented)
- Multimodal converter: stratified inheritance from Pixtral/LLaVA
  - Inherit get_converters for Attention, Block, Encoder, Adapter (shares weight conversion logic)
  - Standalone PatchConvolutionConverter (different paths, no meaningful sharing)
  - Override all import_config/export_config (different naming and nested structure)
- Remove verbose docstrings and self-narrative comments from all Apriel2 files
Introduces convert_from_llava.py which converts Llava/Pixtral models (like Apriel 1.5) to Apriel2 format. The converter handles:
- Config conversion from Llava to Apriel2 format
- Weight mapping between different naming conventions
- Vision encoder, projector, and language model weights
- Support for both local paths and HuggingFace model IDs

Test coverage includes:
- Config conversion validation
- Component-level forward pass equivalence (embeddings, vision encoder, projector, language model layers)
- Full model forward pass equivalence for text-only inputs
- Multimodal forward pass validation (image + text inputs)
- Apriel 1.5 large model conversion test (marked as slow)

Note: Multimodal numerical equivalence is not possible due to architectural differences between Pixtral and Apriel2 vision encoders (Pixtral produces (size/16)^2 - 1 patches vs Apriel2's (size/16)^2).
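For concreteness, the patch-count mismatch can be checked numerically; the formulas come from the note above, while the image size below is just an illustrative choice:

```python
# Patch counts per the note above: for a square image of `size` pixels and
# 16-pixel patches, Pixtral yields (size/16)^2 - 1 patches, Apriel2 (size/16)^2.
def pixtral_patches(size: int) -> int:
    return (size // 16) ** 2 - 1

def apriel2_patches(size: int) -> int:
    return (size // 16) ** 2

size = 1024  # illustrative image size, not taken from the PR
counts = (pixtral_patches(size), apriel2_patches(size))
```

The off-by-one sequence length is why the two encoders can never be numerically equivalent on image inputs.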
Refactors the Llava-to-Apriel2 converter to cleanly separate concerns:

1. **convert_from_llava.py** - Pure format conversion (Llava -> Apriel2)
   - Config conversion: 1-to-1 mapping of Llava config to Apriel2 format
   - Weight conversion: pure name mapping, no transformations
   - No surgery logic - just format translation

2. **surgery.py** - Generic Apriel2 -> Apriel2 transformation
   - Layer-by-layer conversion using converter registry
   - For stochastic mixers, source is always the main mixer
   - Supports wrapping attention with stochastic mixer
   - Random initialization for incompatible conversions (e.g., attention -> mamba)

3. **converters.py** - Converter registry and implementations
   - Identity: forall a. a -> a
   - Bidirectional: attention <-> sliding_window
   - Random init utilities for mamba, attention, gated_delta_net

Benefits:
- Surgery can be applied to ANY Apriel2 model, not just converted ones
- Easy to add new source formats (Qwen, Llama, etc.)
- No intermediate persistence - all operations on in-memory state dicts
- Cleaner code: 725 lines removed in refactor
- Add expr_plan.py: declarative weight transformation with composable expressions (Ref, Slice, Concat, Init, Reshape) and streaming executor
- Implement MIL (Mamba Initialization from LLM) for attention -> mamba surgery
- Remove legacy converters.py and surgery.py (imperative approach)
- Simplify convert_from_llava.py to use plan-based streaming only
- Update tests to use new expr_plan API

The plan system enables:
- Composable conversions via plan composition (Llava -> Apriel2 -> Modified)
- Memory-efficient streaming execution with ref-counting
- Declarative, inspectable transformation plans
- W path builder for readable key construction
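The actual expr_plan implementation isn't reproduced in this excerpt; a minimal sketch of the idea, using plain Python lists in place of tensors and only the Ref/Slice/Concat node types (the evaluator and plan shape here are assumptions, not the real API):

```python
from dataclasses import dataclass

# Toy expression DSL: each node describes how an output tensor is derived
# from source tensors (plain Python lists stand in for tensors).

@dataclass(frozen=True)
class Ref:
    key: str  # name of a source tensor

@dataclass(frozen=True)
class Slice:
    expr: object
    start: int
    stop: int

@dataclass(frozen=True)
class Concat:
    exprs: tuple

def evaluate(expr, sources):
    """Recursively evaluate an expression against the source state dict."""
    if isinstance(expr, Ref):
        return sources[expr.key]
    if isinstance(expr, Slice):
        return evaluate(expr.expr, sources)[expr.start:expr.stop]
    if isinstance(expr, Concat):
        return [x for e in expr.exprs for x in evaluate(e, sources)]
    raise TypeError(f"unknown expression: {expr!r}")

# A plan maps each output key to the expression producing it; plans compose
# because a plan's Refs can be substituted with another plan's expressions.
plan = {
    "kv_proj.weight": Concat((Ref("k_proj.weight"), Ref("v_proj.weight"))),
    "q_head0.weight": Slice(Ref("q_proj.weight"), 0, 2),
}
sources = {
    "q_proj.weight": [1, 2, 3, 4],
    "k_proj.weight": [5, 6],
    "v_proj.weight": [7, 8],
}
outputs = {key: evaluate(expr, sources) for key, expr in plan.items()}
```

Because each output expression names exactly which source tensors it reads, a streaming executor can ref-count the Refs and drop each source tensor once its last consumer has run.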
Key changes:
- Add GatedDeltaNet (DIL) conversion from attention weights
- Support stochastic mixer with multiple sub-mixers (attention + mamba/GDN)
- Add dt_init_floor parameter for Mamba dt_bias initialization
- Fix plan tree collapsing to merge layers but not projections
- Add example YAML configs for hybrid architectures

The tree collapsing fix ensures that layers [0..47] are merged at the blocks level while projections (q_proj, k_proj, etc.) remain separate. This is achieved by tracking which positions vary within each group and only allowing merges when the cross-group variation matches.
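A rough sketch of what collapsing layer indices over weight keys might look like; the grouping heuristic below is an illustration, not the converter's actual algorithm:

```python
import re
from collections import defaultdict

def collapse_layers(keys):
    """Merge keys that differ only in one numeric layer index into a single
    '[lo..hi]' entry, keeping other path segments (projections) separate."""
    groups = defaultdict(list)
    for key in keys:
        m = re.search(r"\.(\d+)\.", key)
        if m is None:
            groups[(key, "")].append(None)
        else:
            # split around the first numeric segment: prefix.N.suffix
            groups[(key[:m.start()], key[m.end() - 1:])].append(int(m.group(1)))
    collapsed = []
    for (prefix, suffix), idxs in groups.items():
        if idxs == [None]:
            collapsed.append(prefix)
        elif len(idxs) == 1:
            collapsed.append(f"{prefix}.{idxs[0]}{suffix}")
        else:
            collapsed.append(f"{prefix}.[{min(idxs)}..{max(idxs)}]{suffix}")
    return collapsed

keys = [f"decoder.blocks.{i}.mixer.{p}.weight"
        for i in range(48) for p in ("q_proj", "k_proj")] + ["embeddings.weight"]
tree = sorted(collapse_layers(keys))
```

Grouping on (prefix, suffix) pairs is what keeps q_proj and k_proj apart while the 48 layer indices merge into one range.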
- Add SafetensorLoader context manager for O(1) key lookup across sharded files
- Add ShardedSafetensorWriter for streaming output with configurable shard size
- Update convert_from_llava.py to use streaming pipeline
- Bounds peak memory to ~5GB instead of ~30GB for large models
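The memory bound comes from flushing a shard whenever the write buffer crosses a size threshold; a toy sketch of that discipline, with an in-memory list standing in for safetensors files (class and behavior are assumptions, not the PR's actual ShardedSafetensorWriter):

```python
class ToyShardedWriter:
    """Buffer tensors and flush a shard when the accumulated byte count
    crosses max_shard_bytes, so peak memory tracks the shard size rather
    than the whole model."""

    def __init__(self, max_shard_bytes):
        self.max_shard_bytes = max_shard_bytes
        self.buffer, self.buffered, self.shards = {}, 0, []

    def add(self, name, data: bytes):
        self.buffer[name] = data
        self.buffered += len(data)
        if self.buffered >= self.max_shard_bytes:
            self.flush()

    def flush(self):
        if self.buffer:
            self.shards.append(self.buffer)  # a real writer would write a file here
            self.buffer, self.buffered = {}, 0

writer = ToyShardedWriter(max_shard_bytes=8)
for i in range(4):
    writer.add(f"tensor.{i}", b"\x00" * 4)  # 4 bytes per "tensor"
writer.flush()  # flush any remainder at the end
```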
…erter

- Split monolithic expr_plan.py into conversion/ subpackage:
  - expr.py: expression DSL types (Ref, Slice, Concat, Init, Reshape)
  - render.py: plan rendering and tree visualization
  - executor.py: plan execution and streaming executor
  - io.py: SafetensorLoader and ShardedSafetensorWriter
  - converters.py: MIL/DIL converters and surgery planning
- Move Llava-specific code into conversion/llava/:
  - config.py: Llava config to Apriel2 config conversion
  - plan.py: Llava to Apriel2 weight plan builder
- Create source-format agnostic convert.py:
  - Registry pattern for source formats (SOURCE_FORMATS dict)
  - Auto-detection via detect_source_format()
  - Generic build_plan() and convert() functions
- Update tests to use new imports and add seed=0 to execute() calls
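The registry pattern described above might be sketched like this; the hook names mirror the commit message, while the dict layout and detection predicates are illustrative assumptions:

```python
SOURCE_FORMATS = {}  # format name -> (detect predicate, plan builder)

def register(name, detect, build):
    SOURCE_FORMATS[name] = (detect, build)

def detect_source_format(config: dict) -> str:
    for name, (detect, _) in SOURCE_FORMATS.items():
        if detect(config):
            return name
    raise ValueError("unknown source format")

def build_plan(config: dict):
    _, builder = SOURCE_FORMATS[detect_source_format(config)]
    return builder(config)

# Illustrative registrations; the apriel2 predicate mirrors the later
# "model_type or decoder field" detection rule from this PR.
register("llava", lambda c: c.get("model_type") == "llava",
         lambda c: "llava -> apriel2 plan")
register("apriel2", lambda c: c.get("model_type") == "apriel2" or "decoder" in c,
         lambda c: "identity plan (surgery only)")
```

New source formats then only need a register() call, not changes to the generic convert path.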
The GDN uses DIL initialization which maps attention Q/K/V/O weights to GDN projections. Only conv_kernel_size needs to be specified - other dimensions (num_value_heads, num_key_heads, head dims) are automatically derived from the source attention config.
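A sketch of how such derivation could work; the field names are illustrative guesses, not the converter's actual config keys:

```python
def derive_gdn_dims(attn: dict, conv_kernel_size: int = 4) -> dict:
    """Map attention head geometry onto GDN head geometry; only
    conv_kernel_size is user-supplied, everything else is derived."""
    head_dim = attn["hidden_size"] // attn["num_attention_heads"]
    return {
        "num_value_heads": attn["num_attention_heads"],
        "num_key_heads": attn["num_key_value_heads"],
        "head_dim": head_dim,
        "conv_kernel_size": conv_kernel_size,
    }

# Hypothetical source attention config for illustration.
gdn = derive_gdn_dims(
    {"hidden_size": 4096, "num_attention_heads": 32, "num_key_value_heads": 8}
)
```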
CLI changes:
- Support multiple --surgery/-s args for chaining surgeries
- Add apriel2 as source format (surgery-only mode, no conversion)
- Auto-detect Apriel2 configs by model_type or decoder field

New modules:
- config.py: compose_configs for declarative config composition
- test_compose_configs.py: monoid laws and config composition tests
- test_plan_composition_torture.py: cycling surgeries for stochastic mixers

Bug fixes:
- Increase cache correctness tolerance in test_modeling (GPU precision)
- Comment out GDN conv1d.bias (Qwen3NextGatedDeltaNet has bias=False)

Documentation cleanup:
- Remove verbose Args/Returns sections (prefer type signatures)
- Condense inline comments to essential "what and why"
- Remove historical context, focus on current design
- Shorten function docstrings to one-liners where obvious
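compose_configs itself isn't shown in this excerpt; a plausible sketch is a recursive right-biased merge, which satisfies the monoid laws mentioned above for configs like these (the implementation below is an assumption, not the PR's code):

```python
def compose_configs(base: dict, override: dict) -> dict:
    """Right-biased recursive merge: override wins on scalars, nested dicts
    merge. With {} as the identity element this behaves as a monoid on configs."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = compose_configs(merged[key], value)
        else:
            merged[key] = value
    return merged

a = {"decoder": {"num_blocks": 48, "mixer": {"type": "attention"}}}
b = {"decoder": {"mixer": {"type": "gated_delta_net"}}}
c = {"decoder": {"num_blocks": 24}}

# Monoid laws for these configs: identity and associativity.
assert compose_configs(a, {}) == compose_configs({}, a) == a
assert compose_configs(compose_configs(a, b), c) == compose_configs(a, compose_configs(b, c))
```

This is what lets chained --surgery args be folded left-to-right into a single effective config.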
Aligns Apriel2 external HF model naming with upstream Fast-LLM's VisionEncoderConfig, which renamed patch_convolution → embeddings.

Changes:
- Rename Apriel2PatchConvolution class to Apriel2Embeddings
- Rename .conv/.norm to .patch_embeddings/.normalization
- Update all weight paths and config keys
- Add image_sizes support to Apriel2 for dynamic image cropping
- Enable HuggingFace wrapper for multimodal models

No backwards-compatibility shims - clean break since no Apriel2 checkpoints exist yet.
…uite

- Fix tensor contiguity issue in Apriel2Embeddings.forward that caused ~4.7e-7 numerical differences vs Pixtral. The transpose operation creates a non-contiguous tensor, and RMSNorm produces slightly different results on non-contiguous tensors due to FP computation-order differences.
- Add test_equivalence.py with a source-of-truth isolation testing philosophy: each component is tested by using Pixtral's output as input to both models, ensuring strict 1e-6 tolerance and pinpointing exactly which component has a bug if tests fail.
- Remove redundant forward-pass tests from test_convert_from_llava.py that are now covered by the comprehensive equivalence test suite.
- Add model_pair fixture and various input configurations for thorough testing across different batch sizes and image configurations.
The external Apriel2 HuggingFace model removed the `.self_attn` wrapper
indirection from attention layers. This updates the converters to match:
- Vision encoder: `mixer.self_attn` -> `mixer`
- Text decoder attention blocks: `mixer.self_attn` -> `mixer`
- Stochastic mixer attention: `mixers.{name}.self_attn` -> `mixers.{name}`
Without this fix, weight conversion produced warnings about unused weights
at `mixer.self_attn.*` paths and uninitialized weights at `mixer.*` paths.
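The path rewrite described above amounts to dropping one segment from each weight key; a minimal sketch (the helper name is hypothetical):

```python
def strip_self_attn(key: str) -> str:
    # Drop the removed `.self_attn` wrapper segment from a weight path.
    # One substitution covers all three cases listed above, since each is
    # "<something>.self_attn.<rest>".
    return key.replace(".self_attn.", ".")

fixed = strip_self_attn("model.decoder.blocks.0.mixer.self_attn.q_proj.weight")
```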
This PR enables declarative architecture surgery on Apriel2 models. You can now convert a standard attention-based model into exotic architectures (stochastic supernets, hybrid Mamba/GDN models) with a single command and a YAML config.
To convert Apriel-1.5-15b-Thinker into a stochastic supernet with 3 mixer types per layer:
```shell
python convert.py ServiceNow-AI/Apriel-1.5-15b-Thinker /tmp/apriel2-supernet \
    --surgery examples/stochastic_supernet.yaml --verbose
```

The converter shows you exactly what it's doing with a visual plan tree:
The result is a sharded Apriel2 checkpoint with attention + sliding window + gated delta net mixers in every layer.
The surgery config for this looks like this:
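The referenced examples/stochastic_supernet.yaml isn't reproduced in this excerpt; a hypothetical sketch, assuming illustrative field names rather than the repo's actual schema:

```yaml
# Hypothetical sketch only -- field names are illustrative guesses.
# Grounded in the PR description: a stochastic mixer with a main mixer
# plus attention, sliding-window, and gated delta net sub-mixers.
decoder:
  block:
    mixer:
      type: stochastic
      main: attention
      mixers:
        attention:
          type: attention
        sliding_window:
          type: attention
          window_size: 4096
        gated_delta_net:
          type: gated_delta_net
          conv_kernel_size: 4
```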
The converter supports the following better-than-random initializations for linear mixers: