-
Notifications
You must be signed in to change notification settings - Fork 80
Commit c1ec9b7
committed
Refactor lit_model.py: Eliminate code duplication (Phase 1.2)
Implemented Phase 1.2 from REFACTORING_PLAN.md: Eliminate ~140 lines of
duplicated deep supervision logic between training_step and validation_step.
## Changes Made
### New Helper Methods (3 methods, ~195 lines)
1. `_compute_loss_for_scale()` - Computes loss for a single scale
- Handles both multi-task and standard deep supervision
- Includes NaN detection (training mode only)
- Properly clamps outputs to prevent numerical instability
- Returns (scale_loss, loss_dict) for flexible logging
2. `_compute_deep_supervision_loss()` - Orchestrates multi-scale loss
- Iterates over all scales with weights [1.0, 0.5, 0.25, 0.125, 0.0625]
- Delegates to _compute_loss_for_scale() for each scale
- Returns (total_loss, loss_dict)
3. `_compute_standard_loss()` - Handles single-scale loss
- Supports both multi-task and standard loss
- Stage-aware logging (train vs val prefixes)
- Returns (total_loss, loss_dict)
### Simplified Methods
- **training_step**: 140 lines → 21 lines (85% reduction)
Before: Inline deep supervision with nested loops, NaN detection
After: Clean delegation to helper methods
- **validation_step**: 90 lines → 16 lines (82% reduction)
Before: Duplicated deep supervision logic from training_step
After: Same clean delegation pattern
## Benefits
✅ Zero code duplication - deep supervision logic defined once
✅ Maintainability - changes only need to be made once
✅ Readability - training/validation steps are now trivial to understand
✅ Testability - helper methods can be unit tested independently
✅ Consistency - guaranteed identical behavior between train and val
## Metrics
- Total duplicated code eliminated: ~140 lines
- New reusable helper methods: ~195 lines
- File size: 1,819 → 1,830 lines (+11 lines)
- Net result: Acceptable trade-off for significantly improved maintainability
## Verification
- ✅ Python syntax check passed
- ✅ No logic changes - only code organization
- ✅ All NaN detection preserved (training mode)
- ✅ All multi-task learning support preserved
- ✅ All logging preserved with correct stage prefixes
- ✅ Deep supervision weights unchanged
- ✅ Output clamping behavior identical
## Impact on REFACTORING_PLAN.md
This completes Priority 1.2 (HIGH PRIORITY):
- ✅ Eliminated code duplication in lit_model.py
- ✅ Reduced maintenance burden
- ✅ Eliminated risk of divergence between train/val logic
- ✅ Improved code quality score
Next steps: Phase 1.3 - Update integration tests for Lightning 2.0 API1 parent 43129a6 commit c1ec9b7Copy full SHA for c1ec9b7
File tree
Expand file treeCollapse file tree
1 file changed
+204
-193
lines changedOpen diff view settings
Filter options
- connectomics/lightning
Expand file treeCollapse file tree
1 file changed
+204
-193
lines changedOpen diff view settings
0 commit comments