🤖 feat: Add first-class Ollama support with auto-detect and auto-start
Implement comprehensive Ollama integration that automatically detects
running servers, starts them if needed, and manages model availability.
Key features:
- Provider resolution: parseModelSpec() handles the 'ollama:model:tag' format (sketch below)
- OllamaManager service: health checks, CLI detection, server lifecycle
- Auto-start: spawns 'ollama serve' and polls with exponential backoff, up to 30s (sketch below)
- Model management: list, pull, warm-up operations via the /api/* endpoints (pull sketch after the architecture notes)
- Security: localhost-only by default, with validation of any non-local host (sketch below)
- IPC channels: 6 new handlers for health, start, list, pull, cancel, config (sketch below)
- AI service: integrates the ollama-ai-provider package with lazy loading (sketch below)
- Type safety: comprehensive TypeScript types and defensive programming
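A minimal sketch of the spec parser, assuming a 'latest' default tag; the parseModelSpec name comes from this change, but the returned field names and defaults here are illustrative:

```ts
// Illustrative result shape; field names are assumptions, not the exact types in this diff.
interface ModelSpec {
  provider: string; // e.g. "ollama"
  model: string;    // e.g. "llama3.1"
  tag: string;      // e.g. "8b", or "latest" when omitted
}

// Parses "ollama:model:tag"; falls back to "latest" when no tag is given.
function parseModelSpec(spec: string): ModelSpec {
  const [provider, model, tag = "latest"] = spec.split(":");
  if (!provider || !model) {
    throw new Error(`Invalid model spec: ${spec}`);
  }
  return { provider, model, tag };
}

parseModelSpec("ollama:llama3.1:8b"); // { provider: "ollama", model: "llama3.1", tag: "8b" }
```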
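The auto-start flow could look like the following, assuming the stock Ollama endpoint (http://localhost:11434, which answers a plain GET when up); the exact retry delays are illustrative, only the 30s cap is from this change:

```ts
import { spawn } from "node:child_process";

const OLLAMA_URL = "http://localhost:11434"; // Ollama's default address

// Ollama responds to GET / with "Ollama is running" when the server is up.
async function isHealthy(): Promise<boolean> {
  try {
    const res = await fetch(OLLAMA_URL);
    return res.ok;
  } catch {
    return false;
  }
}

// Spawn `ollama serve` detached, then poll with exponential backoff,
// giving up after roughly 30s total. Initial delay and cap are assumptions.
async function ensureServer(): Promise<void> {
  if (await isHealthy()) return;
  const child = spawn("ollama", ["serve"], { detached: true, stdio: "ignore" });
  child.unref();
  let delay = 500;
  let waited = 0;
  while (waited < 30_000) {
    await new Promise((resolve) => setTimeout(resolve, delay));
    waited += delay;
    if (await isHealthy()) return;
    delay = Math.min(delay * 2, 8_000);
  }
  throw new Error("Ollama server did not become healthy within 30s");
}
```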
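The host validation is likely no more than a guard like this; the allowRemote flag is a hypothetical name for whatever config gates non-local hosts:

```ts
// Hostnames treated as local; the WHATWG URL parser keeps brackets on IPv6.
const LOCAL_HOSTS = new Set(["localhost", "127.0.0.1", "[::1]"]);

// Reject non-local Ollama hosts unless explicitly allowed (flag name is an assumption).
function validateHost(baseUrl: string, allowRemote: boolean): void {
  const host = new URL(baseUrl).hostname;
  if (!LOCAL_HOSTS.has(host) && !allowRemote) {
    throw new Error(`Refusing non-local Ollama host: ${host}`);
  }
}
```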
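The six IPC handlers plausibly register via Electron's standard ipcMain.handle; the channel names and manager method names below are hypothetical stand-ins for the real ones in this diff:

```ts
import { ipcMain } from "electron";

// Minimal manager surface for illustration; method names are assumptions.
interface OllamaManager {
  isHealthy(): Promise<boolean>;
  ensureServer(): Promise<void>;
  listModels(): Promise<string[]>;
  pull(name: string): Promise<void>;
  cancelPull(name: string): void;
  getConfig(): unknown;
}

export function registerOllamaIpc(manager: OllamaManager): void {
  ipcMain.handle("ollama:health", () => manager.isHealthy());
  ipcMain.handle("ollama:start", () => manager.ensureServer());
  ipcMain.handle("ollama:list", () => manager.listModels());
  ipcMain.handle("ollama:pull", (_event, name: string) => manager.pull(name));
  ipcMain.handle("ollama:cancel", (_event, name: string) => manager.cancelPull(name));
  ipcMain.handle("ollama:config", () => manager.getConfig());
}
```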
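Lazy loading of ollama-ai-provider can be a cached dynamic import; createOllama is that package's factory, while the caching shape and baseURL wiring here are assumptions:

```ts
// Cache the provider so the dynamic import happens at most once.
let ollama: ((modelId: string) => unknown) | null = null;

async function getOllamaModel(modelId: string): Promise<unknown> {
  if (!ollama) {
    const { createOllama } = await import("ollama-ai-provider");
    // baseURL shown for clarity; it matches the package's stock default.
    ollama = createOllama({ baseURL: "http://localhost:11434/api" });
  }
  return ollama(modelId);
}
```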
Architecture:
- ~800 LoC production code across provider resolver, manager, types
- Full test coverage for provider resolution (16 tests)
- Tracks server lifecycle (only kills processes we started)
- Streaming pull progress support (channel ready, UI pending; sketch below)
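A sketch of the streaming pull against Ollama's /api/pull endpoint, which emits newline-delimited JSON progress records; the onProgress callback is a hypothetical name, and recent API docs use a "model" request field (older versions used "name"):

```ts
// Pull a model and report progress; Ollama streams NDJSON records
// carrying status/total/completed fields.
async function pullModel(
  name: string,
  onProgress: (status: string, pct?: number) => void
): Promise<void> {
  const res = await fetch("http://localhost:11434/api/pull", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: name }), // older Ollama versions used { name }
  });
  if (!res.ok || !res.body) throw new Error(`pull failed: ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buf = "";
  let nl: number;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buf += decoder.decode(value, { stream: true });
    // Each complete line is one JSON progress record.
    while ((nl = buf.indexOf("\n")) >= 0) {
      const line = buf.slice(0, nl).trim();
      buf = buf.slice(nl + 1);
      if (!line) continue;
      const rec = JSON.parse(line);
      const pct = rec.total
        ? Math.round((100 * (rec.completed ?? 0)) / rec.total)
        : undefined;
      onProgress(rec.status, pct);
    }
  }
}
```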
Phase 1 complete - server detection, CLI validation, model ops, chat streaming.
Phase 2 (future PR): UI for model picker, pull progress modal, settings panel.
Generated with cmux
Change-Id: I25dcd66747c1db7b57d539b198cb761892b9f7ec
Signed-off-by: Test <test@example.com>