Changes:
- Make cache keys more generic and future-proof
- Split Ollama caching into separate binary and model caches, so cached runs restore the binary instantly
- Skip the Ollama install step when the binary is already cached, saving setup time
- Update the docs' model examples to popular models (gpt-oss:20b, gpt-oss:120b, qwen3-coder:30b)
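
A minimal sketch of what the split caching could look like in a GitHub Actions workflow. The cache paths, key names, and install command here are illustrative assumptions, not the exact steps from this PR:

```yaml
# Restore the Ollama binary and the downloaded models from separate caches,
# so a cheap binary-only hit still makes the run fast.
- name: Cache Ollama binary
  id: cache-ollama-binary
  uses: actions/cache@v4
  with:
    path: /usr/local/bin/ollama          # assumed install location
    key: ollama-binary-${{ runner.os }}

- name: Cache Ollama models
  uses: actions/cache@v4
  with:
    path: ~/.ollama/models               # default Ollama model directory
    key: ollama-models-${{ runner.os }}-${{ hashFiles('**/models.txt') }}  # hypothetical model list file
    restore-keys: |
      ollama-models-${{ runner.os }}-

# Only install when the binary cache missed.
- name: Install Ollama
  if: steps.cache-ollama-binary.outputs.cache-hit != 'true'
  run: curl -fsSL https://ollama.com/install.sh | sh
```

Caching the binary under its own key means a model cache miss (e.g. after changing the model list) no longer forces a re-download of the Ollama binary itself.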
_Generated with `cmux`_