Commit 94d4aa9

🤖 refactor: address review comments

- Make cache keys more generic and future-proof
- Cache the Ollama binary separately for instant cached runs
- Update model examples to popular models (gpt-oss, qwen3-coder)

Changes:

- Split Ollama caching into binary + models for better performance
- Only install Ollama if the binary is not cached (saves time)
- Update docs to reference gpt-oss:20b, gpt-oss:120b, qwen3-coder:30b

_Generated with `cmux`_

1 parent 472270c · commit 94d4aa9

File tree

2 files changed (+20, -9 lines)

.github/workflows/ci.yml

Lines changed: 16 additions & 4 deletions
```diff
@@ -99,25 +99,37 @@ jobs:
       - uses: ./.github/actions/setup-cmux

+      - name: Cache Ollama binary
+        id: cache-ollama-binary
+        uses: actions/cache@v4
+        with:
+          path: /usr/local/bin/ollama
+          key: ${{ runner.os }}-ollama-binary-v1
+          restore-keys: |
+            ${{ runner.os }}-ollama-binary-
+
       - name: Cache Ollama models
         id: cache-ollama-models
         uses: actions/cache@v4
         with:
           path: ~/.ollama/models
-          key: ${{ runner.os }}-ollama-gpt-oss-20b-v1
+          key: ${{ runner.os }}-ollama-models-v1
           restore-keys: |
-            ${{ runner.os }}-ollama-gpt-oss-
+            ${{ runner.os }}-ollama-models-

       - name: Install Ollama
+        if: steps.cache-ollama-binary.outputs.cache-hit != 'true'
         run: |
           curl -fsSL https://ollama.com/install.sh | sh
+
+      - name: Start Ollama and pull models
+        run: |
           # Start Ollama service in background
           ollama serve &
           # Wait for Ollama to be ready
           timeout 30 sh -c 'until curl -s http://localhost:11434/api/tags > /dev/null 2>&1; do sleep 1; done'
           echo "Ollama is ready"
-          # Pull the gpt-oss:20b model for tests (this may take a few minutes on first run)
-          # Subsequent runs will use cached model
+          # Pull the gpt-oss:20b model for tests (cached after first run)
           ollama pull gpt-oss:20b
           echo "Model pulled successfully"
```

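The readiness wait in the diff above is a one-liner (`timeout 30 sh -c 'until curl …'`). It can also be factored into a small POSIX-shell helper; this is a sketch only, and the function name `wait_for_url` and its default timeout are illustrative, not part of the workflow:

```shell
#!/bin/sh
# Poll a URL with curl until it responds, or fail after a timeout.
# Mirrors the workflow's readiness check for http://localhost:11434/api/tags.
wait_for_url() {
  url="$1"
  limit="${2:-30}"   # seconds to wait before giving up
  elapsed=0
  until curl -s "$url" > /dev/null 2>&1; do
    if [ "$elapsed" -ge "$limit" ]; then
      echo "timed out after ${limit}s waiting for $url" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "ready: $url"
}
```

In the workflow, this gate matters because `ollama serve &` returns immediately; pulling a model before the API answers would fail.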
docs/models.md

Lines changed: 4 additions & 5 deletions
````diff
@@ -31,16 +31,15 @@ TODO: add issue link here.

 Run models locally with Ollama. No API key required:

-- `ollama:llama3.2:7b`
-- `ollama:llama3.2:13b`
-- `ollama:codellama:7b`
-- `ollama:qwen2.5:7b`
+- `ollama:gpt-oss:20b`
+- `ollama:gpt-oss:120b`
+- `ollama:qwen3-coder:30b`
 - Any model from the [Ollama Library](https://ollama.com/library)

 **Setup:**

 1. Install Ollama from [ollama.com](https://ollama.com)
-2. Pull a model: `ollama pull llama3.2:7b`
+2. Pull a model: `ollama pull gpt-oss:20b`
 3. Configure in `~/.cmux/providers.jsonc`:

 ```jsonc
````

0 commit comments