
Commit 10a4ad3

Merge pull request #6 from codelion/feature/update-example

Feature/update example

2 parents b75b359 + f6eefd8

File tree

5 files changed: +253 −8 lines changed

README.md

Lines changed: 27 additions & 0 deletions

@@ -76,6 +76,7 @@ When resuming from a checkpoint:
 - The system loads all previously evolved programs and their metrics
 - Checkpoint numbering continues from where it left off (e.g., if loaded from checkpoint_50, the next checkpoint will be checkpoint_60)
 - All evolution state is preserved (best programs, feature maps, archives, etc.)
+- Each checkpoint directory contains a copy of the best program at that point in time
 
 Example workflow with checkpoints:
 
@@ -91,6 +92,32 @@ python openevolve-run.py examples/function_minimization/initial_program.py \
   --checkpoint examples/function_minimization/openevolve_output/checkpoints/checkpoint_50 \
   --iterations 50
 ```
+
+### Comparing Results Across Checkpoints
+
+Each checkpoint directory contains the best program found up to that point, making it easy to compare solutions over time:
+
+```
+checkpoints/
+  checkpoint_10/
+    best_program.py         # Best program at iteration 10
+    best_program_info.json  # Metrics and details
+    programs/               # All programs evaluated so far
+    metadata.json           # Database state
+  checkpoint_20/
+    best_program.py         # Best program at iteration 20
+  ...
+```
+
+You can compare the evolution of solutions by examining the best programs at different checkpoints:
+
+```bash
+# Compare best programs at different checkpoints
+diff -u checkpoints/checkpoint_10/best_program.py checkpoints/checkpoint_20/best_program.py
+
+# Compare metrics
+cat checkpoints/checkpoint_*/best_program_info.json | grep -A 10 metrics
+```
 ### Docker
 
 You can also install and execute via Docker:
Lines changed: 179 additions & 0 deletions

@@ -0,0 +1,179 @@

# Function Minimization Example

This example demonstrates how OpenEvolve can discover sophisticated optimization algorithms starting from a simple implementation.

## Problem Description

The task is to minimize a complex non-convex function with multiple local minima:

```python
f(x, y) = sin(x) * cos(y) + sin(x*y) + (x^2 + y^2)/20
```

The global minimum is approximately at (-1.704, 0.678) with a value of -1.519.
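The stated minimum can be spot-checked numerically. A quick sketch — `evaluate_function` here is a standalone re-typing of the formula above, not an import from the example's module:

```python
import math

def evaluate_function(x, y):
    """The objective from the problem description."""
    return math.sin(x) * math.cos(y) + math.sin(x * y) + (x**2 + y**2) / 20

# Value at the reported global minimum
print(evaluate_function(-1.704, 0.678))  # ≈ -1.518
```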
## Getting Started

To run this example:

```bash
cd examples/function_minimization
python ../../openevolve-run.py initial_program.py evaluator.py --config config.yaml
```

## Algorithm Evolution

### Initial Algorithm (Random Search)

The initial implementation was a simple random search that had no memory between iterations:

```python
def search_algorithm(iterations=1000, bounds=(-5, 5)):
    """
    A simple random search algorithm that often gets stuck in local minima.

    Args:
        iterations: Number of iterations to run
        bounds: Bounds for the search space (min, max)

    Returns:
        Tuple of (best_x, best_y, best_value)
    """
    # Initialize with a random point
    best_x = np.random.uniform(bounds[0], bounds[1])
    best_y = np.random.uniform(bounds[0], bounds[1])
    best_value = evaluate_function(best_x, best_y)

    for _ in range(iterations):
        # Simple random search
        x = np.random.uniform(bounds[0], bounds[1])
        y = np.random.uniform(bounds[0], bounds[1])
        value = evaluate_function(x, y)

        if value < best_value:
            best_value = value
            best_x, best_y = x, y

    return best_x, best_y, best_value
```

### Evolved Algorithm (Simulated Annealing)

After evolution, OpenEvolve had discovered a simulated annealing algorithm that takes a completely different approach:

```python
def simulated_annealing(bounds=(-5, 5), iterations=1000, step_size=0.1, initial_temperature=100, cooling_rate=0.99):
    """
    Simulated Annealing algorithm for function minimization.

    Args:
        bounds: Bounds for the search space (min, max)
        iterations: Number of iterations to run
        step_size: Step size for perturbing the solution
        initial_temperature: Initial temperature for the simulated annealing process
        cooling_rate: Cooling rate for the simulated annealing process

    Returns:
        Tuple of (best_x, best_y, best_value)
    """
    # Initialize with a random point
    best_x = np.random.uniform(bounds[0], bounds[1])
    best_y = np.random.uniform(bounds[0], bounds[1])
    best_value = evaluate_function(best_x, best_y)

    current_x, current_y = best_x, best_y
    current_value = best_value
    temperature = initial_temperature

    for _ in range(iterations):
        # Perturb the current solution
        new_x = current_x + np.random.uniform(-step_size, step_size)
        new_y = current_y + np.random.uniform(-step_size, step_size)

        # Ensure the new solution is within bounds
        new_x = max(bounds[0], min(new_x, bounds[1]))
        new_y = max(bounds[0], min(new_y, bounds[1]))

        new_value = evaluate_function(new_x, new_y)

        # Calculate the acceptance probability
        if new_value < current_value:
            current_x, current_y = new_x, new_y
            current_value = new_value

            if new_value < best_value:
                best_x, best_y = new_x, new_y
                best_value = new_value
        else:
            probability = np.exp((current_value - new_value) / temperature)
            if np.random.rand() < probability:
                current_x, current_y = new_x, new_y
                current_value = new_value

        # Cool down the temperature
        temperature *= cooling_rate

    return best_x, best_y, best_value
```

## Key Improvements

Through evolutionary iterations, OpenEvolve discovered several key algorithmic concepts:

1. **Local Search**: Instead of random sampling across the entire space, the evolved algorithm makes small perturbations to promising solutions:

   ```python
   new_x = current_x + np.random.uniform(-step_size, step_size)
   new_y = current_y + np.random.uniform(-step_size, step_size)
   ```

2. **Temperature-based Acceptance**: The algorithm can escape local minima by occasionally accepting worse solutions:

   ```python
   probability = np.exp((current_value - new_value) / temperature)
   if np.random.rand() < probability:
       current_x, current_y = new_x, new_y
       current_value = new_value
   ```

3. **Cooling Schedule**: The temperature gradually decreases, transitioning from exploration to exploitation:

   ```python
   temperature *= cooling_rate
   ```

4. **Parameter Introduction**: The system discovered the need for additional parameters to control the algorithm's behavior:

   ```python
   def simulated_annealing(bounds=(-5, 5), iterations=1000, step_size=0.1, initial_temperature=100, cooling_rate=0.99):
   ```
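The interplay between temperature-based acceptance and the cooling schedule can be illustrated with a small numeric sketch (using the default parameters of the evolved function): early on, a move that is worse by 0.5 is almost always accepted; after 1000 geometric cooling steps it essentially never is:

```python
import math

delta = 0.5                  # how much worse the candidate move is
initial_temperature = 100.0  # defaults from the evolved function
cooling_rate = 0.99

def acceptance(temperature):
    """Metropolis acceptance probability for a move worse by `delta`."""
    return math.exp(-delta / temperature)

print(acceptance(initial_temperature))  # ~0.995: exploration, nearly always accept

final_temperature = initial_temperature * cooling_rate ** 1000
print(final_temperature)                # ~0.0043: temperature has collapsed
print(acceptance(final_temperature))    # effectively 0: greedy descent only
```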
## Results

The evolved algorithm shows substantial improvement over the initial random search:

| Metric | Value |
|--------|-------|
| Value Score | 0.677 |
| Distance Score | 0.258 |
| Reliability Score | 1.000 |
| Overall Score | 0.917 |
| Combined Score | 0.584 |

The simulated annealing algorithm:

- Achieves higher quality solutions (closer to the global minimum)
- Has perfect reliability (100% success rate in completing runs)
- Maintains a good balance between performance and reliability

## How It Works

This example demonstrates key features of OpenEvolve:

- **Code Evolution**: Only the code inside the evolve blocks is modified
- **Complete Algorithm Redesign**: The system transformed a random search into a completely different algorithm
- **Automatic Discovery**: The system discovered simulated annealing without being explicitly programmed with knowledge of optimization algorithms
- **Function Renaming**: The system even recognized that the algorithm should have a more descriptive name

## Next Steps

Try modifying the config.yaml file to:

- Increase the number of iterations
- Change the LLM model configuration
- Adjust the evaluator settings to prioritize different metrics
- Try a different objective function by modifying `evaluate_function()`
examples/function_minimization/evaluator.py

Lines changed: 6 additions & 6 deletions

@@ -55,9 +55,9 @@ def evaluate(program_path):
         Dictionary of metrics
     """
     # Known global minimum (approximate)
-    GLOBAL_MIN_X = -1.76
-    GLOBAL_MIN_Y = -1.03
-    GLOBAL_MIN_VALUE = -2.104
+    GLOBAL_MIN_X = -1.704
+    GLOBAL_MIN_Y = 0.678
+    GLOBAL_MIN_VALUE = -1.519
 
     try:
         # Load the program
@@ -216,9 +216,9 @@ def evaluate(program_path):
 def evaluate_stage1(program_path):
     """First stage evaluation with fewer trials"""
     # Known global minimum (approximate)
-    GLOBAL_MIN_X = float(-1.76)
-    GLOBAL_MIN_Y = float(-1.03)
-    GLOBAL_MIN_VALUE = float(-2.104)
+    GLOBAL_MIN_X = float(-1.704)
+    GLOBAL_MIN_Y = float(0.678)
+    GLOBAL_MIN_VALUE = float(-1.519)
 
     # Quick check to see if the program runs without errors
     try:
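The corrected constants can be sanity-checked with a brute-force sweep. A sketch, independent of the repository's code — `f` is re-typed from the example's README, and the grid resolution is an arbitrary choice:

```python
import math

def f(x, y):
    return math.sin(x) * math.cos(y) + math.sin(x * y) + (x**2 + y**2) / 20

# Coarse grid sweep over the search bounds (-5, 5) at step 0.05
pts = [(-5 + 0.05 * i, -5 + 0.05 * j) for i in range(201) for j in range(201)]
best = min(pts, key=lambda p: f(*p))
print(best, f(*best))  # near (-1.70, 0.70), value ≈ -1.518

# The old constants do not sit at a minimum of that depth at all
print(f(-1.76, -1.03))  # ≈ 0.67, nowhere near -2.104
```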

examples/function_minimization/initial_program.py

Lines changed: 0 additions & 1 deletion

@@ -49,4 +49,3 @@ def run_search():
 if __name__ == "__main__":
     x, y, value = run_search()
     print(f"Found minimum at ({x}, {y}) with value {value}")
-# The global minimum is around (-1.76, -1.03) with value -2.104

openevolve/controller.py

Lines changed: 41 additions & 1 deletion

@@ -353,10 +353,50 @@ def _save_checkpoint(self, iteration: int) -> None:
         checkpoint_dir = os.path.join(self.output_dir, "checkpoints")
         os.makedirs(checkpoint_dir, exist_ok=True)
 
-        # Save the database
+        # Create specific checkpoint directory
         checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_{iteration}")
+        os.makedirs(checkpoint_path, exist_ok=True)
+
+        # Save the database
         self.database.save(checkpoint_path, iteration)
 
+        # Save the best program found so far
+        best_program = None
+        if self.database.best_program_id:
+            best_program = self.database.get(self.database.best_program_id)
+        else:
+            best_program = self.database.get_best_program()
+
+        if best_program:
+            # Save the best program at this checkpoint
+            best_program_path = os.path.join(checkpoint_path, f"best_program{self.file_extension}")
+            with open(best_program_path, "w") as f:
+                f.write(best_program.code)
+
+            # Save metrics
+            best_program_info_path = os.path.join(checkpoint_path, "best_program_info.json")
+            with open(best_program_info_path, "w") as f:
+                import json
+
+                json.dump(
+                    {
+                        "id": best_program.id,
+                        "generation": best_program.generation,
+                        "iteration": iteration,
+                        "metrics": best_program.metrics,
+                        "language": best_program.language,
+                        "timestamp": best_program.timestamp,
+                        "saved_at": time.time(),
+                    },
+                    f,
+                    indent=2,
+                )
+
+            logger.info(
+                f"Saved best program at checkpoint {iteration} with metrics: "
+                f"{', '.join(f'{name}={value:.4f}' for name, value in best_program.metrics.items())}"
+            )
+
         logger.info(f"Saved checkpoint at iteration {iteration} to {checkpoint_path}")
 
     def _save_best_program(self, program: Optional[Program] = None) -> None:
