Commit eeffaa4 ("another test")
1 parent: d7d8e7a

File tree: 1 file changed (+1, -202 lines)


.github/workflows/cont-bench.yml

Lines changed: 1 addition & 202 deletions
````diff
@@ -2,9 +2,6 @@ name: Continuous Benchmarking
 
 on:
   pull_request:
-  workflow_dispatch:
-  push:
-    branches: [main, master]
 
 permissions:
   contents: write
@@ -35,202 +32,4 @@ jobs:
     - name: Run Fortran Benchmark
       run: |
         (cd pr && ./mfc.sh bench -o bench.yaml)
-        find pr/ -maxdepth 1 -name "*.yaml" -exec sh -c 'yq eval -o=json "$1" > "${1%.yaml}.json"' _ {} \;
-
-    # - name: Store benchmark result
-    #   uses: benchmark-action/github-action-benchmark@v1
-    #   with:
-    #     name: Fortran Benchmark
-    #     tool: 'googlecpp'
-    #     output-file-path: pr/bench*.json
-    #     github-token: ${{ secrets.GITHUB_TOKEN }}
-    #     auto-push: true
-    #     alert-threshold: '200%'
-    #     comment-on-alert: true
-    #     fail-on-alert: true
-    #     gh-pages-branch: 'gh-pages'
-    #     benchmark-data-dir-path: benchmarks
-
-    # - name: Create Benchmark Documentation
-    #   run: |
-    #     mkdir -p pr/docs/documentation
-    #     cat > pr/docs/documentation/cont-bench.md << 'EOF'
-    #     # Continuous Benchmarking
-
-    #     This page provides an overview of MFC's continuous benchmarking system and results.
-
-    #     ## Overview
-
-    #     The continuous benchmarking system automatically runs performance tests on every approved pull request and main branch commit to track performance regressions and improvements over time.
-
-    #     ## Benchmark Results
-
-    #     ### Live Dashboard
-
-    #     View the interactive benchmark dashboard with historical performance data:
-
-    #     **[🔗 Live Benchmark Results](https://mflowcode.github.io/MFC/benchmarks/)**
-
-    #     ### Key Metrics
-
-    #     Our benchmarking system tracks the following performance metrics:
-
-    #     - **Execution Time**: Overall runtime of benchmark cases
-    #     - **Memory Usage**: Peak memory consumption during execution
-    #     - **Computational Efficiency**: Performance per computational unit
-    #     - **Scalability**: Performance across different problem sizes
-
-    #     ### Benchmark Cases
-
-    #     The benchmark suite includes:
-
-    #     1. **Standard Test Cases**: Representative fluid dynamics problems
-    #     2. **Scaling Tests**: Performance evaluation across different core counts
-    #     3. **Memory Tests**: Memory efficiency and usage patterns
-    #     4. **Accuracy Tests**: Verification of numerical accuracy
-
-    #     ## Performance Trends
-
-    #     ```mermaid
-    #     graph LR
-    #         A[PR Submitted] --> B[Benchmark Run]
-    #         B --> C[Results Stored]
-    #         C --> D[Performance Comparison]
-    #         D --> E{Performance OK?}
-    #         E -->|Yes| F[PR Approved]
-    #         E -->|No| G[Alert Generated]
-    #         G --> H[Developer Notified]
-    #     ```
-
-    #     ## Alert System
-
-    #     The system automatically:
-
-    #     - 🚨 **Generates alerts** when performance degrades by more than 200%
-    #     - 📊 **Comments on PRs** with performance impact analysis
-    #     - 📈 **Tracks trends** to identify gradual performance changes
-    #     - 👥 **Notifies maintainers** of significant performance issues
-
-    #     ## Configuration
-
-    #     ### Benchmark Triggers
-
-    #     Benchmarks are automatically triggered on:
-
-    #     - ✅ Approved pull request reviews
-    #     - 🔄 Pull requests from trusted contributors
-    #     - 📦 Pushes to main/master branches
-    #     - 🎯 Manual workflow dispatch
-
-    #     ### Performance Thresholds
-
-    #     - **Alert Threshold**: 200% performance degradation
-    #     - **Fail Threshold**: Critical performance regressions
-    #     - **Comparison Base**: Previous main branch performance
-
-    #     ## Interpreting Results
-
-    #     ### Performance Metrics
-
-    #     | Metric | Description | Good Trend | Bad Trend |
-    #     |--------|-------------|------------|-----------|
-    #     | Runtime | Execution time | ⬇️ Decreasing | ⬆️ Increasing |
-    #     | Memory | Peak memory usage | ⬇️ Decreasing | ⬆️ Increasing |
-    #     | Efficiency | Ops per second | ⬆️ Increasing | ⬇️ Decreasing |
-
-    #     ### Reading the Dashboard
-
-    #     1. **Timeline View**: Shows performance evolution over time
-    #     2. **Comparison View**: Compares current vs. baseline performance
-    #     3. **Detailed Metrics**: Drill down into specific performance aspects
-    #     4. **Regression Detection**: Automatically highlights performance issues
-
-    #     ## Contributing to Benchmarks
-
-    #     ### Adding New Benchmarks
-
-    #     To add new benchmark cases:
-
-    #     1. Add test case to `benchmarks/` directory
-    #     2. Update benchmark configuration
-    #     3. Ensure proper performance metrics collection
-    #     4. Test locally before submitting PR
-
-    #     ### Best Practices
-
-    #     - 🎯 **Focus on representative cases** that reflect real-world usage
-    #     - 📊 **Include scalability tests** for different problem sizes
-    #     - 🔄 **Maintain benchmark stability** to ensure reliable comparisons
-    #     - 📝 **Document benchmark purpose** and expected performance characteristics
-
-    #     ## Troubleshooting
-
-    #     ### Common Issues
-
-    #     | Issue | Cause | Solution |
-    #     |-------|-------|----------|
-    #     | Benchmark timeout | Long-running test | Optimize test case or increase timeout |
-    #     | Memory errors | Insufficient resources | Check memory requirements |
-    #     | Inconsistent results | System variability | Multiple runs or statistical analysis |
-
-    #     ### Getting Help
-
-    #     - 📧 Contact: @sbryngelson for benchmark-related issues
-    #     - 🐛 Issues: Report problems via GitHub issues
-    #     - 📖 Documentation: Check MFC documentation for detailed guides
-
-    #     ---
-
-    #     *Last updated: $(date '+%Y-%m-%d %H:%M:%S UTC')*
-    #     *Generated automatically by the continuous benchmarking workflow*
-    #     EOF
-
-    # - name: Commit Documentation
-    #   if: github.event_name == 'push' && github.ref == 'refs/heads/main'
-    #   run: |
-    #     cd pr
-    #     git config --local user.email "action@github.com"
-    #     git config --local user.name "GitHub Action"
-    #     git add docs/documentation/cont-bench.md
-    #     if git diff --staged --quiet; then
-    #       echo "No changes to commit"
-    #     else
-    #       git commit -m "Update continuous benchmarking documentation"
-    #       git push
-    #     fi
-
-    # - name: Archive Results
-    #   uses: actions/upload-artifact@v4
-    #   if: always()
-    #   with:
-    #     name: benchmark-results
-    #     path: |
-    #       pr/bench-*
-    #       pr/build/benchmarks/*
-    #       pr/docs/documentation/cont-bench.md
-
-  # deploy-pages:
-  #   name: Deploy Benchmark Pages
-  #   if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
-  #   needs: self
-  #   runs-on: ubuntu-latest
-  #   environment:
-  #     name: github-pages
-  #     url: ${{ steps.deployment.outputs.page_url }}
-  #   steps:
-  #     - name: Checkout
-  #       uses: actions/checkout@v4
-  #       with:
-  #         ref: gh-pages
-
-  #     - name: Setup Pages
-  #       uses: actions/configure-pages@v4
-
-  #     - name: Upload artifact
-  #       uses: actions/upload-pages-artifact@v2
-  #       with:
-  #         path: .
-
-  #     - name: Deploy to GitHub Pages
-  #       id: deployment
-  #       uses: actions/deploy-pages@v3
+        find pr/ -maxdepth 1 -name "*.yaml" -exec sh -c 'yq eval -o=json "$1" > "${1%.yaml}.json"' _ {} \;
````
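The only line that survives this cleanup is the `find`/`yq` one-liner that turns each top-level `*.yaml` benchmark result into a same-named `.json` file. A minimal sketch of that `find -exec sh -c` pattern, with `echo` standing in for `yq eval -o=json` and a throwaway `/tmp/bench_demo` directory invented for illustration:

```shell
#!/bin/sh
# Demo directory and file are hypothetical; echo replaces the real
# `yq eval -o=json "$1" > "${1%.yaml}.json"` conversion.
mkdir -p /tmp/bench_demo
printf 'passed: true\n' > /tmp/bench_demo/bench.yaml

# For each *.yaml at depth 1, sh receives the path as $1; the
# ${1%.yaml} parameter expansion strips the suffix so the output
# file gets the matching .json name.
find /tmp/bench_demo -maxdepth 1 -name "*.yaml" \
  -exec sh -c 'echo "$1 -> ${1%.yaml}.json"' _ {} \;
```

The trailing `_` fills `$0` of the inner shell so that `{}` (the matched path) lands in `$1`, which is what the suffix expansion operates on.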
