Commit b7515ee

Optimizes iterable & string utils to improve perf
- Replaces generator utils w/ optimized, class-based iterator versions (see the sketch below for the general pattern)
- Adds unit tests for new iterator implementations
- Adds rough benchmarking framework, powered by `tinybench`
1 parent 144edac commit b7515ee
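
As a rough illustration of the first bullet, here is a minimal sketch with made-up names such as `mapGen` and `MapIterable` (not the actual GitLens code): a generator-based helper replaced by a class that implements the iterator protocol directly, avoiding per-item generator overhead.

```typescript
// Generator-based helper (the "before" shape).
function* mapGen<T, U>(source: Iterable<T>, mapper: (item: T) => U): IterableIterator<U> {
	for (const item of source) {
		yield mapper(item);
	}
}

// Class-based iterator (the "after" shape): implements the iterator protocol
// directly instead of going through generator machinery.
class MapIterable<T, U> implements IterableIterator<U> {
	private readonly source: Iterator<T>;

	constructor(
		source: Iterable<T>,
		private readonly mapper: (item: T) => U,
	) {
		this.source = source[Symbol.iterator]();
	}

	[Symbol.iterator](): IterableIterator<U> {
		return this;
	}

	next(): IteratorResult<U> {
		const result = this.source.next();
		return result.done ? result : { done: false, value: this.mapper(result.value) };
	}
}

// Both produce the same values; the class form trades a little verbosity for speed.
console.log([...mapGen([1, 2, 3], x => x * 2)]); // [2, 4, 6]
console.log([...new MapIterable([1, 2, 3], x => x * 2)]); // [2, 4, 6]
```

A real implementation would also handle `return()`/`throw()` forwarding and early termination, which this sketch omits.
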

File tree

13 files changed: +3207, -146 lines

CONTRIBUTING.md

Lines changed: 138 additions & 0 deletions
@@ -251,3 +251,141 @@ To add new icons to the GL Icons font follow the steps below:
```

Once you've finished, copy the new `glicons.woff2?<uuid>` URL from `src/webviews/apps/shared/glicons.scss` and search and replace the old references with the new one.

## Testing

GitLens uses VS Code's testing infrastructure for unit and integration tests.

### Running Tests

```bash
# Run all tests
pnpm run test

# Run E2E tests
pnpm run test:e2e

# Build test files (required before running tests)
pnpm run build:tests

# Watch mode for tests during development
pnpm run watch:tests
```

### Writing Tests

Tests are co-located with source files in `__tests__/` directories:

```
src/
└── system/
    ├── string.ts
    └── __tests__/
        └── string.test.ts  ← Test file
```

Create test files using the naming pattern `*.test.ts`:

```typescript
import { describe, it, expect } from 'vitest';
import { functionToTest } from '../module';

describe('functionToTest', () => {
	it('should handle basic case', () => {
		const result = functionToTest('input');
		expect(result).toBe('expected output');
	});

	it('should handle edge cases', () => {
		expect(functionToTest('')).toBe('');
		expect(functionToTest(null)).toBeUndefined();
	});
});
```

### Test Best Practices

- **Co-locate tests**: Place tests in `__tests__/` directories next to the code they test
- **Descriptive names**: Use clear test names that describe what is being tested
- **Test behavior**: Focus on testing behavior and outputs, not implementation details
- **Edge cases**: Always test edge cases, empty inputs, and error conditions
- **Mock dependencies**: Use mocks for external dependencies (VS Code API, file system, etc.); see the sketch after this list
- **Keep tests fast**: Unit tests should run quickly; use mocks to avoid slow operations
### Debugging Tests

Use VS Code's built-in test runner:

1. Open the Testing view in the sidebar
2. Click the debug icon next to any test to debug it
3. Set breakpoints in test files or source code
4. Use the Debug Console to inspect variables

Alternatively, use the provided launch configurations:

- Open the Run and Debug view
- Select a test-related launch configuration
- Press `F5` to start debugging

## Benchmarking

GitLens includes a benchmarking framework for measuring and comparing the performance of critical code paths.

### Running Benchmarks

```bash
# List all available benchmarks
pnpm run benchmark:list

# Run all benchmarks
pnpm run benchmark

# Run a specific benchmark by name
pnpm run benchmark <name>
```

### Creating New Benchmarks

Benchmarks are automatically discovered when placed in `__tests__/` directories:

1. Create a file named `*.benchmark.ts` in any `__tests__/` directory
2. Use [tinybench](https://github.com/tinylibs/tinybench) to write your benchmark:

```typescript
import { Bench } from 'tinybench';

async function runBenchmark() {
	console.log('MY FEATURE BENCHMARK');

	const bench = new Bench({ time: 100 });

	bench
		.add('Method A', () => {
			// Code to benchmark
		})
		.add('Method B', () => {
			// Alternative implementation
		});

	await bench.run();

	// Display results
	for (const task of bench.tasks) {
		console.log(`${task.name}: ${task.result?.hz.toLocaleString()} ops/sec`);
	}
}

runBenchmark();
```

3. Run your benchmark: `pnpm run benchmark <name>`

### Best Practices

- **Use realistic data**: Generate test data matching real-world patterns
- **Test multiple scenarios**: Benchmark with different data sizes (small, medium, large); see the sketch after this list
- **Measure complete operations**: Include all relevant work in benchmarks
- **Display clear results**: Show ops/sec, average time, and margin of error
- **Focus on hot paths**: Benchmark performance-critical code
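
To illustrate the "multiple data sizes" point, here is a minimal sketch following the same tinybench pattern as the example above; `makeLines` and the `join` task are illustrative only, not an actual GitLens benchmark:

```typescript
import { Bench } from 'tinybench';

// Illustrative helper: build inputs at several sizes so results show scaling,
// not just behavior on trivially small data.
function makeLines(count: number): string[] {
	return Array.from({ length: count }, (_, i) => `line ${i}: some plausible content`);
}

async function runBenchmark() {
	const bench = new Bench({ time: 100 });

	for (const size of [10, 1_000, 100_000]) {
		const lines = makeLines(size);
		bench.add(`join (${size} lines)`, () => {
			lines.join('\n');
		});
	}

	await bench.run();

	for (const task of bench.tasks) {
		console.log(`${task.name}: ${task.result?.hz.toLocaleString()} ops/sec`);
	}
}

void runBenchmark();
```

Running the same operation at several sizes makes regressions that only appear on large inputs visible in the results.
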
See the full [Benchmarking Guide](docs/benchmarking.md) for detailed information and the `src/system/__tests__/string.benchmark.ts` example.

package.json

Lines changed: 3 additions & 0 deletions
@@ -25886,6 +25886,8 @@
   "build:webviews:quick": "node ./scripts/compile-composer-templates.mjs && webpack --mode development --config-name webviews:common --config-name webviews --env skipLint",
   "build:icons": "pnpm run icons:svgo && pnpm fantasticon && pnpm run icons:apply && pnpm run icons:export",
   "build:tests": "node ./scripts/esbuild.tests.mjs",
+  "benchmark": "node ./scripts/runBenchmark.mjs",
+  "benchmark:list": "node ./scripts/runBenchmark.mjs --list",
   "// Extracts the contributions from package.json into contributions.json": "//",
   "extract:contributions": "node --experimental-strip-types ./scripts/generateContributions.mts --extract",
   "// Generates contributions in contributions.json into package.json": "//",
@@ -26039,6 +26041,7 @@
   "sinon": "21.0.0",
   "svgo": "4.0.0",
   "terser-webpack-plugin": "5.3.14",
+  "tinybench": "5.0.1",
   "ts-loader": "9.5.4",
   "typescript": "5.9.3",
   "typescript-eslint": "8.46.0",

pnpm-lock.yaml

Lines changed: 9 additions & 0 deletions
Some generated files are not rendered by default.

scripts/esbuild.tests.mjs

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ async function buildTests(target) {
   /** @type BuildOptions | WatchOptions */
   const config = {
      bundle: true,
-     entryPoints: ['src/**/__tests__/**/*.test.ts'],
+     entryPoints: ['src/**/__tests__/**/*.test.ts', 'src/**/__tests__/**/*.benchmark.ts'],
      entryNames: '[dir]/[name]',
      external: ['vscode'],
      format: 'cjs',

scripts/runBenchmark.mjs

Lines changed: 177 additions & 0 deletions
@@ -0,0 +1,177 @@
#!/usr/bin/env node

/**
 * Benchmark runner for GitLens
 *
 * Usage:
 *   pnpm run benchmark          # Run all benchmarks
 *   pnpm run benchmark string   # Run specific benchmark by name
 *   pnpm run benchmark --list   # List all available benchmarks
 */

import { execSync } from 'child_process';
import { existsSync, readdirSync, statSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join, basename } from 'path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = join(__dirname, '..');

// Parse command line arguments
const args = process.argv.slice(2);
const shouldList = args.includes('--list') || args.includes('-l');
const specificBenchmark = args.find(arg => !arg.startsWith('--'));

/**
 * Find all benchmark files in the codebase
 */
function findBenchmarkFiles() {
	const benchmarks = [];
	const testDirs = [];

	// Find all __tests__ directories
	function findTestDirs(dir) {
		const entries = readdirSync(dir, { withFileTypes: true });
		for (const entry of entries) {
			if (entry.name === 'node_modules' || entry.name === 'dist' || entry.name === 'out') continue;

			const fullPath = join(dir, entry.name);
			if (entry.isDirectory()) {
				if (entry.name === '__tests__') {
					testDirs.push(fullPath);
				}
				findTestDirs(fullPath);
			}
		}
	}

	findTestDirs(join(rootDir, 'src'));

	// Find benchmark files in test directories
	for (const testDir of testDirs) {
		const entries = readdirSync(testDir);
		for (const entry of entries) {
			if (entry.endsWith('.benchmark.ts')) {
				const fullPath = join(testDir, entry);
				const relativePath = fullPath.replace(rootDir, '').replace(/\\/g, '/').substring(1);
				const name = basename(entry, '.benchmark.ts');

				benchmarks.push({
					name,
					sourcePath: relativePath,
					outputPath: relativePath
						.replace('src/', 'out/tests/')
						.replace('.ts', '.js'),
				});
			}
		}
	}

	return benchmarks;
}

/**
 * List all available benchmarks
 */
function listBenchmarks(benchmarks) {
	console.log('Available benchmarks:\n');
	for (const benchmark of benchmarks) {
		console.log(`  ${benchmark.name.padEnd(20)} - ${benchmark.sourcePath}`);
	}
	console.log(`\nTotal: ${benchmarks.length} benchmark(s)`);
	console.log('\nUsage:');
	console.log('  pnpm run benchmark          # Run all benchmarks');
	console.log('  pnpm run benchmark <name>   # Run specific benchmark');
	console.log('  pnpm run benchmark --list   # Show this list');
}

/**
 * Build benchmarks
 */
function buildBenchmarks() {
	console.log('Building benchmarks...\n');
	try {
		execSync(`node ${join(rootDir, 'scripts', 'esbuild.tests.mjs')}`, {
			stdio: 'inherit',
			cwd: rootDir,
		});
	} catch (error) {
		console.error('Error building benchmarks:', error.message);
		process.exit(1);
	}
}

/**
 * Run a specific benchmark
 */
function runBenchmark(benchmark) {
	const benchmarkPath = join(rootDir, benchmark.outputPath);

	if (!existsSync(benchmarkPath)) {
		console.error(`Error: Benchmark file not found at ${benchmarkPath}`);
		console.error('Make sure the build completed successfully.');
		process.exit(1);
	}

	console.log(`\nRunning benchmark: ${benchmark.name}`);
	console.log(`Source: ${benchmark.sourcePath}\n`);

	try {
		execSync(`node "${benchmarkPath}"`, { stdio: 'inherit', cwd: rootDir });
	} catch (error) {
		console.error(`Error running benchmark ${benchmark.name}:`, error.message);
		process.exit(1);
	}
}

/**
 * Main execution
 */
function main() {
	const benchmarks = findBenchmarkFiles();

	if (benchmarks.length === 0) {
		console.log('No benchmarks found.');
		console.log('Create benchmark files named *.benchmark.ts in __tests__ directories.');
		process.exit(0);
	}

	// Handle --list flag
	if (shouldList) {
		listBenchmarks(benchmarks);
		process.exit(0);
	}

	// Build benchmarks
	buildBenchmarks();

	// Run specific benchmark if specified
	if (specificBenchmark) {
		const benchmark = benchmarks.find(b => b.name === specificBenchmark);
		if (!benchmark) {
			console.error(`Error: Benchmark "${specificBenchmark}" not found.`);
			console.error('\nAvailable benchmarks:');
			for (const b of benchmarks) {
				console.error(`  - ${b.name}`);
			}
			process.exit(1);
		}

		runBenchmark(benchmark);
	} else {
		// Run all benchmarks
		console.log(`\nRunning ${benchmarks.length} benchmark(s)...\n`);

		for (let i = 0; i < benchmarks.length; i++) {
			if (i > 0) {
				console.log('\n' + '━'.repeat(80) + '\n');
			}
			runBenchmark(benchmarks[i]);
		}
	}

	console.log('\n✓ All benchmarks completed successfully!\n');
}

main();
