# Local Gateway Server Benchmark

This example demonstrates how to benchmark the performance of the fetch-proxy library by creating two local HTTP servers:

1. **Backend Server** (port 3001) - Simulates a real API service
2. **Gateway Server** (port 3000) - Uses fetch-proxy to proxy requests to the backend

## Running the Benchmark

```bash
npm run example:benchmark
```

## Available Endpoints

### Backend Endpoints (via Gateway)

- `GET /api/small` - Small JSON response (~200 bytes)
- `GET /api/medium` - Medium JSON response (~100 items, ~5KB)
- `GET /api/large` - Large JSON response (~1000 items, ~50KB)
- `GET /api/error` - Randomly returns 500 errors (30% failure rate)
- `GET /api/slow` - Slow response with 100ms delay
- `GET /api/health` - Health check endpoint

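The backend endpoints above can be sketched as a single payload builder. The helper below is a hypothetical illustration of the response shapes described in the list, not the example's actual code; the function name and exact payload fields are assumptions.

```javascript
// Hypothetical sketch of the backend's per-endpoint payloads.
function makePayload(path) {
  const item = (i) => ({ id: i, name: `item-${i}`, value: i * 2 });
  switch (path) {
    case "/api/small":
      // Small JSON body, on the order of a couple hundred bytes
      return { status: 200, body: { message: "ok", timestamp: Date.now() } };
    case "/api/medium":
      // ~100 items, roughly 5KB serialized
      return { status: 200, body: { items: Array.from({ length: 100 }, (_, i) => item(i)) } };
    case "/api/large":
      // ~1000 items, roughly 50KB serialized
      return { status: 200, body: { items: Array.from({ length: 1000 }, (_, i) => item(i)) } };
    case "/api/error":
      // ~30% of requests fail with a 500
      return Math.random() < 0.3
        ? { status: 500, body: { error: "simulated failure" } }
        : { status: 200, body: { message: "ok" } };
    case "/api/health":
      return { status: 200, body: { status: "healthy" } };
    default:
      return { status: 404, body: { error: "not found" } };
  }
}
```

Wrapping this in `http.createServer` (plus a `setTimeout` for `/api/slow`) reproduces the backend's behavior end to end.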
### Gateway-Specific Endpoints

- `GET /stats` - Performance statistics and metrics
- `GET /reset` - Reset performance statistics
- `GET /benchmark` - Run automated benchmark tests

## Benchmark Examples

### Small Response Benchmark

```bash
curl 'http://localhost:3000/benchmark?iterations=100&concurrency=10&endpoint=/api/small'
```

### Medium Response Benchmark

```bash
curl 'http://localhost:3000/benchmark?iterations=50&concurrency=5&endpoint=/api/medium'
```

### Large Response Benchmark

```bash
curl 'http://localhost:3000/benchmark?iterations=20&concurrency=3&endpoint=/api/large'
```

### Error Handling Benchmark

```bash
curl 'http://localhost:3000/benchmark?iterations=50&concurrency=5&endpoint=/api/error'
```

## Performance Monitoring

### View Current Statistics

```bash
curl http://localhost:3000/stats
```

Example output:

```json
{
  "gateway": {
    "totalRequests": 156,
    "errorCount": 3,
    "averageLatency": 12.5,
    "circuitBreakerState": "CLOSED",
    "circuitBreakerFailures": 0
  },
  "recentLatencies": [15, 12, 18, 9, 14, 11, 16, 13, 10, 12],
  "timestamp": "2025-05-30T18:47:34.637Z"
}
```
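
Summary numbers can be derived from the `recentLatencies` window shown above. The snippet below is a sketch of one way to aggregate that window; the nearest-rank p95 is an added illustration, not a field the gateway actually reports.

```javascript
// The sliding window from the example /stats output above.
const recentLatencies = [15, 12, 18, 9, 14, 11, 16, 13, 10, 12];

// Mean latency over the window (this particular window averages to 13ms;
// the gateway's averageLatency covers all requests, so it can differ).
const mean = recentLatencies.reduce((a, b) => a + b, 0) / recentLatencies.length;

// 95th percentile via the nearest-rank method (an added illustration).
const sorted = [...recentLatencies].sort((a, b) => a - b);
const p95 = sorted[Math.min(sorted.length - 1, Math.ceil(0.95 * sorted.length) - 1)];
```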

### Reset Statistics

```bash
curl http://localhost:3000/reset
```

## Performance Comparison

Compare direct backend access vs gateway proxying:

```bash
# Direct backend access
time curl -s http://localhost:3001/api/small > /dev/null

# Via fetch-proxy gateway
time curl -s http://localhost:3000/api/small > /dev/null
```

## Circuit Breaker Testing

Test the circuit breaker by making multiple requests to the error endpoint:

```bash
for i in {1..10}; do
  echo "Request $i:"
  curl -s http://localhost:3000/api/error | head -c 100
  echo
done
```

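The behavior being exercised can be sketched as a small state machine: the breaker opens after a run of consecutive failures, rejects requests while open, and allows a probe request after a cooldown. This is a generic illustration of the pattern, not fetch-proxy's internal implementation; the threshold and reset timings are assumptions.

```javascript
// Minimal circuit-breaker sketch: CLOSED -> OPEN after `threshold`
// consecutive failures, OPEN -> HALF_OPEN after `resetMs` of cooldown.
class CircuitBreaker {
  constructor(threshold = 5, resetMs = 10_000) {
    this.threshold = threshold;
    this.resetMs = resetMs;
    this.failures = 0;
    this.state = "CLOSED";
    this.openedAt = 0;
  }
  canRequest(now = Date.now()) {
    if (this.state === "OPEN" && now - this.openedAt >= this.resetMs) {
      this.state = "HALF_OPEN"; // allow one probe request through
    }
    return this.state !== "OPEN";
  }
  onSuccess() {
    this.failures = 0;
    this.state = "CLOSED";
  }
  onFailure(now = Date.now()) {
    this.failures += 1;
    if (this.state === "HALF_OPEN" || this.failures >= this.threshold) {
      this.state = "OPEN";
      this.openedAt = now;
    }
  }
}
```

With a 30% failure rate on `/api/error`, the loop above will occasionally string together enough failures to trip a breaker configured with a low threshold, which is visible as `circuitBreakerState: "OPEN"` in `/stats`.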
## Features Demonstrated

1. **Performance Metrics** - Request counting, latency tracking, error monitoring
2. **Circuit Breaker** - Automatic failure detection and recovery
3. **Header Management** - Request ID tracking, timestamp headers
4. **Concurrent Benchmarking** - Configurable concurrency and iteration counts
5. **Error Handling** - Graceful degradation and error reporting
6. **Memory Efficiency** - Sliding window for latency tracking

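The memory-efficiency point (item 6) comes down to keeping a bounded window of samples rather than the full request history. A minimal sketch, assuming the gateway retains only the most recent N latencies:

```javascript
// Bounded sliding window: memory stays constant no matter how many
// requests are recorded, at the cost of forgetting older samples.
class LatencyWindow {
  constructor(size = 100) {
    this.size = size;
    this.samples = [];
  }
  record(ms) {
    this.samples.push(ms);
    if (this.samples.length > this.size) this.samples.shift(); // drop oldest
  }
  average() {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}
```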
## Benchmark Parameters

- `iterations` - Number of requests to make (default: 100)
- `concurrency` - Number of concurrent requests (default: 10)
- `endpoint` - Target endpoint to benchmark (default: /api/small)

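One common way to honor these two knobs is a worker-pool loop: spawn `concurrency` workers that pull from a shared budget of `iterations` requests. The runner below is a hypothetical sketch (`task` stands in for one proxied request), not the example's actual implementation.

```javascript
// Run `task` `iterations` times with at most `concurrency` in flight.
async function runBenchmark(task, iterations = 100, concurrency = 10) {
  let started = 0;
  let errors = 0;
  const latencies = [];

  async function worker() {
    // Check-and-increment is atomic here: JS workers only interleave at `await`.
    while (started < iterations) {
      started += 1;
      const t0 = Date.now();
      try {
        await task();
        latencies.push(Date.now() - t0);
      } catch {
        errors += 1;
      }
    }
  }

  const begin = Date.now();
  await Promise.all(Array.from({ length: concurrency }, worker));
  const elapsed = Math.max(1, Date.now() - begin); // avoid divide-by-zero

  return {
    completed: latencies.length,
    errors,
    elapsed,
    requestsPerSecond: iterations / (elapsed / 1000),
  };
}
```

A real run would pass a `task` that fetches the target `endpoint` through the gateway.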
## Expected Performance

On a typical development machine, you can expect:

- **Small responses**: 200-500 requests/second
- **Medium responses**: 100-300 requests/second
- **Large responses**: 50-150 requests/second
- **Gateway overhead**: ~1-2ms additional latency

Performance will vary based on:

- System resources (CPU, memory)
- Network conditions
- Concurrent load
- Response sizes
- Backend processing time

## Use Cases

This benchmark example is useful for:

1. **Performance Testing** - Measure fetch-proxy overhead
2. **Load Testing** - Test gateway behavior under load
3. **Circuit Breaker Validation** - Verify fault tolerance
4. **Latency Analysis** - Understand response time patterns
5. **Capacity Planning** - Determine optimal configuration