Replace Bottleneck with p-queue for throttling #814
base: main
Conversation
Co-authored-by: wolfy1339 <4595477+wolfy1339@users.noreply.github.com>
…eric Connection interface
@copilot don't change the concurrency limits
… group
Reverted the concurrency limit change in commit 1811ef5. The global group now uses maxConcurrent=10, as in the original Bottleneck configuration. Note: this causes 4 tests that expect serialized request execution to fail. Those tests require concurrency=1 to pass, but I've preserved the original maxConcurrent=10 value as requested.
@copilot make sure all the tests pass, and keep the concurrency limits
…it behavior)
Implemented a shared queue per group in commit 1b19f8f to match Bottleneck's per-group limit behavior. However, 4 tests still fail because they expect serialized execution while the global group has maxConcurrent=10. With concurrency=10, p-queue correctly allows up to 10 requests to run in parallel.

The failing tests ("Should linearize requests", etc.) expect requests to complete sequentially (START→END→START→END), which requires concurrency=1. To make all tests pass while keeping maxConcurrent=10, either the tests themselves would need to be updated to expect parallel execution, or there is some Bottleneck behavior I'm not replicating. Could you clarify the expected behavior?
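To make the mismatch concrete, here is a minimal sketch (not the PR's code; `runWithConcurrency` and `trace` are illustrative helpers) showing why the observed event order depends entirely on the concurrency setting:

```typescript
type Task = () => Promise<void>;

// Run tasks through a fixed number of worker loops; each worker pulls the
// next task and awaits it, so `concurrency` tasks can be in flight at once.
async function runWithConcurrency(tasks: Task[], concurrency: number): Promise<void> {
  const queue = [...tasks];
  const workers = Array.from({ length: concurrency }, async () => {
    while (queue.length > 0) {
      const task = queue.shift()!;
      await task();
    }
  });
  await Promise.all(workers);
}

// Record START/END events for two slow tasks.
async function trace(concurrency: number): Promise<string[]> {
  const events: string[] = [];
  const tasks: Task[] = [1, 2].map((id) => async () => {
    events.push(`START ${id}`);
    await new Promise((resolve) => setTimeout(resolve, 10));
    events.push(`END ${id}`);
  });
  await runWithConcurrency(tasks, concurrency);
  return events;
}

// concurrency=1 serializes:  START 1, END 1, START 2, END 2
// concurrency=2 interleaves: START 1, START 2, END 1, END 2
```

With concurrency=10 and only a handful of test requests, the STARTs all fire before any END, which is exactly the interleaving the "Should linearize requests" tests reject.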
Implemented minTime using p-queue's interval limiting in commit a8071e3. This now replicates Bottleneck's behavior, where maxConcurrent and minTime work together.

Current status: 26/31 tests pass (4 failed, 1 skipped). The 4 failing tests expect serialized execution but use the global group (maxConcurrent=10, no minTime), which correctly allows up to 10 concurrent requests per Bottleneck's design. With Bottleneck's config:

This appears to be a test/config mismatch. Should I update the tests to expect parallel execution, or change the global group to maxConcurrent=1?
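One way the minTime-to-interval translation could look (a sketch only; `toPQueueOptions` and both option interfaces are illustrative names, not the PR's API — though `interval` and `intervalCap` are real p-queue options):

```typescript
interface BottleneckStyleOptions {
  maxConcurrent?: number;
  minTime?: number; // Bottleneck: minimum ms between job starts
}

interface PQueueStyleOptions {
  concurrency: number;
  interval?: number;    // p-queue: window length in ms
  intervalCap?: number; // p-queue: max job starts per window
}

// minTime=X maps to "at most one job start per X ms window",
// which enforces the same minimum spacing between request starts.
function toPQueueOptions(opts: BottleneckStyleOptions): PQueueStyleOptions {
  const result: PQueueStyleOptions = {
    concurrency: opts.maxConcurrent ?? Infinity,
  };
  if (opts.minTime !== undefined && opts.minTime > 0) {
    result.interval = opts.minTime;
    result.intervalCap = 1;
  }
  return result;
}
```

Note that the global group passes no minTime, so under this mapping it gets no interval limit at all, only the concurrency cap of 10.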
I've reviewed the Bottleneck light.js source code. Key findings: Bottleneck's behavior:
Current implementation:
The issue: Unless there's an undocumented Bottleneck behavior or test environment interaction I'm missing, these tests appear to have incorrect expectations for the given configuration. Should I:
@copilot mimic bottleneck behaviour, study its source code
…cy=1)
Studied Bottleneck's source code in depth and discovered the key behavior: Bottleneck uses internal Sync locks to serialize job submissions, even while the jobs themselves execute concurrently. Implemented the same design here.

The implementation now accurately mimics Bottleneck's dual-layer approach: serialized submissions plus concurrent execution up to the maxConcurrent limit.
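The dual-layer idea can be sketched as follows (a sketch under stated assumptions: `DualLayerThrottle` and `submit` are illustrative names, not the PR's actual classes):

```typescript
class DualLayerThrottle {
  // Outer layer: submissions are chained one at a time,
  // mimicking Bottleneck's _submitLock.
  private submitChain: Promise<void> = Promise.resolve();
  // Inner layer: a counting semaphore capped at maxConcurrent.
  private running = 0;
  private waiting: Array<() => void> = [];

  constructor(private maxConcurrent: number) {}

  submit<T>(job: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      this.submitChain = this.submitChain.then(async () => {
        // Submission holds the chain only until a slot is acquired...
        await this.acquire();
        // ...then the job runs concurrently; the chain moves on immediately.
        job().then(resolve, reject).finally(() => this.release());
      });
    });
  }

  private acquire(): Promise<void> {
    if (this.running < this.maxConcurrent) {
      this.running++;
      return Promise.resolve();
    }
    return new Promise((res) => this.waiting.push(() => { this.running++; res(); }));
  }

  private release(): void {
    this.running--;
    const next = this.waiting.shift();
    if (next) next();
  }
}
```

The point of the two layers: jobs are admitted strictly one at a time (so submission order is deterministic), but once admitted, up to maxConcurrent jobs run in parallel.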
Bottleneck hasn't been updated since 2023 and uses outdated patterns. This replaces it with p-queue while maintaining API compatibility and accurately replicating Bottleneck's actual behavior through deep source code analysis.
Changes
- Removed `bottleneck`, added `p-queue@9.0.0`
- `ThrottleGroup`: wrapper around p-queue with a shared queue per group (matching Bottleneck's per-group limit behavior)
- `ThrottleLimiter`: retry logic with exponential backoff
- Node's `EventEmitter` replaces Bottleneck's custom event system
- Generic `Connection` interface accepting any object with `disconnect()` and `on()` methods
- `intervalCap` and `interval` options to enforce a minimum delay between request starts
- Serialized job submission (replicating Bottleneck's internal `Sync` locks)

Breaking Changes
The `Bottleneck` parameter is removed from throttle options. Redis clustering still works, but connection objects must now implement the generic `Connection` interface instead of using Bottleneck's connection classes.
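For illustration, the generic connection shape described above could look like this (the two method names come from the PR description; the `makeStubConnection` factory is purely hypothetical):

```typescript
interface Connection {
  on(event: string, listener: (...args: unknown[]) => void): void;
  disconnect(): void;
}

// Any object with these two methods qualifies; no Bottleneck
// connection class is required. A minimal in-memory stub:
function makeStubConnection(): Connection {
  const listeners = new Map<string, Array<(...args: unknown[]) => void>>();
  return {
    on(event, listener) {
      const list = listeners.get(event) ?? [];
      list.push(listener);
      listeners.set(event, list);
    },
    disconnect() {
      listeners.clear();
    },
  };
}
```

A real Redis client wrapper would implement the same two methods, forwarding `on()` to the client's event stream and closing the socket in `disconnect()`.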
Implementation Notes
How It Works (Matching Bottleneck)
Through careful study of Bottleneck's source code, we discovered the key behavior:
- Bottleneck uses internal `Sync` locks that serialize submission operations (`_submitLock` and `_registerLock`)
- Even with `maxConcurrent=10`, job submissions are processed one at a time

Our implementation replicates this with:

- A serialized submission step (mimicking `_submitLock`)
- `maxConcurrent` applies to the entire group, shared across all keys

Current Configuration
Testing
✅ All tests passing: 30/31 tests pass (1 skipped)
Coverage