Improve concurrent streaming error handling and configuration #2017
base: master
Conversation
…ogs; behind config flag
- Handle when a component fiber is already drained after the first resume (the shell).
- Typically, streamed components will have more chunks besides the first chunk.
- This is an edge case exposed while testing.
- We're basically avoiding getting a `FiberError: attempt to resume a terminated fiber`.
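The edge case above can be reproduced with a plain Ruby `Fiber`; a minimal, illustrative sketch of the guard (stand-alone, not the PR's actual code):

```ruby
# A single-chunk component: the fiber terminates after the first resume (the shell).
fiber = Fiber.new { "shell" }

first = fiber.resume # the fiber is now terminated

# A second resume raises FiberError (attempt to resume a terminated fiber),
# so the drain loop rescues it and treats the fiber as having no more chunks.
second =
  begin
    fiber.resume
  rescue FiberError
    nil
  end

# first == "shell", second == nil: the loop can stop cleanly.
```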
…, edge cases, and producer error
…ming backpressure
…start writer before producers; remove per-producer done markers
- Propagate runtime errors instead of silencing them
- Keep gracefully handling already-terminated fibers from resuming
- Properly handle cleaning/shutting down when errors occur
Replace custom test doubles with RSpec's instance_double for cleaner and more maintainable test isolation. This approach follows established mocking patterns and removes the dependency on custom streaming buffer implementations.
…onents concurrently
This reverts commit 528ee15ed651378a24d2d35a75063f096df7624e.
…ing" This reverts commit 41f5eb9db2251f2c5f338b037575b46071981508.
…t_streaming_buffer_size for improved memory management and validation
…roved clarity and performance
…doesn't expect a :size keyword argument
The exact version 15.0.0 was yanked from RubyGems, causing CI failures. Updated to 16.0.1.rc.4 which is available and stable.
…on_rails 16.0

The using_packer? method was removed in react_on_rails 16.0 as part of the transition to Shakapacker-only support. Since Shakapacker is now assumed to always be in use, the check is no longer needed.

Changes:
- Remove using_packer? check from Utils.bundle_file_name method
- Remove mock of non-existent method from spec
- Update comments to reference Shakapacker instead of webpacker

Fixes CI failures in utils_spec.rb
The .claude/ directory is created at the project root, not in subdirectories. The root .gitignore already handles .claude/ with selective ignoring (settings.local.json and .fuse_hidden*), making this subdirectory entry unnecessary and inconsistent with the root ignore policy.
…_size Wire up validate_concurrent_component_streaming_buffer_size in setup_config_values to ensure config.concurrent_component_streaming_buffer_size is validated at initialization time. Without this call, invalid values (nil, 0, negative numbers) would be silently accepted and cause runtime errors later when Async::LimitedQueue is instantiated. Addresses CodeRabbit feedback on the original PR.
Change concurrent_component_streaming_buffer_size validation to require Integer instead of Numeric. Buffer sizes represent discrete queue capacity counts and should not accept float values like 64.5. This provides better type safety and clearer error messages for users who might accidentally pass non-integer values. Addresses CodeRabbit feedback.
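A minimal sketch of an assignment-time validating setter along these lines (the class and error names here are illustrative, not the gem's actual ones):

```ruby
# Illustrative config object whose setter rejects non-Integer or
# non-positive buffer sizes at assignment time.
class StreamingConfig
  attr_reader :concurrent_component_streaming_buffer_size

  def concurrent_component_streaming_buffer_size=(value)
    unless value.is_a?(Integer) && value.positive?
      raise ArgumentError,
            "concurrent_component_streaming_buffer_size must be a positive Integer, " \
            "got #{value.inspect}"
    end
    @concurrent_component_streaming_buffer_size = value
  end
end

config = StreamingConfig.new
config.concurrent_component_streaming_buffer_size = 64 # accepted

# nil, zero, negatives, and floats like 64.5 are all rejected.
rejected = [nil, 0, -1, 64.5].map do |bad|
  config.concurrent_component_streaming_buffer_size = bad
  false
rescue ArgumentError
  true
end
# rejected == [true, true, true, true]; the stored value is still 64
```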
This commit addresses several critical issues and improvements for concurrent component streaming in React on Rails Pro:

1. CRITICAL: Client Disconnect Handling
   - Added IOError and Errno::EPIPE exception handling in producer tasks
   - Added stream.closed? check before expensive operations
   - Added exception handling in writer task to stop gracefully on disconnect
   - Prevents wasted resources when clients disconnect mid-stream

2. Configuration Validation Improvements
   - Replaced attr_accessor with custom setter for concurrent_component_streaming_buffer_size
   - Added validation at assignment time, not just initialization
   - Added comprehensive documentation about memory implications
   - Documented that buffer size * chunk size determines max memory usage

3. Documentation Enhancements
   - Added detailed comments explaining the producer-consumer pattern
   - Documented ordering guarantees for concurrent streaming
   - Clarified that chunks from the same component maintain order
   - Clarified that chunks from different components may interleave
   - Added memory management documentation to drain_streams_concurrently

4. Testing
   - Added test for client disconnect scenario
   - Verifies producer stops when client disconnects
   - Ensures no further processing after IOError

Changes:
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb

Note: Did not implement sleep-based test synchronization improvements, as the existing approach is acceptable for test purposes and more complex synchronization mechanisms would add unnecessary complexity to the test suite.
**Walkthrough**

This PR implements concurrent component streaming for React component rendering. It introduces a producer-consumer architecture using the Async gem with a bounded queue for backpressure, replaces linear streaming with per-fiber concurrent producers, adds a configuration option for the buffer size, and extends tests to validate the new streaming behavior.

**Changes**
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Template
    participant DrainConcurrent as drain_streams_concurrently
    participant Queue as Bounded Queue<br/>(size: buffer_size)
    participant Producers as Producer Tasks<br/>(per fiber)
    participant Writer as Writer Task
    participant Client
    Template->>DrainConcurrent: stream_view_containing_react_components
    DrainConcurrent->>Queue: Initialize Async::LimitedQueue
    DrainConcurrent->>Producers: Spawn producer tasks
    DrainConcurrent->>Writer: Spawn writer task
    rect rgb(220, 240, 255)
        note over Producers,Queue: Concurrent Production Phase
        Producers->>Producers: Resume fiber, get chunk
        Producers->>Queue: Enqueue chunk + index<br/>(with backpressure)
        Producers->>Producers: Repeat until fiber exhausted
    end
    rect rgb(240, 220, 255)
        note over Writer,Client: Concurrent Writing Phase
        Writer->>Queue: Dequeue chunk (blocks if empty)
        Writer->>Client: Write chunk to response stream
        Writer->>Writer: Repeat until queue closed
    end
    Producers->>Queue: Signal end (close queue)
    Writer->>DrainConcurrent: Complete
    DrainConcurrent->>Client: Close stream
```
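The flow in the diagram is the classic bounded-queue producer-consumer pattern. As a rough stand-alone sketch, here is the same shape using Ruby's stdlib `SizedQueue` and threads in place of `Async::LimitedQueue` and async tasks (all names are illustrative, not the PR's API):

```ruby
# Two "components", each yielding its chunks in order.
components = { 0 => ["shell-a", "a-1", "a-2"], 1 => ["shell-b", "b-1"] }

queue   = SizedQueue.new(4) # bounded: enqueue blocks when full (backpressure)
written = []

# One producer per component, enqueueing [index, chunk] pairs in order.
producers = components.map do |idx, chunks|
  Thread.new do
    chunks.each { |chunk| queue << [idx, chunk] }
  end
end

# A single writer drains the queue; all output flows through one place.
writer = Thread.new do
  loop do
    pair = queue.pop
    break if pair == :done
    _idx, chunk = pair
    written << chunk
  end
end

producers.each(&:join) # wait until every producer has enqueued everything
queue << :done         # then signal the writer to finish
writer.join

# Chunks from different components may interleave, but per-component
# order is preserved (FIFO queue, each producer enqueues in order).
a_chunks = written.select { |c| c.start_with?("shell-a", "a-") }
# a_chunks == ["shell-a", "a-1", "a-2"]
```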
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
**Pre-merge checks and finishing touches**

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
## Code Review: Concurrent Streaming Error Handling and Configuration

This is a well-structured PR that addresses critical production issues with streaming. Here's my detailed review:

### ✅ Strengths

**1. Critical Bug Fix: Client Disconnect Handling**

The addition of `IOError`/`Errno::EPIPE` handling prevents wasted resources when clients disconnect mid-stream. Good practices observed:

**2. Configuration Validation**

The custom setter for `concurrent_component_streaming_buffer_size` … Excellent documentation explaining memory implications with concrete examples.

**3. Comprehensive Documentation**

The inline documentation … This will save future maintainers significant time.

**4. Test Coverage**

The new test for client disconnect (…

### 🔍 Issues & Recommendations

#### HIGH PRIORITY

**1. Potential Race Condition in Stream Closed Check**

Location:

```ruby
break if response.stream.closed?
chunk = fiber.resume
```

Issue: There's a TOCTOU (Time-of-Check-Time-of-Use) race condition here. The stream could close between the check and the …

Recommendation: Wrap the entire loop body in the exception handler, or check again after the resume:

```ruby
chunk = fiber.resume
break unless chunk
break if response.stream.closed?
queue.enqueue([idx, chunk])
```

**2. Missing Error Logging**

Location:

```ruby
rescue IOError, Errno::EPIPE
  # Client disconnected - stop producing
  break
```

Issue: Silent exception swallowing makes debugging difficult in production. If these errors occur frequently, you won't know.

Recommendation: Add debug-level logging:

```ruby
rescue IOError, Errno::EPIPE => e
  Rails.logger.debug "Client disconnected during streaming: #{e.class} - #{e.message}"
  break
```

**3. Test Uses sleep() for Synchronization**

Location: …

Issue: The PR description mentions "Sleep-based test synchronization improvements" are acceptable, but …

Recommendation: While the PR states this is acceptable, consider using condition variables or queue size assertions for more robust testing:

```ruby
# Instead of: sleep 0.05
# Use:
wait_for { written_chunks.count == expected_count }.to be_truthy
```

#### MEDIUM PRIORITY

**4. idx Variable Unused in Writer**

Location:

```ruby
_idx_from_queue, item = pair
```

Issue: The …

Recommendation: If it's for future debugging, add a comment explaining this. Otherwise, remove it to avoid confusion:

```ruby
queue.enqueue(chunk) # Instead of [idx, chunk]
```

**5. Configuration Validation Duplication**

Location: The same validation logic appears twice - in the setter and in …

Recommendation: Remove the separate validation method or consolidate:

```ruby
# Option 1: Remove validate_concurrent_component_streaming_buffer_size from setup_config_values
# Option 2: Move validation logic to private method and call from setter
```

**6. Gemfile.lock Changes Should Be Verified**

Location: The …

Recommendation: …

#### LOW PRIORITY

**7. Potential Memory Leak with Fiber References**

Location:

```ruby
@rorp_rendering_fibers = []
# ... later ...
@rorp_rendering_fibers.each_with_index.map do |fiber, idx|
```

Issue: If …

Recommendation: Clear the array in a final ensure block:

```ruby
def stream_view_containing_react_components(...)
  begin
    # ... existing code ...
  ensure
    @rorp_rendering_fibers = nil # Allow GC
    response.stream.close if close_stream_at_end
  end
end
```

**8. Comment Typo**

Location:

```ruby
# Returns the hashed file name when using Shakapacker. Useful for creating cache keys.
# Necessary for fragment-caching keys.
```

Minor inconsistency: "Necessary fragment-caching keys" should be "Necessary for fragment-caching keys" (already fixed in line 121).

### 🔒 Security Considerations

**1. DoS via Buffer Exhaustion**

The configurable buffer size is good, but there's no upper bound validation. A misconfigured buffer size of …

Recommendation: Add a reasonable maximum (e.g., 1000):

```ruby
def concurrent_component_streaming_buffer_size=(value)
  unless value.is_a?(Integer) && value.positive? && value <= 1000
    raise ReactOnRailsPro::Error,
          "config.concurrent_component_streaming_buffer_size must be between 1 and 1000"
  end
  @concurrent_component_streaming_buffer_size = value
end
```

**2. No Timeout on Queue Operations**

If a producer blocks on …

Recommendation: Consider adding a timeout to …

### ⚡ Performance Considerations

**1. Stream Closed Check Overhead**

Checking …

Recommendation: Profile this in production. If it's expensive, consider checking every N iterations instead of every iteration.

**2. Queue Enqueue/Dequeue Overhead**

The additional …

Recommendation: If …

### 📝 CHANGELOG Review

The CHANGELOG entries are appropriate and follow the repository's format. However:

Issue: The "Improved" entry mentions "significantly improved streaming performance", but this PR is primarily about error handling and configuration, not performance optimization.

Recommendation: Clarify that the performance improvement is from a previous PR, or split into two entries:

### ✅ Final Verdict

Approve with minor revisions recommended.

This PR addresses critical production issues and is well-implemented. The high-priority issues (race condition, missing logging) should be addressed before merge. Medium and low-priority issues can be addressed in follow-up PRs.

Before merging:

Excellent work on:

🤖 Generated with Claude Code
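To make the re-check ordering from the race-condition recommendation concrete, here is a stand-alone sketch of such a producer loop over a plain `Fiber` with a trivial stream double (illustrative names only, not the PR's actual API):

```ruby
# Trivial stand-in for a response stream with a closed? flag.
StreamDouble = Struct.new(:closed) do
  def closed? = closed
end

# Producer loop: resume first, then re-check the stream before enqueueing,
# so a close that happens during the (potentially slow) resume is noticed.
def drain(fiber, stream, queue)
  loop do
    chunk = fiber.resume
    break unless chunk        # fiber exhausted (returned nil)
    break if stream.closed?   # re-check *after* the resume
    queue << chunk
  end
rescue FiberError
  # Fiber was already terminated (e.g. drained by the shell pass) - stop quietly.
end

open_queue = []
fiber = Fiber.new do
  Fiber.yield "shell"
  Fiber.yield "chunk-1"
  nil
end
drain(fiber, StreamDouble.new(false), open_queue)
# open_queue == ["shell", "chunk-1"]: the stream stayed open throughout

closed_queue = []
drain(Fiber.new { Fiber.yield "x"; nil }, StreamDouble.new(true), closed_queue)
# closed_queue == []: the chunk is produced but never enqueued
```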
Actionable comments posted: 1
🧹 Nitpick comments (2)
react_on_rails_pro/CHANGELOG.md (1)
**53-57: Consider wording/location of the async dependency note**

The bullet "Added `async` gem dependency (>= 2.6)…" lives under "### Changed (Breaking)" and reads like an "Added" entry. Consider either:

- Rewording to "Introduced `async` gem dependency… (breaking change)", or
- Moving it under an "Added" subsection while still noting it's breaking.

Purely a documentation polish suggestion.
react_on_rails_pro/lib/react_on_rails_pro/configuration.rb (1)
**145-145: Consider removing duplicate validation.**

The setter at lines 91-97 already validates that `concurrent_component_streaming_buffer_size` is a positive integer. Since the setter is always invoked during initialization (line 136) and for any subsequent assignments, this separate validation method appears redundant. The setter validation alone provides sufficient protection unless there's a specific concern about direct instance variable manipulation.

If you prefer to keep defensive validation, consider adding a comment explaining why both validations exist.

Also applies to: 237-243
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- react_on_rails_pro/Gemfile.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
- react_on_rails_pro/CHANGELOG.md (2 hunks)
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1 hunk)
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb (7 hunks)
- react_on_rails_pro/lib/react_on_rails_pro/utils.rb (2 hunks)
- react_on_rails_pro/react_on_rails_pro.gemspec (1 hunk)
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (7 hunks)
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (2 hunks)
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1781
File: node_package/src/ClientSideRenderer.ts:82-95
Timestamp: 2025-09-15T21:24:48.207Z
Learning: In React on Rails, the force_load feature includes both explicit `data-force-load="true"` usage and the ability to hydrate components during the page loading state (`document.readyState === 'loading'`). Both capabilities require a Pro license, so the condition `!railsContext.rorPro && (isComponentForceLoaded || document.readyState === 'loading')` correctly gates both scenarios.
📚 Learning: 2025-09-16T08:01:11.146Z
Learnt from: justin808
Repo: shakacode/react_on_rails PR: 1770
File: lib/generators/react_on_rails/templates/base/base/app/javascript/src/HelloWorld/ror_components/HelloWorld.client.jsx:2-2
Timestamp: 2025-09-16T08:01:11.146Z
Learning: React on Rails uses webpack CSS Modules configuration with namedExports: true, which requires the import syntax `import * as style from './file.module.css'` rather than the default export pattern. This configuration enables better tree shaking and bundle size optimization for CSS modules.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
- react_on_rails_pro/react_on_rails_pro.gemspec
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
- react_on_rails_pro/lib/react_on_rails_pro/utils.rb
📚 Learning: 2025-04-26T21:55:55.874Z
Learnt from: alexeyr-ci2
Repo: shakacode/react_on_rails PR: 1732
File: spec/dummy/client/app-react16/startup/ReduxSharedStoreApp.client.jsx:40-44
Timestamp: 2025-04-26T21:55:55.874Z
Learning: In the react_on_rails project, files under `app-react16` directories are copied/moved to corresponding `/app` directories during the conversion process (removing the `-react16` suffix), which affects their relative import paths at runtime.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
- react_on_rails_pro/react_on_rails_pro.gemspec
- react_on_rails_pro/lib/react_on_rails_pro/utils.rb
📚 Learning: 2025-02-13T16:50:47.848Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: node_package/src/clientStartup.ts:18-21
Timestamp: 2025-02-13T16:50:47.848Z
Learning: In the react_on_rails module, the `reactOnRailsPageUnloaded` function in clientStartup.ts is intentionally kept private as it's only used internally as a callback for `onPageUnloaded`.
Applied to files:
react_on_rails_pro/CHANGELOG.md
📚 Learning: 2025-02-12T16:38:06.537Z
Learnt from: Romex91
Repo: shakacode/react_on_rails PR: 1697
File: package-scripts.yml:28-28
Timestamp: 2025-02-12T16:38:06.537Z
Learning: The file `node_package/lib/ReactOnRails.full.js` is autogenerated during the build process and should not be present in the repository.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/react_on_rails_pro.gemspec
- react_on_rails_pro/lib/react_on_rails_pro/utils.rb
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation in React on Rails Pro is handled through a chain of validations:
1. Pro version check in `run_stream_inside_fiber`
2. RSC support check during pack generation via `ReactOnRailsPro.configuration.enable_rsc_support`
3. RSC support validation during component registration
This makes additional validation in the helper methods unnecessary.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
📚 Learning: 2025-09-15T21:24:48.207Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1781
File: node_package/src/ClientSideRenderer.ts:82-95
Timestamp: 2025-09-15T21:24:48.207Z
Learning: In React on Rails, the force_load feature includes both explicit `data-force-load="true"` usage and the ability to hydrate components during the page loading state (`document.readyState === 'loading'`). Both capabilities require a Pro license, so the condition `!railsContext.rorPro && (isComponentForceLoaded || document.readyState === 'loading')` correctly gates both scenarios.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/react_on_rails_pro.gemspec
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
📚 Learning: 2025-10-23T17:22:01.074Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1875
File: lib/react_on_rails/utils.rb:112-124
Timestamp: 2025-10-23T17:22:01.074Z
Learning: In React on Rails, when Pro is installed but not licensed, the intended behavior is to raise an error on boot. The `react_on_rails_pro?` method validates licenses and should raise errors early (including during path resolution in methods like `server_bundle?`) to enforce licensing requirements rather than failing later with obscure errors.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/react_on_rails_pro.gemspec
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb
📚 Learning: 2025-02-18T13:08:01.477Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1644
File: lib/react_on_rails/helper.rb:190-197
Timestamp: 2025-02-18T13:08:01.477Z
Learning: RSC support validation is handled in deeper level calls of the React on Rails Pro codebase, so it doesn't need to be validated again in the `rsc_payload_react_component` helper method.
Applied to files:
- react_on_rails_pro/CHANGELOG.md
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb
📚 Learning: 2025-07-08T05:57:29.630Z
Learnt from: AbanoubGhadban
Repo: shakacode/react_on_rails PR: 1745
File: node_package/src/RSCRequestTracker.ts:8-14
Timestamp: 2025-07-08T05:57:29.630Z
Learning: The global `generateRSCPayload` function in React on Rails Pro (RORP) is provided by the framework during rendering requests, not implemented in application code. The `declare global` statements are used to document the expected interface that RORP will inject at runtime.
Applied to files:
react_on_rails_pro/CHANGELOG.md
📚 Learning: 2024-10-08T20:53:47.076Z
Learnt from: theforestvn88
Repo: shakacode/react_on_rails PR: 1620
File: spec/dummy/client/app/startup/HelloTurboStream.jsx:3-3
Timestamp: 2024-10-08T20:53:47.076Z
Learning: The `RailsContext` import in `spec/dummy/client/app/startup/HelloTurboStream.jsx` is used later in the project, as clarified by the user theforestvn88.
Applied to files:
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
- react_on_rails_pro/react_on_rails_pro.gemspec
- react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb
🧬 Code graph analysis (4)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (2)

- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1): `stream_view_containing_react_components` (33-46)
- react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1): `stream_react_component` (126-134)

react_on_rails_pro/lib/react_on_rails_pro/configuration.rb (1)

- react_on_rails_pro/lib/react_on_rails_pro/server_rendering_pool/node_rendering_pool.rb (1): `attr_reader` (6-131)

react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (1)

- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1): `stream_view_containing_react_components` (33-46)

react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1)

- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb (2): `configuration` (9-38), `concurrent_component_streaming_buffer_size` (91-97)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: build-dummy-app-webpack-test-bundles
- GitHub Check: lint-js-and-ruby
- GitHub Check: build-dummy-app-webpack-test-bundles
- GitHub Check: rspec-package-specs (3.3.7)
- GitHub Check: markdown-link-check
- GitHub Check: claude-review
🔇 Additional comments (10)
react_on_rails_pro/lib/react_on_rails_pro/utils.rb (1)
**111-118: Documentation updates to reflect Shakapacker.**

Comments in `bundle_file_name` and `server_bundle_file_name` have been updated to reference Shakapacker instead of Webpacker. The functional behavior and method signatures remain unchanged.

Also applies to: 120-129
react_on_rails_pro/react_on_rails_pro.gemspec (1)
**35-42: Async runtime dependency aligns with new streaming code**

Adding `s.add_runtime_dependency "async", ">= 2.6"` matches the new use of `Async`, `Async::LimitedQueue`, and `Sync` in streaming and specs. Just confirm this lower bound reflects the minimum version that supports all used APIs (especially `Async::LimitedQueue` and `Sync`) and that an open upper bound is intentional.

react_on_rails_pro/CHANGELOG.md (1)
**22-27: Changelog entries clearly document the new concurrent streaming behavior**

The new "Improved" and "Added" bullets for concurrent component streaming and `config.concurrent_component_streaming_buffer_size` accurately describe the behavior and configuration introduced in the code and tests.

react_on_rails_pro/spec/react_on_rails_pro/stream_spec.rb (1)
**3-31: Concurrency test harness mirrors production streaming behavior well**

The `StreamController` helper plus the "Component streaming concurrency" specs give good coverage of the new producer-consumer design: per-component fibers backed by `Async::Queue`, a shared writer via `ReactOnRailsPro::Stream`, per-component ordering, empty/single-component handling, and simple backpressure validation. This is a solid, focused test scaffold for the new streaming implementation.

Also applies to: 374-488

react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (1)
react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb (1)
**3-4: Async-based streaming test plumbing looks consistent and flexible**

Requiring `async` / `async/queue`, extending `mock_request_and_response` to handle an `Async::Queue`, and introducing `execute_stream_view_containing_react_components` together give you a realistic way to drive streaming via queues while asserting `written_chunks` stays in lockstep with `chunks_read`. This matches the new Async-based streaming implementation and keeps existing array-based tests working.

Also applies to: 332-360, 452-471
react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb (1)
**33-46: Producer-consumer streaming implementation and disconnect handling look solid**

The new `drain_streams_concurrently` / `build_producer_tasks` / `build_writer_task` pipeline:

- Uses a bounded `Async::LimitedQueue` sized via `concurrent_component_streaming_buffer_size` to impose backpressure and avoid unbounded memory growth.
- Preserves per-component ordering by draining each `@rorp_rendering_fiber` sequentially while allowing cross-component interleaving.
- Centralizes all `response.stream.write` calls in a single writer task, simplifying error handling.
- Handles client disconnects by rescuing `IOError`/`Errno::EPIPE` in both producers and writer and by checking `response.stream.closed?` before resuming fibers, while always closing the stream in `ensure`.

Overall, this is a clear and maintainable upgrade over linear fiber draining for concurrent streaming.

Also applies to: 64-126
react_on_rails_pro/lib/react_on_rails_pro/configuration.rb (4)
**63-63: LGTM: Reasonable default buffer size.**

The default value of 64 is reasonable for a bounded queue buffer, balancing memory usage with throughput.

**75-97: Excellent documentation and validation.**

The custom setter with detailed documentation effectively explains the buffer's purpose and memory implications. The validation ensures configuration correctness at assignment time.

**108-110: LGTM: Proper initialization.**

The parameter is correctly added to the initialize method with an appropriate default, and the assignment via the custom setter ensures validation occurs during instantiation.

Also applies to: 136-136

**35-36: LGTM: Configuration instantiation updated.**

The new parameter is properly integrated into the configuration instantiation.
```ruby
it "stops producing when client disconnects" do
  queue = Async::Queue.new
  mock_request_and_response(queue)

  # Simulate client disconnect by making stream.write raise IOError
  call_count = 0
  allow(mocked_stream).to receive(:write) do |chunk|
    call_count += 1
    written_chunks << chunk
    raise IOError, "client disconnected" if call_count == 2
  end

  # Configure render_to_string to call stream_react_component
  allow(self).to receive(:render_to_string) do
    render_result = stream_react_component(component_name, props: props, **component_options)
    <<-HTML
      <div>
        <h1>Header Rendered In View</h1>
        #{render_result}
      </div>
    HTML
  end

  Sync do |parent|
    parent.async do
      stream_view_containing_react_components(template: template_path)
    rescue IOError
      # Expected - client disconnected
    end

    # Enqueue first chunk - should be written successfully
    queue.enqueue(chunks[0])
    sleep 0.05

    # Enqueue second chunk - should trigger disconnect
    queue.enqueue(chunks[1])
    sleep 0.05

    # Enqueue third chunk - should not be written (producer stopped)
    queue.enqueue(chunks[2])
    sleep 0.05

    queue.close
    sleep 0.05
  end

  # Should only have written the first chunk before disconnect
  expect(written_chunks.count).to eq(1)
  expect(mocked_stream).to have_received(:write).at_most(2).times
end
```
🧩 Analysis chain

**Disconnect test likely counts failed writes as "written" and doesn't assert producer shutdown**

In `it "stops producing when client disconnects"`:

- The stub for `mocked_stream.write` appends to `written_chunks` before raising on the simulated disconnect:

```ruby
allow(mocked_stream).to receive(:write) do |chunk|
  call_count += 1
  written_chunks << chunk
  raise IOError, "client disconnected" if call_count == 2
end
```

Given this, the second write (the one that raises) is still recorded in `written_chunks`, so `written_chunks.count` will include the failed write, which conflicts with the expectation that only the first chunk was written (`expect(written_chunks.count).to eq(1)`).

To better model "only successful writes are counted" and to assert that producers actually stop after disconnect, consider:

```diff
-allow(mocked_stream).to receive(:write) do |chunk|
-  call_count += 1
-  written_chunks << chunk
-  raise IOError, "client disconnected" if call_count == 2
-end
+allow(mocked_stream).to receive(:write) do |chunk|
+  call_count += 1
+  raise IOError, "client disconnected" if call_count == 2
+  written_chunks << chunk
+end
```

and optionally tightening assertions to verify producers stop consuming from the queue, e.g., by checking `chunks_read.count` is less than `chunks.size` rather than only asserting on `written_chunks`.
After locating the file, I'll examine the specific test method and its context.

🏁 Script executed:

```shell
#!/bin/bash
# Examine the test file to verify the review comment
find . -type f -name "react_on_rails_pro_helper_spec.rb" 2>/dev/null | head -5
```

Length of output: 78

🏁 Script executed:

```shell
#!/bin/bash
# Search for the test file and examine the disconnect test
fd "react_on_rails_pro_helper_spec.rb" -x cat {} \; | sed -n '500,580p'
```

Length of output: 2922

🏁 Script executed:

```shell
#!/bin/bash
# Find and display the specific test
fd "react_on_rails_pro_helper_spec.rb" | head -1 | xargs cat | sed -n '517,566p'
```

Length of output: 1643

🏁 Script executed:

```shell
#!/bin/bash
# Search for the test file location
find . -name "*react_on_rails_pro_helper_spec.rb" -type f 2>/dev/null
```

Length of output: 78
**Fix test logic: chunks failing to write should not be counted as written**

In the "stops producing when client disconnects" test, the `mocked_stream.write` stub appends to `written_chunks` before raising on the second call. This means the failed write is still recorded, causing `written_chunks.count` to be 2, not 1 as the test expects, making the test fail.

Move the append after the error check:

```ruby
allow(mocked_stream).to receive(:write) do |chunk|
  call_count += 1
  raise IOError, "client disconnected" if call_count == 2
  written_chunks << chunk
end
```

This ensures only successfully written chunks are counted and aligns with the test's intent to verify the producer stops after a client disconnect.
🤖 Prompt for AI Agents
In react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
around lines 517 to 566 the mocked_stream.write stub currently appends the chunk
to written_chunks before raising an IOError on the second call, so failed writes
are counted; change the stub so it increments call_count and raises on the
failing call first, and only append the chunk to written_chunks after the raise
check (i.e., raise IOError when call_count == 2 before pushing the chunk) so
only successfully written chunks are recorded and the test expectation passes.
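The suggested reordering can be sanity-checked without RSpec; a tiny stand-alone double (names are illustrative) shows that raising before the append records only successful writes:

```ruby
written_chunks = []
call_count = 0

# A write stub that raises *before* recording on the failing call,
# so the failed write never lands in written_chunks.
write = lambda do |chunk|
  call_count += 1
  raise IOError, "client disconnected" if call_count == 2
  written_chunks << chunk
end

# Drive it like a writer loop that stops on disconnect.
%w[shell chunk-1 chunk-2].each do |chunk|
  write.call(chunk)
rescue IOError
  break # writer stops; the third chunk is never attempted
end

# written_chunks == ["shell"], call_count == 2:
# only the first chunk was recorded, the second raised before being counted.
```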
## Code Review: Concurrent Streaming Error Handling and Configuration

I've completed a thorough review of this PR. Overall, this is a well-thought-out implementation with solid error handling and configuration improvements. Here are my findings:

### ✅ Strengths

**1. Excellent Client Disconnect Handling**

The addition of

```ruby
rescue IOError, Errno::EPIPE
  # Client disconnected - stop producing
  break
```

The …

**2. Robust Configuration Validation**

The custom setter for …

**3. Comprehensive Testing**

The new test for client disconnect scenario at …

The new …

**4. Clear Documentation**

The inline documentation in …

### 🔍 Potential Issues

**1. CRITICAL: Missing Validation Call**

…
## Code Review: Concurrent Streaming Error Handling and Configuration

Overall, this is a well-executed PR that addresses critical production issues with concurrent component streaming. The implementation is solid, well-documented, and thoroughly tested. Here's my detailed analysis:

### ✅ Strengths

**1. Critical Bug Fix: Client Disconnect Handling**

The addition of … Location: …

Excellent implementation:

**2. Robust Configuration Validation**

The custom setter for … Location: …

Well-designed:

**3. Excellent Documentation**

The added documentation in …

**4. Comprehensive Testing**

New concurrency tests (…

Integration test (…

### 🔍 Areas for Improvement

**1. Missing Async Dependency in Gemspec**

…
## Code Review: Concurrent Streaming Error Handling and Configuration

I've completed a thorough review of this PR. Overall, this is excellent work that significantly improves the concurrent streaming implementation with proper error handling and configuration validation. The code quality is high and the testing is comprehensive.

### ✅ Strengths

**1. Robust Error Handling**

The client disconnect handling is well-implemented:

**2. Excellent Documentation**

The inline documentation is comprehensive and clear:

**3. Configuration Validation**

The custom setter approach for …

**4. Comprehensive Testing**

The test suite covers critical scenarios:

### 🔍 Potential Issues & Suggestions

**1. Critical: Missing …**

…
Summary
This PR addresses several critical issues and improvements for concurrent component streaming in React on Rails Pro, based on the comprehensive code review suggestions.
Key Improvements
1. 🚨 CRITICAL: Client Disconnect Handling
Previously, if a client disconnected mid-stream, producer tasks would continue processing and enqueueing chunks unnecessarily, wasting server resources.
Changes:
- `IOError` and `Errno::EPIPE` exception handling in producer tasks
- `response.stream.closed?` check before expensive operations

Code location: react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb:93-111
2. ⚙️ Configuration Validation Improvements
Changes:
- Replaced `attr_accessor` with custom setter for `concurrent_component_streaming_buffer_size`

Example memory implications:
Code location: react_on_rails_pro/lib/react_on_rails_pro/configuration.rb:75-97
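As a back-of-the-envelope illustration of the documented bound (buffer size × chunk size ≈ maximum buffered memory; the chunk size below is an assumed figure, not a measurement from this PR):

```ruby
buffer_size     = 64          # config.concurrent_component_streaming_buffer_size default
avg_chunk_bytes = 16 * 1024   # hypothetical average rendered chunk: 16 KiB

# Worst case: the bounded queue is full of average-size chunks.
max_buffered_bytes = buffer_size * avg_chunk_bytes
puts "worst-case buffered memory: ~#{max_buffered_bytes / 1024} KiB" # ~1024 KiB (1 MiB)
```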
3. 📚 Documentation Enhancements
Added detailed documentation for:
Code location: react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb:50-63
4. ✅ Testing
Added comprehensive test for client disconnect scenario:

- Ensures no further processing after `IOError`

Code location: react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb:517-568
Files Changed
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
- react_on_rails_pro/lib/react_on_rails_pro/configuration.rb
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb

Not Implemented
Sleep-based test synchronization improvements: The existing `sleep 0.05` approach in tests is acceptable for test purposes. More complex synchronization mechanisms (condition variables, callbacks) would add unnecessary complexity to the test suite without significant benefit.

Testing
🤖 Generated with Claude Code
Summary by CodeRabbit
Improved
Added
Chores