add sse-communication.md implementation plan #203
Conversation
jhodapp
left a comment
Fabulous proposal, Caleb, thank you! Here are some of my current thoughts on the proposal, along with suggestions for improvements and things to think through in more detail.
| ## Phase 4: Frontend Integration
|
| ### 4.1 Create SSE Client Hook
I think a hook can make sense here, but it might also make sense to follow the Provider pattern we use for things like the Zustand state stores and introduce a new provider. And perhaps we should consider hiding, at the top level of this abstraction, the fact that the event messaging system is SSE. So something like ServerEventProvider or BackendEventProvider, etc.
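To make the suggestion concrete, here is a minimal sketch of what such a provider could look like, assuming a React context wrapper around a single app-wide EventSource. The names ServerEventProvider/useServerEvents and the /api/events path are illustrative, not from the plan:

```tsx
// Hypothetical sketch only: ServerEventProvider, useServerEvents, and the
// /api/events path are illustrative names, not part of the plan.
import React, { createContext, useContext, useEffect, useRef } from "react";

type Listener = (data: unknown) => void;

interface ServerEvents {
  // Register a listener for an event type; returns an unsubscribe function.
  subscribe(eventType: string, listener: Listener): () => void;
}

const ServerEventContext = createContext<ServerEvents | null>(null);

export function ServerEventProvider({ children }: { children: React.ReactNode }) {
  const esRef = useRef<EventSource | null>(null);

  // Lazily create the single app-wide connection during the provider's render
  // so that child components can subscribe from their own effects.
  if (esRef.current === null && typeof window !== "undefined") {
    esRef.current = new EventSource("/api/events", { withCredentials: true });
  }

  // Close the connection when the provider unmounts.
  useEffect(() => () => esRef.current?.close(), []);

  const subscribe = (eventType: string, listener: Listener) => {
    const handler = (e: Event) => listener(JSON.parse((e as MessageEvent).data));
    esRef.current?.addEventListener(eventType, handler);
    return () => esRef.current?.removeEventListener(eventType, handler);
  };

  return (
    <ServerEventContext.Provider value={{ subscribe }}>
      {children}
    </ServerEventContext.Provider>
  );
}

export function useServerEvents(): ServerEvents {
  const ctx = useContext(ServerEventContext);
  if (!ctx) throw new Error("useServerEvents must be used within a ServerEventProvider");
  return ctx;
}
```

Consumer components would then call `useServerEvents().subscribe('action_created', handler)` without ever knowing the transport is SSE.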
| export function useSSEEventHandler(
|   eventSource: EventSource | null,
|   eventType: string,
Just pointing this out early: we'll want a stronger type for events (e.g. an enum) than a plain string. This is a common thing that Claude misses in TypeScript.
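As a sketch of what that could look like (the enum values mirror the event names used in the quoted handlers below; the payload shapes and the Action placeholder are assumptions):

```ts
// Sketch: constrain event names and payloads instead of passing bare strings.
// Payload shapes are assumptions inferred from the handlers quoted below.
type Action = { id: string }; // placeholder for the app's real Action model

export enum SseEventType {
  ActionCreated = "action_created",
  ActionUpdated = "action_updated",
  ActionDeleted = "action_deleted",
}

export interface SseEventPayloads {
  [SseEventType.ActionCreated]: { coaching_session_id: string; action: Action };
  [SseEventType.ActionUpdated]: { coaching_session_id: string; action: Action };
  [SseEventType.ActionDeleted]: { coaching_session_id: string; action_id: string };
}

// The handler's data parameter is now inferred from the event type,
// and a typo in an event name becomes a compile error.
export declare function useSSEEventHandler<T extends SseEventType>(
  eventSource: EventSource | null,
  eventType: T,
  handler: (data: SseEventPayloads[T]) => void
): void;
```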
| useSSEEventHandler(eventSource, 'action_created', (data) => {
|   // Only update if viewing this coaching session
|   if (data.coaching_session_id === sessionId) {
|     setActions(prev => [...prev, data.action]);
|   }
| });
|
| // Handle action updated events
| useSSEEventHandler(eventSource, 'action_updated', (data) => {
|   if (data.coaching_session_id === sessionId) {
|     setActions(prev =>
|       prev.map(a => a.id === data.action.id ? data.action : a)
|     );
|   }
| });
|
| // Handle action deleted events
| useSSEEventHandler(eventSource, 'action_deleted', (data) => {
|   if (data.coaching_session_id === sessionId) {
|     setActions(prev => prev.filter(a => a.id !== data.action_id));
|   }
| });
I would love to see an API pattern where calling code can register for the kinds of events it's interested in by providing a function reference that the event handler calls for each event type. Take a look at how TipTap's collaboration component invokes client-code event handlers for inspiration.
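For instance, something roughly like this (a hypothetical on()-style registration API, loosely inspired by how TipTap's collaboration extension accepts callbacks from client code; all names here are illustrative):

```ts
// Hypothetical sketch: client code registers the event types it cares about
// and hands us function references to call whenever those events arrive.
type Callback = (data: unknown) => void;
type Unsubscribe = () => void;

export class ServerEventClient {
  private listeners = new Map<string, Set<Callback>>();

  constructor(private eventSource: EventSource) {}

  on(eventType: string, callback: Callback): Unsubscribe {
    if (!this.listeners.has(eventType)) {
      this.listeners.set(eventType, new Set());
      // One underlying EventSource listener per event type, fanning out to
      // every registered callback.
      this.eventSource.addEventListener(eventType, (e: Event) => {
        const data = JSON.parse((e as MessageEvent).data);
        this.listeners.get(eventType)?.forEach((cb) => cb(data));
      });
    }
    this.listeners.get(eventType)!.add(callback);
    return () => this.listeners.get(eventType)?.delete(callback);
  }
}

// Usage from a component or store (hypothetical):
// const off = client.on("action_created", (data) => refreshActions(data));
// ...later, when the caller is done listening:
// off();
```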
| es.onerror = (error) => {
|   console.error('SSE connection error:', error);
|   // EventSource will automatically reconnect
I'd love to see us handle a long-broken connection as an error that surfaces to the client code. That would let us combine TipTap's presence state with our own SSE component's connection state to inform the user of several connection-related info/warn/error cases like lost connection, offline, reconnecting, etc.
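As a rough sketch of the kind of thing I mean (the state names and the 30-second escalation threshold are assumptions, not part of the plan):

```ts
// Sketch: track connection health and expose it to the UI, rather than only
// logging onerror. State names and the 30s threshold are illustrative.
export type ConnectionState = "connected" | "reconnecting" | "disconnected";

export function watchConnection(
  es: EventSource,
  onChange: (state: ConnectionState) => void
): void {
  let disconnectTimer: ReturnType<typeof setTimeout> | undefined;

  es.onopen = () => {
    clearTimeout(disconnectTimer);
    onChange("connected");
  };

  es.onerror = () => {
    // EventSource retries automatically; report "reconnecting" right away,
    // then escalate to "disconnected" if no successful open happens within 30s.
    onChange("reconnecting");
    clearTimeout(disconnectTimer);
    disconnectTimer = setTimeout(() => onChange("disconnected"), 30_000);
  };
}

// The UI could combine this state with TipTap's presence/awareness state to
// show "offline", "reconnecting", or "connection lost" messaging to the user.
```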
| Add Server-Sent Events (SSE) infrastructure to enable real-time, unidirectional communication from backend to frontend. The implementation supports single-user messages and broadcasts to all clients. Each authenticated user maintains one app-wide SSE connection that persists across page navigation.
|
| **⚠️ IMPORTANT: Single Instance Limitation**
| This implementation uses in-memory connection tracking (DashMap) and **only works with a single backend instance**. If you scale horizontally (multiple backend replicas), SSE events will randomly fail. Redis Pub/Sub is required for multi-instance deployments. See "Multi-Instance Architecture" section below for migration path.
What do you think about the idea of having a very simple first version of persistence for the SSE connections held in DashMap? It would be similar to the existing axum-login session persistence in the Postgres DB, stored in its own table, and for now only useful for quickly restoring connections when the single backend instance restarts.
It could add/remove connections as they're created/destroyed, or do so in periodic batch operations.
When the backend stops gracefully, we can hook into that event and persist all current SSE connections.
When the backend starts back up, we load all SSE connections from the table into DashMap.
I don't think this would be hard, and it gives us something really nice to have without worrying about horizontal scaling yet (overcomplicating our solution for current needs).
Yeah, it's definitely something I've thought about. Take this with a grain of salt, as I'm still figuring out the nuances of this type of architecture.
I think I'd rather go the simpler route first here, focus on the message passing and events, and then come back and add some type of persistence afterward, even if that's the quick turnaround we work on next.
I think that will help me understand what type of persistence, which data structures to persist, etc. will best fit the solution.
My gut tells me implementing a database-backed persistence mechanism right now will add a fair amount of complexity to this first iteration.
@calebbourg Ok, here's my suggestion then: keep this persistence mechanism in mind as you come up with your initial architecture and design, just so we can lead ourselves easily into the persistence architecture that is the most logical to implement next.
| **Why these fields:**
| - `connection_id`: Server-generated UUID for internal tracking in DashMap
| - `user_id`: From authenticated session (via AuthenticatedUser extractor)
| - `sender`: Channel to send events to this specific connection
I asked Claude to help me better understand the performance characteristics we can expect from the RwLock + HashMap (i.e. DashMap) structure used to track connections on our current single-core, 1 GB RAM production server.
I'll paste the entire conversation with Claude here, but in my own summary: Claude suggests adding a second DashMap keyed by user id, whose values are that user's connection ids. It's a reverse-lookup structure that avoids having to iterate over all connections to find a particular user's.
★ Insight ─────────────────────────────────────
The Core Problem: The current implementation plan stores connections in a flat structure:
// Current design (line 301)
pub struct SseManager {
connections: Arc<DashMap<String, ConnectionMetadata>>,
}
// DashMap structure: connection_id -> metadata
// {
// "uuid-1" -> { user_id: alice, sender: Channel1 },
// "uuid-2" -> { user_id: bob, sender: Channel2 },
// "uuid-3" -> { user_id: alice, sender: Channel3 }, // Alice's 2nd tab
// "uuid-4" -> { user_id: carol, sender: Channel4 }
// }
When sending a User-scoped message (the most common case - 90%+ of events):
// Current approach (line 326-354)
pub fn send_message(&self, message: SseMessage) {
for entry in self.connections.iter() { // Scans ALL connections
if Self::should_receive_message(metadata, &message.scope) {
metadata.sender.send(Ok(event));
}
}
}
fn should_receive_message(metadata: &ConnectionMetadata, scope: &MessageScope) -> bool {
match scope {
MessageScope::User { user_id } => metadata.user_id == *user_id, // Check EVERY connection
MessageScope::Broadcast => true,
}
}
Performance on 400 connections, 1-core server:
- Need to send to Alice (1 specific user, maybe 2 browser tabs)
- Must iterate ALL 400 connections checking metadata.user_id == alice_id
- Only 2 connections match, but we scanned 400
- Wasted work: 99.5%
───────────────────────────────────────
The Index Optimization Explained
What It Is
Add a secondary index that maps user_id -> list of connection_ids for fast lookups:
pub struct SseManager {
// Primary storage: connection_id -> full metadata
connections: Arc<DashMap<String, ConnectionMetadata>>,
// NEW: Secondary index for fast user lookup
user_connections: Arc<DashMap<Id, Vec<String>>>, // user_id -> [connection_ids]
}
Example data structure state:
// connections DashMap:
{
"uuid-1" -> { user_id: alice, sender: Channel1 },
"uuid-2" -> { user_id: bob, sender: Channel2 },
"uuid-3" -> { user_id: alice, sender: Channel3 },
"uuid-4" -> { user_id: carol, sender: Channel4 }
}
// user_connections index:
{
alice -> ["uuid-1", "uuid-3"], // Alice has 2 connections
bob -> ["uuid-2"], // Bob has 1 connection
carol -> ["uuid-4"] // Carol has 1 connection
}
How It Works
On Connection Registration (line 312-319):
pub fn register_connection(&self, metadata: ConnectionMetadata) {
let connection_id = metadata.connection_id.clone();
let user_id = metadata.user_id;
// Store in primary map
self.connections.insert(connection_id.clone(), metadata);
// NEW: Update index
self.user_connections
.entry(user_id)
.or_insert_with(Vec::new)
.push(connection_id);
}
On Connection Cleanup (line 321-324):
pub fn unregister_connection(&self, connection_id: &str) {
// Get user_id before removing
if let Some((_, metadata)) = self.connections.remove(connection_id) {
let user_id = metadata.user_id;
// NEW: Clean up index
if let Some(mut conn_ids) = self.user_connections.get_mut(&user_id) {
conn_ids.retain(|id| id != connection_id);
// If user has no more connections, remove from index
if conn_ids.is_empty() {
drop(conn_ids); // Release lock
self.user_connections.remove(&user_id);
}
}
}
}
On Send Message - OPTIMIZED (line 326-354):
pub fn send_message(&self, message: SseMessage) {
let event_type = format!("{:?}", message.event)
.split('(').next().unwrap().to_lowercase();
match message.scope {
MessageScope::User { user_id } => {
// NEW: O(1) lookup instead of O(n) scan
if let Some(conn_ids) = self.user_connections.get(&user_id) {
// Only iterate this user's connections (typically 1-3)
for conn_id in conn_ids.iter() {
if let Some(metadata) = self.connections.get(conn_id) {
let event_data = match serde_json::to_string(&message.event) {
Ok(json) => json,
Err(e) => {
error!("Failed to serialize SSE event: {}", e);
continue;
}
};
let event = Event::default()
.event(&event_type)
.data(event_data);
if let Err(e) = metadata.sender.send(Ok(event)) {
warn!("Failed to send to connection {}: {}", conn_id, e);
}
}
}
}
},
MessageScope::Broadcast => {
// Still need O(n) - can't optimize broadcasts
for entry in self.connections.iter() {
// ... same as before
}
}
}
}
Performance Improvement
Before (Current Plan)
Scenario: Send action_created to Alice (2 browser tabs)
Total connections: 400
Operation: for entry in connections.iter()
Complexity: O(n) = 400 iterations
Lock hold time: ~4-8ms (single shard, holds lock entire time)
Connections checked: 400
Events sent: 2
Efficiency: 0.5%
After (With Index)
Scenario: Send action_created to Alice (2 browser tabs)
Total connections: 400
Operation: user_connections.get(alice_id)
Complexity: O(1) + O(k) where k = Alice's connections
Lock hold time: ~0.01ms (just the index lookup)
Connections checked: 2
Events sent: 2
Efficiency: 100%
Latency Reduction
Single-core server, 400 concurrent connections:
| Operation | Without Index | With Index | Improvement |
|-------------------------------|---------------|-------------|---------------------------------|
| User-scoped message (typical) | 4-8ms | 0.01-0.05ms | 99% faster |
| Broadcast message | 4-8ms | 4-8ms | Same (no optimization possible) |
| Register connection | 0.01ms | 0.02ms | Negligible overhead |
| Unregister connection | 0.01ms | 0.03ms | Negligible overhead |
Contention Reduction:
Without index:
Time: 0ms -> send_message(User{alice}) starts, LOCKS shard
Time: 0-4ms -> Iterating 400 connections, shard LOCKED
Time: 2ms -> New connection tries to register -> BLOCKED
Time: 4ms -> send_message completes, UNLOCKS shard
Time: 4ms -> New connection proceeds
Result: 2ms delay for new connection
With index:
Time: 0ms -> send_message(User{alice}) starts
Time: 0ms -> Lookup alice in index: O(1)
Time: 0ms -> Get alice's 2 connections: O(2)
Time: 0.05ms -> send_message completes
Time: 0.05ms -> shard available (no blocking)
Result: No delay for concurrent operations
Memory Overhead
The index adds minimal memory:
// Per user in index:
struct UserIndexEntry {
user_id: Id, // 16 bytes (UUID)
connections: Vec<String>, // 24 bytes (Vec header) + (24 bytes per String)
}
// For 200 users with average 2 connections each:
// Index overhead = 200 * (16 + 24 + (2 * 24)) = 200 * 88 = 17.6 KB
// Total SSE memory with 400 connections:
// - connections DashMap: ~40 KB
// - user_connections index: ~18 KB
// - Channel buffers: ~100-500 KB (variable)
// Total: ~160-560 KB (well within 1GB constraint)
Why This Matters on 1-Core Server
On a multi-core server (e.g., 32 cores):
- DashMap has 32 shards
- Iteration locks each shard sequentially
- 400 connections / 32 shards = ~12 per shard
- Lock contention is distributed
On a 1-core server:
- DashMap has 1 shard (everything shares one lock)
- 400 connections in 1 shard
- Every iteration holds the ONLY lock
- Concurrent operations queue up quickly
The index optimization eliminates 99%+ of iterations for User-scoped messages, which is critical when you only have one lock to share across all operations.
Real-World Impact
Typical coaching platform workload (based on your use case):
- 90% User-scoped messages (action_created, note_created, etc.)
- 10% Broadcast messages (force_logout, system announcements)
Without index (400 connections, 10 messages/second):
- 9 User messages/sec × 4ms = 36ms/sec spent holding lock
- 1 Broadcast message/sec × 4ms = 4ms/sec spent holding lock
- Total: 40ms/sec = 4% CPU time just iterating
- Lock contention causes queuing during busy periods
With index (400 connections, 10 messages/second):
- 9 User messages/sec × 0.05ms = 0.45ms/sec spent holding lock
- 1 Broadcast message/sec × 4ms = 4ms/sec spent holding lock
- Total: 4.45ms/sec = 0.4% CPU time
- 10x reduction in lock contention
Would you like me to add this optimization to the implementation plan as a recommended enhancement for single-core deployments?
> Why is your assertion about needing to scan all connections in the DashMap to send to a specific user true? I thought a HashMap is a kind of data structure that can match a key with less than O(n) complexity?
⏺ Excellent question! You're absolutely right to challenge this - HashMaps do provide O(1) lookups. The problem is that the keys don't match what we're searching for.
★ Insight ─────────────────────────────────────
The Key Mismatch Problem:
The DashMap is indexed by connection_id (a server-generated UUID):
connections: DashMap<String, ConnectionMetadata>
// ^^^^^^
// This is connection_id, NOT user_id
But we want to send messages to a user_id:
MessageScope::User { user_id } // We know the user_id
// ^^^^^^^
// But this is NOT the key in our HashMap!
The HashMap can only do O(1) lookup if you have the exact key. Since user_id is stored in the value (ConnectionMetadata), not the key (connection_id), we must scan all entries to find matches.
It's like having a phone book sorted by phone number, but you only know someone's name - you have to read every entry to find their number.
─────────────────────────────────────────────────
Why the Keys Are connection_id Instead of user_id
Looking at the data structure more carefully:
// From line 450-476
pub struct ConnectionMetadata {
pub connection_id: String, // UUID generated server-side
pub user_id: Id, // From authenticated session
pub sender: UnboundedSender<Result<Event, Infallible>>,
}
The HashMap structure is:
DashMap<connection_id, ConnectionMetadata>
^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
KEY          VALUE (contains user_id)
Why not just use user_id as the key?
Because one user can have multiple simultaneous connections:
- Alice opens tab 1 → connection_id: "uuid-1"
- Alice opens tab 2 → connection_id: "uuid-3"
If we used user_id as the key:
// This wouldn't work because you can't have duplicate keys
DashMap<user_id, ConnectionMetadata>
{
alice -> { connection_id: "uuid-1", sender: Channel1 },
alice -> { connection_id: "uuid-3", sender: Channel3 }, // ERROR: duplicate key!
}
HashMaps require unique keys, so we must use connection_id (unique per connection) rather than user_id (shared across multiple connections).
The Lookup Problem Illustrated
When we receive:
send_message(SseMessage {
event: ActionCreated { ... },
scope: MessageScope::User { user_id: alice_id }
})
We have:
- What we know: alice_id (the user_id)
- What we need: All connection_ids where metadata.user_id == alice_id
- What the HashMap is indexed by: connection_id
This is a "reverse lookup" - we're searching by value, not by key:
// What we WANT to do (O(1)):
let connection = connections.get(alice_id); // ❌ Won't work!
// What we MUST do (O(n)):
for entry in connections.iter() {
if entry.value().user_id == alice_id { // Check every entry
// Found a match!
}
}
Why the Index Solves This
The index creates a second HashMap with the user_id as the key:
pub struct SseManager {
// Primary storage: connection_id -> metadata
connections: DashMap<String, ConnectionMetadata>,
// ^^^^^^ Key: connection_id
// Secondary index: user_id -> [connection_ids]
user_connections: DashMap<Id, Vec<String>>,
// ^^ Key: user_id (what we search by!)
}
Now the lookup becomes:
// Step 1: O(1) - Look up user_id in the INDEX
if let Some(conn_ids) = user_connections.get(&alice_id) {
// Returns: ["uuid-1", "uuid-3"]
// Step 2: O(k) - For each connection_id, get the metadata
for conn_id in conn_ids.iter() { // k = Alice's connections (typically 1-3)
if let Some(metadata) = connections.get(conn_id) { // O(1) lookup!
metadata.sender.send(event);
}
}
}
Total complexity: O(1) + O(k) where k = number of connections for that user (typically 1-3)
Visual Comparison
Without Index:
Question: "Send event to Alice"
connections HashMap (keyed by connection_id):
┌────────────┬──────────────────────────┐
│ Key │ Value │
├────────────┼──────────────────────────┤
│ uuid-1 │ user: alice, chan: Ch1 │ ← Check: alice? YES! ✓
│ uuid-2 │ user: bob, chan: Ch2 │ ← Check: alice? no
│ uuid-3 │ user: alice, chan: Ch3 │ ← Check: alice? YES! ✓
│ uuid-4 │ user: carol, chan: Ch4 │ ← Check: alice? no
│ uuid-5 │ user: dave, chan: Ch5 │ ← Check: alice? no
│ ... │ ... │ ← Check: alice? no
│ uuid-400 │ user: zoe, chan: Ch400 │ ← Check: alice? no
└────────────┴──────────────────────────┘
Must scan ALL 400 entries because we don't know which keys
correspond to alice until we read the values.
With Index:
Question: "Send event to Alice"
Step 1: user_connections index (keyed by user_id):
┌────────────┬─────────────────────┐
│ Key │ Value │
├────────────┼─────────────────────┤
│ alice │ [uuid-1, uuid-3] │ ← O(1) lookup! ✓
│ bob │ [uuid-2] │
│ carol │ [uuid-4] │
│ dave │ [uuid-5] │
└────────────┴─────────────────────┘
Step 2: For each connection_id from index:
connections.get("uuid-1") → O(1) lookup → Send to Ch1
connections.get("uuid-3") → O(1) lookup → Send to Ch3
Total: 2 HashMap lookups instead of 400 scans!
The Database Analogy
This is exactly like database indexes:
Without index (table scan):
-- connections table (primary key: connection_id)
SELECT * FROM connections WHERE user_id = 'alice';
-- Must scan entire table because user_id is not indexed
-- Performance: O(n)
With index:
-- Add index on user_id
CREATE INDEX idx_user_id ON connections(user_id);
SELECT * FROM connections WHERE user_id = 'alice';
-- Uses index for fast lookup
-- Performance: O(log n) or O(1) for hash index
The user_connections DashMap is essentially a hash index on the user_id field.
I didn't get a chance to read through this whole thing yet, but I understand the issue with the flat data structure and I think it's 100% valid (I had a similar thought in my manual review of Claude's implementation plan).
@calebbourg The main gist of this is to have efficient, low-latency forward and reverse lookups between users and connections when given either ID. Typical Claude: the rest isn't worthless, just verbose detail.
| app_state.sse_manager.send_message(SseMessage {
|     event: SseEvent::ForceLogout {
|         reason: "User logged out".to_string(),
|     },
|     scope: MessageScope::User { user_id: user.id },
| });
Are you intentionally not handling application-level errors in this first SSE architecture? This is separate from transport-layer error handling (i.e. connection error states), which you do currently handle in this proposal.
For example:
Application-Layer Error Scenarios (NOT HANDLED)
| Scenario | What SHOULD Happen | What ACTUALLY Happens | User Impact |
|----------|--------------------|-----------------------|-------------|
| Coach creates action, but save fails | SSE event to Coachee: "Action creation failed" | Nothing (no SSE event sent) | Coachee never knows Coach attempted something |
| Optimistic update conflict | SSE event: "Your change was rejected, refresh" | Nothing | Coachee sees stale data |
| Permission change | SSE event: "You lost access to this session" | Only ForceLogout exists (too coarse) | User sees errors when trying to load data |
| Validation error | SSE event: "Invalid data submitted" | Nothing | Silent failure for observers |
These are scenarios that Claude came up with. I'm not sure we'd ever want to optimistically show Action data like this before the original create-action REST call succeeds, but it's a tangible example of the kind of motivation for thinking this through now.
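If we do take this on later, one possible shape is a dedicated application-level error event alongside the domain events. A hypothetical frontend-side sketch (the event name and payload fields are invented for illustration):

```ts
// Hypothetical sketch: an application-level failure event, distinct from SSE
// transport errors. All names and fields here are illustrative only.
export interface OperationFailedEvent {
  coaching_session_id: string;
  entity: "action" | "note";               // which domain object was affected
  operation: "create" | "update" | "delete";
  reason: string;                           // e.g. "validation failed"
}

// Observers of the session could then subscribe to it like any other event:
// useSSEEventHandler(eventSource, "operation_failed", (data: OperationFailedEvent) => {
//   if (data.coaching_session_id === sessionId) {
//     notifyUser(`A ${data.entity} ${data.operation} by another participant failed.`);
//   }
// });
```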
Let's come back to this once we experience how the SSE events actually "feel" in the UX. My hunch is that a Coachee doesn't necessarily need to be informed when a Coach, for example, attempts to create an Action and that fails.
I can't think of an example in other platforms where that's the case.
That's not to say there aren't scenarios where we might want to send that sort of information over the SSE wire; it's just something we'd need to sit down and think through the where and when of.
@calebbourg I agree it's a silly example, but having application-level errors that are separate from SSE's transport-layer errors is an important concept to make sure you're thinking about. Make sense?
Agreed, it certainly should be an integral part of this.
Manual line-by-line review (without AI) of the SSE implementation plan
@jhodapp I just went through this plan and your review comments manually with a bit more of a fine-toothed comb. Thank you very much for the excellent points! I made some initial changes here: 8f70234, and here are some other notes I took to come back to:
Indeed. Making this as efficient and low-latency as possible is one of the most important things to achieve with this SSE system design and implementation. It's going to get exercised heavily as the system grows in users.
Let me know if any questions come up about the UI and how things interact with the user.
@calebbourg Note: look into writing a Rust client for the BE to integration-test this work. There's potential to have something run in CI.
Discussed with @jhodapp live: we'll come back to the FE implementation once we have a more concrete idea of what the BE will look like.
jhodapp
left a comment
Looks great, thanks for this.
Description
describe the intent of your changes here
GitHub Issue: [Closes|Fixes|Resolves] #your GitHub issue number here
Changes
Testing Strategy
describe how you or someone else can test and verify the changes
Concerns
describe any concerns that might be worth mentioning or discussing