Always forward gossip for all our channels to all our peers #4216
base: main
Conversation
TheBlueMatt commented Nov 10, 2025
Sadly, the lightning gossip protocol operates by always flooding peers with all the latest gossip they receive. For nodes with many peers, this can result in lots of duplicative gossip as they receive every message from every peer. As a result, some lightning implementations disable gossip with new peers after some threshold. This should mostly work, as those peers expect to receive the latest gossip from their many other peers.

However, in some cases an LDK node may wish to open public channels but only has a single connection to the bulk of the rest of the network - with one such peer which requests that it not receive any gossip. In that case, LDK would dutifully never send any gossip to its only connection to the outside world. We would still send gossip for channels with that peer, as it would be sent as unicast gossip, but if we then open another connection to another peer which doesn't have any connection to the outside world, any information on that channel wouldn't propagate. We've seen this setup on some LSPs, where they have a public node and then an LSP which only connects through that public node, but expects to open public channels to its LSP clients.

In a coming commit we'll start forwarding such gossip to all peers by ignoring peer gossip limitations for all of our own channels. While other gossip has the `node_id`s of both sides of the channel explicitly in the message, `ChannelUpdate`s do not, so here we add the `node_id`s to `BroadcastChannelUpdate`, giving us the information we need when we go to broadcast updates.
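To illustrate the shape being described, here is a minimal sketch with simplified stand-in types (not LDK's actual `MessageSendEvent::BroadcastChannelUpdate` definition): a `ChannelUpdate` names the channel only by its `short_channel_id`, so the broadcast event has to carry the endpoint `node_id`s alongside the message.

```rust
// Simplified stand-ins for illustration only; LDK's real types differ.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct NodeId(pub [u8; 33]);

pub struct ChannelUpdate {
    pub short_channel_id: u64,
    // Fee and HTLC policy fields elided. Note the message itself carries
    // no node_ids, only the channel's short_channel_id.
}

pub enum BroadcastEvent {
    // A ChannelAnnouncement already names both endpoints inside the
    // message itself, so nothing extra is needed for that variant.
    ChannelUpdate {
        msg: ChannelUpdate,
        // Carried alongside the message (not serialized into it) so the
        // broadcaster can tell whether this update concerns one of our
        // own channels.
        node_id_1: NodeId,
        node_id_2: NodeId,
    },
}
```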
👋 Thanks for assigning @valentinewallace as a reviewer!
wpaulino left a comment
Looks like this was spelled out in the spec already:

> This assumption of adequate propagation does not apply for gossip messages generated directly by the node itself, so they should ignore filters.
LGTM, though some test coverage would be nice.
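As a sketch of how that spec rule translates into a forwarding decision, here is a hypothetical helper (reusing the `NodeId` stand-in from the sketch above, not LDK's actual code):

```rust
/// Hypothetical predicate: should this gossip message bypass a peer's
/// request to not receive broadcast gossip?
fn bypasses_gossip_filter(
    our_node_id: &NodeId, node_id_1: &NodeId, node_id_2: &NodeId,
) -> bool {
    // Per the spec note quoted above, gossip about the node's own
    // channels should ignore peer filters: a peer that filtered us out
    // may be our only route to the rest of the network.
    node_id_1 == our_node_id || node_id_2 == our_node_id
}
```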
Codecov Report

❌ Patch coverage is

```
@@            Coverage Diff            @@
##              main    #4216    +/-   ##
=========================================
  Coverage    89.33%   89.33%
=========================================
  Files          180      180
  Lines       138055   138222    +167
=========================================
+ Hits        123326   123483    +157
- Misses       12122    12130      +8
- Partials      2607     2609      +2
```
This is more than just that - we're also forwarding gossip from our peers for our channels. Will ask an LLM to add coverage.
Force-pushed a61877b to f39a230
Claude kinda wrote a test... of course the test it generated passed the checks entirely by accident and didn't actually test the new logic, but it was a start lol.
lightning/src/ln/peer_handler.rs (Outdated)

```rust
let mut fd_1_0 = FileDescriptor::new(2);
// ...
// Connect the peers and exchange the initial connection handshake.
let initial_data =
```
Can you use the `establish_connection` util instead?
Oh, I thought we couldn't but it actually skips delivering the last init message so we can...
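For context on the harness being discussed: these tests drive the connection handshake by repeatedly delivering each side's pending writes into the other side's read path. A rough, self-contained sketch of that ping-pong pattern follows; the `MockPeer` and `pump` names are hypothetical, and LDK's actual `establish_connection` util works against real `PeerManager`s and, as noted above, skips delivering the final init message.

```rust
// Hypothetical mock, illustrating only the message ping-pong shape.
struct MockPeer {
    pending_writes: Vec<Vec<u8>>,
}

impl MockPeer {
    // Stand-in for a read_event handler: consume inbound bytes and
    // possibly queue a reply. Here we queue one empty reply per
    // non-empty inbound message so the pump below terminates.
    fn read_event(&mut self, data: &[u8]) {
        if !data.is_empty() {
            self.pending_writes.push(Vec::new());
        }
    }

    fn drain_writes(&mut self) -> Vec<Vec<u8>> {
        std::mem::take(&mut self.pending_writes)
    }
}

/// Deliver each side's queued writes to the other until both go quiet.
fn pump(a: &mut MockPeer, b: &mut MockPeer) {
    loop {
        let from_a = a.drain_writes();
        let from_b = b.drain_writes();
        if from_a.is_empty() && from_b.is_empty() {
            break;
        }
        for msg in from_a {
            b.read_event(&msg);
        }
        for msg in from_b {
            a.read_event(&msg);
        }
    }
}
```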
```rust
/// The node_id of the first endpoint of the channel.
///
/// This is not used in the message broadcast, but rather is useful for deciding which
/// peer(s) to send the update to.
node_id_1: NodeId,
/// The node_id of the second endpoint of the channel.
///
/// This is not used in the message broadcast, but rather is useful for deciding which
/// peer(s) to send the update to.
node_id_2: NodeId,
```
Could these be replaced by a `force_broadcast: bool` field?
The network graph/P2PGossipSync (where this comes from sometimes) doesn't know our NodeId. We could give them that knowledge, but it seemed cleaner not to and to just provide the node_ids here, which is cheap.
Ah okay, I see it now in `routing::utxo`.
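To make the trade-off in this thread concrete, here are the two shapes discussed, written from scratch as illustrations (reusing the stand-in types above, not LDK's actual definitions):

```rust
// Alternative raised in review: a flag set by the producer. This would
// require P2PGossipSync / the network graph to learn our own NodeId at
// the point the event is generated.
enum UpdateEventWithFlag {
    BroadcastChannelUpdate { msg: ChannelUpdate, force_broadcast: bool },
}

// Shape the PR takes: carry both endpoints and let the peer handler,
// which already knows our NodeId, decide at broadcast time. The cost is
// two 33-byte copies per event, which is negligible.
enum UpdateEventWithIds {
    BroadcastChannelUpdate { msg: ChannelUpdate, node_id_1: NodeId, node_id_2: NodeId },
}
```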
The next commit (with the same motivation as the PR description above) does the final prep work, passing the two sides of each channel back from `handle_channel_update` calls and into `forward_broadcast_msg`. The final commit then forwards all gossip that is about either side of any of our channels to all our peers, irrespective of their requested gossip filtering.
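Putting the series together, a minimal sketch of the resulting broadcast loop (the `Peer` struct, `gossip_disabled` flag, and `enqueue_message` are illustrative names, reusing the stand-in types from above):

```rust
struct Peer {
    node_id: NodeId,
    // Set when the peer asked us (e.g. via gossip filtering) not to
    // receive broadcast gossip.
    gossip_disabled: bool,
}

fn forward_channel_update(
    our_node_id: &NodeId, peers: &mut [Peer], msg: &ChannelUpdate,
    node_id_1: &NodeId, node_id_2: &NodeId,
) {
    // Updates for one of our own channels ignore peer gossip limits:
    // a filtered-out peer may be our only path to the rest of the network.
    let ours = node_id_1 == our_node_id || node_id_2 == our_node_id;
    for peer in peers.iter_mut() {
        if peer.gossip_disabled && !ours {
            continue;
        }
        enqueue_message(peer, msg);
    }
}

// Stand-in for queueing the update onto the peer's outbound buffer.
fn enqueue_message(_peer: &mut Peer, _msg: &ChannelUpdate) {}
```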
Force-pushed f39a230 to 692d9b2
valentinewallace left a comment
LGTM after squash