
Conversation

@TheBlueMatt
Collaborator

Sadly, the lightning gossip protocol operates by always flooding
peers with all the latest gossip they receive. For nodes with many
peers, this can result in lots of duplicative gossip as they
receive every message from every peer. As a result, some lightning
implementations disable gossip with new peers after some threshold.
This should mostly work as these peers expect to receive the latest
gossip from their many other peers.

However, in some cases an LDK node may wish to open public
channels but only has a single connection to the bulk of the rest
of the network - one peer which requests that it not receive any
gossip. In that case, LDK would dutifully never send any gossip to
its only connection to the outside world. We would still send
gossip for channels with that peer, as it would be sent as unicast
gossip, but if we then open another connection to another peer
which doesn't have any connection to the outside world, any
information on that channel wouldn't propagate.

We've seen this setup on some LSPs, where they have a public node
and then an LSP which only connects through that public node, but
expects to open public channels to its LSP clients.

Here we finally forward all gossip that is about either side of any
of our channels to all our peers, irrespective of their requested
gossip filtering.
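
A minimal sketch of the forwarding decision described above, using simplified stand-in types; `should_forward_gossip`, `Peer`, and the fields here are illustrative, not LDK's actual API:

```rust
// Simplified, hypothetical stand-ins for illustration only; LDK's real
// Peer/PeerManager/NodeId types carry far more state.
#[derive(Clone, Copy, PartialEq, Eq)]
struct NodeId([u8; 33]);

struct Peer {
    // True if this peer asked (e.g. via an initial gossip filter) not to
    // receive rebroadcast gossip from us.
    gossip_disabled: bool,
}

// Decide whether to relay gossip about the channel between `node_id_1` and
// `node_id_2` to `peer`.
fn should_forward_gossip(
    peer: &Peer, our_node_id: &NodeId, node_id_1: &NodeId, node_id_2: &NodeId,
) -> bool {
    // Gossip about one of our own channels is forwarded regardless of the
    // peer's requested gossip filtering, so our channels keep propagating
    // even if our only well-connected peer opted out of gossip.
    let about_our_channel = node_id_1 == our_node_id || node_id_2 == our_node_id;
    about_our_channel || !peer.gossip_disabled
}
```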

Sadly, the lightning gossip protocol operates by always flooding
peers with all the latest gossip they receive. For nodes with many
peers, this can result in lots of duplicative gossip as they
receive every message from every peer. As a result, some lightning
implementations disable gossip with new peers after some threshold.
This should mostly work as these peers expect to receive the latest
gossip from their many other peers.

However, in some cases an LDK node may wish to open public
channels but only has a single connection to the bulk of the rest
of the network - one peer which requests that it not receive any
gossip. In that case, LDK would dutifully never send any gossip to
its only connection to the outside world. We would still send
gossip for channels with that peer, as it would be sent as unicast
gossip, but if we then open another connection to another peer
which doesn't have any connection to the outside world, any
information on that channel wouldn't propagate.

We've seen this setup on some LSPs, where they have a public node
and then an LSP which only connects through that public node, but
expects to open public channels to its LSP clients.

In a coming commit we'll start forwarding such gossip to all peers
by ignoring peer gossip limitations about all of our own channels.
While other gossip has the `node_id`s of both sides of the channel
explicitly in the message, `ChannelUpdate`s do not, so here we add
the `node_id`s to `BroadcastChannelUpdate` giving us the
information we need when we go to broadcast updates.
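
Roughly, the event ends up carrying both endpoints alongside the update. A sketch of the resulting shape, with the `node_id_1`/`node_id_2` field names taken from the diff discussed below and everything else simplified:

```rust
// Stub types standing in for lightning::ln::msgs::ChannelUpdate and
// lightning::routing::gossip::NodeId so the sketch is self-contained.
struct ChannelUpdate;
struct NodeId([u8; 33]);

// Simplified sketch of the broadcast event after this change; the real
// MessageSendEvent enum has many more variants and fields.
enum MessageSendEvent {
    BroadcastChannelUpdate {
        // The channel_update message to broadcast.
        msg: ChannelUpdate,
        // The two channel endpoints. Not sent on the wire, only used to
        // pick which peer(s) to forward the update to.
        node_id_1: NodeId,
        node_id_2: NodeId,
    },
    // other variants elided
}
```
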
@ldk-reviews-bot

ldk-reviews-bot commented Nov 10, 2025

👋 Thanks for assigning @valentinewallace as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@TheBlueMatt TheBlueMatt added this to the 0.3 milestone Nov 10, 2025
wpaulino previously approved these changes Nov 10, 2025
Contributor

@wpaulino wpaulino left a comment


Looks like this was spelled out in the spec already:

This assumption of adequate propagation does not apply for gossip messages generated directly by the node itself, so they should ignore filters.

LGTM, though some test coverage would be nice.

@codecov

codecov bot commented Nov 10, 2025

Codecov Report

❌ Patch coverage is 90.21739% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.33%. Comparing base (e42e74e) to head (f39a230).
⚠️ Report is 16 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| lightning/src/ln/peer_handler.rs | 90.00% | 11 Missing and 1 partial ⚠️ |
| lightning/src/routing/gossip.rs | 83.33% | 2 Missing and 1 partial ⚠️ |
| lightning-net-tokio/src/lib.rs | 0.00% | 2 Missing ⚠️ |
| lightning/src/routing/utxo.rs | 80.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@           Coverage Diff            @@
##             main    #4216    +/-   ##
========================================
  Coverage   89.33%   89.33%            
========================================
  Files         180      180            
  Lines      138055   138222   +167     
  Branches   138055   138222   +167     
========================================
+ Hits       123326   123483   +157     
- Misses      12122    12130     +8     
- Partials     2607     2609     +2     
| Flag | Coverage Δ |
|---|---|
| fuzzing | 33.64% <57.42%> (+0.06%) ⬆️ |
| tests | 88.71% <85.86%> (-0.01%) ⬇️ |

Flags with carried forward coverage won't be shown.


@TheBlueMatt
Collaborator Author

This is more than just that - we're also forwarding gossip from our peers for our channels. Will ask an LLM to add coverage.

@TheBlueMatt
Collaborator Author

Claude kinda wrote a test...of course the test it generated passed the checks entirely by accident and didn't actually test the new logic, but it was a start lol.

wpaulino previously approved these changes Nov 11, 2025
let mut fd_1_0 = FileDescriptor::new(2);

// Connect the peers and exchange the initial connection handshake.
let initial_data =
Contributor


Can you use the establish_connection util instead?

Collaborator Author


Oh, I thought we couldn't but it actually skips delivering the last init message so we can...

Comment on lines +1920 to +1929
/// The node_id of the first endpoint of the channel.
///
/// This is not used in the message broadcast, but rather is useful for deciding which
/// peer(s) to send the update to.
node_id_1: NodeId,
/// The node_id of the second endpoint of the channel.
///
/// This is not used in the message broadcast, but rather is useful for deciding which
/// peer(s) to send the update to.
node_id_2: NodeId,
Contributor


Could these be replaced by a force_broadcast: bool field?

Collaborator Author


The network graph/P2PGossipSync (where this comes from sometimes) doesn't know our NodeId. We could give them that knowledge, but it seemed cleaner not to and to just provide the node_ids here, which is cheap.
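
A hypothetical sketch of the point being made: the gossip-sync side already knows the channel's two endpoints (from the graph entry it just updated) but has no notion of our own node_id, so it couldn't compute a `force_broadcast` flag itself and attaching the endpoints is the cheap option. `ChannelInfo` and `broadcast_update_event` here are illustrative stand-ins, not the exact LDK code:

```rust
// Stand-in types; the real NodeId/ChannelUpdate/ChannelInfo live in LDK.
#[derive(Clone, Copy)]
struct NodeId([u8; 33]);
struct ChannelUpdate;

struct ChannelInfo {
    node_one: NodeId,
    node_two: NodeId,
}

enum MessageSendEvent {
    BroadcastChannelUpdate { msg: ChannelUpdate, node_id_1: NodeId, node_id_2: NodeId },
}

fn broadcast_update_event(chan: &ChannelInfo, update: ChannelUpdate) -> MessageSendEvent {
    // The endpoints come straight from the channel's entry in the graph; the
    // peer_handler later compares them against its own node_id when deciding
    // whether to override a peer's gossip filter.
    MessageSendEvent::BroadcastChannelUpdate {
        msg: update,
        node_id_1: chan.node_one,
        node_id_2: chan.node_two,
    }
}
```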

Contributor


Ah okay, I see it now in routing::utxo.

Sadly, the lightning gossip protocol operates by always flooding
peers with all the latest gossip they receive. For nodes with many
peers, this can result in lots of duplicative gossip as they
receive every message from every peer. As a result, some lightning
implementations disable gossip with new peers after some threshold.
This should mostly work as these peers expect to receive the latest
gossip from their many other peers.

However, in some cases an LDK node may wish to open public
channels but only has a single connection to the bulk of the rest
of the network - one peer which requests that it not receive any
gossip. In that case, LDK would dutifully never send any gossip to
its only connection to the outside world. We would still send
gossip for channels with that peer, as it would be sent as unicast
gossip, but if we then open another connection to another peer
which doesn't have any connection to the outside world, any
information on that channel wouldn't propagate.

We've seen this setup on some LSPs, where they have a public node
and then an LSP which only connects through that public node, but
expects to open public channels to its LSP clients.

In the next commit we'll start forwarding such gossip to all peers
by ignoring peer gossip limitations about all of our own channels.
Here we do the final prep work, passing the two sides of each
channel back from `handle_channel_update` calls and into
`forward_broadcast_msg`.
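
One plausible shape of that prep work, sketched with hypothetical names and signatures (the exact return type in the PR may differ): `handle_channel_update` hands the channel's endpoints back to the caller, which passes them on to the broadcast path:

```rust
// Hypothetical sketch only; not necessarily the signatures used in the PR.
#[derive(Clone, Copy, PartialEq, Eq)]
struct NodeId([u8; 33]);
struct ChannelUpdate;

// Endpoints handed back alongside the "should rebroadcast" decision.
struct ForwardInfo {
    node_id_1: NodeId,
    node_id_2: NodeId,
}

fn handle_channel_update(_msg: &ChannelUpdate) -> Result<Option<ForwardInfo>, ()> {
    // `None` would mean "accepted but don't rebroadcast"; `Some(..)` carries
    // the endpoints the broadcast path needs.
    Ok(Some(ForwardInfo { node_id_1: NodeId([2; 33]), node_id_2: NodeId([3; 33]) }))
}

fn forward_broadcast_msg(_msg: &ChannelUpdate, fwd: &ForwardInfo, our_node_id: &NodeId) {
    // The next commit uses these endpoints to ignore a peer's gossip filter
    // whenever the update concerns one of our own channels.
    let _about_our_channel =
        fwd.node_id_1 == *our_node_id || fwd.node_id_2 == *our_node_id;
    // ... iterate peers and enqueue the message ...
}
```
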
Contributor

@valentinewallace valentinewallace left a comment


LGTM after squash

