Commit b765d0e
ice: fix Rx page leak on multi-buffer frames
JIRA: https://issues.redhat.com/browse/RHEL-104682
Upstream commit(s): commit 84bf1ac
Author: Jacob Keller <jacob.e.keller@intel.com>
Date: Mon Aug 25 16:00:14 2025 -0700

    ice: fix Rx page leak on multi-buffer frames

The ice_put_rx_mbuf() function handles calling ice_put_rx_buf() for each
buffer in the current frame. This function was introduced as part of
handling multi-buffer XDP support in the ice driver. It works by iterating
over the buffers from first_desc up to 1 plus the total number of fragments
in the frame, cached from before the XDP program was executed.

If the hardware posts a descriptor with a size of 0, the logic used in
ice_put_rx_mbuf() breaks. Such descriptors get skipped and don't get added
as fragments in ice_add_xdp_frag(). Since the buffer isn't counted as a
fragment, we do not iterate over it in ice_put_rx_mbuf(), and thus we don't
call ice_put_rx_buf().

Because we don't call ice_put_rx_buf(), we don't attempt to re-use the page
or free it. This leaves a stale page in the ring, as we don't increment
next_to_alloc. The ice_reuse_rx_page() function assumes that next_to_alloc
has been incremented properly, and that it always points to a buffer with a
NULL page. Since this function doesn't check, it will happily recycle a
page over the top of the next_to_alloc buffer, losing track of the old
page.

Note that this leak only occurs for multi-buffer frames. The
ice_put_rx_mbuf() function always handles at least one buffer, so a
single-buffer frame will always get handled correctly. It is not clear
precisely why the hardware hands us descriptors with a size of 0 sometimes,
but it happens somewhat regularly with "jumbo frames" used by 9K MTU.

To fix ice_put_rx_mbuf(), we need to make sure to call ice_put_rx_buf() on
all buffers between first_desc and next_to_clean. Borrow the logic of a
similar function in i40e used for this same purpose. Use the same logic
also in ice_get_pgcnts().
Instead of iterating over just the number of fragments, use a loop which
iterates until the current index reaches the next_to_clean element just
past the current frame. Unlike i40e, the ice_put_rx_mbuf() function does
call ice_put_rx_buf() on the last buffer of the frame, indicating the end
of the packet.

For non-linear (multi-buffer) frames, we need to take care when adjusting
the pagecnt_bias. An XDP program might release fragments from the tail of
the frame, in which case that fragment page is already released. Only
update the pagecnt_bias for the first descriptor and the fragments still
remaining after the XDP program has run. Take care to only access the
shared info for fragmented buffers, as this avoids a significant cache
miss.

The xdp_xmit value only needs to be updated if an XDP program is run, and
only once per packet. Drop the xdp_xmit pointer argument from
ice_put_rx_mbuf(). Instead, set xdp_xmit in the ice_clean_rx_irq() function
directly. This avoids needing to pass the argument and avoids an extra
bit-wise OR for each buffer in the frame.

Move the increment of the ntc local variable to ensure it's updated
*before* all calls to ice_get_pgcnts() or ice_put_rx_mbuf(), as the loop
logic requires the index of the element just after the current frame.

Now that we use an index pointer in the ring to identify the packet, we no
longer need to track or cache the number of fragments in the rx_ring.
Cc: Christoph Petrausch <christoph.petrausch@deepl.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Reported-by: Jaroslav Pulchart <jaroslav.pulchart@gooddata.com>
Closes: https://lore.kernel.org/netdev/CAK8fFZ4hY6GUJNENz3wY9jaYLZXGfpr7dnZxzGMYoE44caRbgw@mail.gmail.com/
Fixes: 743bbd9 ("ice: put Rx buffers after being done with current frame")
Tested-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Tested-by: Priya Singh <priyax.singh@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Petr Oros <poros@redhat.com>
1 parent 09d0790 commit b765d0e

File tree: 2 files changed, +34 −47 lines

drivers/net/ethernet/intel/ice/ice_txrx.c

Lines changed: 34 additions & 46 deletions
@@ -865,10 +865,6 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
 				   rx_buf->page_offset, size);
 	sinfo->xdp_frags_size += size;
-	/* remember frag count before XDP prog execution; bpf_xdp_adjust_tail()
-	 * can pop off frags but driver has to handle it on its own
-	 */
-	rx_ring->nr_frags = sinfo->nr_frags;
 
 	if (page_is_pfmemalloc(rx_buf->page))
 		xdp_buff_set_frag_pfmemalloc(xdp);
@@ -939,20 +935,20 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 /**
  * ice_get_pgcnts - grab page_count() for gathered fragments
  * @rx_ring: Rx descriptor ring to store the page counts on
+ * @ntc: the next to clean element (not included in this frame!)
  *
  * This function is intended to be called right before running XDP
  * program so that the page recycling mechanism will be able to take
  * a correct decision regarding underlying pages; this is done in such
  * way as XDP program can change the refcount of page
  */
-static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
+static void ice_get_pgcnts(struct ice_rx_ring *rx_ring, unsigned int ntc)
 {
-	u32 nr_frags = rx_ring->nr_frags + 1;
 	u32 idx = rx_ring->first_desc;
 	struct ice_rx_buf *rx_buf;
 	u32 cnt = rx_ring->count;
 
-	for (int i = 0; i < nr_frags; i++) {
+	while (idx != ntc) {
 		rx_buf = &rx_ring->rx_buf[idx];
 		rx_buf->pgcnt = page_count(rx_buf->page);
 
@@ -1125,62 +1121,51 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
 }
 
 /**
- * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
+ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all buffers in frame
  * @rx_ring: Rx ring with all the auxiliary data
  * @xdp: XDP buffer carrying linear + frags part
- * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
- * @ntc: a current next_to_clean value to be stored at rx_ring
+ * @ntc: the next to clean element (not included in this frame!)
  * @verdict: return code from XDP program execution
  *
- * Walk through gathered fragments and satisfy internal page
- * recycle mechanism; we take here an action related to verdict
- * returned by XDP program;
+ * Called after XDP program is completed, or on error with verdict set to
+ * ICE_XDP_CONSUMED.
+ *
+ * Walk through buffers from first_desc to the end of the frame, releasing
+ * buffers and satisfying internal page recycle mechanism. The action depends
+ * on verdict from XDP program.
  */
 static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
-			    u32 *xdp_xmit, u32 ntc, u32 verdict)
+			    u32 ntc, u32 verdict)
 {
-	u32 nr_frags = rx_ring->nr_frags + 1;
 	u32 idx = rx_ring->first_desc;
 	u32 cnt = rx_ring->count;
-	u32 post_xdp_frags = 1;
 	struct ice_rx_buf *buf;
-	int i;
+	u32 xdp_frags = 0;
+	int i = 0;
 
 	if (unlikely(xdp_buff_has_frags(xdp)))
-		post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
+		xdp_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
 
-	for (i = 0; i < post_xdp_frags; i++) {
+	while (idx != ntc) {
 		buf = &rx_ring->rx_buf[idx];
+		if (++idx == cnt)
+			idx = 0;
 
-		if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+		/* An XDP program could release fragments from the end of the
+		 * buffer. For these, we need to keep the pagecnt_bias as-is.
+		 * To do this, only adjust pagecnt_bias for fragments up to
+		 * the total remaining after the XDP program has run.
+		 */
+		if (verdict != ICE_XDP_CONSUMED)
 			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-			*xdp_xmit |= verdict;
-		} else if (verdict & ICE_XDP_CONSUMED) {
+		else if (i++ <= xdp_frags)
 			buf->pagecnt_bias++;
-		} else if (verdict == ICE_XDP_PASS) {
-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-		}
 
 		ice_put_rx_buf(rx_ring, buf);
-
-		if (++idx == cnt)
-			idx = 0;
-	}
-	/* handle buffers that represented frags released by XDP prog;
-	 * for these we keep pagecnt_bias as-is; refcount from struct page
-	 * has been decremented within XDP prog and we do not have to increase
-	 * the biased refcnt
-	 */
-	for (; i < nr_frags; i++) {
-		buf = &rx_ring->rx_buf[idx];
-		ice_put_rx_buf(rx_ring, buf);
-		if (++idx == cnt)
-			idx = 0;
 	}
 
 	xdp->data = NULL;
 	rx_ring->first_desc = ntc;
-	rx_ring->nr_frags = 0;
 }
 
 /**
@@ -1260,6 +1245,10 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		/* retrieve a buffer from the ring */
 		rx_buf = ice_get_rx_buf(rx_ring, size, ntc);
 
+		/* Increment ntc before calls to ice_put_rx_mbuf() */
+		if (++ntc == cnt)
+			ntc = 0;
+
 		if (!xdp->data) {
 			void *hard_start;
 
@@ -1268,24 +1257,23 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
 			xdp_buff_clear_frags_flag(xdp);
 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
-			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
+			ice_put_rx_mbuf(rx_ring, xdp, ntc, ICE_XDP_CONSUMED);
 			break;
 		}
-		if (++ntc == cnt)
-			ntc = 0;
 
 		/* skip if it is NOP desc */
 		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
-		ice_get_pgcnts(rx_ring);
+		ice_get_pgcnts(rx_ring, ntc);
 		xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
 		if (xdp_verdict == ICE_XDP_PASS)
 			goto construct_skb;
 		total_rx_bytes += xdp_get_buff_len(xdp);
 		total_rx_pkts++;
 
-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+		ice_put_rx_mbuf(rx_ring, xdp, ntc, xdp_verdict);
+		xdp_xmit |= xdp_verdict & (ICE_XDP_TX | ICE_XDP_REDIR);
 
 		continue;
 construct_skb:
@@ -1298,7 +1286,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			rx_ring->ring_stats->rx_stats.alloc_page_failed++;
 			xdp_verdict = ICE_XDP_CONSUMED;
 		}
-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+		ice_put_rx_mbuf(rx_ring, xdp, ntc, xdp_verdict);
 
 		if (!skb)
 			break;

drivers/net/ethernet/intel/ice/ice_txrx.h

Lines changed: 0 additions & 1 deletion
@@ -358,7 +358,6 @@ struct ice_rx_ring {
 	struct ice_tx_ring *xdp_ring;
 	struct ice_rx_ring *next; /* pointer to next ring in q_vector */
 	struct xsk_buff_pool *xsk_pool;
-	u32 nr_frags;
 	u16 max_frame;
 	u16 rx_buf_len;
 	dma_addr_t dma; /* physical address of ring */
