
Commit 75dacf7

veth: Fix race with AF_XDP exposing old or uninitialized descriptors
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2164865
Upstream Status: net.git commit fa349e3

commit fa349e3
Author: Shawn Bohrer <sbohrer@cloudflare.com>
Date:   Tue Dec 20 12:59:03 2022 -0600

    veth: Fix race with AF_XDP exposing old or uninitialized descriptors

    When AF_XDP is used on a veth interface the RX ring is updated in two
    steps. veth_xdp_rcv() removes packet descriptors from the FILL ring,
    fills them, and places them in the RX ring, updating the cached_prod
    pointer. Later, xdp_do_flush() syncs the RX ring prod pointer with the
    cached_prod pointer, allowing user-space to see the recently filled
    descriptors. The rings are intended to be SPSC; however, the existing
    order in veth_poll allows xdp_do_flush() to run concurrently with
    another CPU, creating a race condition that lets user-space see old or
    uninitialized descriptors in the RX ring. This bug has been observed
    in production systems.

    To summarize, we are expecting this ordering:

    CPU 0 __xsk_rcv_zc()
    CPU 0 __xsk_map_flush()
    CPU 2 __xsk_rcv_zc()
    CPU 2 __xsk_map_flush()

    But we are seeing this order:

    CPU 0 __xsk_rcv_zc()
    CPU 2 __xsk_rcv_zc()
    CPU 0 __xsk_map_flush()
    CPU 2 __xsk_map_flush()

    This occurs because we rely on NAPI to ensure that only one napi_poll
    handler is running at a time for the given veth receive queue.
    napi_schedule_prep() will prevent multiple instances from getting
    scheduled. However, calling napi_complete_done() signals that this
    napi_poll is complete and allows subsequent calls to
    napi_schedule_prep() and __napi_schedule() to succeed in scheduling a
    concurrent napi_poll before xdp_do_flush() has been called. For the
    veth driver, a concurrent call to napi_schedule_prep() and
    __napi_schedule() can occur on a different CPU because the veth xmit
    path can additionally schedule a napi_poll, creating the race.

    The fix, as suggested by Magnus Karlsson, is to simply move the
    xdp_do_flush() call before napi_complete_done(). This syncs the
    producer ring pointers before another instance of napi_poll can be
    scheduled on another CPU. It will also slightly improve performance by
    moving the flush closer to when the descriptors were placed in the RX
    ring.

    Fixes: d139600 ("veth: Add XDP TX and REDIRECT")
    Suggested-by: Magnus Karlsson <magnus.karlsson@gmail.com>
    Signed-off-by: Shawn Bohrer <sbohrer@cloudflare.com>
    Link: https://lore.kernel.org/r/20221220185903.1105011-1-sbohrer@cloudflare.com
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Signed-off-by: Davide Caratti <dcaratti@redhat.com>
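
The two-step producer update described above can be modeled in a few lines of user-space C. The sketch below is illustrative only, not the kernel's xsk_queue implementation; the names mock_ring, ring_stage() and ring_publish() are invented for the example. ring_stage() plays the role of __xsk_rcv_zc(), writing descriptors and advancing a private cached_prod, while ring_publish() plays the role of __xsk_map_flush(), releasing cached_prod into the consumer-visible prod:

/* Illustrative SPSC producer split -- NOT kernel code. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 256                   /* power of two, so masking works */

struct mock_ring {
        _Atomic uint32_t prod;          /* consumer-visible producer index */
        uint32_t cached_prod;           /* producer-private staging index */
        uint64_t desc[RING_SIZE];
};

/* Stage one descriptor without exposing it to the consumer
 * (stands in for __xsk_rcv_zc()). */
static void ring_stage(struct mock_ring *r, uint64_t d)
{
        r->desc[r->cached_prod & (RING_SIZE - 1)] = d;
        r->cached_prod++;
}

/* Publish everything staged so far (stands in for __xsk_map_flush()).
 * The release store orders the descriptor writes before prod. */
static void ring_publish(struct mock_ring *r)
{
        atomic_store_explicit(&r->prod, r->cached_prod,
                              memory_order_release);
}

In this model, if a second CPU runs ring_stage() between another CPU's ring_stage() and ring_publish(), the publish exposes a cached_prod that covers slots the second CPU may not have finished writing, and the consumer reads old or uninitialized descriptors: exactly the symptom in the subject line.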
1 parent 081d5c4 commit 75dacf7

1 file changed: +3 -2 lines changed

drivers/net/veth.c

Lines changed: 3 additions & 2 deletions
@@ -985,6 +985,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	xdp_set_return_frame_no_direct();
 	done = veth_xdp_rcv(rq, budget, &bq, &stats);
 
+	if (stats.xdp_redirect > 0)
+		xdp_do_flush();
+
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
 		smp_store_mb(rq->rx_notify_masked, false);
@@ -998,8 +1001,6 @@ static int veth_poll(struct napi_struct *napi, int budget)
 
 	if (stats.xdp_tx > 0)
 		veth_xdp_flush(rq, &bq);
-	if (stats.xdp_redirect > 0)
-		xdp_do_flush();
 	xdp_clear_return_frame_no_direct();
 
 	return done;
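
For reference, the shape of veth_poll() after this change looks roughly as follows. This is a condensed sketch with unrelated lines elided, not the complete driver function; the hunks above are authoritative:

static int veth_poll(struct napi_struct *napi, int budget)
{
        /* ... local variable setup elided ... */

        xdp_set_return_frame_no_direct();
        done = veth_xdp_rcv(rq, budget, &bq, &stats);

        /* Publish the RX ring producer pointer before NAPI can be
         * re-armed: once napi_complete_done() runs, another CPU may
         * schedule a new napi_poll for this queue. */
        if (stats.xdp_redirect > 0)
                xdp_do_flush();

        if (done < budget && napi_complete_done(napi, done)) {
                /* ... re-schedule checks elided ... */
        }

        if (stats.xdp_tx > 0)
                veth_xdp_flush(rq, &bq);
        xdp_clear_return_frame_no_direct();

        return done;
}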
