Commit 72255bd

net: fix sk_memory_allocated_{add|sub} vs softirqs
jira LE-1907
Rebuild_History Non-Buildable kernel-5.14.0-427.28.1.el9_4
commit-author Eric Dumazet <edumazet@google.com>
commit 3584718
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-427.28.1.el9_4/3584718c.failed

Jonathan Heathcote reported a regression caused by blamed commit
on aarch64 architecture.

x86 happens to have irq-safe __this_cpu_add_return()
and __this_cpu_sub(), but this is not generic.

I think my confusion came from "struct sock" argument,
because these helpers are called with a locked socket.
But the memory accounting is per-proto (and per-cpu after
the blamed commit). We might cleanup these helpers later
to directly accept a "struct proto *proto" argument.

Switch to this_cpu_add_return() and this_cpu_xchg()
operations, and get rid of preempt_disable()/preempt_enable() pairs.

Fast path becomes a bit faster as a result :)

Many thanks to Jonathan Heathcote for his awesome report and
investigations.

Fixes: 3cd3399 ("net: implement per-cpu reserves for memory_allocated")
Reported-by: Jonathan Heathcote <jonathan.heathcote@bbc.co.uk>
Closes: https://lore.kernel.org/netdev/VI1PR01MB42407D7947B2EA448F1E04EFD10D2@VI1PR01MB4240.eurprd01.prod.exchangelabs.com/
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Link: https://lore.kernel.org/r/20240421175248.1692552-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
(cherry picked from commit 3584718)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
#	include/net/sock.h
1 parent 7ccf765 commit 72255bd

File tree

1 file changed

+110
-0
lines changed

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
net: fix sk_memory_allocated_{add|sub} vs softirqs

jira LE-1907
Rebuild_History Non-Buildable kernel-5.14.0-427.28.1.el9_4
commit-author Eric Dumazet <edumazet@google.com>
commit 3584718cf2ec7e79b6814f2596dcf398c5fb2eca
Empty-Commit: Cherry-Pick Conflicts during history rebuild.
Will be included in final tarball splat. Ref for failed cherry-pick at:
ciq/ciq_backports/kernel-5.14.0-427.28.1.el9_4/3584718c.failed

Jonathan Heathcote reported a regression caused by blamed commit
on aarch64 architecture.

x86 happens to have irq-safe __this_cpu_add_return()
and __this_cpu_sub(), but this is not generic.

I think my confusion came from "struct sock" argument,
because these helpers are called with a locked socket.
But the memory accounting is per-proto (and per-cpu after
the blamed commit). We might cleanup these helpers later
to directly accept a "struct proto *proto" argument.

Switch to this_cpu_add_return() and this_cpu_xchg()
operations, and get rid of preempt_disable()/preempt_enable() pairs.

Fast path becomes a bit faster as a result :)

Many thanks to Jonathan Heathcote for his awesome report and
investigations.

Fixes: 3cd3399dd7a8 ("net: implement per-cpu reserves for memory_allocated")
Reported-by: Jonathan Heathcote <jonathan.heathcote@bbc.co.uk>
Closes: https://lore.kernel.org/netdev/VI1PR01MB42407D7947B2EA448F1E04EFD10D2@VI1PR01MB4240.eurprd01.prod.exchangelabs.com/
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Link: https://lore.kernel.org/r/20240421175248.1692552-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
(cherry picked from commit 3584718cf2ec7e79b6814f2596dcf398c5fb2eca)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>

# Conflicts:
#	include/net/sock.h
diff --cc include/net/sock.h
index 94057981f7fc,b4b553df7870..000000000000
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@@ -1485,33 -1408,36 +1485,55 @@@ sk_memory_allocated(const struct sock *

  /* 1 MB per cpu, in page units */
  #define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
 -extern int sysctl_mem_pcpu_rsv;

- static inline void
- sk_memory_allocated_add(struct sock *sk, int amt)
+ static inline void proto_memory_pcpu_drain(struct proto *proto)
  {
- 	int local_reserve;
+ 	int val = this_cpu_xchg(*proto->per_cpu_fw_alloc, 0);

++<<<<<<< HEAD
 +	preempt_disable();
 +	local_reserve = __this_cpu_add_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
 +	if (local_reserve >= SK_MEMORY_PCPU_RESERVE) {
 +		__this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
 +		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
 +	}
 +	preempt_enable();
++=======
+ 	if (val)
+ 		atomic_long_add(val, proto->memory_allocated);
++>>>>>>> 3584718cf2ec (net: fix sk_memory_allocated_{add|sub} vs softirqs)
  }

  static inline void
- sk_memory_allocated_sub(struct sock *sk, int amt)
+ sk_memory_allocated_add(const struct sock *sk, int val)
  {
- 	int local_reserve;
+ 	struct proto *proto = sk->sk_prot;

++<<<<<<< HEAD
 +	preempt_disable();
 +	local_reserve = __this_cpu_sub_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
 +	if (local_reserve <= -SK_MEMORY_PCPU_RESERVE) {
 +		__this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
 +		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
 +	}
 +	preempt_enable();
++=======
+ 	val = this_cpu_add_return(*proto->per_cpu_fw_alloc, val);
+
+ 	if (unlikely(val >= READ_ONCE(sysctl_mem_pcpu_rsv)))
+ 		proto_memory_pcpu_drain(proto);
+ }
+
+ static inline void
+ sk_memory_allocated_sub(const struct sock *sk, int val)
+ {
+ 	struct proto *proto = sk->sk_prot;
+
+ 	val = this_cpu_sub_return(*proto->per_cpu_fw_alloc, val);
+
+ 	if (unlikely(val <= -READ_ONCE(sysctl_mem_pcpu_rsv)))
+ 		proto_memory_pcpu_drain(proto);
++>>>>>>> 3584718cf2ec (net: fix sk_memory_allocated_{add|sub} vs softirqs)
  }

  #define SK_ALLOC_PERCPU_COUNTER_BATCH 16
* Unmerged path include/net/sock.h
