Commit f017899
mm: vmalloc: don't account for number of nodes for HUGE_VMAP allocations
jira LE-4694
Rebuild_History Non-Buildable kernel-6.12.0-55.43.1.el10_0
commit-author Mike Rapoport (Microsoft) <rppt@kernel.org>
commit c82be0b

vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify a node ID will use huge pages only if size_per_node is larger
than a huge page. Still, the actual allocated memory is not distributed
between nodes, so there is no advantage in such an approach. On the
contrary, BPF allocates SZ_2M * num_possible_nodes() for each new
bpf_prog_pack, while it could do with a single huge page per pack.

Don't account for the number of nodes for VM_ALLOW_HUGE_VMAP with
NUMA_NO_NODE and use huge pages whenever the requested allocation size
is larger than a huge page.

Link: https://lkml.kernel.org/r/20241023162711.2579610-3-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: kdevops <kdevops@lists.linux.dev>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Song Liu <song@kernel.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Steven Rostedt (Google) <rostedt@goodmis.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit c82be0b)
Signed-off-by: Jonathan Maple <jmaple@ciq.com>
1 parent 1e03304 commit f017899
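To make the arithmetic the patch removes concrete, here is a minimal
standalone C sketch (userspace, not kernel code; PMD_SIZE_EXAMPLE and
NR_NODES are assumed example values for an x86-64 machine with two
NUMA nodes, not values taken from this kernel). It shows why a single
2 MiB request failed the old per-node test but passes the new
whole-size test:

/*
 * Sketch only: contrasts the old size_per_node heuristic with the
 * new whole-size check. All constants are illustrative assumptions.
 */
#include <stdio.h>

#define PMD_SIZE_EXAMPLE (2UL * 1024 * 1024) /* assumed 2 MiB huge page */
#define SZ_2M            (2UL * 1024 * 1024)
#define NR_NODES         2                   /* assumed two online nodes */

int main(void)
{
	unsigned long size = SZ_2M; /* e.g. one bpf_prog_pack request */

	/* Old heuristic: split the request across online nodes first. */
	unsigned long size_per_node = size / NR_NODES; /* 1 MiB */
	printf("old: huge page? %s\n",
	       size_per_node >= PMD_SIZE_EXAMPLE ? "yes" : "no"); /* no */

	/* New heuristic: compare the full request against the huge page. */
	printf("new: huge page? %s\n",
	       size >= PMD_SIZE_EXAMPLE ? "yes" : "no"); /* yes */
	return 0;
}

With two nodes, the old code halves the 2 MiB request to 1 MiB before
the PMD_SIZE comparison, so the allocation falls back to base pages
even though the memory was never actually distributed between nodes.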

File tree

1 file changed (+2, -7 lines)

mm/vmalloc.c

Lines changed: 2 additions & 7 deletions
@@ -3779,22 +3779,17 @@ void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align,
 	}
 
 	if (vmap_allow_huge && (vm_flags & VM_ALLOW_HUGE_VMAP)) {
-		unsigned long size_per_node;
-
 		/*
 		 * Try huge pages. Only try for PAGE_KERNEL allocations,
 		 * others like modules don't yet expect huge pages in
 		 * their allocations due to apply_to_page_range not
 		 * supporting them.
 		 */
 
-		size_per_node = size;
-		if (node == NUMA_NO_NODE)
-			size_per_node /= num_online_nodes();
-		if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
+		if (arch_vmap_pmd_supported(prot) && size >= PMD_SIZE)
 			shift = PMD_SHIFT;
 		else
-			shift = arch_vmap_pte_supported_shift(size_per_node);
+			shift = arch_vmap_pte_supported_shift(size);
 
 		align = max(real_align, 1UL << shift);
 		size = ALIGN(real_size, 1UL << shift);
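A brief sketch of the rounding in the hunk's trailing context
(align = max(real_align, 1UL << shift); size = ALIGN(real_size,
1UL << shift)): once shift is PMD_SHIFT, the request is aligned and
padded to the huge-page boundary, so a 2 MiB pack maps with exactly
one huge page. The constants below are assumed x86-64 example values,
not kernel exports:

/* Userspace sketch of the rounding after shift selection. */
#include <stdio.h>

#define PMD_SHIFT_EXAMPLE 21UL           /* assumed: log2(2 MiB) */
#define PAGE_SIZE_EXAMPLE (4UL * 1024)   /* assumed base page size */

/* Mirrors the kernel's ALIGN() for power-of-two boundaries. */
static unsigned long align_up(unsigned long x, unsigned long a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	unsigned long shift = PMD_SHIFT_EXAMPLE;
	unsigned long real_align = PAGE_SIZE_EXAMPLE;
	unsigned long real_size = 2UL * 1024 * 1024;

	/* Mirrors: align = max(real_align, 1UL << shift); */
	unsigned long align = real_align > (1UL << shift) ?
			      real_align : (1UL << shift);
	/* Mirrors: size = ALIGN(real_size, 1UL << shift); */
	unsigned long size = align_up(real_size, 1UL << shift);

	/* 2 MiB request -> align 2 MiB, size 2 MiB: one huge page. */
	printf("align=%lu size=%lu\n", align, size);
	return 0;
}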
