
Commit 1ce6473

ioworker0 authored and akpm00 committed
mm/thp: fix MTE tag mismatch when replacing zero-filled subpages
When both THP and MTE are enabled, splitting a THP and replacing its zero-filled subpages with the shared zeropage can cause MTE tag mismatch faults in userspace. Remapping zero-filled subpages to the shared zeropage is unsafe, as the zeropage has a fixed tag of zero, which may not match the tag expected by the userspace pointer.

KSM already avoids this problem by using memcmp_pages(), which on arm64 intentionally reports MTE-tagged pages as non-identical to prevent unsafe merging.

As suggested by David [1], this patch adopts the same pattern, replacing the memchr_inv() byte-level check with a call to pages_identical(). This leverages existing architecture-specific logic to determine whether a page is truly identical to the shared zeropage.

Having both the THP shrinker and KSM rely on pages_identical() also makes the design more future-proof: instead of handling quirks in generic code, the architecture decides what makes two pages identical.

[1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com

Link: https://lkml.kernel.org/r/20250922021458.68123-1-lance.yang@linux.dev
Fixes: b1f2020 ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Signed-off-by: Lance Yang <lance.yang@linux.dev>
Reported-by: Qun-wei Lin <Qun-wei.Lin@mediatek.com>
Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@mediatek.com
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Usama Arif <usamaarif642@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: andrew.yang <andrew.yang@mediatek.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Charlie Jenkins <charlie@rivosinc.com>
Cc: Chinwen Chang <chinwen.chang@mediatek.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: "Huang, Ying" <ying.huang@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Kairui Song <ryncsn@gmail.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Mariano Pache <npache@redhat.com>
Cc: Mathew Brost <matthew.brost@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Palmer Dabbelt <palmer@rivosinc.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Samuel Holland <samuel.holland@sifive.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent fcc0669 commit 1ce6473

2 files changed: 4 additions & 19 deletions

mm/huge_memory.c

Lines changed: 3 additions & 12 deletions
@@ -4104,32 +4104,23 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
 static bool thp_underused(struct folio *folio)
 {
 	int num_zero_pages = 0, num_filled_pages = 0;
-	void *kaddr;
 	int i;
 
 	if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
 		return false;
 
 	for (i = 0; i < folio_nr_pages(folio); i++) {
-		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
-		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
-			num_zero_pages++;
-			if (num_zero_pages > khugepaged_max_ptes_none) {
-				kunmap_local(kaddr);
+		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
+			if (++num_zero_pages > khugepaged_max_ptes_none)
 				return true;
-			}
 		} else {
 			/*
 			 * Another path for early exit once the number
 			 * of non-zero filled pages exceeds threshold.
 			 */
-			num_filled_pages++;
-			if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-				kunmap_local(kaddr);
+			if (++num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none)
 				return false;
-			}
 		}
-		kunmap_local(kaddr);
 	}
 	return false;
 }

mm/migrate.c

Lines changed: 1 addition & 7 deletions
@@ -300,9 +300,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 					  unsigned long idx)
 {
 	struct page *page = folio_page(folio, idx);
-	bool contains_data;
 	pte_t newpte;
-	void *addr;
 
 	if (PageCompound(page))
 		return false;
@@ -319,11 +317,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 	 * this subpage has been non present. If the subpage is only zero-filled
 	 * then map it to the shared zeropage.
 	 */
-	addr = kmap_local_page(page);
-	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
-	kunmap_local(addr);
-
-	if (contains_data)
+	if (!pages_identical(page, ZERO_PAGE(0)))
 		return false;
 
 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
