Commit 5fbded5
mm: migrate high-order folios in swap cache correctly
commit fc346d0
Author: Charan Teja Kalla <quic_charante@quicinc.com>
Date:   Thu Dec 14 04:58:41 2023 +0000

    mm: migrate high-order folios in swap cache correctly

    Large folios occupy N consecutive entries in the swap cache instead of
    using multi-index entries like the page cache.  However, if a large
    folio is re-added to the LRU list, it can be migrated.  The migration
    code was not aware of the difference between the swap cache and the
    page cache and assumed that a single xas_store() would be sufficient.

    This leaves potentially many stale pointers to the now-migrated folio
    in the swap cache, which can lead to almost arbitrary data corruption
    in the future.  This can also manifest as infinite loops with the RCU
    read lock held.

    [willy@infradead.org: modifications to the changelog & tweaked the fix]
    Fixes: 3417013 ("mm/migrate: Add folio_migrate_mapping()")
    Link: https://lkml.kernel.org/r/20231214045841.961776-1-willy@infradead.org
    Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Reported-by: Charan Teja Kalla <quic_charante@quicinc.com>
    Closes: https://lkml.kernel.org/r/1700569840-17327-1-git-send-email-quic_charante@quicinc.com
    Cc: David Hildenbrand <david@redhat.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

JIRA: https://issues.redhat.com/browse/RHEL-23654
Signed-off-by: Nico Pache <npache@redhat.com>
1 parent a7a48e2 commit 5fbded5
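To make the failure mode concrete, here is a minimal userspace sketch, not kernel code: the array stands in for the swap cache and the pointer values are made up. An order-2 folio occupies 4 consecutive slots, so repointing only slot 0 on migration (one store, as the pre-fix code did) leaves 3 stale pointers to the old folio:

/*
 * Minimal userspace sketch of the bug; the swap cache and the folio
 * addresses here are stand-ins, not kernel structures.
 */
#include <stdio.h>

#define NR_SLOTS 4	/* folio_nr_pages() for an order-2 folio */

int main(void)
{
	void *old_folio = (void *)0x1000;	/* made-up addresses */
	void *new_folio = (void *)0x2000;
	void *swap_cache[NR_SLOTS];
	int i;

	/* A large folio occupies N consecutive swap cache entries. */
	for (i = 0; i < NR_SLOTS; i++)
		swap_cache[i] = old_folio;

	/* Buggy migration: a single store updates only the first slot. */
	swap_cache[0] = new_folio;

	for (i = 0; i < NR_SLOTS; i++)
		printf("slot %d: %s\n", i,
		       swap_cache[i] == new_folio ? "migrated" : "STALE");
	return 0;
}

The diff below fixes exactly this by storing the new folio into all N slots.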

File tree

1 file changed: +8 -1 lines changed

mm/migrate.c

Lines changed: 8 additions & 1 deletion
@@ -396,6 +396,7 @@ int folio_migrate_mapping(struct address_space *mapping,
 	int dirty;
 	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
 	long nr = folio_nr_pages(folio);
+	long entries, i;
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -433,8 +434,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 			folio_set_swapcache(newfolio);
 			newfolio->private = folio_get_private(folio);
 		}
+		entries = nr;
 	} else {
 		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+		entries = 1;
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
@@ -444,7 +447,11 @@ int folio_migrate_mapping(struct address_space *mapping,
 		folio_set_dirty(newfolio);
 	}
 
-	xas_store(&xas, newfolio);
+	/* Swap cache still stores N entries instead of a high-order entry */
+	for (i = 0; i < entries; i++) {
+		xas_store(&xas, newfolio);
+		xas_next(&xas);
+	}
 
 	/*
 	 * Drop cache reference from old page by unfreezing
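The store loop is small enough to read in isolation. Below is a minimal sketch, assuming a caller-supplied XArray and taking the lock locally (in folio_migrate_mapping() itself the i_pages lock is already held with the folio's refcount frozen); repoint_swap_slots() is a hypothetical name for illustration, while XA_STATE(), xas_store() and xas_next() are the real XArray cursor operations the fix uses:

#include <linux/xarray.h>
#include <linux/mm_types.h>

/* Hypothetical standalone helper; not part of the fix itself. */
static void repoint_swap_slots(struct xarray *xa, pgoff_t first,
			       long entries, struct folio *newfolio)
{
	XA_STATE(xas, xa, first);	/* cursor at the folio's first slot */
	long i;

	xas_lock(&xas);
	for (i = 0; i < entries; i++) {
		xas_store(&xas, newfolio);	/* repoint slot i */
		xas_next(&xas);			/* advance to slot i + 1 */
	}
	xas_unlock(&xas);
}

For the page cache, entries is 1 and the loop degenerates to the original single xas_store(); only swap-cache folios, which occupy folio_nr_pages() consecutive slots, take further iterations.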
