
Commit 9bf96c1

nixprime authored and gvisor-bot committed
mm: get huge pages for unaligned allocations spanning at least one huge page
When page faults trigger pgalloc.MemoryFile allocation, the size of the allocation never exceeds one huge page, and if it is exactly one huge page in size then it will be hugepage-aligned.

However, when pgalloc.MemoryFile allocation is triggered by memory-mapped I/O (e.g. a read() to unfaulted memory), mmap(MAP_POPULATE), mlock(), or MM.Pin() (due to driver activity), the allocated range may be hugepage-unaligned but include one or more aligned huge pages. In these cases, split the allocation into up to three parts: the sub-hugepage subrange at the beginning, the sub-hugepage subrange at the end, and the hugepage-aligned subrange in between, so that the hugepage-aligned subrange can use a hugepage-backed allocation.

PiperOrigin-RevId: 812961653
1 parent 76290f4
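
To make the three-way split described in the commit message concrete, here is a minimal, runnable Go sketch (illustration only, not gVisor code). The addrRange type and the hugeRoundUp/hugeRoundDown helpers are hypothetical stand-ins for gVisor's hostarch.AddrRange and hostarch.Addr.HugeRoundUp/HugeRoundDown, and a 2 MiB huge page size is assumed.

package main

import "fmt"

// Assumed huge page size: 2 MiB (the x86-64 PMD size).
const hugeSize uint64 = 2 << 20

// addrRange is a hypothetical stand-in for hostarch.AddrRange.
type addrRange struct{ start, end uint64 }

// hugeRoundUp and hugeRoundDown are hypothetical stand-ins for
// hostarch.Addr.HugeRoundUp/HugeRoundDown; the real HugeRoundUp also
// reports overflow, which this sketch ignores.
func hugeRoundUp(a uint64) uint64   { return (a + hugeSize - 1) &^ (hugeSize - 1) }
func hugeRoundDown(a uint64) uint64 { return a &^ (hugeSize - 1) }

// splitForHugePages splits ar into up to three parts: an unaligned head,
// a hugepage-aligned middle eligible for hugepage backing, and an
// unaligned tail. Any part may be empty.
func splitForHugePages(ar addrRange) (head, middle, tail addrRange) {
	up, down := hugeRoundUp(ar.start), hugeRoundDown(ar.end)
	if up >= down {
		// ar does not contain a full aligned huge page; no split.
		return ar, addrRange{}, addrRange{}
	}
	return addrRange{ar.start, up}, addrRange{up, down}, addrRange{down, ar.end}
}

func main() {
	// E.g. a read() into unfaulted memory covering [0x1ff000, 0x601000):
	//   head   [0x1ff000, 0x200000) -> small pages
	//   middle [0x200000, 0x600000) -> huge pages
	//   tail   [0x600000, 0x601000) -> small pages
	h, m, t := splitForHugePages(addrRange{0x1ff000, 0x601000})
	fmt.Printf("head=[%#x,%#x) middle=[%#x,%#x) tail=[%#x,%#x)\n",
		h.start, h.end, m.start, m.end, t.start, t.end)
}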

File tree

1 file changed: +28 -1 lines changed

pkg/sentry/mm/pma.go

Lines changed: 28 additions & 1 deletion
@@ -282,7 +282,34 @@ func (mm *MemoryManager) getPMAsInternalLocked(ctx context.Context, vseg vmaIter
 			allocAR := optAR.Intersect(hugeMaskAR)
 			// Don't back stacks with huge pages due to low utilization
 			// and because they're often fragmented by copy-on-write.
-			huge := mm.mf.HugepagesEnabled() && allocAR.IsHugePageAligned() && !vma.growsDown && !vma.isStack
+			mayHuge := mm.mf.HugepagesEnabled() && !vma.growsDown && !vma.isStack
+			huge := false
+			if mayHuge {
+				if allocAR.IsHugePageAligned() {
+					huge = true
+				} else {
+					startHugeRoundUp, ok := allocAR.Start.HugeRoundUp()
+					endHugeRoundDown := allocAR.End.HugeRoundDown()
+					if ok && startHugeRoundUp < endHugeRoundDown {
+						// allocAR contains at least one full aligned
+						// huge page.
+						if allocAR.Start != startHugeRoundUp {
+							// Shorten allocAR to exclude full huge
+							// pages, so that a later iteration of this
+							// loop can allocate huge pages for the
+							// hugepage-aligned subrange.
+							allocAR.End = startHugeRoundUp
+						} else {
+							// Shorten allocAR to include only full
+							// huge pages, so that a later iteration of
+							// this loop can allocate small pages for
+							// the sub-hugepage tail of allocAR.
+							allocAR.End = endHugeRoundDown
+							huge = true
+						}
+					}
+				}
+			}
 			allocOpts := pgalloc.AllocOpts{
 				Kind:    usage.Anonymous,
 				MemCgID: memCgID,
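
Design note: the diff does not perform all three allocations in one shot. Instead it shortens allocAR and relies on the enclosing loop in getPMAsInternalLocked to pick up the remainder on later passes. The following sketch mimics that consumption pattern; it reuses the hypothetical addrRange, hugeSize, hugeRoundUp, and hugeRoundDown definitions from the earlier sketch (same package, plus fmt) and is likewise an illustration, not the actual implementation.

// allocateAll mimics how successive loop passes consume ar: each pass
// allocates one subrange (head, middle, or tail) and the loop continues
// with whatever remains.
func allocateAll(ar addrRange, mayHuge bool) {
	for ar.start < ar.end {
		cur := ar
		huge := false
		if mayHuge {
			startAligned := cur.start%hugeSize == 0
			endAligned := cur.end%hugeSize == 0
			if startAligned && endAligned {
				huge = true
			} else if up, down := hugeRoundUp(cur.start), hugeRoundDown(cur.end); up < down {
				// cur spans at least one full aligned huge page.
				if !startAligned {
					cur.end = up // head: small pages this pass
				} else {
					cur.end = down // middle: huge pages this pass
					huge = true
				}
			}
		}
		fmt.Printf("allocate [%#x,%#x) huge=%v\n", cur.start, cur.end, huge)
		ar.start = cur.end // remainder is handled on the next pass
	}
}

Calling allocateAll(addrRange{0x1ff000, 0x601000}, true) prints three allocations: the small-page head, the hugepage-backed middle, and the small-page tail, matching the "up to three parts" in the commit message; a range with no full aligned huge page is allocated in a single small-page pass.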
