
Commit 100c8a5

udmabuf: pre-fault when first page fault
JIRA: https://issues.redhat.com/browse/RHEL-89519

commit f0bbcc2
Author: Huan Yang <link@vivo.com>
Date: Wed, 18 Sep 2024 10:52:24 +0800

udmabuf: pre-fault when first page fault

The current udmabuf mmap only fills physical memory into the corresponding virtual address when the user actually accesses that address. However, udmabuf has already obtained and pinned the folios by the time creation completes. This means the physical memory has already been acquired rather than being accessed on demand, so the page fault has lost its purpose as demand paging.

Because a page fault traps into kernel mode to fill in the page table whenever a virtual address in the mapping is first accessed, this represents considerable overhead when a large udmabuf is created.

This patch fills the faulting pfn into the page table and then pre-faults every remaining pfn of the VMA on that first access.

Note that if anything goes wrong, we do not return an error during this pre-fault step. An error is only returned if the failure occurs when the address is truly accessed.

Suggested-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Huan Yang <link@vivo.com>
Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240918025238.2957823-2-link@vivo.com

Signed-off-by: Waiman Long <longman@redhat.com>
Parent: 8dd580c
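For context, the following is a minimal userspace sketch (not part of this commit) of the mmap path this patch optimizes. It uses the udmabuf uAPI from <linux/udmabuf.h> (/dev/udmabuf, UDMABUF_CREATE); the 64 MiB buffer size is illustrative and error handling is omitted for brevity.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
	const size_t size = 64UL << 20;		/* 64 MiB, illustrative */
	long page = sysconf(_SC_PAGESIZE);

	/* udmabuf must be backed by a memfd sealed against shrinking */
	int memfd = memfd_create("udmabuf-backing", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	struct udmabuf_create create = {
		.memfd  = memfd,
		.flags  = UDMABUF_FLAGS_CLOEXEC,
		.offset = 0,
		.size   = size,
	};
	int devfd = open("/dev/udmabuf", O_RDWR);
	int buffd = ioctl(devfd, UDMABUF_CREATE, &create); /* dma-buf fd */

	unsigned char *map = mmap(NULL, size, PROT_READ | PROT_WRITE,
				  MAP_SHARED, buffd, 0);

	/*
	 * First access: a single fault into udmabuf_vm_fault(). With this
	 * patch, that one fault also pre-faults the rest of the VMA, so
	 * the loop below no longer takes a fault per page.
	 */
	map[0] = 1;
	for (size_t off = page; off < size; off += page)
		map[off] = 1;

	munmap(map, size);
	close(buffd);
	close(devfd);
	close(memfd);
	return 0;
}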

drivers/dma-buf/udmabuf.c

Lines changed: 31 additions & 2 deletions
@@ -43,15 +43,44 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct udmabuf *ubuf = vma->vm_private_data;
 	pgoff_t pgoff = vmf->pgoff;
-	unsigned long pfn;
+	unsigned long addr, pfn;
+	vm_fault_t ret;
 
 	if (pgoff >= ubuf->pagecount)
 		return VM_FAULT_SIGBUS;
 
 	pfn = folio_pfn(ubuf->folios[pgoff]);
 	pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
 
-	return vmf_insert_pfn(vma, vmf->address, pfn);
+	ret = vmf_insert_pfn(vma, vmf->address, pfn);
+	if (ret & VM_FAULT_ERROR)
+		return ret;
+
+	/* pre fault */
+	pgoff = vma->vm_pgoff;
+	addr = vma->vm_start;
+
+	for (; addr < vma->vm_end; pgoff++, addr += PAGE_SIZE) {
+		if (addr == vmf->address)
+			continue;
+
+		if (WARN_ON(pgoff >= ubuf->pagecount))
+			break;
+
+		pfn = folio_pfn(ubuf->folios[pgoff]);
+		pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
+
+		/**
+		 * If the below vmf_insert_pfn() fails, we do not return an
+		 * error here during this pre-fault step. However, an error
+		 * will be returned if the failure occurs when the addr is
+		 * truly accessed.
+		 */
+		if (vmf_insert_pfn(vma, addr, pfn) & VM_FAULT_ERROR)
+			break;
+	}
+
+	return ret;
 }
 
 static const struct vm_operations_struct udmabuf_vm_ops = {
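For reference, vmf_insert_pfn() is only legal here because udmabuf's mmap hook marks the VMA as a pfn map. Below is a sketch of that hook, paraphrased from the driver around the time of this change; the exact flag set may differ between kernel versions.

static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
{
	struct udmabuf *ubuf = buf->priv;

	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
		return -EINVAL;

	vma->vm_ops = &udmabuf_vm_ops;
	vma->vm_private_data = ubuf;
	/* vmf_insert_pfn() requires VM_PFNMAP to be set on the vma */
	vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
	return 0;
}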
