
Commit efe1a53

Committed by: Brian Foster
mm: convert pagecache_isize_extended to use a folio
JIRA: https://issues.redhat.com/browse/RHEL-109217

commit 2ebe90d
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Fri Apr 5 19:00:36 2024 +0100

    mm: convert pagecache_isize_extended to use a folio

    Remove four hidden calls to compound_head().  Also exit early if the
    filesystem block size is >= PAGE_SIZE instead of just equal to
    PAGE_SIZE.

    Link: https://lkml.kernel.org/r/20240405180038.2618624-1-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Brian Foster <bfoster@redhat.com>
1 parent 7d16a2c commit efe1a53

File tree: 1 file changed (+17 −19 lines)


mm/truncate.c

Lines changed: 17 additions & 19 deletions

@@ -766,15 +766,15 @@ EXPORT_SYMBOL(truncate_setsize);
  * @from: original inode size
  * @to: new inode size
  *
- * Handle extension of inode size either caused by extending truncate or by
- * write starting after current i_size. We mark the page straddling current
- * i_size RO so that page_mkwrite() is called on the nearest write access to
- * the page. This way filesystem can be sure that page_mkwrite() is called on
- * the page before user writes to the page via mmap after the i_size has been
- * changed.
+ * Handle extension of inode size either caused by extending truncate or
+ * by write starting after current i_size. We mark the page straddling
+ * current i_size RO so that page_mkwrite() is called on the first
+ * write access to the page. The filesystem will update its per-block
+ * information before user writes to the page via mmap after the i_size
+ * has been changed.
  *
  * The function must be called after i_size is updated so that page fault
- * coming after we unlock the page will already see the new i_size.
+ * coming after we unlock the folio will already see the new i_size.
  * The function must be called while we still hold i_rwsem - this not only
  * makes sure i_size is stable but also that userspace cannot observe new
  * i_size value before we are prepared to store mmap writes at new inode size.
@@ -783,31 +783,29 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to)
 {
 	int bsize = i_blocksize(inode);
 	loff_t rounded_from;
-	struct page *page;
-	pgoff_t index;
+	struct folio *folio;
 
 	WARN_ON(to > inode->i_size);
 
-	if (from >= to || bsize == PAGE_SIZE)
+	if (from >= to || bsize >= PAGE_SIZE)
 		return;
 	/* Page straddling @from will not have any hole block created? */
 	rounded_from = round_up(from, bsize);
 	if (to <= rounded_from || !(rounded_from & (PAGE_SIZE - 1)))
 		return;
 
-	index = from >> PAGE_SHIFT;
-	page = find_lock_page(inode->i_mapping, index);
-	/* Page not cached? Nothing to do */
-	if (!page)
+	folio = filemap_lock_folio(inode->i_mapping, from / PAGE_SIZE);
+	/* Folio not cached? Nothing to do */
+	if (IS_ERR(folio))
 		return;
 	/*
-	 * See clear_page_dirty_for_io() for details why set_page_dirty()
+	 * See folio_clear_dirty_for_io() for details why folio_mark_dirty()
 	 * is needed.
 	 */
-	if (page_mkclean(page))
-		set_page_dirty(page);
-	unlock_page(page);
-	put_page(page);
+	if (folio_mkclean(folio))
+		folio_mark_dirty(folio);
+	folio_unlock(folio);
+	folio_put(folio);
 }
 EXPORT_SYMBOL(pagecache_isize_extended);
813811
