Commit 04be1a8

Author: Maxim Levitsky
Revert "readahead: properly shorten readahead when falling back to do_page_cache_ra()"
JIRA: https://issues.redhat.com/browse/RHEL-55724
JIRA: https://issues.redhat.com/browse/RHEL-56929
CVE: CVE-2024-57839

commit a220d6b
Author: Jan Kara <jack@suse.cz>
Date:   Tue Nov 26 15:52:08 2024 +0100

    Revert "readahead: properly shorten readahead when falling back to
    do_page_cache_ra()"

    This reverts commit 7c87758.

    Anders and Philippe have reported that recent kernels occasionally hang
    in the readahead code when used with NFS. The problem has been bisected
    to 7c87758 ("readahead: properly shorten readahead when falling back to
    do_page_cache_ra()"). The cause of the problem is that ra->size can be
    shrunk by a read_pages() call, after which we end up calling
    do_page_cache_ra() with a negative (read: huge positive) number of
    pages. Let's revert 7c87758 for now, until we find a proper way for the
    logic in read_pages() and page_cache_ra_order() to coexist. This can
    lead to reduced readahead throughput due to readahead window confusion,
    but that is better than outright hangs.

    Link: https://lkml.kernel.org/r/20241126145208.985-1-jack@suse.cz
    Fixes: 7c87758 ("readahead: properly shorten readahead when falling back to do_page_cache_ra()")
    Reported-by: Anders Blomdell <anders.blomdell@gmail.com>
    Reported-by: Philippe Troin <phil@fifi.org>
    Signed-off-by: Jan Kara <jack@suse.cz>
    Tested-by: Philippe Troin <phil@fifi.org>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
1 parent 91ed0af commit 04be1a8

1 file changed

mm/readahead.c

Lines changed: 2 additions & 3 deletions
@@ -453,8 +453,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t start = readahead_index(ractl);
-	pgoff_t index = start;
+	pgoff_t index = readahead_index(ractl);
 	unsigned int min_order = mapping_min_folio_order(mapping);
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
@@ -517,7 +516,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }

 static unsigned long ractl_max_pages(struct readahead_control *ractl,
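
To make the reverted bug concrete: pgoff_t and the readahead size are unsigned, so once read_pages() shrinks ra->size below the number of pages the loop has already consumed, the expression ra->size - (index - start) wraps around instead of going negative. The following stand-alone user-space sketch uses made-up values and simplified variable names (it is not kernel code); only the arithmetic mirrors the do_page_cache_ra() call removed above.

#include <stdio.h>

int main(void)
{
	/* Made-up values for illustration only. */
	unsigned long start   = 100;	/* readahead window start (page index)      */
	unsigned long index   = 108;	/* how far the large-folio loop advanced    */
	unsigned long ra_size = 16;	/* requested readahead window size in pages */

	/* Suppose read_pages() shrank the window behind the caller's back: */
	ra_size = 4;

	/* The reverted fallback computed the "remaining" page count this way;
	 * 4 - 8 wraps around because the types are unsigned. */
	unsigned long nr_to_read = ra_size - (index - start);

	printf("nr_to_read = %lu\n", nr_to_read);	/* huge positive value */
	return 0;
}

Asking do_page_cache_ra() to read that huge page count is what produced the hangs bisected to 7c87758; reverting to plain ra->size removes the underflow at the cost of a possibly oversized readahead window.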
