From be62e2e4f5b32e61f2a087aea8c67ca24cbb07ed Mon Sep 17 00:00:00 2001
From: Mel Gorman
Date: Sun, 28 Apr 2024 10:01:47 +0800
Subject: [PATCH 1/2] mm/page_alloc: always attempt to allocate at least one
 page during bulk allocation

stable inclusion
from stable-v5.15.46
commit fb49bd85dfac8c25dffa4b79825570e4c63a41b3
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I9JPDJ
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fb49bd85dfac8c25dffa4b79825570e4c63a41b3

--------------------------------

commit c572e4888ad1be123c1516ec577ad30a700bbec4 upstream.

Peter Pavlisko reported the following problem on kernel bugzilla 216007.

	When I try to extract an uncompressed tar archive (2.6 million
	files, 760.3 GiB in size) on newly created (empty) XFS file
	system, after first low tens of gigabytes extracted the process
	hangs in iowait indefinitely. One CPU core is 100% occupied with
	iowait, the other CPU core is idle (on 2-core Intel Celeron
	G1610T).

It was bisected to c9fa563072e1 ("xfs: use alloc_pages_bulk_array() for
buffers") but XFS is only the messenger. The problem is that nothing is
waking kswapd to reclaim pages at a time when the PCP lists cannot be
refilled until some reclaim happens. The bulk allocator checks whether
there are any pages in the array, and the original intent was that a
bulk allocator did not necessarily need all the requested pages and it
was best to return as quickly as possible.

This was fine for the first user of the API, but both NFS and XFS
require the requested number of pages be available before making
progress. Both could be adjusted to call the page allocator directly if
a bulk allocation fails, but that puts a burden on users of the API.

Adjust the semantics to attempt at least one allocation via
__alloc_pages() before returning, so that kswapd is woken if necessary.
It was reported via bugzilla that the patch addressed the problem and
that the tar extraction completed successfully. This may also address
bug 215975, but that has yet to be confirmed.

BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=216007
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215975
Link: https://lkml.kernel.org/r/20220526091210.GC3441@techsingularity.net
Fixes: 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")
Signed-off-by: Mel Gorman
Cc: "Darrick J. Wong"
Cc: Dave Chinner
Cc: Jan Kara
Cc: Vlastimil Babka
Cc: Jesper Dangaard Brouer
Cc: Chuck Lever
Cc: <stable@vger.kernel.org> [5.13+]
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman

Conflicts:
	mm/page_alloc.c
[Ma Wupeng: partially add nr_account from commit 3e23060b2d0b]
Signed-off-by: Ma Wupeng
---
 mm/page_alloc.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d33f1aa9a354..1ba392f11e6bf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5121,7 +5121,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	struct alloc_context ac;
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
-	int nr_populated = 0;
+	int nr_populated = 0, nr_account = 0;
 
 	/*
 	 * Skip populated array elements to determine if any pages need
@@ -5206,11 +5206,12 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
 								pcp, pcp_list);
 		if (unlikely(!page)) {
-			/* Try and get at least one page */
-			if (!nr_populated)
+			/* Try and allocate at least one page */
+			if (!nr_account)
 				goto failed_irq;
 			break;
 		}
+		nr_account++;
 
 		/*
 		 * Ideally this would be batched but the best way to do
--
Gitee
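
Why checking nr_account rather than nr_populated matters is easier to see
outside the kernel. Below is a minimal userspace sketch of the fixed
semantics, not kernel code: fast_alloc(), slow_alloc() and bulk_alloc() are
hypothetical stand-ins for the PCP-list fast path, the __alloc_pages()
fallback and __alloc_pages_bulk(). With the old "if (!nr_populated)" check,
an array the caller had partially pre-populated let the function return
without allocating anything new, so kswapd was never woken; counting only
the pages allocated by this call guarantees at least one trip through the
slow path.

	/*
	 * Minimal userspace model of the fixed semantics; hypothetical
	 * names only, none of these functions exist in the kernel.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define CACHE_SIZE 4

	static void *cache[CACHE_SIZE];	/* stand-in for the PCP free list */
	static int cache_count;

	/* Fast path: succeeds only while the cache has pages, like the PCP list. */
	static void *fast_alloc(void)
	{
		return cache_count > 0 ? cache[--cache_count] : NULL;
	}

	/* Slow path: models __alloc_pages(), which also wakes kswapd. */
	static void *slow_alloc(void)
	{
		return malloc(4096);
	}

	/* Model of __alloc_pages_bulk() after the fix. */
	static int bulk_alloc(void **array, int nr_pages)
	{
		void *page;
		int nr_populated = 0, nr_account = 0;

		/* Skip slots the caller already filled, as the kernel code does. */
		while (nr_populated < nr_pages && array[nr_populated])
			nr_populated++;

		while (nr_populated < nr_pages) {
			page = fast_alloc();
			if (!page) {
				/* Try and allocate at least one page */
				if (!nr_account)
					goto failed;
				break;
			}
			nr_account++;
			array[nr_populated++] = page;
		}
		return nr_populated;

	failed:
		/*
		 * Fall back to the slow path for a single page. Checking
		 * nr_account instead of nr_populated is what guarantees we
		 * get here even when the caller pre-populated some slots.
		 */
		page = slow_alloc();
		if (page)
			array[nr_populated++] = page;
		return nr_populated;
	}

	int main(void)
	{
		void *pages[8] = { NULL };

		/*
		 * Pre-populate one slot: before the fix, this alone satisfied
		 * the "!nr_populated" check and the call could return having
		 * allocated nothing, with the cache still empty.
		 */
		pages[0] = malloc(4096);

		printf("populated %d of 8 slots\n", bulk_alloc(pages, 8));
		return 0;
	}

Run with an empty cache, the old check would have returned immediately with
only the pre-populated slot; this sketch returns two populated slots because
the slow path is attempted once before giving up.
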
Wong" Cc: Dave Chinner Cc: Jan Kara Cc: Vlastimil Babka Cc: Jesper Dangaard Brouer Cc: Chuck Lever Cc: [5.13+] Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman Conflicts: mm/page_alloc.c [Ma Wupeng: partial add nr_account from commit 3e23060b2d0b] Signed-off-by: Ma Wupeng --- mm/page_alloc.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 2d33f1aa9a354..1ba392f11e6bf 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5121,7 +5121,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, struct alloc_context ac; gfp_t alloc_gfp; unsigned int alloc_flags = ALLOC_WMARK_LOW; - int nr_populated = 0; + int nr_populated = 0, nr_account = 0; /* * Skip populated array elements to determine if any pages need @@ -5206,11 +5206,12 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags, pcp, pcp_list); if (unlikely(!page)) { - /* Try and get at least one page */ - if (!nr_populated) + /* Try and allocate at least one page */ + if (!nr_account) goto failed_irq; break; } + nr_account++; /* * Ideally this would be batched but the best way to do -- Gitee From 67fee6e7b0f285d7eebe8607ac95ab30cc094d4a Mon Sep 17 00:00:00 2001 From: Miaohe Lin Date: Sun, 28 Apr 2024 10:01:48 +0800 Subject: [PATCH 2/2] mm/madvise: fix potential pte_unmap_unlock pte error mainline inclusion from mainline-v5.19-rc1 commit f3b9e8cc8b09ba3b41bb068c24a1061e8a70d26f category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I9JPDJ Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f3b9e8cc8b09ba3b41bb068c24a1061e8a70d26f -------------------------------- We can't assume pte_offset_map_lock will return same orig_pte value. So it's necessary to reacquire the orig_pte or pte_unmap_unlock will unmap the stale pte. Link: https://lkml.kernel.org/r/20220416081416.23304-1-linmiaohe@huawei.com Fixes: 9c276cc65a58 ("mm: introduce MADV_COLD") Fixes: 854e9ed09ded ("mm: support madvise(MADV_FREE)") Signed-off-by: Miaohe Lin Cc: Johannes Weiner Cc: Michal Hocko Cc: Hugh Dickins Signed-off-by: Andrew Morton Signed-off-by: Ma Wupeng --- mm/madvise.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/madvise.c b/mm/madvise.c index 2b344256d4be7..926bf4523befc 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -451,12 +451,12 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, if (split_huge_page(page)) { unlock_page(page); put_page(page); - pte_offset_map_lock(mm, pmd, addr, &ptl); + orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl); break; } unlock_page(page); put_page(page); - pte = pte_offset_map_lock(mm, pmd, addr, &ptl); + orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl); pte--; addr -= PAGE_SIZE; continue; @@ -669,12 +669,12 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr, if (split_huge_page(page)) { unlock_page(page); put_page(page); - pte_offset_map_lock(mm, pmd, addr, &ptl); + orig_pte = pte_offset_map_lock(mm, pmd, addr, &ptl); goto out; } unlock_page(page); put_page(page); - pte = pte_offset_map_lock(mm, pmd, addr, &ptl); + orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl); pte--; addr -= PAGE_SIZE; continue; -- Gitee