mm: optimise madvise WILLNEED
author	Matthew Wilcox (Oracle) <willy@infradead.org>
	Tue, 13 Oct 2020 23:51:24 +0000 (16:51 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Wed, 14 Oct 2020 01:38:29 +0000 (18:38 -0700)
Instead of calling find_get_entry() for every page index, use an XArray
iterator to skip over NULL entries, and avoid calling get_page(),
because we only want the swap entries.

[willy@infradead.org: fix LTP soft lockups]
Link: https://lkml.kernel.org/r/20200914165032.GS6583@casper.infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Qian Cai <cai@redhat.com>
Link: https://lkml.kernel.org/r/20200910183318.20139-4-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/madvise.c b/mm/madvise.c
index 0e0d610..9b065d4 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -224,25 +224,28 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
                unsigned long start, unsigned long end,
                struct address_space *mapping)
 {
-       pgoff_t index;
+       XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
+       pgoff_t end_index = end / PAGE_SIZE;
        struct page *page;
-       swp_entry_t swap;
 
-       for (; start < end; start += PAGE_SIZE) {
-               index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+       rcu_read_lock();
+       xas_for_each(&xas, page, end_index) {
+               swp_entry_t swap;
 
-               page = find_get_entry(mapping, index);
-               if (!xa_is_value(page)) {
-                       if (page)
-                               put_page(page);
+               if (!xa_is_value(page))
                        continue;
-               }
+               xas_pause(&xas);
+               rcu_read_unlock();
+
                swap = radix_to_swp_entry(page);
                page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
                                                        NULL, 0, false);
                if (page)
                        put_page(page);
+
+               rcu_read_lock();
        }
+       rcu_read_unlock();
 
        lru_add_drain();        /* Push any new pages onto the LRU now */
 }