mm/migrate.c: migrate PG_readahead flag
authorYang Shi <yang.shi@linux.alibaba.com>
Tue, 7 Apr 2020 03:04:21 +0000 (20:04 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Tue, 7 Apr 2020 17:43:38 +0000 (10:43 -0700)
Currently the migration code doesn't migrate the PG_readahead flag.
Theoretically this incurs a slight performance loss, as the application
might have to ramp its readahead back up again.  Even when the problem
does occur, it is likely masked by something else, since migration is
typically triggered by compaction or NUMA balancing, either of which
should be more noticeable.

Migrate the flag after end_page_writeback(), since that call may clear
the PG_reclaim flag on the new page, and PG_reclaim shares its bit with
PG_readahead.
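
For illustration, here is a minimal userspace sketch of the hazard (the
flag names mirror include/linux/page-flags.h, where PG_readahead is
defined as an alias of the PG_reclaim bit; the numeric bit value below
is made up for the example):

    #include <stdio.h>

    /* Hypothetical stand-in for the kernel's page flag bits:
     * PG_readahead is the same bit as PG_reclaim, as in
     * include/linux/page-flags.h.  The value 18 is illustrative. */
    enum pageflags {
            PG_reclaim = 18,
            PG_readahead = PG_reclaim,      /* one bit, two meanings */
    };

    int main(void)
    {
            unsigned long flags = 1UL << PG_readahead; /* readahead set */

            /* end_page_writeback() may clear PG_reclaim on the new
             * page; because the bit is shared, PG_readahead is wiped
             * along with it. */
            flags &= ~(1UL << PG_reclaim);

            printf("readahead still set? %s\n",
                   (flags & (1UL << PG_readahead)) ? "yes" : "no");
            return 0;
    }

This is why the patch below sets PG_readahead on the new page only
after end_page_writeback() has run.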

[akpm@linux-foundation.org: tweak comment]
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Link: http://lkml.kernel.org/r/1581640185-95731-1-git-send-email-yang.shi@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/migrate.c

index c550230..1a20550 100644
@@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
        if (PageWriteback(newpage))
                end_page_writeback(newpage);
 
+       /*
+        * PG_readahead shares the same bit with PG_reclaim.  The above
+        * end_page_writeback() may clear PG_readahead mistakenly, so set the
+        * bit after that.
+        */
+       if (PageReadahead(page))
+               SetPageReadahead(newpage);
+
        copy_page_owner(page, newpage);
 
        mem_cgroup_migrate(page, newpage);