mm/memory: fix IO cost for anonymous page
author    Joonsoo Kim <iamjoonsoo.kim@lge.com>
          Fri, 26 Jun 2020 03:30:37 +0000 (20:30 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Fri, 26 Jun 2020 07:27:38 +0000 (00:27 -0700)
With a synchronous IO swap device, swap-in is handled directly in the
fault code.  Since the IO cost is not noted there, LRU balancing can be
wrongly biased for such devices.  Fix this by counting the IO cost in
the fault code.
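
For context, the cost accounting added by the Fixes commit keeps
per-lruvec anon_cost/file_cost counters that reclaim compares when
balancing the two LRU lists.  Below is a minimal sketch of the helpers
as they look around v5.8, simplified from mm/swap.c; the real
lru_note_cost() also walks up the memcg hierarchy and periodically
decays the counters, which is omitted here.

/*
 * Illustrative sketch only; see mm/swap.c for the real code.
 * Caller is expected to hold the pgdat lru_lock.
 */
void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
{
	/*
	 * Charge the IO cost to the LRU list the page belongs to;
	 * reclaim compares anon_cost and file_cost to decide which
	 * list to scan harder.
	 */
	if (file)
		lruvec->file_cost += nr_pages;
	else
		lruvec->anon_cost += nr_pages;
}

void lru_note_cost_page(struct page *page)
{
	lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)),
		      page_is_file_lru(page), hpage_nr_pages(page));
}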

Link: http://lkml.kernel.org/r/1592288204-27734-4-git-send-email-iamjoonsoo.kim@lge.com
Fixes: 314b57fb0460 ("mm: balance LRU lists based on relative thrashing")
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/memory.c b/mm/memory.c
index 0e5b25c..87ec87c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3146,6 +3146,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                                        goto out_page;
                                }
 
+                               /*
+                                * XXX: Move to lru_cache_add() when it
+                                * supports new vs putback
+                                */
+                               spin_lock_irq(&page_pgdat(page)->lru_lock);
+                               lru_note_cost_page(page);
+                               spin_unlock_irq(&page_pgdat(page)->lru_lock);
+
                                lru_cache_add(page);
                                swap_readpage(page, true);
                        }
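
As the XXX comment notes, the explicit pgdat lru_lock round-trip here
is a stopgap: the cost accounting would naturally live in
lru_cache_add() once that path can tell newly added pages apart from
pages being put back on the LRU.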