mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it
author Baoquan He <bhe@redhat.com>
Wed, 3 Jun 2020 22:58:48 +0000 (15:58 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Thu, 4 Jun 2020 03:09:44 +0000 (20:09 -0700)
Patch series "improvements about lowmem_reserve and /proc/zoneinfo", v2.

This patch (of 3):

When people write to /proc/sys/vm/lowmem_reserve_ratio to change
sysctl_lowmem_reserve_ratio[], setup_per_zone_lowmem_reserve() is called
to recalculate ->lowmem_reserve[] for every zone of every node, as below:

static void setup_per_zone_lowmem_reserve(void)
{
	...
	for_each_online_pgdat(pgdat) {
		for (j = 0; j < MAX_NR_ZONES; j++) {
			...
			while (idx) {
				...
				if (sysctl_lowmem_reserve_ratio[idx] < 1) {
					sysctl_lowmem_reserve_ratio[idx] = 0;
					lower_zone->lowmem_reserve[j] = 0;
				} else {
					...
				}
			}
		}
	}
}

Here, sysctl_lowmem_reserve_ratio[idx] is tuned (set to 0) whenever its
value is smaller than 1.  However, sysctl_lowmem_reserve_ratio[] is a
per-zone-type setting that does not depend on which node a zone belongs
to, so the same tuning is repeated on every online node even though it
has already been done for the first one.
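
The redundancy can be seen with a tiny standalone illustration (not
kernel code; the node and zone counts below are hypothetical): because
the ratio array is indexed by zone type alone, the per-node loop keeps
clamping the very same global entries.

#include <stdio.h>

#define NR_ONLINE_NODES	4	/* hypothetical number of online nodes */
#define MAX_NR_ZONES	4	/* hypothetical number of zone types */

/* one entry per zone type, shared by all nodes */
static int lowmem_reserve_ratio[MAX_NR_ZONES] = { 256, 256, 32, 0 };

int main(void)
{
	int node, idx, clamps = 0;

	/* stands in for for_each_online_pgdat() */
	for (node = 0; node < NR_ONLINE_NODES; node++) {
		for (idx = 0; idx < MAX_NR_ZONES; idx++) {
			if (lowmem_reserve_ratio[idx] < 1) {
				/* same global entry, clamped once per node */
				lowmem_reserve_ratio[idx] = 0;
				clamps++;
			}
		}
	}
	printf("clamp operations: %d\n", clamps);	/* prints 4, not 1 */
	return 0;
}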

The tuning also happens when init_per_zone_wmark_min() calls
setup_per_zone_lowmem_reserve() at boot time, even though nobody is
changing sysctl_lowmem_reserve_ratio[] there.
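
For reference, a simplified sketch of that boot-time caller (heavily
trimmed; the real init_per_zone_wmark_min() does more work around these
calls):

int __meminit init_per_zone_wmark_min(void)
{
	/* ... compute min_free_kbytes and set per-zone watermarks ... */
	setup_per_zone_wmarks();
	setup_per_zone_lowmem_reserve();	/* clamps the ratios as a side effect */
	/* ... */
	return 0;
}
core_initcall(init_per_zone_wmark_min)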

Move the tuning into lowmem_reserve_ratio_sysctl_handler() so that it is
done only once, when the sysctl value is actually changed.  This makes
the code logic more reasonable.

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Rientjes <rientjes@google.com>
Link: http://lkml.kernel.org/r/20200402140113.3696-1-bhe@redhat.com
Link: http://lkml.kernel.org/r/20200402140113.3696-2-bhe@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a1a4f88..f3d5f0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7704,8 +7704,7 @@ static void setup_per_zone_lowmem_reserve(void)
                                idx--;
                                lower_zone = pgdat->node_zones + idx;
 
-                               if (sysctl_lowmem_reserve_ratio[idx] < 1) {
-                                       sysctl_lowmem_reserve_ratio[idx] = 0;
+                               if (!sysctl_lowmem_reserve_ratio[idx]) {
                                        lower_zone->lowmem_reserve[j] = 0;
                                } else {
                                        lower_zone->lowmem_reserve[j] =
@@ -7970,7 +7969,15 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
 int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *table, int write,
        void __user *buffer, size_t *length, loff_t *ppos)
 {
+       int i;
+
        proc_dointvec_minmax(table, write, buffer, length, ppos);
+
+       for (i = 0; i < MAX_NR_ZONES; i++) {
+               if (sysctl_lowmem_reserve_ratio[i] < 1)
+                       sysctl_lowmem_reserve_ratio[i] = 0;
+       }
+
        setup_per_zone_lowmem_reserve();
        return 0;
 }