zsmalloc: switch from alloc_vm_area to get_vm_area
Author:     Christoph Hellwig <hch@lst.de>
AuthorDate: Sat, 17 Oct 2020 23:15:17 +0000 (16:15 -0700)
Commit:     Linus Torvalds <torvalds@linux-foundation.org>
CommitDate: Sun, 18 Oct 2020 16:27:10 +0000 (09:27 -0700)
Just manually pre-fault the PTEs using apply_to_page_range.

Co-developed-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Link: https://lkml.kernel.org/r/20201002122204.1534411-6-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c36fdff..918c7b0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1122,10 +1122,16 @@ static inline int __zs_cpu_up(struct mapping_area *area)
         */
        if (area->vm)
                return 0;
-       area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
+       area->vm = get_vm_area(PAGE_SIZE * 2, 0);
        if (!area->vm)
                return -ENOMEM;
-       return 0;
+
+       /*
+        * Populate ptes in advance to avoid pte allocation with GFP_KERNEL
+        * in non-preemptible context of zs_map_object.
+        */
+       return apply_to_page_range(&init_mm, (unsigned long)area->vm->addr,
+                       PAGE_SIZE * 2, NULL, NULL);
 }
 
 static inline void __zs_cpu_down(struct mapping_area *area)
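
For readers unfamiliar with the pattern, here is a minimal sketch (not part of the patch) of what the hunk above does: reserve kernel virtual address space with get_vm_area() and pre-populate the page tables via apply_to_page_range() with a NULL callback (the form this series relies on), so later PTE updates in the area never need to allocate memory in a non-preemptible context. The helper name example_reserve_prefaulted() and the include lines are illustrative only.

/* Sketch only; mirrors the __zs_cpu_up() change above as a generic helper. */
#include <linux/mm.h>
#include <linux/vmalloc.h>

static struct vm_struct *example_reserve_prefaulted(unsigned long size)
{
        struct vm_struct *vm;
        int ret;

        /* Reserve kernel virtual address space; no pages are mapped yet. */
        vm = get_vm_area(size, 0);
        if (!vm)
                return NULL;

        /*
         * With a NULL callback, apply_to_page_range() only allocates the
         * intermediate page tables covering the range, so later mappings
         * into this area need no GFP_KERNEL allocation in atomic context.
         */
        ret = apply_to_page_range(&init_mm, (unsigned long)vm->addr,
                                  size, NULL, NULL);
        if (ret) {
                free_vm_area(vm);
                return NULL;
        }
        return vm;
}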