mm: allow a NULL fn callback in apply_to_page_range
author Christoph Hellwig <hch@lst.de>
Sat, 17 Oct 2020 23:15:14 +0000 (16:15 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Sun, 18 Oct 2020 16:27:10 +0000 (09:27 -0700)
Besides calling the callback on each page, apply_to_page_range also has
the effect of pre-faulting all PTEs for the range.  To support callers
that only need the pre-faulting, make the callback optional.

Based on a patch from Minchan Kim <minchan@kernel.org>.
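
For a caller that only wants the pre-faulting side effect, the call site now
boils down to passing a NULL callback.  A minimal sketch of the intended use
(the prefault_kernel_range() helper below is hypothetical and not part of this
patch):

    /*
     * Hypothetical example: populate the page tables for a kernel
     * address range without writing any PTEs.  With a NULL callback,
     * apply_to_page_range() still walks and allocates the intermediate
     * page tables for [addr, addr + size), which is the pre-faulting
     * behaviour described above.
     */
    static int prefault_kernel_range(unsigned long addr, unsigned long size)
    {
            return apply_to_page_range(&init_mm, addr, size, NULL, NULL);
    }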

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Link: https://lkml.kernel.org/r/20201002122204.1534411-5-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/memory.c b/mm/memory.c
index 589afe4..c48f8df 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2391,13 +2391,15 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
        arch_enter_lazy_mmu_mode();
 
-       do {
-               if (create || !pte_none(*pte)) {
-                       err = fn(pte++, addr, data);
-                       if (err)
-                               break;
-               }
-       } while (addr += PAGE_SIZE, addr != end);
+       if (fn) {
+               do {
+                       if (create || !pte_none(*pte)) {
+                               err = fn(pte++, addr, data);
+                               if (err)
+                                       break;
+                       }
+               } while (addr += PAGE_SIZE, addr != end);
+       }
        *mask |= PGTBL_PTE_MODIFIED;
 
        arch_leave_lazy_mmu_mode();