mm: memory: make numa_migrate_prep() non-static
author		Yang Shi <shy828301@gmail.com>
		Thu, 1 Jul 2021 01:51:39 +0000 (18:51 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
		Thu, 1 Jul 2021 03:47:30 +0000 (20:47 -0700)
numa_migrate_prep() will also be used by the huge NUMA fault handling in the
following patch, so make it non-static.

Link: https://lkml.kernel.org/r/20210518200801.7413-3-shy828301@gmail.com
Signed-off-by: Yang Shi <shy828301@gmail.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/internal.h
mm/memory.c

diff --git a/mm/internal.h b/mm/internal.h
index 6ec2cea..a4942f9 100644
@@ -672,4 +672,7 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 
 void vunmap_range_noflush(unsigned long start, unsigned long end);
 
+int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
+                     unsigned long addr, int page_nid, int *flags);
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/memory.c b/mm/memory.c
index 5029d79..64eda96 100644
@@ -4175,9 +4175,8 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
        return ret;
 }
 
-static int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
-                               unsigned long addr, int page_nid,
-                               int *flags)
+int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
+                     unsigned long addr, int page_nid, int *flags)
 {
        get_page(page);
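
As a hypothetical illustration only (not part of this patch), the follow-up
huge NUMA fault change could call the now-exported helper roughly as sketched
below. The function huge_pmd_numa_prep_example() and its arguments are
invented for the sketch; only the numa_migrate_prep() signature comes from
the diff above.

/*
 * Hypothetical sketch: how a huge PMD NUMA fault handler could use the
 * newly non-static numa_migrate_prep().  Names are illustrative, not
 * taken from the follow-up patch itself.
 */
#include <linux/mm.h>
#include "internal.h"	/* declares numa_migrate_prep() after this patch */

static void huge_pmd_numa_prep_example(struct page *page,
				       struct vm_area_struct *vma,
				       unsigned long haddr, int page_nid)
{
	int flags = 0;
	int target_nid;

	/*
	 * numa_migrate_prep() takes a reference on @page and asks the
	 * NUMA balancing policy code which node the page should live on;
	 * NUMA_NO_NODE means "leave the page where it is".
	 */
	target_nid = numa_migrate_prep(page, vma, haddr, page_nid, &flags);
	if (target_nid == NUMA_NO_NODE)
		put_page(page);	/* drop the reference taken above */
	/* otherwise the caller would go on to migrate the page */
}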