From f720b471fdb35619402293dcd421761fb1942e27 Mon Sep 17 00:00:00 2001
From: Kefeng Wang
Date: Tue, 1 Aug 2023 10:31:44 +0800
Subject: [PATCH] mm: hugetlb: use flush_hugetlb_tlb_range() in
 move_hugetlb_page_tables()

Archs may need to do special things when flushing hugepage tlb, so use
the more applicable flush_hugetlb_tlb_range() instead of
flush_tlb_range().

Link: https://lkml.kernel.org/r/20230801023145.17026-2-wangkefeng.wang@huawei.com
Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang
Reviewed-by: Mike Kravetz
Acked-by: Muchun Song
Cc: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas
Cc: Joel Fernandes (Google)
Cc: Kalesh Singh
Cc: "Kirill A. Shutemov"
Cc: Mina Almasry
Cc: Will Deacon
Cc: William Kucharski
Signed-off-by: Andrew Morton
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 26e87d6cc92f..102f83bd3a9f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5279,9 +5279,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	}
 
 	if (shared_pmd)
-		flush_tlb_range(vma, range.start, range.end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
-		flush_tlb_range(vma, old_end - len, old_end);
+		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 	hugetlb_vma_unlock_write(vma);
-- 
2.20.1