mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
author	Li Zhijian <lizhijian@cn.fujitsu.com>
Thu, 9 Sep 2021 01:10:02 +0000 (18:10 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
Thu, 9 Sep 2021 01:45:52 +0000 (18:45 -0700)
Previously, we noticed that one rpma example failed[1] since commit
36f30e486dce ("IB/core: Improve ODP to use hmm_range_fault()").  The
example uses the ODP feature to do RDMA WRITE between fsdax files.

After digging into the code, we found that hmm_vma_handle_pte() still
returns EFAULT even though all of its requested flags have been
fulfilled.  That's because a DAX page is marked as (_PAGE_SPECIAL |
_PAGE_DEVMAP) by pte_mkdevmap().
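
For reference, the arch devmap helper sets both bits at once;
paraphrasing the x86 definition from arch/x86/include/asm/pgtable.h
(other architectures provide their own variants):

	/* A devmap pte is marked special *and* devmap, so the bare
	 * pte_special() test in hmm_vma_handle_pte() also matched DAX
	 * pages and wrongly sent them down the fault path. */
	static inline pte_t pte_mkdevmap(pte_t pte)
	{
		return pte_set_flags(pte, _PAGE_SPECIAL | _PAGE_DEVMAP);
	}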

Link: https://github.com/pmem/rpma/issues/1142
Link: https://lkml.kernel.org/r/20210830094232.203029-1-lizhijian@cn.fujitsu.com
Fixes: 405506274922 ("mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling")
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/hmm.c

index fad6be2..842e265 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -295,10 +295,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
                goto fault;
 
        /*
+        * Bypass devmap pte such as DAX page when all pfn requested
+        * flags (pfn_req_flags) are fulfilled.
         * Since each architecture defines a struct page for the zero page, just
         * fall through and treat it like a normal page.
         */
-       if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
+       if (pte_special(pte) && !pte_devmap(pte) &&
+           !is_zero_pfn(pte_pfn(pte))) {
                if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
                        pte_unmap(ptep);
                        return -EFAULT;
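
To illustrate the effect, below is a minimal hmm_range_fault() caller
sketched in the style of the ODP path; the names odp, mm, start and
NPAGES are placeholders, and the retry/invalidate handling is trimmed.
Before this patch, such a walk over an fsdax mapping returned -EFAULT
even though the devmap pte already satisfied every bit in
pfn_req_flags; with the pte_devmap() bypass it succeeds and reports the
pfn.

	/* Hypothetical, simplified caller of hmm_range_fault().  The
	 * requested flags below end up as pfn_req_flags inside
	 * hmm_vma_handle_pte(). */
	unsigned long pfns[NPAGES];
	struct hmm_range range = {
		.notifier      = &odp->notifier,   /* mmu interval notifier */
		.start         = start,
		.end           = start + NPAGES * PAGE_SIZE,
		.hmm_pfns      = pfns,
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int ret;

	range.notifier_seq = mmu_interval_read_begin(range.notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);  /* pre-patch: -EFAULT on fsdax */
	mmap_read_unlock(mm);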