vfio: fix FOLL_LONGTERM use, simplify get_user_pages_remote() call
author John Hubbard <jhubbard@nvidia.com>
Fri, 31 Jan 2020 06:12:39 +0000 (22:12 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 31 Jan 2020 18:30:37 +0000 (10:30 -0800)
Update VFIO to take advantage of the recently loosened restriction on
FOLL_LONGTERM with get_user_pages_remote().  Also, now it is possible to
fix a bug: the VFIO caller is logically a FOLL_LONGTERM user, but it
wasn't setting FOLL_LONGTERM.

Also, remove an unnecessary pair of calls that dropped and reacquired
the mmap_sem.  There is no need to drop mmap_sem just in order to call
page_to_pfn().
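
For illustration only, a minimal sketch of the call pattern this adopts
(the helper name is made up for the sketch; the get_user_pages_remote()
signature is the current one, as used in the hunks below):

static int pin_one_page(struct mm_struct *mm, unsigned long vaddr,
                        unsigned int flags, unsigned long *pfn)
{
        struct page *page[1];
        int ret;

        down_read(&mm->mmap_sem);
        /* One call now covers both current->mm and a foreign mm. */
        ret = get_user_pages_remote(NULL, mm, vaddr, 1,
                                    flags | FOLL_LONGTERM, page,
                                    NULL, NULL);
        if (ret == 1) {
                /* page_to_pfn() is fine while mmap_sem is still held. */
                *pfn = page_to_pfn(page[0]);
                ret = 0;
        } else if (ret >= 0) {
                /* Simplification: no VM_PFNMAP fallback in this sketch. */
                ret = -EFAULT;
        }
        up_read(&mm->mmap_sem);
        return ret;
}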

Also, now that the DAX check ("if a VMA is DAX, don't allow long-term
pinning") is in the internals of get_user_pages_remote() and
__gup_longterm_locked(), there's no need for it at the VFIO call site.
So remove it.
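
Conceptually (this is a sketch, not a copy of the gup internals, and
the helper name is invented), the policy now applied inside the
FOLL_LONGTERM path is the same vma_is_fsdax() test that the removed
hunk below used to apply at the call site:

/*
 * Sketch: for a FOLL_LONGTERM pin, the gup core checks the VMAs
 * backing the range and fails the whole pin with -EOPNOTSUPP
 * (releasing the pinned pages) if any of them is fs-dax.
 */
static bool range_has_fsdax_vma(struct vm_area_struct **vmas,
                                long nr_pages)
{
        long i;

        for (i = 0; i < nr_pages; i++)
                if (vmas[i] && vma_is_fsdax(vmas[i]))
                        return true;
        return false;
}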

Link: http://lkml.kernel.org/r/20200107224558.2362728-8-jhubbard@nvidia.com
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Tested-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Björn Töpel <bjorn.topel@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Hans Verkuil <hverkuil-cisco@xs4all.nl>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
drivers/vfio/vfio_iommu_type1.c

index 2ada8e6..b800fc9 100644
@@ -322,7 +322,6 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
 {
        struct page *page[1];
        struct vm_area_struct *vma;
-       struct vm_area_struct *vmas[1];
        unsigned int flags = 0;
        int ret;
 
@@ -330,33 +329,14 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
                flags |= FOLL_WRITE;
 
        down_read(&mm->mmap_sem);
-       if (mm == current->mm) {
-               ret = get_user_pages(vaddr, 1, flags | FOLL_LONGTERM, page,
-                                    vmas);
-       } else {
-               ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,
-                                           vmas, NULL);
-               /*
-                * The lifetime of a vaddr_get_pfn() page pin is
-                * userspace-controlled. In the fs-dax case this could
-                * lead to indefinite stalls in filesystem operations.
-                * Disallow attempts to pin fs-dax pages via this
-                * interface.
-                */
-               if (ret > 0 && vma_is_fsdax(vmas[0])) {
-                       ret = -EOPNOTSUPP;
-                       put_page(page[0]);
-               }
-       }
-       up_read(&mm->mmap_sem);
-
+       ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags | FOLL_LONGTERM,
+                                   page, NULL, NULL);
        if (ret == 1) {
                *pfn = page_to_pfn(page[0]);
-               return 0;
+               ret = 0;
+               goto done;
        }
 
-       down_read(&mm->mmap_sem);
-
        vaddr = untagged_addr(vaddr);
 
        vma = find_vma_intersection(mm, vaddr, vaddr + 1);
@@ -366,7 +346,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
                if (is_invalid_reserved_pfn(*pfn))
                        ret = 0;
        }
-
+done:
        up_read(&mm->mmap_sem);
        return ret;
 }