mm: page_vma_mapped_walk(): use goto instead of while (1)
author	Hugh Dickins <hughd@google.com>
Fri, 25 Jun 2021 01:39:20 +0000 (18:39 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
Fri, 25 Jun 2021 02:40:53 +0000 (19:40 -0700)
page_vma_mapped_walk() cleanup: add a label this_pte, matching next_pte,
and use "goto this_pte" in place of the "while (1)" loop at the end.

Link: https://lkml.kernel.org/r/a52b234a-851-3616-2525-f42736e8934@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Wang Yugui <wangyugui@e16-tech.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_vma_mapped.c

index 5b5832d..6b93203 100644
@@ -144,6 +144,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 {
        struct mm_struct *mm = pvmw->vma->vm_mm;
        struct page *page = pvmw->page;
+       unsigned long end;
        pgd_t *pgd;
        p4d_t *p4d;
        pud_t *pud;
@@ -233,10 +234,7 @@ restart:
                }
                if (!map_pte(pvmw))
                        goto next_pte;
-       }
-       while (1) {
-               unsigned long end;
-
+this_pte:
                if (check_pte(pvmw))
                        return true;
 next_pte:
@@ -265,6 +263,7 @@ next_pte:
                        pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
                        spin_lock(pvmw->ptl);
                }
+               goto this_pte;
        }
 }