arm64: memory: Simplify virt_to_page() implementation
author     Will Deacon <will@kernel.org>
Tue, 13 Aug 2019 15:34:32 +0000 (16:34 +0100)
committer  Will Deacon <will@kernel.org>
Wed, 14 Aug 2019 12:07:49 +0000 (13:07 +0100)
Build virt_to_page() on top of virt_to_pfn() so we can avoid the need
for explicit shifting.

Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
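
As a rough illustration of the commit message: the old form open-coded the page-frame-number calculation as __pa(kaddr) >> PAGE_SHIFT, while the new form relies on virt_to_pfn() to hide that same arithmetic. The standalone sketch below (not kernel code; PAGE_OFFSET value and the __pa() linear-map translation are simplified assumptions) shows why the two spellings yield the same PFN.

	/* Userspace sketch of the old vs. new PFN derivation. */
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_OFFSET	0xffff000000000000UL	/* illustrative value only */

	/* Stand-in for the kernel's __pa(): simplified linear-map translation. */
	static unsigned long __pa(unsigned long kaddr)
	{
		return kaddr - PAGE_OFFSET;
	}

	/* Stand-in for virt_to_pfn(): the shift lives here, not at call sites. */
	static unsigned long virt_to_pfn(unsigned long kaddr)
	{
		return __pa(kaddr) >> PAGE_SHIFT;
	}

	int main(void)
	{
		unsigned long kaddr = PAGE_OFFSET + 0x123456UL;

		/* Old open-coded form vs. new helper-based form. */
		unsigned long old_pfn = __pa(kaddr) >> PAGE_SHIFT;
		unsigned long new_pfn = virt_to_pfn(kaddr);

		printf("old=%lu new=%lu same=%d\n", old_pfn, new_pfn,
		       old_pfn == new_pfn);
		return 0;
	}

With the shift folded into the helper, callers of virt_to_page() no longer need to know about PAGE_SHIFT at all, which is the simplification the patch is after.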
arch/arm64/include/asm/memory.h

index 636d414..e6353d1 100644
@@ -311,7 +311,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET                ((unsigned long)PHYS_PFN_OFFSET)
 
 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
-#define virt_to_page(kaddr)    pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_page(kaddr)    pfn_to_page(virt_to_pfn(kaddr))
 #else
 #define page_to_virt(x)        ({                                              \
        __typeof__(x) __page = x;                                       \