KVM: arm64: Clean up the checking for huge mapping
author     Suzuki K Poulose <suzuki.poulose@arm.com>
           Thu, 7 May 2020 12:35:45 +0000 (20:35 +0800)
committer  Marc Zyngier <maz@kernel.org>
           Sat, 16 May 2020 14:05:02 +0000 (15:05 +0100)
If we are checking whether stage2 can map PAGE_SIZE, we don't have
to do the boundary checks, as both the host VMA and the guest
memslots are page aligned. Bail out early in that case.
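
The boundary checks being skipped are the block-alignment and
memslot-fit tests further down in fault_supports_stage2_huge_mapping().
Below is a minimal userspace sketch of that logic, not the kernel
code itself: the helper name, the 4K PAGE_SHIFT and the sample
addresses are illustrative assumptions.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12UL                 /* assumes 4K pages */
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Can a map_size block around gpa/hva be mapped at stage-2? */
    static bool supports_huge_mapping(uint64_t gpa_start, uint64_t uaddr_start,
                                      uint64_t size, uint64_t hva,
                                      uint64_t map_size)
    {
            uint64_t uaddr_end = uaddr_start + size;

            /* The memslot and the VMA are page aligned, so a
             * PAGE_SIZE mapping needs no boundary check at all. */
            if (map_size == PAGE_SIZE)
                    return true;

            /* gpa and hva must share the same offset within a block... */
            if ((gpa_start & (map_size - 1)) != (uaddr_start & (map_size - 1)))
                    return false;

            /* ...and the enclosing block must fit inside the memslot. */
            hva &= ~(map_size - 1);
            return hva >= uaddr_start && hva + map_size <= uaddr_end;
    }

    int main(void)
    {
            uint64_t gpa  = 0x40000000UL;       /* illustrative slot base GPA */
            uint64_t hva  = 0x7f0000000000UL;   /* illustrative slot base HVA */
            uint64_t size = 64 * PAGE_SIZE;

            /* Page-aligned inputs trivially support a PAGE_SIZE mapping. */
            assert(supports_huge_mapping(gpa, hva, size, hva, PAGE_SIZE));
            printf("PAGE_SIZE mapping: always fine for aligned slots\n");
            return 0;
    }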

While we're at it, fix up a typo in the comment below.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200507123546.1875-2-yuzenghui@huawei.com

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9173633..ccb44e7 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1610,6 +1610,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
        hva_t uaddr_start, uaddr_end;
        size_t size;
 
+       /* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+       if (map_size == PAGE_SIZE)
+               return true;
+
        size = memslot->npages * PAGE_SIZE;
 
        gpa_start = memslot->base_gfn << PAGE_SHIFT;
@@ -1629,7 +1633,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
         *    |abcde|fgh  Stage-1 block  |    Stage-1 block tv|xyz|
         *    +-----+--------------------+--------------------+---+
         *
-        *    memslot->base_gfn << PAGE_SIZE:
+        *    memslot->base_gfn << PAGE_SHIFT:
         *      +---+--------------------+--------------------+-----+
         *      |abc|def  Stage-2 block  |    Stage-2 block   |tvxyz|
         *      +---+--------------------+--------------------+-----+
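
For the comment fix itself: memslot->base_gfn is a guest frame
number, and converting it to a guest physical address means shifting
left by PAGE_SHIFT (the log2 of the page size), exactly as the code
above the diagram does. A standalone sanity check of that arithmetic,
assuming 4K pages and an illustrative base_gfn value:

    #include <assert.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12UL                 /* assumes 4K pages */

    int main(void)
    {
            uint64_t base_gfn = 0x80000UL;  /* illustrative gfn */

            /* base_gfn << PAGE_SHIFT is the slot's base guest PA;
             * shifting by PAGE_SIZE (4096) would be undefined
             * behaviour on a 64-bit value, hence the typo fix. */
            assert((base_gfn << PAGE_SHIFT) == 0x80000000UL);
            return 0;
    }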