KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped
Author:     Sean Christopherson <seanjc@google.com>
AuthorDate: Sat, 31 Aug 2024 00:15:20 +0000 (17:15 -0700)
Commit:     Sean Christopherson <seanjc@google.com>
CommitDate: Tue, 10 Sep 2024 03:16:21 +0000 (20:16 -0700)
Resume the guest and thus skip emulation of a non-PTE-writing instruction
if and only if unprotecting the gfn actually zapped at least one shadow
page.  If the gfn is write-protected for some reason other than shadow
paging, attempting to unprotect the gfn will effectively fail, and thus
retrying the instruction is all but guaranteed to be pointless.  This bug
has existed for a long time, but was effectively fudged around by the
retry RIP+address anti-loop detection.

Reviewed-by: Yuan Yao <yuan.yao@intel.com>
Link: https://lore.kernel.org/r/20240831001538.336683-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 70219e4..5d4bcb9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8965,14 +8965,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
        if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
                return false;
 
-       vcpu->arch.last_retry_eip = ctxt->eip;
-       vcpu->arch.last_retry_addr = cr2_or_gpa;
-
        if (!vcpu->arch.mmu->root_role.direct)
                gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
 
-       kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
+       if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
+               return false;
 
+       vcpu->arch.last_retry_eip = ctxt->eip;
+       vcpu->arch.last_retry_addr = cr2_or_gpa;
        return true;
 }