KVM: VMX: Optimize posted-interrupt delivery for timer fastpath
authorWanpeng Li <wanpengli@tencent.com>
Tue, 28 Apr 2020 06:23:27 +0000 (14:23 +0800)
committerPaolo Bonzini <pbonzini@redhat.com>
Fri, 15 May 2020 16:26:20 +0000 (12:26 -0400)
While optimizing posted-interrupt delivery, especially for the timer
fastpath scenario, I measured that kvm_x86_ops.deliver_posted_interrupt()
introduces substantial latency because the processor has to perform
all vmentry tasks, ack the posted interrupt notification vector,
read the posted-interrupt descriptor, etc.

This is not only slow, it is also unnecessary when delivering an
interrupt to the current CPU (as is the case for the LAPIC timer), because
PIR->IRR and IRR->RVI synchronization is already performed on vmentry.
Therefore, skip kvm_vcpu_trigger_posted_interrupt() in this case, and
instead do vmx_sync_pir_to_irr() on the EXIT_FASTPATH_REENTER_GUEST
fastpath as well.

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1588055009-12677-6-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/vmx/vmx.c
virt/kvm/kvm_main.c

index c7730d3..8d881fc 100644 (file)
@@ -3936,7 +3936,8 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
        if (pi_test_and_set_on(&vmx->pi_desc))
                return 0;
 
-       if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+       if (vcpu != kvm_get_running_vcpu() &&
+           !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
                kvm_vcpu_kick(vcpu);
 
        return 0;
@@ -6812,6 +6813,8 @@ reenter_guest:
                         * but it would incur the cost of a retpoline for now.
                         * Revisit once static calls are available.
                         */
+                       if (vcpu->arch.apicv_active)
+                               vmx_sync_pir_to_irr(vcpu);
                        goto reenter_guest;
                }
                exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
index bef3d8d..11844fa 100644 (file)
@@ -4646,6 +4646,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)
 
        return vcpu;
 }
+EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);
 
 /**
  * kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.