KVM: VMX: clean up pi_wakeup_handler
author:    Li RongQing <lirongqing@baidu.com>
Wed, 6 Apr 2022 11:25:02 +0000 (19:25 +0800)
committer: Paolo Bonzini <pbonzini@redhat.com>
Thu, 12 May 2022 13:51:40 +0000 (09:51 -0400)
Passing per_cpu() to list_for_each_entry() causes the macro to be
evaluated N+1 times for N sleeping vCPUs.  This is a very small
inefficiency, and the code is cleaner if the address of the per-CPU
variable is loaded earlier.  Do this for both the list and the spinlock.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
Message-Id: <1649244302-6777-1-git-send-email-lirongqing@baidu.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
index 37d0bff..07e5fcf 100644
--- a/arch/x86/kvm/vmx/posted_intr.c
+++ b/arch/x86/kvm/vmx/posted_intr.c
@@ -202,16 +202,17 @@ void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 void pi_wakeup_handler(void)
 {
        int cpu = smp_processor_id();
+       struct list_head *wakeup_list = &per_cpu(wakeup_vcpus_on_cpu, cpu);
+       raw_spinlock_t *spinlock = &per_cpu(wakeup_vcpus_on_cpu_lock, cpu);
        struct vcpu_vmx *vmx;
 
-       raw_spin_lock(&per_cpu(wakeup_vcpus_on_cpu_lock, cpu));
-       list_for_each_entry(vmx, &per_cpu(wakeup_vcpus_on_cpu, cpu),
-                           pi_wakeup_list) {
+       raw_spin_lock(spinlock);
+       list_for_each_entry(vmx, wakeup_list, pi_wakeup_list) {
 
                if (pi_test_on(&vmx->pi_desc))
                        kvm_vcpu_wake_up(&vmx->vcpu);
        }
-       raw_spin_unlock(&per_cpu(wakeup_vcpus_on_cpu_lock, cpu));
+       raw_spin_unlock(spinlock);
 }
 
 void __init pi_init_cpu(int cpu)