KVM: x86/mmu: Drop 'shared' param from tdp_mmu_link_page()
author     Sean Christopherson <seanjc@google.com>
           Tue, 10 Aug 2021 22:45:54 +0000 (15:45 -0700)
committer  Paolo Bonzini <pbonzini@redhat.com>
           Fri, 20 Aug 2021 20:06:35 +0000 (16:06 -0400)
Drop @shared from tdp_mmu_link_page() and hardcode it to work for
mmu_lock being held for read.  The helper has exactly one caller and
in all likelihood will only ever have exactly one caller.  Even if KVM
adds a path to install translations without an initiating page fault,
odds are very, very good that the path will just be a wrapper to the
"page fault" handler (both SNP and TDX RFCs propose patches to do
exactly that).

No functional change intended.
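
For readability, this is how the helper reads once the patch is applied, reconstructed from the diff below; the locking comment is added here for illustration and is not part of the upstream source:

/*
 * tdp_mmu_link_page() is now only called from the page fault path, where
 * mmu_lock is held for read, so the pages list is always protected by
 * tdp_mmu_pages_lock rather than by asserting mmu_lock is held for write.
 */
static void tdp_mmu_link_page(struct kvm *kvm, struct kvm_mmu_page *sp,
                              bool account_nx)
{
        spin_lock(&kvm->arch.tdp_mmu_pages_lock);
        list_add(&sp->link, &kvm->arch.tdp_mmu_pages);
        if (account_nx)
                account_huge_nx_page(kvm, sp);
        spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
}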

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210810224554.2978735-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/tdp_mmu.c

index db63625..64ccfc1 100644
@@ -255,26 +255,17 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
  *
  * @kvm: kvm instance
  * @sp: the new page
- * @shared: This operation may not be running under the exclusive use of
- *         the MMU lock and the operation must synchronize with other
- *         threads that might be adding or removing pages.
  * @account_nx: This page replaces a NX large page and should be marked for
  *             eventual reclaim.
  */
 static void tdp_mmu_link_page(struct kvm *kvm, struct kvm_mmu_page *sp,
-                             bool shared, bool account_nx)
+                             bool account_nx)
 {
-       if (shared)
-               spin_lock(&kvm->arch.tdp_mmu_pages_lock);
-       else
-               lockdep_assert_held_write(&kvm->mmu_lock);
-
+       spin_lock(&kvm->arch.tdp_mmu_pages_lock);
        list_add(&sp->link, &kvm->arch.tdp_mmu_pages);
        if (account_nx)
                account_huge_nx_page(kvm, sp);
-
-       if (shared)
-               spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
+       spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
 }
 
 /**
@@ -1062,7 +1053,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
                                                     !shadow_accessed_mask);
 
                        if (tdp_mmu_set_spte_atomic_no_dirty_log(vcpu->kvm, &iter, new_spte)) {
-                               tdp_mmu_link_page(vcpu->kvm, sp, true,
+                               tdp_mmu_link_page(vcpu->kvm, sp,
                                                  huge_page_disallowed &&
                                                  req_level >= iter.level);