KVM: x86/mmu: Relocate kvm_mmu_page.tdp_mmu_page for better cache locality
Author:     Sean Christopherson <seanjc@google.com>
AuthorDate: Wed, 1 Sep 2021 22:10:22 +0000 (15:10 -0700)
Commit:     Paolo Bonzini <pbonzini@redhat.com>
CommitDate: Mon, 6 Sep 2021 10:19:07 +0000 (06:19 -0400)

Move "tdp_mmu_page" into the 1-byte void left by the recently removed
"mmio_cached" so that it resides in the first 64 bytes of kvm_mmu_page,
i.e. in the same cache line as the most commonly accessed fields.
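
As a minimal sketch (not part of this patch), the intent could be pinned
down with a compile-time check placed below the struct definition in
mmu_internal.h; the 64-byte bound assumes the usual x86 cache line size:

	/* Hypothetical build-time check: keep the TDP MMU flag within the
	 * first 64 bytes (one cache line) of struct kvm_mmu_page.
	 */
	static_assert(offsetof(struct kvm_mmu_page, tdp_mmu_page) < 64,
		      "tdp_mmu_page should sit in kvm_mmu_page's first cache line");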

Don't bother wrapping tdp_mmu_page in CONFIG_X86_64; including the field
in 32-bit builds doesn't affect the size of kvm_mmu_page, and a future
patch can always wrap the field in the unlikely event KVM gains a 1-byte
flag that is 32-bit specific.

Note, the size of kvm_mmu_page is also unchanged on CONFIG_X86_64=y
because the flag previously shared an 8-byte chunk with
write_flooding_count, i.e. it only occupied what is otherwise padding.
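
To illustrate the padding argument with stand-alone (hypothetical) types
that only mimic a 4-byte atomic_t followed by pointer-sized rcu_head
members, assuming a 64-bit build as in the CONFIG_X86_64=y case:

	#include <assert.h>
	#include <stdint.h>

	/* A 1-byte flag after a 4-byte counter lives entirely in padding
	 * that exists anyway because the next member is 8-byte aligned,
	 * so dropping the flag does not shrink the struct.
	 */
	struct with_flag    { uint32_t count; _Bool flag; void *next; void *func; };
	struct without_flag { uint32_t count;             void *next; void *func; };

	static_assert(sizeof(struct with_flag) == sizeof(struct without_flag),
		      "removing the 1-byte flag does not change the size");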

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210901221023.1303578-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 6b6f108..4e7b634 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -35,6 +35,7 @@ struct kvm_mmu_page {
        struct hlist_node hash_link;
        struct list_head lpage_disallowed_link;
 
+       bool tdp_mmu_page;
        bool unsync;
        u8 mmu_valid_gen;
        bool lpage_disallowed; /* Can't be replaced by an equiv large page */
@@ -70,8 +71,6 @@ struct kvm_mmu_page {
        atomic_t write_flooding_count;
 
 #ifdef CONFIG_X86_64
-       bool tdp_mmu_page;
-
        /* Used for freeing the page asynchronously if it is a TDP MMU page. */
        struct rcu_head rcu_head;
 #endif