KVM: x86: adjust kvm_mmu_page member to save 8 bytes
authorWei Yang <richard.weiyang@gmail.com>
Wed, 5 Sep 2018 21:58:16 +0000 (05:58 +0800)
committerPaolo Bonzini <pbonzini@redhat.com>
Tue, 16 Oct 2018 22:29:40 +0000 (00:29 +0200)
commit3ff519f29d98ecdc1961d825d105d68711093b6b
treee291a62f70b9553f6418a4b7e7ebce4f6b1ccf10
parentbd18bffca35397214ae68d85cf7203aca25c3c1d
KVM: x86: adjust kvm_mmu_page member to save 8 bytes

On a 64-bit machine, a struct is naturally aligned to 8 bytes. Since the
kvm_mmu_page members *unsync* and *role* are each no larger than 4 bytes, we
can rearrange their order to compact the struct.

As the comment shows, *role* and *gfn* are used to key the shadow page. In
order to keep the comment valid, this patch moves *unsync* up and exchanges
the positions of *role* and *gfn*.

/proc/slabinfo shows that the size of kvm_mmu_page shrinks by 8 bytes, with
one more object per slab after applying this patch.

    # name            <active_objs> <num_objs> <objsize> <objperslab>
    kvm_mmu_page_header      0           0       168         24    (before)

    kvm_mmu_page_header      0           0       160         25    (after)
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/include/asm/kvm_host.h