linux-2.6-microblaze.git
Paolo Bonzini [Fri, 22 Jul 2022 09:07:39 +0000 (05:07 -0400)]
Revert "KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled"

Since commit 5f76f6f5ff96 ("KVM: nVMX: Do not expose MPX VMX controls
when guest MPX disabled"), KVM has taken ownership of the "load
IA32_BNDCFGS" and "clear IA32_BNDCFGS" VMX entry/exit controls,
setting these bits in the IA32_VMX_TRUE_{ENTRY,EXIT}_CTLS
MSRs if the guest's CPUID supports MPX, and clearing them otherwise.

The intent of the patch was to apply it to L0 in order to work around
L1 kernels that lack the fix in commit 691bd4340bef ("kvm: vmx: allow
host to access guest MSR_IA32_BNDCFGS", 2017-07-04): by hiding the
control bits from L0, L1 hides BNDCFGS from KVM_GET_MSR_INDEX_LIST,
and the L1 bug is neutralized even in the absence of commit 691bd4340bef.

This was perhaps a sensible kludge at the time, but it is a horrible
idea in the long term; in fact, it has not been extended to other
CPUID bits like these:

  X86_FEATURE_LM => VM_EXIT_HOST_ADDR_SPACE_SIZE, VM_ENTRY_IA32E_MODE,
                    VMX_MISC_SAVE_EFER_LMA

  X86_FEATURE_TSC => CPU_BASED_RDTSC_EXITING, CPU_BASED_USE_TSC_OFFSETTING,
                     SECONDARY_EXEC_TSC_SCALING

  X86_FEATURE_INVPCID_SINGLE => SECONDARY_EXEC_ENABLE_INVPCID

  X86_FEATURE_MWAIT => CPU_BASED_MONITOR_EXITING, CPU_BASED_MWAIT_EXITING

  X86_FEATURE_INTEL_PT => SECONDARY_EXEC_PT_CONCEAL_VMX, SECONDARY_EXEC_PT_USE_GPA,
                          VM_EXIT_CLEAR_IA32_RTIT_CTL, VM_ENTRY_LOAD_IA32_RTIT_CTL

  X86_FEATURE_XSAVES => SECONDARY_EXEC_XSAVES

These days it's sort of common knowledge that any MSR in
KVM_GET_MSR_INDEX_LIST must allow *at least* setting it with KVM_SET_MSR
to a default value, so it is unlikely that something like commit
5f76f6f5ff96 will be needed again.  So revert it, at the potential cost
of breaking L1s with a 6-year-old kernel.  While in principle the L0 owner
doesn't control what runs on L1, such an old hypervisor would probably
have many other bugs.
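
For reference, the logic being reverted looked roughly like this (a
simplified sketch with approximate field names, not the literal diff):

  static void nested_vmx_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
  {
          struct vcpu_vmx *vmx = to_vmx(vcpu);

          if (!kvm_mpx_supported())
                  return;

          /* Advertise the controls iff the guest's CPUID has MPX. */
          if (guest_cpuid_has(vcpu, X86_FEATURE_MPX)) {
                  vmx->nested.msrs.entry_ctls_high |= VM_ENTRY_LOAD_BNDCFGS;
                  vmx->nested.msrs.exit_ctls_high |= VM_EXIT_CLEAR_BNDCFGS;
          } else {
                  vmx->nested.msrs.entry_ctls_high &= ~VM_ENTRY_LOAD_BNDCFGS;
                  vmx->nested.msrs.exit_ctls_high &= ~VM_EXIT_CLEAR_BNDCFGS;
          }
  }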

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 7 Jun 2022 21:35:54 +0000 (21:35 +0000)]
KVM: nVMX: Let userspace set nVMX MSR to any _host_ supported value

Restrict the nVMX MSRs based on KVM's config, not based on the guest's
current config.  Using the guest's config to audit the new config
prevents userspace from restoring the original config (KVM's config) if
at any point in the past the guest's config was restricted in any way.
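
Conceptually, the audit should be against KVM's own capabilities, along
these lines (a sketch with one illustrative field; vmcs_config.nested is
KVM's view of what the hardware and KVM support):

  /* Audit against KVM's supported config, not the vCPU's current one. */
  u64 supported = vmcs_config.nested.procbased_ctls_high;

  if (data & ~supported)
          return -EINVAL;   /* userspace asked for bits KVM can't support */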

Fixes: 62cc6b9dc61e ("KVM: nVMX: support restore of VMX capability MSRs")
Cc: stable@vger.kernel.org
Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220607213604.3346000-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 7 Jun 2022 21:35:53 +0000 (21:35 +0000)]
KVM: nVMX: Rename handle_vm{on,off}() to handle_vmx{on,off}()

Rename the exit handlers for VMXON and VMXOFF to match the instruction
names; the terms "vmon" and "vmoff" are not used anywhere in Intel's
documentation, nor are they used elsewhere in KVM.

Sadly, the exit reasons are exposed to userspace and so cannot be renamed
without breaking userspace. :-(

Fixes: ec378aeef9df ("KVM: nVMX: Implement VMXON and VMXOFF")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220607213604.3346000-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 7 Jun 2022 21:35:52 +0000 (21:35 +0000)]
KVM: nVMX: Inject #UD if VMXON is attempted with incompatible CR0/CR4

Inject a #UD if L1 attempts VMXON with a CR0 or CR4 that is disallowed
per the associated nested VMX MSRs' fixed0/1 settings.  KVM cannot rely
on hardware to perform the checks, even for the few checks that have
higher priority than VM-Exit, as (a) KVM may have forced CR0/CR4 bits in
hardware while running the guest, (b) there may be incompatible CR0/CR4 bits
that have lower priority than VM-Exit, e.g. CR0.NE, and (c) userspace may
have further restricted the allowed CR0/CR4 values by manipulating the
guest's nested VMX MSRs.

Note, despite a very strong desire to throw shade at Jim, commit
70f3aac964ae ("kvm: nVMX: Remove superfluous VMX instruction fault checks")
is not to blame for the buggy behavior (though the comment...).  That
commit only removed the CR0.PE, EFLAGS.VM, and COMPATIBILITY mode checks
(though it did erroneously drop the CPL check, but that has already been
remedied).  KVM may force CR0.PE=1, but will do so only when also
forcing EFLAGS.VM=1 to emulate Real Mode, i.e. hardware will still #UD.
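
A minimal sketch of the added check in the VMXON handler (the helpers are
the existing nVMX host CR0/CR4 validity checks; exact placement within
handle_vmxon() is approximate):

  if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
      !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
          kvm_queue_exception(vcpu, UD_VECTOR);
          return 1;
  }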

Link: https://bugzilla.kernel.org/show_bug.cgi?id=216033
Fixes: ec378aeef9df ("KVM: nVMX: Implement VMXON and VMXOFF")
Reported-by: Eric Li <ercli@ucdavis.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220607213604.3346000-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 7 Jun 2022 21:35:51 +0000 (21:35 +0000)]
KVM: nVMX: Account for KVM reserved CR4 bits in consistency checks

Check that the guest (L2) and host (L1) CR4 values that would be loaded
by nested VM-Enter and VM-Exit respectively are valid with respect to
KVM's (L0 host) allowed CR4 bits.  Failure to check KVM reserved bits
would allow L1 to load an illegal CR4 (or trigger hardware VM-Fail or
failed VM-Entry) by massaging guest CPUID to allow features that are not
supported by KVM.  Amusingly, KVM itself is an accomplice in its doom, as
KVM adjusts L1's MSR_IA32_VMX_CR4_FIXED1 to allow L1 to enable bits for
L2 based on L1's CPUID model.

Note, although nested_{guest,host}_cr4_valid() are _currently_ used if
and only if the vCPU is post-VMXON (nested.vmxon == true), that may not
be true in the future, e.g. emulating VMXON has a bug where it doesn't
check the allowed/required CR0/CR4 bits.

Cc: stable@vger.kernel.org
Fixes: 3899152ccbf4 ("KVM: nVMX: fix checks on CR{0,4} during virtual VMX operation")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220607213604.3346000-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 7 Jun 2022 21:35:50 +0000 (21:35 +0000)]
KVM: x86: Split kvm_is_valid_cr4() and export only the non-vendor bits

Split the common x86 parts of kvm_is_valid_cr4(), i.e. the reserved bits
checks, into a separate helper, __kvm_is_valid_cr4(), and export only the
inner helper to vendor code in order to prevent nested VMX from calling
back into vmx_is_valid_cr4() via kvm_is_valid_cr4().

On SVM, this is a nop as SVM doesn't place any additional restrictions on
CR4.

On VMX, this is also currently a nop, but only because nested VMX is
missing checks on reserved CR4 bits for nested VM-Enter.  That bug will
be fixed in a future patch, and could simply use kvm_is_valid_cr4() as-is,
but nVMX has _another_ bug where VMXON emulation doesn't enforce VMX's
restrictions on CR0/CR4.  The cleanest and most intuitive way to fix the
VMXON bug is to use nested_host_cr{0,4}_valid().  If the CR4 variant
routes through kvm_is_valid_cr4(), using nested_host_cr4_valid() won't do
the right thing for the VMXON case as vmx_is_valid_cr4() enforces VMX's
restrictions if and only if the vCPU is post-VMXON.
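
The resulting shape, roughly (a sketch; the reserved-bit checks stand in
for the complete set of common checks):

  /* Common x86 checks only; this is what gets exposed to vendor code. */
  bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
  {
          return !(cr4 & cr4_reserved_bits) &&
                 !(cr4 & vcpu->arch.cr4_guest_rsvd_bits);
  }

  /* Common checks plus the vendor callback, e.g. vmx_is_valid_cr4(). */
  static bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
  {
          return __kvm_is_valid_cr4(vcpu, cr4) &&
                 static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
  }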

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220607213604.3346000-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:21:07 +0000 (23:21 +0000)]
KVM: selftests: Add an option to run vCPUs while disabling dirty logging

Add a command line option to dirty_log_perf_test to run vCPUs for the
entire duration of disabling dirty logging.  By default, the test stops
running vCPUs before disabling dirty logging, which is faster but
less interesting as it doesn't stress KVM's handling of contention
between page faults and the zapping of collapsible SPTEs.  Enabling the
flag also lets the user verify that KVM is indeed rebuilding zapped SPTEs
as huge pages by checking KVM's pages_{1g,2m,4k} stats.  Without vCPUs to
fault in the zapped SPTEs, the stats will show that KVM is zapping pages,
but they never show whether or not KVM actually allows huge pages to be
recreated.

Note!  Enabling the flag can _significantly_ increase runtime, especially
if the thread that's disabling dirty logging doesn't have a dedicated
pCPU, e.g. if all pCPUs are used to run vCPUs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715232107.3775620-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:21:06 +0000 (23:21 +0000)]
KVM: x86/mmu: Don't bottom out on leaves when zapping collapsible SPTEs

When zapping collapsible SPTEs in the TDP MMU, don't bottom out on a leaf
SPTE now that KVM doesn't require a PFN to compute the host mapping level,
i.e. now that there's no need to first find a leaf SPTE and then step
back up.

Drop the now unused tdp_iter_step_up(), as it is not the safest of
helpers (using any of the low level iterators requires some understanding
of the various side effects).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715232107.3775620-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:21:05 +0000 (23:21 +0000)]
KVM: x86/mmu: Document the "rules" for using host_pfn_mapping_level()

Add a comment to document how host_pfn_mapping_level() can be used safely,
as the line between safe and dangerous is quite thin.  E.g. if KVM were
to ever support in-place promotion to create huge pages, consuming the
level is safe if the caller holds mmu_lock and checks that there's an
existing _leaf_ SPTE, but unsafe if the caller only checks that there's a
non-leaf SPTE.

Opportunistically tweak the existing comments to explicitly document why
KVM needs to use READ_ONCE().
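
The documented pattern boils down to the following (a sketch; the real
walker applies it at every level of the host page tables):

  /*
   * Snapshot the host PMD exactly once.  The primary MMU owns these page
   * tables and can zap or modify the mapping at any time, so reading the
   * entry twice could yield two different answers.
   */
  pmd_t pmd = READ_ONCE(*pmdp);

  if (!pmd_present(pmd))
          return PG_LEVEL_4K;
  if (pmd_large(pmd))
          return PG_LEVEL_2M;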

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715232107.3775620-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:21:04 +0000 (23:21 +0000)]
KVM: x86/mmu: Don't require refcounted "struct page" to create huge SPTEs

Drop the requirement that a pfn be backed by a refcounted, compound or
ZONE_DEVICE struct page, and instead rely solely on the host page
tables to identify huge pages.  The PageCompound() check is a remnant of
an old implementation that identified (well, attempted to identify) huge
pages without walking the host page tables.  The ZONE_DEVICE check was
added as an exception to the PageCompound() requirement.  In other words,
neither check is actually a hard requirement; if the primary MMU has a
pfn backed with a huge page, then KVM can back the pfn with a huge page
regardless of the backing store.

Dropping the @pfn parameter will also allow KVM to query the max host
mapping level without having to first get the pfn, which is advantageous
for use outside of the page fault path where KVM wants to take action if
and only if a page can be mapped huge, i.e. avoids the pfn lookup for
gfns that can't be backed with a huge page.

Cc: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220715232107.3775620-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:00:16 +0000 (23:00 +0000)]
KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used

Restrict the mapping level for SPTEs based on the guest MTRRs if and only
if KVM may actually use the guest MTRRs to compute the "real" memtype.
For all forms of paging, guest MTRRs are purely virtual in the sense that
they are completely ignored by hardware, i.e. they affect the memtype
only if software manually consumes them.  The only scenario where KVM
consumes the guest MTRRs is when shadow_memtype_mask is non-zero and the
guest has non-coherent DMA; in all other cases KVM simply leaves the PAT
field in SPTEs as '0' to encode WB memtype.

Note, KVM may still ultimately ignore guest MTRRs, e.g. if the backing
pfn is host MMIO, but false positives are ok as they only cause a slight
performance blip (unless the guest is doing weird things with its MTRRs,
which is extremely unlikely).
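
The condition described above amounts to something like this (the helper
name is hypothetical; kvm_arch_has_noncoherent_dma() is the existing
check):

  /* Guest MTRRs can affect the effective memtype only in this case. */
  static bool kvm_may_honor_guest_mtrrs(struct kvm *kvm)
  {
          return shadow_memtype_mask && kvm_arch_has_noncoherent_dma(kvm);
  }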

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220715230016.3762909-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:00:15 +0000 (23:00 +0000)]
KVM: x86/mmu: Add shadow mask for effective host MTRR memtype

Add shadow_memtype_mask to capture that EPT needs a non-zero memtype mask
instead of relying on TDP being enabled, as NPT doesn't need a non-zero
mask.  This is a glorified nop as kvm_x86_ops.get_mt_mask() returns zero
for NPT anyways.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220715230016.3762909-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:00:14 +0000 (23:00 +0000)]
KVM: x86: Drop unnecessary goto+label in kvm_arch_init()

Return directly if kvm_arch_init() detects an error before doing any real
work; jumping through a label obfuscates what's happening and carries the
unnecessary risk of leaving 'r' uninitialized.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220715230016.3762909-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 23:00:13 +0000 (23:00 +0000)]
KVM: x86: Reject loading KVM if host.PAT[0] != WB

Reject KVM if entry '0' in the host's IA32_PAT MSR is not programmed to
writeback (WB) memtype.  KVM subtly relies on IA32_PAT entry '0' to be
programmed to WB by leaving the PAT bits in shadow paging and NPT SPTEs
as '0'.  If something other than WB is in PAT[0], at _best_ guests will
suffer very poor performance, and at worst KVM will crash the system by
breaking cache-coherency expectations (e.g. using WC for guest memory).
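
A sketch of the check (WB is encoding 6, and PAT entry '0' occupies the
low three bits of the MSR):

  u64 host_pat;

  rdmsrl(MSR_IA32_CR_PAT, host_pat);
  if ((host_pat & GENMASK(2, 0)) != 6) {  /* 6 == WB memtype */
          pr_err("host PAT[0] is not WB\n");
          return -EIO;
  }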

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220715230016.3762909-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Suravee Suthikulpanit [Mon, 18 Jul 2022 08:38:33 +0000 (03:38 -0500)]
KVM: SVM: Fix x2APIC MSRs interception

The index for svm_direct_access_msrs was incorrectly initialized with
the APIC MMIO register macros. Fix by introducing a macro for calculating
x2APIC MSRs.
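
The mapping from xAPIC MMIO offset to x2APIC MSR index is mechanical,
along these lines (macro name assumed):

  /* x2APIC MSRs sit at 0x800 plus the xAPIC MMIO offset divided by 16, */
  /* e.g. the TPR at MMIO offset 0x80 becomes MSR 0x808. */
  #define X2APIC_MSR(x) (APIC_BASE_MSR + ((x) >> 4))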

Fixes: 5c127c85472c ("KVM: SVM: Adding support for configuring x2APIC MSRs interception")
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Message-Id: <20220718083833.222117-1-suravee.suthikulpanit@amd.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:26 +0000 (22:42 +0000)]
KVM: x86/mmu: Remove underscores from __pte_list_remove()

Remove the underscores from __pte_list_remove(); the function formerly
known as pte_list_remove() is now named kvm_zap_one_rmap_spte() to show
that it zaps rmaps/PTEs, i.e. doesn't just remove an entry from a list.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:25 +0000 (22:42 +0000)]
KVM: x86/mmu: Rename pte_list_{destroy,remove}() to show they zap SPTEs

Rename pte_list_remove() and pte_list_destroy() to kvm_zap_one_rmap_spte()
and kvm_zap_all_rmap_sptes() respectively to document that (a) they zap
SPTEs and (b) to better document how they differ (remove vs. destroy does
not exactly scream "one vs. all").

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:24 +0000 (22:42 +0000)]
KVM: x86/mmu: Rename rmap zap helpers to eliminate "unmap" wrapper

Rename kvm_unmap_rmap() and kvm_zap_rmap() to kvm_zap_rmap() and
__kvm_zap_rmap() respectively to show that what was the "unmap" helper is
just a wrapper for the "zap" helper, i.e. that they do the exact same
thing, one just exists to deal with its caller passing in more params.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:23 +0000 (22:42 +0000)]
KVM: x86/mmu: Rename __kvm_zap_rmaps() to align with other nomenclature

Rename __kvm_zap_rmaps() to kvm_rmap_zap_gfn_range() to avoid future
confusion with a soon-to-be-introduced __kvm_zap_rmap().  Using a plural
"rmaps" is somewhat ambiguous without additional context, as it's not
obvious whether it's referring to multiple rmap lists, versus multiple
rmap entries within a single list.

Use kvm_rmap_zap_gfn_range() to align with the pattern established by
kvm_rmap_zap_collapsible_sptes(), without losing the information that it
zaps only rmap-based MMUs, i.e. don't rename it to __kvm_zap_gfn_range().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:22 +0000 (22:42 +0000)]
KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers

Drop the trailing "p" from rmap helpers, i.e. rename functions to simply
be kvm_<action>_rmap().  Declaring that a function takes a pointer is
completely unnecessary and goes against kernel style.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:21 +0000 (22:42 +0000)]
KVM: x86/mmu: Directly "destroy" PTE list when recycling rmaps

Use pte_list_destroy() directly when recycling rmaps instead of bouncing
through kvm_unmap_rmapp() and kvm_zap_rmapp().  Calling kvm_unmap_rmapp()
is unnecessary and odd as it requires passing dummy parameters; passing
NULL for @slot when __rmap_add() already has a valid slot is especially
weird and confusing.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Jul 2022 22:42:20 +0000 (22:42 +0000)]
KVM: x86/mmu: Return a u64 (the old SPTE) from mmu_spte_clear_track_bits()

Return a u64, not an int, from mmu_spte_clear_track_bits().  The return
value is the old SPTE value, which is very much a 64-bit value.  The sole
caller that consumes the return value, drop_spte(), already uses a u64.
The only reason that truncating the SPTE value is not problematic is
because drop_spte() only queries the shadow-present bit, which is in the
lower 32 bits.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220715224226.3749507-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Maciej S. Szmigiero [Mon, 18 Jul 2022 15:47:13 +0000 (17:47 +0200)]
KVM: nSVM: Pull CS.Base from actual VMCB12 for soft int/ex re-injection

enter_svm_guest_mode() first calls nested_vmcb02_prepare_control() to copy
control fields from VMCB12 to the current VMCB, then
nested_vmcb02_prepare_save() to perform a similar copy of the save area.

This means that nested_vmcb02_prepare_control() still runs with the
previous save area values in the current VMCB so it shouldn't take the L2
guest CS.Base from this area.

Explicitly pull CS.Base from the actual VMCB12 instead in
enter_svm_guest_mode().

Granted, having a non-zero CS.Base is a very rare thing (and even
impossible in 64-bit mode), and having it change between nested VMRUNs is
probably even rarer, but if it happens it would create a really subtle bug
so it's better to fix it upfront.
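
In code terms, the change is roughly (parameter list approximate):

  /* Pass L2's CS.Base straight from VMCB12, not from the current VMCB. */
  nested_vmcb02_prepare_control(svm, vmcb12->save.rip, vmcb12->save.cs.base);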

Fixes: 6ef88d6e36c2 ("KVM: SVM: Re-inject INT3/INTO instead of retrying the instruction")
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <4caa0f67589ae3c22c311ee0e6139496902f2edc.1658159083.git.maciej.szmigiero@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 22 Jul 2022 07:14:32 +0000 (03:14 -0400)]
Merge tag 'kvm-s390-next-5.20-1' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390x: Fixes and features for 5.20

* First part of deferred teardown
* CPU Topology
* interpretive execution for PCI instructions
* PV attestation
* Minor fixes

Pierre Morel [Thu, 14 Jul 2022 19:43:34 +0000 (21:43 +0200)]
KVM: s390: resetting the Topology-Change-Report

During a subsystem reset the Topology-Change-Report is cleared.

Let's give userland the possibility to clear the MTCR in the case
of a subsystem reset.

To migrate the MTCR, we give userland the possibility to
query the MTCR state.

We indicate KVM support for the CPU topology facility with a new
KVM capability: KVM_CAP_S390_CPU_TOPOLOGY.
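
Userspace can probe for the capability in the usual way (a hypothetical
snippet):

  /* hypothetical userspace check for the new capability */
  if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_CPU_TOPOLOGY) > 0)
          topology_supported = true;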

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Message-Id: <20220714194334.127812-1-pmorel@linux.ibm.com>
Link: https://lore.kernel.org/all/20220714194334.127812-1-pmorel@linux.ibm.com/
[frankja@linux.ibm.com: Simple conflict resolution in Documentation/virt/kvm/api.rst]
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Pierre Morel [Thu, 14 Jul 2022 10:18:23 +0000 (12:18 +0200)]
KVM: s390: guest support for topology function

We report a topology change to the guest for any CPU hotplug.

The reporting to the guest is done using the Multiprocessor
Topology-Change-Report (MTCR) bit of the utility entry in the guest's
SCA which will be cleared during the interpretation of PTF.

On every vCPU creation we set the MTCR bit to let the guest know, the
next time it uses the PTF instruction with command 2, that the
topology changed and that it should use the STSI(15.1.x) instruction
to get the topology details.

STSI(15.1.x) gives information on the CPU configuration topology.
Let's accept the interception of STSI with the function code 15 and
let the userland part of the hypervisor handle it when userland
supports the CPU Topology facility.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220714101824.101601-2-pmorel@linux.ibm.com
Message-Id: <20220714101824.101601-2-pmorel@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Pierre Morel [Wed, 4 May 2022 12:29:08 +0000 (14:29 +0200)]
KVM: s390: Cleanup ipte lock access and SIIF facility checks

We can check if SIIF is enabled by testing the sclp_info struct
instead of testing the sie control block eca variable as that
facility is always enabled if available.

Also, let's clean up all the ipte-related struct member accesses,
which currently happen by referencing the KVM struct via the
VCPU struct.
Making the KVM struct the parameter to the ipte_* functions
removes one level of indirection, which makes the code more readable.
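
i.e. the prototypes change along these lines (a sketch):

  /* before: every access bounced through vcpu->kvm */
  void ipte_lock(struct kvm_vcpu *vcpu);

  /* after: callers pass the KVM struct directly */
  void ipte_lock(struct kvm *kvm);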

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Link: https://lore.kernel.org/all/20220711084148.25017-2-pmorel@linux.ibm.com/
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Nico Boehr [Mon, 18 Jul 2022 13:04:34 +0000 (15:04 +0200)]
KVM: s390: pv: don't present the ecall interrupt twice

When the SIGP interpretation facility is present and a VCPU sends an
ecall to another VCPU in enabled wait, the sending VCPU receives a 56
intercept (partial execution), so KVM can wake up the receiving CPU.
Note that the SIGP interpretation facility will take care of the
interrupt delivery and KVM's only job is to wake the receiving VCPU.

For PV, the sending VCPU will receive a 108 intercept (pv notify) and
should continue like in the non-PV case, i.e. wake the receiving VCPU.

For PV and non-PV guests the interrupt delivery will occur through the
SIGP interpretation facility on SIE entry when SIE finds the X bit in
the status field set.

However, in handle_pv_notification(), there was no special handling for
SIGP, which leads to interrupt injection being requested by KVM for the
next SIE entry. This results in the interrupt being delivered twice:
once by the SIGP interpretation facility and once by KVM through the
IICTL.

Add the necessary special handling in handle_pv_notification(), similar
to handle_partial_execution(), which simply wakes the receiving VCPU and
leaves interrupt delivery to the SIGP interpretation facility.

In contrast to external calls, emergency calls are not interpreted but
also cause a 108 intercept, which is why we still need to call
handle_instruction() for SIGP orders other than ecall.

Since kvm_s390_handle_sigp_pei() is now called for all SIGP orders which
cause a 108 intercept - even if they are actually handled by
handle_instruction() - move the tracepoint in kvm_s390_handle_sigp_pei()
to avoid possibly confusing trace messages.

Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Cc: <stable@vger.kernel.org> # 5.7
Fixes: da24a0cc58ed ("KVM: s390: protvirt: Instruction emulation")
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Link: https://lore.kernel.org/r/20220718130434.73302-1-nrb@linux.ibm.com
Message-Id: <20220718130434.73302-1-nrb@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:13 +0000 (15:56 +0200)]
KVM: s390: pv: destroy the configuration before its memory

Move the Destroy Secure Configuration UVC before the loop to destroy
the memory. If the protected VM has memory, it will be cleaned up and
made accessible by the Destroy Secure Configuration UVC. The struct
page for the relevant pages will still have the protected bit set, so
the loop is still needed to clean that up.

Switching the order of those two operations does not change the
outcome, but it is significantly faster.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-13-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-13-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:12 +0000 (15:56 +0200)]
KVM: s390: pv: refactoring of kvm_s390_pv_deinit_vm

Refactor kvm_s390_pv_deinit_vm to improve readability and simplify the
improvements that are coming in subsequent patches.

No functional change intended.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-12-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-12-imbrenda@linux.ibm.com>
[frankja@linux.ibm.com: Dropped commit message line regarding review]
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:11 +0000 (15:56 +0200)]
s390/mm: KVM: pv: when tearing down, try to destroy protected pages

When ptep_get_and_clear_full is called for a mm teardown, we will now
attempt to destroy the secure pages. This will be faster than export.

In case it was not a teardown, or if for some reason the destroy page
UVC failed, we try with an export page, like before.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-11-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-11-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:10 +0000 (15:56 +0200)]
KVM: s390: pv: add mmu_notifier

Add an mmu_notifier for protected VMs. The callback function is
triggered when the mm is torn down, and will attempt to convert all
protected vCPUs to non-protected. This allows the mm teardown to use
the destroy page UVC instead of export.

Also make KVM select CONFIG_MMU_NOTIFIER, needed to use mmu_notifiers.
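
A minimal sketch of the wiring (callback body elided; identifiers follow
the description above and may not match the code exactly):

  static void kvm_s390_pv_mmu_notifier_release(struct mmu_notifier *subscription,
                                               struct mm_struct *mm)
  {
          /*
           * The mm is being torn down: convert protected vCPUs to
           * non-protected so teardown can use the destroy page UVC.
           */
  }

  static const struct mmu_notifier_ops kvm_s390_pv_mmu_notifier_ops = {
          .release = kvm_s390_pv_mmu_notifier_release,
  };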

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-10-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-10-imbrenda@linux.ibm.com>
[frankja@linux.ibm.com: Conflict resolution for mmu_notifier.h include
and struct kvm_s390_pv]
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Sean Christopherson [Tue, 12 Jul 2022 18:46:53 +0000 (11:46 -0700)]
KVM: x86: Check target, not vCPU's x2APIC ID, when applying hotplug hack

When applying the hotplug hack to match x2APIC IDs for vCPUs in xAPIC
mode, check the target APIC ID for being unaddressable in xAPIC mode
instead of checking the vCPU's x2APIC ID, and in that case proceed as
if apic_x2apic_mode(vcpu) were true.

Functionally, it does not matter whether you compare kvm_x2apic_id(apic)
or mda with 0xff, since the two values are then checked for equality.
But in isolation, checking the x2APIC ID takes an unnecessary dependency
on the x2APIC ID being read-only (which isn't strictly true on AMD CPUs,
and is difficult to document as well); it also requires KVM to fall through
and check the xAPIC ID as well to deal with a writable xAPIC ID, whereas
the xAPIC ID _can't_ match a target ID greater than 0xff.
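
i.e. the check becomes something like this (a sketch; an MDA above 0xff
cannot address any xAPIC ID):

  /* Treat the *target* as an x2APIC ID when it's unaddressable in xAPIC. */
  if (apic_x2apic_mode(apic) || mda > 0xff)
          return mda == kvm_x2apic_id(apic);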

Opportunistically reword the comment to call out the various subtleties,
and to fix a typo reported by Zhang Jiaming.

No functional change intended.

Cc: Zhang Jiaming <jiaming@nfschina.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 14 Jul 2022 14:04:12 +0000 (10:04 -0400)]
Merge commit 'kvm-vmx-nested-tsc-fix' into kvm-next-5.20

Merge bugfix needed in both 5.19 (because it's bad) and 5.20 (because
it is a prerequisite to test new features).

Sean Christopherson [Thu, 14 Jul 2022 15:37:07 +0000 (15:37 +0000)]
KVM: x86: Restrict get_mt_mask() to a u8, use KVM_X86_OP_OPTIONAL_RET0

Restrict get_mt_mask() to a u8 and reintroduce using a RET0 static_call
for the SVM implementation.  EPT stores the memtype information in the
lower 8 bits (bits 6:3 to be precise), and even returns a shifted u8
without an explicit cast to a larger type; there's no need to return a
full u64.

Note, RET0 doesn't play nice with a u64 return on 32-bit kernels, see
commit bf07be36cd88 ("KVM: x86: do not use KVM_X86_OP_OPTIONAL_RET0 for
get_mt_mask").

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220714153707.3239119-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 12 Jul 2022 00:06:45 +0000 (02:06 +0200)]
KVM: x86: Add dedicated helper to get CPUID entry with significant index

Add a second CPUID helper, kvm_find_cpuid_entry_index(), to handle KVM
queries for CPUID leaves whose index _may_ be significant, and drop the
index param from the existing kvm_find_cpuid_entry().  Add a WARN in the
inner helper, cpuid_entry2_find(), to detect attempts to retrieve a CPUID
entry whose index is significant without explicitly providing an index.

Using an explicit magic number and letting callers omit the index avoids
confusion by eliminating the myriad cases where KVM specifies '0' as a
dummy value.
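
Typical usage after the split (illustrative leaves):

  struct kvm_cpuid_entry2 *best;

  /* the index is meaningless for leaf 0x1, so no index argument */
  best = kvm_find_cpuid_entry(vcpu, 0x1);

  /* leaf 0x7 has significant subleaves, so the index must be explicit */
  best = kvm_find_cpuid_entry_index(vcpu, 0x7, 0);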

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Maxim Levitsky [Thu, 14 Jul 2022 12:44:53 +0000 (15:44 +0300)]
KVM: SVM: fix task switch emulation on INTn instruction.

Recently KVM's SVM code switched to re-injecting software interrupt events,
if something prevented their delivery.

A task switch due to a task gate in the IDT, however, is an exception
to this rule, because in this case the INTn instruction causes a task
switch intercept, and emulating the task switch completes the INTn
emulation as well.

Add the missing case to task_switch_interception() for that.

This fixes the 32-bit kvm-unit-tests taskswitch2 test.

Fixes: 7e5b5ef8dca322 ("KVM: SVM: Re-inject INTn instead of retrying the insn on "failure"")
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <20220714124453.188655-1-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 12 Jul 2022 02:07:24 +0000 (02:07 +0000)]
KVM: x86/mmu: Fix typo and tweak comment for split_desc_cache capacity

Remove a spurious closing parenthesis and tweak the comment about the
cache capacity for PTE descriptors (rmaps) eager page splitting to tone
down the assertion slightly, and to call out that topup requires dropping
mmu_lock, which is the real motivation for avoiding topup (as opposed to
memory usage).

Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220712020724.1262121-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 12 Jul 2022 02:07:23 +0000 (02:07 +0000)]
KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages

Tweak the comment above the computation of the quadrant for PG_LEVEL_4K
shadow pages to explicitly call out how and why KVM uses role.quadrant to
consume gPTE bits.

Opportunistically wrap an unnecessarily long line.

No functional change intended.

Link: https://lore.kernel.org/all/YqvWvBv27fYzOFdE@google.com
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220712020724.1262121-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 12 Jul 2022 02:07:22 +0000 (02:07 +0000)]
KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index

Add spte_index() to dedup all the code that calculates a SPTE's index
into its parent's page table and/or spt array.  Opportunistically tweak
the calculation to avoid pointer arithmetic, which is subtle (subtract in
8-byte chunks) and less performant (requires the compiler to generate the
subtraction).
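
A sketch of the helper (512 SPTEs per 4KiB page table; the constant name
follows the MMU code):

  /* Index of a SPTE within its parent shadow page's 512-entry table. */
  static inline int spte_index(u64 *sptep)
  {
          return ((unsigned long)sptep / sizeof(*sptep)) & (SPTE_ENT_PER_PAGE - 1);
  }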

Suggested-by: David Matlack <dmatlack@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220712020724.1262121-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 12 Jul 2022 13:50:09 +0000 (15:50 +0200)]
KVM: nVMX: Always enable TSC scaling for L2 when it was enabled for L1

Windows 10/11 guests with Hyper-V role (WSL2) enabled are observed to
hang upon boot or shortly after when a non-default TSC frequency was
set for L1. The issue is observed on a host where TSC scaling is
supported. The problem appears to be that Windows doesn't use TSC
frequency for its guests even when the feature is advertised and KVM
filters SECONDARY_EXEC_TSC_SCALING out when creating L2 controls from
L1's. This leads to L2 running with the default frequency (matching
host's) while L1 is running with an altered one.

Keep SECONDARY_EXEC_TSC_SCALING in secondary exec controls for L2 when
it was set for L1. TSC_MULTIPLIER is already correctly computed and
written by prepare_vmcs02().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220712135009.952805-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 12 Jul 2022 01:58:38 +0000 (01:58 +0000)]
KVM: VMX: Update PT MSR intercepts during filter change iff PT in host+guest

Update the Processor Trace (PT) MSR intercepts during a filter change if
and only if PT may be exposed to the guest, i.e. only if KVM is operating
in the so called "host+guest" mode where PT can be used simultaneously by
both the host and guest.  If PT is in system mode, the host is the sole
owner of PT and the MSRs should never be passed through to the guest.

Luckily the missed check only results in unnecessary work, as select RTIT
MSRs are passed through only when RTIT tracing is enabled "in" the guest,
and tracing can't be enabled in the guest when KVM is in system mode
(writes to guest.MSR_IA32_RTIT_CTL are disallowed).

Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20220712015838.1253995-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Sean Christopherson [Tue, 14 Jun 2022 20:07:07 +0000 (20:07 +0000)]
KVM: selftests: Drop unused SVM_CPUID_FUNC macro

Drop SVM_CPUID_FUNC to reduce the probability of tests open coding CPUID
checks instead of using kvm_cpu_has() or this_cpu_has().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-43-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:06 +0000 (20:07 +0000)]
KVM: selftests: Use the common cpuid() helper in cpu_vendor_string_is()

Use cpuid() to get CPUID.0x0 in cpu_vendor_string_is(), thus eliminating
the last open coded usage of CPUID (ignoring debug_regs.c, which emits
CPUID from the guest to trigger a VM-Exit and doesn't actually care about
the results of CPUID).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-42-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:05 +0000 (20:07 +0000)]
KVM: selftests: Clean up requirements for XFD-aware XSAVE features

Provide informative error messages for the various checks related to
requesting access to XSAVE features that are buried behind XSAVE Feature
Disabling (XFD).

Opportunistically rename the helper to have "require" in the name so that
it's somewhat obvious that the helper may skip the test.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-41-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:04 +0000 (20:07 +0000)]
KVM: selftests: Skip AMX test if ARCH_REQ_XCOMP_GUEST_PERM isn't supported

Skip the AMX test instead of silently returning if the host kernel
doesn't support ARCH_REQ_XCOMP_GUEST_PERM.  KVM didn't support XFD until
v5.17, so it's extremely unlikely that allowing the test to run on an
older kernel is the right thing to do.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-40-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:03 +0000 (20:07 +0000)]
KVM: selftests: Check KVM's supported CPUID, not host CPUID, for XFD

Use kvm_cpu_has() to check for XFD support in vm_xsave_req_perm();
simply checking host CPUID doesn't guarantee KVM supports AMX/XFD.

Opportunistically hoist the check above the bit check; if XFD isn't
supported, it's far better to get a "not supported at all" message, as
opposed to a "feature X isn't supported" message".

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-39-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:02 +0000 (20:07 +0000)]
KVM: selftests: Inline "get max CPUID leaf" helpers

Make the "get max CPUID leaf" helpers static inline, there's no reason to
bury the one liners in processor.c.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-38-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:01 +0000 (20:07 +0000)]
KVM: selftests: Rename kvm_get_supported_cpuid_index() to __..._entry()

Rename kvm_get_supported_cpuid_index() to __kvm_get_supported_cpuid_entry()
to better show its relationship to kvm_get_supported_cpuid_entry(), and
because the helper returns a CPUID entry, not the index of an entry.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-37-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:07:00 +0000 (20:07 +0000)]
KVM: selftests: Drop unnecessary use of kvm_get_supported_cpuid_index()

Use kvm_get_supported_cpuid_entry() instead of
kvm_get_supported_cpuid_index() when passing in '0' for the index, which
just so happens to be the case in all remaining users of
kvm_get_supported_cpuid_index() except kvm_get_supported_cpuid_entry().

Keep the helper as there may be users in the future, and it's not doing
any harm.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-36-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:59 +0000 (20:06 +0000)]
KVM: selftests: Use this_cpu_has() to detect SVM support in L1

Replace an evil open coded instance of querying CPUID from L1 with
this_cpu_has(X86_FEATURE_SVM).

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-35-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:58 +0000 (20:06 +0000)]
KVM: selftests: Use this_cpu_has() in CR4/CPUID sync test

Use this_cpu_has() to query OSXSAVE from the L1 guest in the CR4=>CPUID
sync test.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-34-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:57 +0000 (20:06 +0000)]
KVM: selftests: Add this_cpu_has() to query X86_FEATURE_* via cpuid()

Add this_cpu_has() to query an X86_FEATURE_* via cpuid(), i.e. to query a
feature from L1 (or L2) guest code.  Arbitrarily select the AMX test to
be the first user.
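
Guest-side usage is then a one-liner (illustrative feature):

  /* in guest code: query a feature of the CPU the guest is running on */
  GUEST_ASSERT(this_cpu_has(X86_FEATURE_XSAVE));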

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-33-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:56 +0000 (20:06 +0000)]
KVM: selftests: Set input function/index in raw CPUID helper(s)

Set the function/index for CPUID in the helper instead of relying on the
caller to do so.  In addition to reducing the risk of consuming an
uninitialized ECX, having the function/index embedded in the call makes
it easier to understand what is being checked.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-32-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:55 +0000 (20:06 +0000)]
KVM: selftests: Make get_supported_cpuid() return "const"

Tag the returned CPUID pointers from kvm_get_supported_cpuid(),
kvm_get_supported_hv_cpuid(), and vcpu_get_supported_hv_cpuid() "const"
to prevent reintroducing the broken pattern of modifying the static
"cpuid" variable used by kvm_get_supported_cpuid() to cache the results
of KVM_GET_SUPPORTED_CPUID.

Update downstream consumers as needed.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-31-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:54 +0000 (20:06 +0000)]
KVM: selftests: Use vcpu_clear_cpuid_feature() to clear x2APIC

Add X86_FEATURE_X2APIC and use vcpu_clear_cpuid_feature() to clear x2APIC
support in the xAPIC state test.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-30-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:53 +0000 (20:06 +0000)]
KVM: selftests: Use vcpu_{set,clear}_cpuid_feature() in nVMX state test

Use vcpu_{set,clear}_cpuid_feature() to toggle nested VMX support in the
vCPU CPUID module in the nVMX state test.  Drop CPUID_VMX as there are
no longer any users.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-29-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:52 +0000 (20:06 +0000)]
KVM: selftests: Use vcpu_get_cpuid_entry() in CPUID test

Use vcpu_get_cpuid_entry() instead of an open coded equivalent in the
CPUID test.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-28-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:51 +0000 (20:06 +0000)]
KVM: selftests: Use vCPU's CPUID directly in Hyper-V test

Use the vCPU's persistent CPUID array directly when manipulating the set
of exposed Hyper-V CPUID features.  Drop set_cpuid() to route all future
modification through the vCPU helpers; the Hyper-V features test was the
last user.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-27-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:50 +0000 (20:06 +0000)]
KVM: selftests: Use vcpu_get_cpuid_entry() in PV features test (sort of)

Add a new helper, vcpu_clear_cpuid_entry(), to do a RMW operation on the
vCPU's CPUID model to clear a given CPUID entry, and use it to clear
KVM's paravirt feature instead of operating on kvm_get_supported_cpuid()'s
static "cpuid" variable.  This also eliminates a user of
the soon-to-be-defunct set_cpuid() helper.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-26-seanjc@google.com
Sean Christopherson [Fri, 8 Jul 2022 21:42:49 +0000 (14:42 -0700)]
KVM: selftests: Use vcpu_clear_cpuid_feature() in monitor_mwait_test

Use vcpu_clear_cpuid_feature() to clear the MONITOR/MWAIT CPUID feature bit in
the MONITOR/MWAIT quirk test.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Sean Christopherson [Tue, 14 Jun 2022 20:06:49 +0000 (20:06 +0000)]
KVM: selftests: Add and use helper to set vCPU's CPUID maxphyaddr

Add a helper to set a vCPU's guest.MAXPHYADDR, and use it in the test
that verifies the emulator returns an error on an unknown instruction
when KVM emulates in response to an EPT violation with a GPA that is
legal in hardware but illegal with respect to the guest's MAXPHYADDR.

Add a helper even though there's only a single user at this time.  Before
its removal, mmu_role_test also stuffed guest.MAXPHYADDR, and the helper
provides a small amount of clarity.

More importantly, this eliminates a set_cpuid() user and an instance of
modifying kvm_get_supported_cpuid()'s static "cpuid".

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-25-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:48 +0000 (20:06 +0000)]
KVM: selftests: Use vm->pa_bits to generate reserved PA bits

Use vm->pa_bits to generate the mask of physical address bits that are
reserved in page table entries.  vm->pa_bits is set when the VM is
created, i.e. it's guaranteed to be valid when populating page tables.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-24-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:47 +0000 (20:06 +0000)]
KVM: selftests: Add helpers to get and modify a vCPU's CPUID entries

Add helpers to get a specific CPUID entry for a given vCPU, and to toggle
a specific CPUID-based feature for a vCPU.  The helpers will reduce the
amount of boilerplate code needed to tweak a vCPU's CPUID model, improve
code clarity, and most importantly move tests away from modifying the
static "cpuid" returned by kvm_get_supported_cpuid().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-23-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:46 +0000 (20:06 +0000)]
KVM: selftests: Use get_cpuid_entry() in kvm_get_supported_cpuid_index()

Use get_cpuid_entry() in kvm_get_supported_cpuid_index() to replace
functionally identical code.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-22-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:45 +0000 (20:06 +0000)]
KVM: selftests: Rename and tweak get_cpuid() to get_cpuid_entry()

Rename get_cpuid() to get_cpuid_entry() to better reflect its behavior.
Leave set_cpuid() as is to avoid unnecessary churn, that helper will soon
be removed entirely.

Opportunistically tweak the implementation to avoid using a temporary
variable in anticipation of tagging the input @cpuid with "const".

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-21-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:44 +0000 (20:06 +0000)]
KVM: selftests: Don't use a static local in vcpu_get_supported_hv_cpuid()

Don't use a static variable for the Hyper-V supported CPUID array, the
helper unconditionally reallocates the array on every invocation (and all
callers free the array immediately after use).  The array is intentionally
recreated and refilled because the set of supported CPUID features is
dependent on vCPU state, e.g. whether or not eVMCS has been enabled.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-20-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:43 +0000 (20:06 +0000)]
KVM: selftests: Cache CPUID in struct kvm_vcpu

Cache a vCPU's CPUID information in "struct kvm_vcpu" to allow fixing the
mess where tests, often unknowingly, modify the global/static "cpuid"
allocated by kvm_get_supported_cpuid().

Add vcpu_init_cpuid() to handle stuffing an entirely different CPUID
model, e.g. during vCPU creation or when switching to the Hyper-V enabled
CPUID model.  Automatically refresh the cache on vcpu_set_cpuid() so that
any adjustments made by KVM are always reflected in the cache.  Drop
vcpu_get_cpuid() entirely to force tests to use the cache, and to allow
adding e.g. vcpu_get_cpuid_entry() in the future without creating a
conflicting set of APIs where vcpu_get_cpuid() does KVM_GET_CPUID2, but
vcpu_get_cpuid_entry() does not.

Opportunistically convert the VMX nested state test and KVM PV test to
manipulating the vCPU's CPUID (because it's easy), but use
vcpu_init_cpuid() for the Hyper-V features test and "emulator error" test
to effectively retain their current behavior as they're less trivial to
convert.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-19-seanjc@google.com
Sean Christopherson [Tue, 14 Jun 2022 20:06:42 +0000 (20:06 +0000)]
KVM: selftests: Split out kvm_cpuid2_size() from allocate_kvm_cpuid2()

Split out the computation of the effective size of a kvm_cpuid2 struct
from allocate_kvm_cpuid2(), and modify both to take an arbitrary number
of entries.  Future commits will add caching of a vCPU's CPUID model, and
will (a) be able to precisely size the entries array, and (b) will need
to know the effective size of the struct in order to copy to/from the
cache.
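
A sketch of the split (assuming the selftests' TEST_ASSERT and the UAPI
structs; the flexible entries[] array is what makes the effective size
depend on the entry count):

  static size_t kvm_cpuid2_size(int nr_entries)
  {
          return sizeof(struct kvm_cpuid2) +
                 sizeof(struct kvm_cpuid_entry2) * nr_entries;
  }

  struct kvm_cpuid2 *allocate_kvm_cpuid2(int nr_entries)
  {
          struct kvm_cpuid2 *cpuid = calloc(1, kvm_cpuid2_size(nr_entries));

          TEST_ASSERT(cpuid, "-ENOMEM when allocating kvm_cpuid2");
          cpuid->nent = nr_entries;
          return cpuid;
  }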

Expose the helpers so that the Hyper-V Features test can use them in the
(somewhat distant) future.  The Hyper-V test very, very subtly relies on
propagating CPUID info across vCPU instances, and will need to make a
copy of the previous vCPU's CPUID information when it switches to using
the per-vCPU cache.  Alternatively, KVM could provide helpers to
duplicate and/or copy a kvm_cpuid2 instance, but each is literally a
single line of code if the helpers are exposed, and it's not like the
size of kvm_cpuid2 is secret knowledge.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-18-seanjc@google.com
21 months agoKVM: selftests: Verify that kvm_cpuid2.entries layout is unchanged by KVM
Sean Christopherson [Tue, 14 Jun 2022 20:06:41 +0000 (20:06 +0000)]
KVM: selftests: Verify that kvm_cpuid2.entries layout is unchanged by KVM

In the CPUID test, verify that KVM doesn't modify the kvm_cpuid2.entries
layout, i.e. that the order of entries and their flags is identical
between what the test provides via KVM_SET_CPUID2 and what KVM returns
via KVM_GET_CPUID2.

Asserting that the layouts match simplifies the test as there's no need
to iterate over both arrays.
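
A sketch of the flattened comparison (helper name and assertion style
assumed):

  static void compare_cpuids(const struct kvm_cpuid2 *cpuid1,
                             const struct kvm_cpuid2 *cpuid2)
  {
          int i;

          TEST_ASSERT(cpuid1->nent == cpuid2->nent,
                      "CPUID nent mismatch: %d vs. %d",
                      cpuid1->nent, cpuid2->nent);

          for (i = 0; i < cpuid1->nent; i++) {
                  const struct kvm_cpuid_entry2 *e1 = &cpuid1->entries[i];
                  const struct kvm_cpuid_entry2 *e2 = &cpuid2->entries[i];

                  /* Identical layout => same function/index/flags at
                   * the same array index. */
                  TEST_ASSERT(e1->function == e2->function &&
                              e1->index == e2->index &&
                              e1->flags == e2->flags,
                              "CPUID entries[%d] mismatch", i);
          }
  }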

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-17-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() for nSVM soft INT injection test
Sean Christopherson [Tue, 14 Jun 2022 20:06:40 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() for nSVM soft INT injection test

Use kvm_cpu_has() to query for NRIPS support instead of open coding
equivalent functionality using kvm_get_supported_cpuid_entry().
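
One possible shape of the conversion (skip-handling helpers assumed):

  /* Replaces a manual CPUID-entry lookup plus an open coded bit test. */
  if (!kvm_cpu_has(X86_FEATURE_NRIPS)) {
          print_skip("nRIP Save not supported");
          exit(KSFT_SKIP);
  }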

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-16-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() for KVM's PV steal time
Sean Christopherson [Tue, 14 Jun 2022 20:06:39 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() for KVM's PV steal time

Use kvm_cpu_has() in the steal-time test instead of open coding
equivalent functionality using kvm_get_supported_cpuid_entry().

Opportunistically define all of KVM's paravirt CPUID-based features.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-15-seanjc@google.com
21 months agoKVM: selftests: Remove the obsolete/dead MMU role test
Sean Christopherson [Tue, 14 Jun 2022 20:06:38 +0000 (20:06 +0000)]
KVM: selftests: Remove the obsolete/dead MMU role test

Remove the MMU role test, which was made obsolete by KVM commit
feb627e8d6f6 ("KVM: x86: Forbid KVM_SET_CPUID{,2} after KVM_RUN").  The
ongoing costs of keeping the test updated far outweigh any benefits,
e.g. the test _might_ be useful as an example or for documentation
purposes, but otherwise the test is dead weight.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-14-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() for XSAVE in cr4_cpuid_sync_test
Sean Christopherson [Tue, 14 Jun 2022 20:06:37 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() for XSAVE in cr4_cpuid_sync_test

Use kvm_cpu_has() in the CR4/CPUID sync test instead of open coding
equivalent functionality using kvm_get_supported_cpuid_entry().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-13-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() in AMX test
Sean Christopherson [Tue, 14 Jun 2022 20:06:36 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() in AMX test

Use kvm_cpu_has() in the AMX test instead of open coding equivalent
functionality using kvm_get_supported_cpuid_entry() and
kvm_get_supported_cpuid_index().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-12-seanjc@google.com
21 months agoKVM: selftests: Check for _both_ XTILE data and cfg in AMX test
Sean Christopherson [Tue, 14 Jun 2022 20:06:35 +0000 (20:06 +0000)]
KVM: selftests: Check for _both_ XTILE data and cfg in AMX test

Check for _both_ XTILE data and cfg support in the AMX test instead of
checking for _either_ feature.  Practically speaking, no sane CPU or vCPU
will support one but not the other, but the effective "or" behavior is
subtle and technically incorrect.
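
The subtlety in sketch form (XFEATURE_MASK_XTILE covers both the cfg
and data bits; "entry" assumed to be the relevant CPUID entry):

  /* Effective "or": non-zero if *either* XTILE bit is set. */
  bool has_xtile_either = entry->eax & XFEATURE_MASK_XTILE;

  /* Correct "and": both XTILECFG and XTILEDATA must be supported. */
  bool has_xtile_both = (entry->eax & XFEATURE_MASK_XTILE) ==
                        XFEATURE_MASK_XTILE;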

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-11-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() for XSAVES in XSS MSR test
Sean Christopherson [Tue, 14 Jun 2022 20:06:34 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() for XSAVES in XSS MSR test

Use kvm_cpu_has() in the XSS MSR test instead of open coding equivalent
functionality using kvm_get_supported_cpuid_index().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-10-seanjc@google.com
21 months agoKVM: selftests: Drop redundant vcpu_set_cpuid() from PMU selftest
Sean Christopherson [Tue, 14 Jun 2022 20:06:33 +0000 (20:06 +0000)]
KVM: selftests: Drop redundant vcpu_set_cpuid() from PMU selftest

Drop a redundant vcpu_set_cpuid() from the PMU test.  The vCPU's CPUID is
set to KVM's supported CPUID by vm_create_with_one_vcpu(), which was also
true back when the helper was named vm_create_default().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-9-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() to query PDCM in PMU selftest
Sean Christopherson [Tue, 14 Jun 2022 20:06:32 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() to query PDCM in PMU selftest

Use kvm_cpu_has() in the PMU test to query PDCM support instead of open
coding equivalent functionality using kvm_get_supported_cpuid_index().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-8-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() for nested VMX checks
Sean Christopherson [Tue, 14 Jun 2022 20:06:31 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() for nested VMX checks

Use kvm_cpu_has() to check for nested VMX support, and drop the helpers
now that their functionality is trivial to implement.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-7-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() for nested SVM checks
Sean Christopherson [Tue, 14 Jun 2022 20:06:30 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() for nested SVM checks

Use kvm_cpu_has() to check for nested SVM support, and drop the helpers
now that their functionality is trivial to implement.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-6-seanjc@google.com
21 months agoKVM: selftests: Use kvm_cpu_has() in the SEV migration test
Sean Christopherson [Tue, 14 Jun 2022 20:06:29 +0000 (20:06 +0000)]
KVM: selftests: Use kvm_cpu_has() in the SEV migration test

Use kvm_cpu_has() in the SEV migration test instead of open coding
equivalent functionality using kvm_get_supported_cpuid_entry().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-5-seanjc@google.com
21 months agoKVM: selftests: Add framework to query KVM CPUID bits
Sean Christopherson [Tue, 14 Jun 2022 20:06:28 +0000 (20:06 +0000)]
KVM: selftests: Add framework to query KVM CPUID bits

Add X86_FEATURE_* magic in the style of KVM-Unit-Tests' implementation,
where the CPUID function, index, output register, and output bit position
are embedded in the macro value.  Add kvm_cpu_has() to query KVM's
supported CPUID and use it in set_sregs_test, which is the most prolific
user of manual feature querying.
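
A compressed sketch of the encoding (stdint types; the actual selftest
macro may pack the fields differently):

  enum cpuid_reg { CPUID_EAX, CPUID_EBX, CPUID_ECX, CPUID_EDX };

  struct kvm_x86_cpu_feature {
          uint32_t function;
          uint16_t index;
          uint8_t  reg;   /* enum cpuid_reg: which output register */
          uint8_t  bit;   /* bit position within that register */
  };

  /* E.g. XSAVE is CPUID.01H:ECX[bit 26]. */
  #define X86_FEATURE_XSAVE \
          ((struct kvm_x86_cpu_feature){ 0x1, 0, CPUID_ECX, 26 })

  /* kvm_cpu_has(X86_FEATURE_XSAVE) then looks up leaf 0x1, index 0 in
   * KVM's supported CPUID and tests ECX bit 26. */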

Opportunistically rename calc_cr4_feature_bits() to
calc_supported_cr4_feature_bits() to better capture how the CR4 bits are
chosen.

Link: https://lore.kernel.org/all/20210422005626.564163-1-ricarkol@google.com
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-4-seanjc@google.com
21 months agoKVM: selftests: Use CPUID_* instead of X86_FEATURE_* for one-off usage
Sean Christopherson [Tue, 14 Jun 2022 20:06:27 +0000 (20:06 +0000)]
KVM: selftests: Use CPUID_* instead of X86_FEATURE_* for one-off usage

Rename X86_FEATURE_* macros to CPUID_* in various tests to free up the
X86_FEATURE_* names for KVM-Unit-Tests style CPUID automagic where the
function, leaf, register, and bit for the feature is embedded in its
macro value.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-3-seanjc@google.com
21 months agoKVM: selftests: Set KVM's supported CPUID as vCPU's CPUID during recreate
Sean Christopherson [Tue, 14 Jun 2022 20:06:26 +0000 (20:06 +0000)]
KVM: selftests: Set KVM's supported CPUID as vCPU's CPUID during recreate

On x86-64, set KVM's supported CPUID as the vCPU's CPUID when recreating
a VM+vCPU to deduplicate code for state save/restore tests, and to
provide symmetry of sorts with respect to vm_create_with_one_vcpu().  The
extra KVM_SET_CPUID2 call is wasteful for Hyper-V, but ultimately is
nothing more than an expensive nop, and overriding the vCPU's CPUID with
the Hyper-V CPUID information is the only known scenario where a state
save/restore test wouldn't need/want the default CPUID.

Opportunistically use __weak for the default vm_compute_max_gfn(); the
attribute is provided by tools' compiler.h.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220614200707.3315957-2-seanjc@google.com
21 months agoKVM: selftests: Fix filename reporting in guest asserts
Colton Lewis [Wed, 15 Jun 2022 19:31:16 +0000 (19:31 +0000)]
KVM: selftests: Fix filename reporting in guest asserts

Fix filename reporting in guest asserts by ensuring the GUEST_ASSERT
macro records __FILE__ and substituting REPORT_GUEST_ASSERT for many
repetitive calls to TEST_FAIL.

Previously, the filename was reported by using __FILE__ directly in the
selftest, wrongly assuming it would always be the same as where the
assertion failed.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
Reported-by: Ricardo Koller <ricarkol@google.com>
Fixes: 4e18bccc2e5544f0be28fc1c4e6be47a469d6c60
Link: https://lore.kernel.org/r/20220615193116.806312-5-coltonlewis@google.com
[sean: convert more TEST_FAIL => REPORT_GUEST_ASSERT instances]
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: selftests: Write REPORT_GUEST_ASSERT macros to pair with GUEST_ASSERT
Colton Lewis [Wed, 15 Jun 2022 19:31:15 +0000 (19:31 +0000)]
KVM: selftests: Write REPORT_GUEST_ASSERT macros to pair with GUEST_ASSERT

Write REPORT_GUEST_ASSERT macros to pair with GUEST_ASSERT to abstract
and make consistent all guest assertion reporting. Every report
includes an explanatory string, a filename, and a line number.
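
A rough sketch of the pairing (the real macros also forward optional
user-specified arguments):

  /* Guest side: ship the condition string, file, and line in a ucall. */
  #define GUEST_ASSERT(cond)                                            \
          do {                                                          \
                  if (!(cond))                                          \
                          ucall(UCALL_ABORT, 3, #cond, __FILE__,        \
                                __LINE__);                              \
          } while (0)

  /* Host side: unpack and report the same three fields. */
  #define REPORT_GUEST_ASSERT(ucall)                                    \
          TEST_FAIL("%s at %s:%ld",                                     \
                    (const char *)(ucall).args[0],                      \
                    (const char *)(ucall).args[1], (ucall).args[2])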

Signed-off-by: Colton Lewis <coltonlewis@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20220615193116.806312-4-coltonlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: selftests: Increase UCALL_MAX_ARGS to 7
Colton Lewis [Wed, 15 Jun 2022 19:31:14 +0000 (19:31 +0000)]
KVM: selftests: Increase UCALL_MAX_ARGS to 7

Increase UCALL_MAX_ARGS to 7 to allow GUEST_ASSERT_4 to pass 3 builtin
ucall arguments specified in guest_assert_builtin_args plus 4
user-specified arguments.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20220615193116.806312-3-coltonlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: selftests: enumerate GUEST_ASSERT arguments
Colton Lewis [Wed, 15 Jun 2022 19:31:13 +0000 (19:31 +0000)]
KVM: selftests: enumerate GUEST_ASSERT arguments

Enumerate GUEST_ASSERT arguments to avoid magic indices to ucall.args.
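
A sketch of the enumeration (names assumed from
guest_assert_builtin_args):

  enum guest_assert_builtin_args {
          GUEST_ERROR_STRING,
          GUEST_FILE,
          GUEST_LINE,
          GUEST_ASSERT_BUILTIN_NARGS
  };

  /* e.g. uc.args[GUEST_LINE] instead of the magic uc.args[2] */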

Signed-off-by: Colton Lewis <coltonlewis@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20220615193116.806312-2-coltonlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: x86: WARN only once if KVM leaves a dangling userspace I/O request
Sean Christopherson [Mon, 11 Jul 2022 23:27:50 +0000 (23:27 +0000)]
KVM: x86: WARN only once if KVM leaves a dangling userspace I/O request

Change a WARN_ON() into separate WARN_ON_ONCE()s if KVM has an outstanding
PIO or MMIO request without an associated callback, i.e. if KVM queued a
userspace I/O exit but didn't actually exit to userspace before moving
on to something else.  Warning on every KVM_RUN risks spamming the kernel
if KVM gets into a bad state.  Opportunistically split the WARNs so that
it's easier to triage failures when a WARN fires.
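
In sketch form (field names per "struct kvm_vcpu"):

  /* Before: one warning, re-fired on every KVM_RUN in a bad state. */
  WARN_ON(vcpu->arch.pio.count || vcpu->mmio_needed);

  /* After: separate one-shot warnings; which one fires tells the
   * triager whether the dangling request was PIO or MMIO. */
  WARN_ON_ONCE(vcpu->arch.pio.count);
  WARN_ON_ONCE(vcpu->mmio_needed);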

Deliberately do not use KVM_BUG_ON(), i.e. don't kill the VM.  While the
WARN is all but guaranteed to fire if and only if there's a KVM bug, a
dangling I/O request does not present a danger to KVM (that flag is
truly consumed only in a single emulator path), and any such bug is
unlikely to be fatal to the VM (KVM essentially failed to do something it
shouldn't have tried to do in the first place).  In other words, note the
bug, but let the VM keep running.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220711232750.1092012-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: x86: Set error code to segment selector on LLDT/LTR non-canonical #GP
Sean Christopherson [Mon, 11 Jul 2022 23:27:49 +0000 (23:27 +0000)]
KVM: x86: Set error code to segment selector on LLDT/LTR non-canonical #GP

When injecting a #GP on LLDT/LTR due to a non-canonical LDT/TSS base, set
the error code to the selector.  Intel's SDM says nothing about the #GP,
but AMD's APM explicitly states that both LLDT and LTR set the error code
to the selector, not zero.

Note, a non-canonical memory operand on LLDT/LTR does generate a #GP(0),
but the KVM code in question is specific to the base from the descriptor.
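
As a sketch, the emulator change amounts to (surrounding context and
variable names assumed):

  /* "base" is the LDT/TSS base from the descriptor, including the
   * extended base; "err_code" holds the selector. */
  if (emul_is_noncanonical_address(base, ctxt))
          return emulate_gp(ctxt, err_code);  /* was: emulate_gp(ctxt, 0) */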

Fixes: e37a75a13cda ("KVM: x86: Emulator ignores LDTR/TR extended base on LLDT/LTR")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220711232750.1092012-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: x86: Mark TSS busy during LTR emulation _after_ all fault checks
Sean Christopherson [Mon, 11 Jul 2022 23:27:48 +0000 (23:27 +0000)]
KVM: x86: Mark TSS busy during LTR emulation _after_ all fault checks

Wait to mark the TSS as busy during LTR emulation until after all fault
checks for the LTR have passed.  Specifically, don't mark the TSS busy if
the new TSS base is non-canonical.

Opportunistically drop the one-off !seg_desc.PRESENT check for TR as the
only reason for the early check was to avoid marking a !PRESENT TSS as
busy, i.e. the common !PRESENT check is now done before setting the busy bit.

Fixes: e37a75a13cda ("KVM: x86: Emulator ignores LDTR/TR extended base on LLDT/LTR")
Reported-by: syzbot+760a73552f47a8cd0fd9@syzkaller.appspotmail.com
Cc: stable@vger.kernel.org
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Hou Wenlong <houwenlong.hwl@antgroup.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220711232750.1092012-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
21 months agoKVM: x86: Tweak name of MONITOR/MWAIT #UD quirk to make it #UD specific
Sean Christopherson [Mon, 11 Jul 2022 22:57:53 +0000 (22:57 +0000)]
KVM: x86: Tweak name of MONITOR/MWAIT #UD quirk to make it #UD specific

Add a "UD" clause to KVM_X86_QUIRK_MWAIT_NEVER_FAULTS to make it clear
that the quirk only controls the #UD behavior of MONITOR/MWAIT.  KVM
doesn't currently enforce fault checks when MONITOR/MWAIT are supported,
but that could change in the future.  SVM also has a virtualization hole
in that it checks all faults before intercepts, and so "never faults" is
already a lie when running on SVM.

Fixes: bfbcc81bb82c ("KVM: x86: Add a quirk for KVM's "MONITOR/MWAIT are NOPs!" behavior")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220711225753.1073989-4-seanjc@google.com
21 months agoKVM: selftests: Use "a" and "d" to set EAX/EDX for wrmsr_safe()
Vitaly Kuznetsov [Thu, 14 Jul 2022 01:11:15 +0000 (01:11 +0000)]
KVM: selftests: Use "a" and "d" to set EAX/EDX for wrmsr_safe()

Do not use GCC's "A" constraint to load EAX:EDX in wrmsr_safe().  Per
GCC's documentation on x86-specific constraints, "A" will not actually
load a 64-bit value into EAX:EDX on x86-64.

  The a and d registers. This class is used for instructions that return
  double word results in the ax:dx register pair. Single word values will
  be allocated either in ax or dx. For example on i386 the following
  implements rdtsc:

  unsigned long long rdtsc (void)
  {
    unsigned long long tick;
    __asm__ __volatile__("rdtsc":"=A"(tick));
    return tick;
  }

  This is not correct on x86-64 as it would allocate tick in either ax or
  dx. You have to use the following variant instead:

  unsigned long long rdtsc (void)
  {
    unsigned int tickl, tickh;
    __asm__ __volatile__("rdtsc":"=a"(tickl),"=d"(tickh));
    return ((unsigned long long)tickh << 32)|tickl;
  }

Because a u64 fits in a single 64-bit register, using "A" for selftests,
which are 64-bit only, results in GCC loading the value into either RAX
or RDX instead of splitting it across EAX:EDX.

E.g.:

  kvm_exit:             reason MSR_WRITE rip 0x402919 info 0 0
  kvm_msr:              msr_write 40000118 = 0x60000000001 (#GP)
...

With "A":

  48 8b 43 08           mov    0x8(%rbx),%rax
  49 b9 ba da ca ba 0a  movabs $0xabacadaba,%r9
  00 00 00
  4c 8d 15 07 00 00 00  lea    0x7(%rip),%r10        # 402f44 <guest_msr+0x34>
  4c 8d 1d 06 00 00 00  lea    0x6(%rip),%r11        # 402f4a <guest_msr+0x3a>
  0f 30                 wrmsr

With "a"/"d":

  48 8b 53 08             mov    0x8(%rbx),%rdx
  89 d0                   mov    %edx,%eax
  48 c1 ea 20             shr    $0x20,%rdx
  49 b9 ba da ca ba 0a    movabs $0xabacadaba,%r9
  00 00 00
  4c 8d 15 07 00 00 00    lea    0x7(%rip),%r10        # 402fc3 <guest_msr+0xb3>
  4c 8d 1d 06 00 00 00    lea    0x6(%rip),%r11        # 402fc9 <guest_msr+0xb9>
  0f 30                   wrmsr
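
A sketch of the corrected helper (kvm_asm_safe() wires up the exception
fixup and is elided here):

  static inline uint8_t wrmsr_safe(uint32_t msr, uint64_t val)
  {
          /* Split the value across EAX:EDX explicitly; "& -1u" keeps
           * the low 32 bits, the shift supplies the high 32 bits. */
          return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32),
                              "c"(msr));
  }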

Fixes: 3b23054cd3f5 ("KVM: selftests: Add x86-64 support for exception fixup")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html#Machine-Constraints
[sean: use "& -1u", provide GCC blurb and link to documentation]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220714011115.3135828-1-seanjc@google.com
21 months agoKVM: s390: pv: Add kvm_s390_cpus_from_pv to kvm-s390.h and add documentation
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:09 +0000 (15:56 +0200)]
KVM: s390: pv: Add kvm_s390_cpus_from_pv to kvm-s390.h and add documentation

Future changes make it necessary to call this function from pv.c.

While we are at it, let's properly document kvm_s390_cpus_from_pv() and
kvm_s390_cpus_to_pv().

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-9-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-9-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
21 months agoKVM: s390: pv: clear the state without memset
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:08 +0000 (15:56 +0200)]
KVM: s390: pv: clear the state without memset

Do not use memset to clean the whole struct kvm_s390_pv; instead,
explicitly clear the fields that need to be cleared.

Upcoming patches will introduce new fields in the struct kvm_s390_pv
that will not need to be cleared.
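
A sketch of the shape of the change (field names assumed from
"struct kvm_s390_pv"):

  /* Instead of: memset(&kvm->arch.pv, 0, sizeof(kvm->arch.pv)); */
  kvm->arch.pv.handle = 0;
  kvm->arch.pv.guest_len = 0;
  kvm->arch.pv.stor_base = 0;
  kvm->arch.pv.stor_var = NULL;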

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-8-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-8-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
21 months agoKVM: s390: pv: add export before import
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:07 +0000 (15:56 +0200)]
KVM: s390: pv: add export before import

Due to upcoming changes, it will be possible to temporarily have
multiple protected VMs in the same address space, although only one
will actually be active.

In that scenario, it is necessary to perform an export of every page
that is to be imported, since the hardware does not allow a page
belonging to a protected guest to be imported into a different
protected guest.

This also applies to pages that are shared, and thus accessible by the
host.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-7-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-7-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
21 months agoKVM: s390: pv: usage counter instead of flag
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:06 +0000 (15:56 +0200)]
KVM: s390: pv: usage counter instead of flag

Use the new protected_count field as a counter instead of the old
is_protected flag. This will be used in upcoming patches.

Increment the counter when a secure configuration is created, and
decrement it when it is destroyed. Previously the flag was set when the
set secure parameters UVC was performed.
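
In sketch form (call sites and the atomic type are assumptions):

  /* Secure configuration created: */
  atomic_inc(&kvm->mm->context.protected_count);

  /* Secure configuration destroyed: */
  atomic_dec(&kvm->mm->context.protected_count);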

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-6-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-6-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
21 months agoKVM: s390: pv: refactor s390_reset_acc
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:05 +0000 (15:56 +0200)]
KVM: s390: pv: refactor s390_reset_acc

Refactor s390_reset_acc so that it can be reused in upcoming patches.

We don't want to hold all the locks used in a walk_page_range for too
long, and the destroy page UVC does take some time to complete.
Therefore we quickly gather the pages to destroy, and then destroy them
without holding all the locks.
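
The pattern, very roughly (all helper names here are hypothetical, not
the actual s390 code):

  LIST_HEAD(pages_to_destroy);

  /* Phase 1: under the page-table locks, only gather candidates. */
  walk_page_range(gmap->mm, start, end, &gather_pages_ops,
                  &pages_to_destroy);

  /* Phase 2: locks dropped; the slow destroy page UVC runs here. */
  list_for_each_entry_safe(page, next, &pages_to_destroy, lru) {
          if (fatal_signal_pending(current))
                  return -EINTR;    /* optional early out, see below */
          destroy_page_uvc(page);   /* hypothetical UVC wrapper */
  }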

The new refactored function optionally allows returning early, without
completing the walk, if a fatal signal is pending (returning an
appropriate error code). Two wrappers are provided to call the new
function.

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-5-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-5-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
21 months agoKVM: s390: pv: handle secure storage exceptions for normal guests
Claudio Imbrenda [Tue, 28 Jun 2022 13:56:04 +0000 (15:56 +0200)]
KVM: s390: pv: handle secure storage exceptions for normal guests

With upcoming patches, normal guests might touch secure pages.

This patch extends the existing exception handler to also convert the
pages to non-secure when the exception is triggered by a normal guest.

This can happen for example when a secure guest reboots; the first
stage of a secure guest is non-secure, and in general a secure guest
can reboot into non-secure mode.

If the secure memory of the previous boot has not been cleared up
completely yet (which will be allowed to happen in an upcoming patch),
a non-secure guest might touch secure memory, which will need to be
handled properly.

This means that gmap faults must be handled and not cause termination
of the process. The handling is the same as for userspace accesses: it's
enough to translate the gmap address to a user address and then let the
normal user fault code handle it.
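
A sketch of that path (gmap API per arch/s390/mm/gmap.c; fault flags
and locals assumed):

  /* Translate the gmap (guest) address to its userspace address... */
  unsigned long vmaddr = __gmap_translate(gmap, gaddr);

  if (IS_ERR_VALUE(vmaddr))
          return vmaddr;

  /* ...then let the normal user fault path resolve the access. */
  rc = fixup_user_fault(gmap->mm, vmaddr, fault_flags, &unlocked);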

Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20220628135619.32410-4-imbrenda@linux.ibm.com
Message-Id: <20220628135619.32410-4-imbrenda@linux.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>