linux-2.6-microblaze.git
3 years agoKVM: x86/xen: Add event channel interrupt vector upcall
David Woodhouse [Wed, 9 Dec 2020 20:08:30 +0000 (20:08 +0000)]
KVM: x86/xen: Add event channel interrupt vector upcall

It turns out that we can't handle event channels *entirely* in userspace
by delivering them as ExtINT, because KVM is a bit picky about when it
accepts ExtINT interrupts from a legacy PIC. The in-kernel local APIC
has to have LVT0 configured in APIC_MODE_EXTINT and unmasked, which
isn't necessarily the case for Xen guests especially on secondary CPUs.

To cope with this, add kvm_xen_get_interrupt() which checks the
evtchn_upcall_pending field in the Xen vcpu_info, and delivers the Xen
upcall vector (configured by KVM_XEN_ATTR_TYPE_UPCALL_VECTOR) if it's
set, regardless of the LAPIC LVT0 configuration. This gives us the minimum
support we need for a completely userspace-based implementation of event
channels.

This does mean that vcpu_enter_guest() needs to check for the
evtchn_upcall_pending flag being set, because it can't rely on someone
having set KVM_REQ_EVENT unless we were to add some way for userspace to
do so manually.
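
For illustration only, a minimal sketch of that check (the struct is
trimmed to the relevant fields of the Xen public vcpu_info layout; the
helper name and its surroundings are simplified assumptions, not the
actual patch):

  /* Trimmed from the Xen public headers (xen/interface/xen.h). */
  struct vcpu_info {
          uint8_t evtchn_upcall_pending;
          uint8_t evtchn_upcall_mask;
          /* ... */
  };

  /* Deliver the vector configured via KVM_XEN_ATTR_TYPE_UPCALL_VECTOR
   * whenever this returns true, regardless of LVT0. */
  static bool xen_upcall_pending(const struct vcpu_info *vi)
  {
          return vi->evtchn_upcall_pending != 0;
  }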

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: register vcpu time info region
Joao Martins [Mon, 23 Jul 2018 15:20:57 +0000 (11:20 -0400)]
KVM: x86/xen: register vcpu time info region

Allow the emulated Xen guest to register a secondary region for vcpu
time information. On Xen guests this is mapped to userspace, which
allows the vDSO gettimeofday() to work.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: setup pvclock updates
Joao Martins [Fri, 1 Feb 2019 18:01:45 +0000 (13:01 -0500)]
KVM: x86/xen: setup pvclock updates

Parameterise kvm_setup_pvclock_page() a little bit so that it can be
invoked for different gfn_to_hva_cache structures, and with different
offsets. Then we can invoke it for the normal KVM pvclock and also for
the Xen one in the vcpu_info.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: register vcpu info
Joao Martins [Fri, 29 Jun 2018 14:52:52 +0000 (10:52 -0400)]
KVM: x86/xen: register vcpu info

The vcpu_info structure supersedes the per-vCPU area of the shared_info
page, and the guest vCPUs will use it instead.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: Add KVM_XEN_VCPU_SET_ATTR/KVM_XEN_VCPU_GET_ATTR
David Woodhouse [Tue, 2 Feb 2021 16:53:25 +0000 (16:53 +0000)]
KVM: x86/xen: Add KVM_XEN_VCPU_SET_ATTR/KVM_XEN_VCPU_GET_ATTR

This will be used for per-vCPU setup such as runstate and vcpu_info.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: update wallclock region
Joao Martins [Thu, 28 Jun 2018 19:06:43 +0000 (15:06 -0400)]
KVM: x86/xen: update wallclock region

Wallclock on Xen is written in the shared_info page.

To that purpose, export kvm_write_wall_clock() and pass on the GPA of
its location to populate the shared_info wall clock data.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoxen: add wc_sec_hi to struct shared_info
David Woodhouse [Thu, 3 Dec 2020 21:02:23 +0000 (21:02 +0000)]
xen: add wc_sec_hi to struct shared_info

Xen added this in 2015 (Xen 4.6). On x86_64 and Arm it fills what was
previously a 32-bit hole in the generic shared_info structure; on
i386 it had to go at the end of struct arch_shared_info.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: register shared_info page
Joao Martins [Sat, 16 Jun 2018 01:17:14 +0000 (21:17 -0400)]
KVM: x86/xen: register shared_info page

Add KVM_XEN_ATTR_TYPE_SHARED_INFO to let the hypervisor know where the
guest's shared_info page is.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: add definitions of compat_shared_info, compat_vcpu_info
David Woodhouse [Thu, 3 Dec 2020 18:45:22 +0000 (18:45 +0000)]
KVM: x86/xen: add definitions of compat_shared_info, compat_vcpu_info

There aren't a lot of differences for the things that the kernel needs
to care about, but there are a few.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: latch long_mode when hypercall page is set up
David Woodhouse [Thu, 3 Dec 2020 16:20:32 +0000 (16:20 +0000)]
KVM: x86/xen: latch long_mode when hypercall page is set up

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: add KVM_XEN_HVM_SET_ATTR/KVM_XEN_HVM_GET_ATTR
Joao Martins [Thu, 3 Dec 2020 15:52:25 +0000 (15:52 +0000)]
KVM: x86/xen: add KVM_XEN_HVM_SET_ATTR/KVM_XEN_HVM_GET_ATTR

This will be used to set up shared info pages etc.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: Add kvm_xen_enabled static key
David Woodhouse [Tue, 2 Feb 2021 15:48:05 +0000 (15:48 +0000)]
KVM: x86/xen: Add kvm_xen_enabled static key

The code paths for Xen support are all fairly lightweight but if we hide
them behind this, they're even *more* lightweight for any system which
isn't actually hosting Xen guests.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: Move KVM_XEN_HVM_CONFIG handling to xen.c
David Woodhouse [Tue, 2 Feb 2021 13:19:35 +0000 (13:19 +0000)]
KVM: x86/xen: Move KVM_XEN_HVM_CONFIG handling to xen.c

This is already more complex than the simple memcpy it originally had.
Move it to xen.c with the rest of the Xen support.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: Fix coexistence of Xen and Hyper-V hypercalls
Joao Martins [Wed, 13 Jun 2018 13:55:44 +0000 (09:55 -0400)]
KVM: x86/xen: Fix coexistence of Xen and Hyper-V hypercalls

Disambiguate Xen vs. Hyper-V calls by adding 'orl $0x80000000, %eax'
at the start of the Hyper-V hypercall page when Xen hypercalls are
also enabled.

That bit is reserved in the Hyper-V ABI, and those hypercall numbers
will never be used by Xen (because it does precisely the same trick).

Switch to using kvm_vcpu_write_guest() while we're at it, instead of
open-coding it.
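
A sketch of the prefix that ends up at the start of the Hyper-V
hypercall page when Xen hypercalls are enabled (encoding only; the
kvm_vcpu_write_guest() plumbing is omitted):

  /*
   * "or $0x80000000, %eax": opcode 0x0d is OR EAX, imm32, followed by
   * the little-endian immediate. Bit 31 is reserved in the Hyper-V ABI,
   * so any hypercall number with it set is routed to the Xen handler.
   */
  static const uint8_t xen_disambiguation_prefix[] = {
          0x0d, 0x00, 0x00, 0x00, 0x80,
  };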

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: intercept xen hypercalls if enabled
Joao Martins [Wed, 13 Jun 2018 13:55:44 +0000 (09:55 -0400)]
KVM: x86/xen: intercept xen hypercalls if enabled

Add a new exit reason for emulator to handle Xen hypercalls.

Since this means KVM owns the ABI, dispense with the facility for the
VMM to provide its own copy of the hypercall pages; just fill them in
directly using VMCALL/VMMCALL as we do for the Hyper-V hypercall page.

This behaviour is enabled by a new INTERCEPT_HCALL flag in the
KVM_XEN_HVM_CONFIG ioctl structure, and advertised by the same flag
being returned from the KVM_CAP_XEN_HVM check.

Rename xen_hvm_config() to kvm_xen_write_hypercall_page() and move it
to the nascent xen.c while we're at it, and add a test case.
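
Roughly, each hypercall slot written by kvm_xen_write_hypercall_page()
looks like the sketch below (opcode bytes only; the real code sizes the
slots and patches in the vendor-appropriate VMCALL or VMMCALL):

  /* Fill one hypercall slot: mov $<nr>, %eax; vmcall; ret */
  static void fill_xen_hypercall_slot(uint8_t insn[9], uint32_t nr)
  {
          insn[0] = 0xb8;                     /* mov imm32, %eax */
          memcpy(&insn[1], &nr, sizeof(nr));  /* hypercall number */
          insn[5] = 0x0f;                     /* vmcall = 0f 01 c1 */
          insn[6] = 0x01;                     /* (vmmcall = 0f 01 d9 on AMD) */
          insn[7] = 0xc1;
          insn[8] = 0xc3;                     /* ret */
  }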

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: Fix __user pointer handling for hypercall page installation
David Woodhouse [Tue, 2 Feb 2021 11:05:10 +0000 (11:05 +0000)]
KVM: x86/xen: Fix __user pointer handling for hypercall page installation

The address we give to memdup_user() isn't correctly tagged as __user.
This is harmless enough as it's a one-off use and we're doing exactly
the right thing, but fix it anyway to shut the checker up. Otherwise
it'll whine when the (now legacy) code gets moved around in a later
patch.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/xen: fix Xen hypercall page msr handling
Joao Martins [Wed, 13 Jun 2018 10:10:37 +0000 (06:10 -0400)]
KVM: x86/xen: fix Xen hypercall page msr handling

Xen usually places its MSR at 0x40000000 or 0x40000200 depending on
whether it is running in viridian mode or not. Note that this is not
ABI guaranteed, so it is possible for Xen to advertise the MSR some
place else.

Given the way xen_hvm_config() is handled, if the former address is
selected, this will conflict with Hyper-V's MSR
(HV_X64_MSR_GUEST_OS_ID) which unconditionally uses the same address.

Given that the MSR location is arbitrary, move the xen_hvm_config()
handling to the top of kvm_set_msr_common() before falling through.

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
3 years agoKVM: x86/mmu: Allow parallel page faults for the TDP MMU
Ben Gardon [Tue, 2 Feb 2021 18:57:29 +0000 (10:57 -0800)]
KVM: x86/mmu: Allow parallel page faults for the TDP MMU

Make the last few changes necessary to enable the TDP MMU to handle page
faults in parallel while holding the mmu_lock in read mode.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-24-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Mark SPTEs in disconnected pages as removed
Ben Gardon [Tue, 2 Feb 2021 18:57:28 +0000 (10:57 -0800)]
KVM: x86/mmu: Mark SPTEs in disconnected pages as removed

When clearing TDP MMU pages that have been disconnected from the paging
structure root, set the SPTEs to a special non-present value which will
not be overwritten by other threads. This is needed to prevent races in
which a thread is clearing a disconnected page table, but another thread
has already acquired a pointer to that memory and installs a mapping in
an already cleared entry. This can lead to memory leaks and accounting
errors.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-23-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Flush TLBs after zap in TDP MMU PF handler
Ben Gardon [Tue, 2 Feb 2021 18:57:27 +0000 (10:57 -0800)]
KVM: x86/mmu: Flush TLBs after zap in TDP MMU PF handler

When the TDP MMU is allowed to handle page faults in parallel there is
the possibility of a race where an SPTE is cleared and then immediately
replaced with a present SPTE pointing to a different PFN, before the
TLBs can be flushed. This race would violate architectural specs. Ensure
that the TLBs are flushed properly before other threads are allowed to
install any present value for the SPTE.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-22-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Use atomic ops to set SPTEs in TDP MMU map
Ben Gardon [Tue, 2 Feb 2021 18:57:26 +0000 (10:57 -0800)]
KVM: x86/mmu: Use atomic ops to set SPTEs in TDP MMU map

To prepare for handling page faults in parallel, change the TDP MMU
page fault handler to use atomic operations to set SPTEs so that changes
are not lost if multiple threads attempt to modify the same SPTE.
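
A minimal sketch of the atomic update, assuming a helper of roughly this
shape (the real function also does the old/new SPTE bookkeeping):

  /*
   * Install new_spte only if the SPTE still holds the value this thread
   * observed earlier; return false if another thread raced and won.
   */
  static bool tdp_mmu_set_spte_atomic(u64 *sptep, u64 old_spte, u64 new_spte)
  {
          return cmpxchg64(sptep, old_spte, new_spte) == old_spte;
  }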

Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-21-bgardon@google.com>
[Document new locking rules. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Factor out functions to add/remove TDP MMU pages
Ben Gardon [Tue, 2 Feb 2021 18:57:25 +0000 (10:57 -0800)]
KVM: x86/mmu: Factor out functions to add/remove TDP MMU pages

Move the work of adding and removing TDP MMU pages to/from "secondary"
data structures to helper functions. These functions will be built on in
future commits to enable MMU operations to proceed (mostly) in parallel.

No functional change expected.

Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-20-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Use an rwlock for the x86 MMU
Ben Gardon [Tue, 2 Feb 2021 18:57:24 +0000 (10:57 -0800)]
KVM: x86/mmu: Use an rwlock for the x86 MMU

Add a read / write lock to be used in place of the MMU spinlock on x86.
The rwlock will enable the TDP MMU to handle page faults and other
operations in parallel in future commits.
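
Conceptually the locking then looks like this (a sketch; the real code
hides the choice behind helpers so the legacy MMU keeps taking the lock
exclusively):

  /* TDP MMU page faults can run concurrently with each other. */
  read_lock(&kvm->mmu_lock);
  /* ... walk the paging structure, install SPTEs atomically ... */
  read_unlock(&kvm->mmu_lock);

  /* Structural changes such as zapping still take it exclusively. */
  write_lock(&kvm->mmu_lock);
  /* ... */
  write_unlock(&kvm->mmu_lock);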

Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-19-bgardon@google.com>
[Introduce virt/kvm/mmu_lock.h - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agosched: Add cond_resched_rwlock
Ben Gardon [Tue, 2 Feb 2021 18:57:14 +0000 (10:57 -0800)]
sched: Add cond_resched_rwlock

Safely rescheduling while holding a spin lock is essential for keeping
long running kernel operations running smoothly. Add the facility to
cond_resched rwlocks.
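
Usage mirrors cond_resched_lock(); a sketch of the read-side variant
(the write-side cond_resched_rwlock_write() is analogous; "more_work"
is a placeholder):

  read_lock(&kvm->mmu_lock);
  while (more_work) {
          /* Reschedules, briefly dropping the lock, only when needed. */
          cond_resched_rwlock_read(&kvm->mmu_lock);
          /* ... process one entry ... */
  }
  read_unlock(&kvm->mmu_lock);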

CC: Ingo Molnar <mingo@redhat.com>
CC: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-9-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agosched: Add needbreak for rwlocks
Ben Gardon [Tue, 2 Feb 2021 18:57:13 +0000 (10:57 -0800)]
sched: Add needbreak for rwlocks

Contention awareness while holding a spin lock is essential for reducing
latency when long running kernel operations can hold that lock. Add the
same contention detection interface for read/write spin locks.

CC: Ingo Molnar <mingo@redhat.com>
CC: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-8-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agolocking/rwlocks: Add contention detection for rwlocks
Ben Gardon [Tue, 2 Feb 2021 18:57:12 +0000 (10:57 -0800)]
locking/rwlocks: Add contention detection for rwlocks

rwlocks do not currently have any facility to detect contention
like spinlocks do. In order to allow users of rwlocks to better manage
latency, add contention detection for queued rwlocks.

CC: Ingo Molnar <mingo@redhat.com>
CC: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-7-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Protect TDP MMU page table memory with RCU
Ben Gardon [Tue, 2 Feb 2021 18:57:23 +0000 (10:57 -0800)]
KVM: x86/mmu: Protect TDP MMU page table memory with RCU

In order to enable concurrent modifications to the paging structures in
the TDP MMU, threads must be able to safely remove pages of page table
memory while other threads are traversing the same memory. To ensure
threads do not access PT memory after it is freed, protect PT memory
with RCU.

Protecting concurrent accesses to page table memory from use-after-free
bugs could also have been accomplished using
walk_shadow_page_lockless_begin/end() and READING_SHADOW_PAGE_TABLES,
coupled with the barriers in a TLB flush. The use of RCU for this case
has several distinct advantages over that approach.
1. Disabling interrupts for long running operations is not desirable.
   Future commits will allow operations besides page faults to operate
   without the exclusive protection of the MMU lock and those operations
   are too long to disable interrupts for their duration.
2. The use of RCU here avoids long blocking / spinning operations in
   performance critical paths. By freeing memory with an asynchronous
   RCU API we avoid the longer wait times TLB flushes experience when
   overlapping with a thread in walk_shadow_page_lockless_begin/end().
3. RCU provides a separation of concerns when removing memory from the
   paging structure. Because the RCU callback to free memory can be
   scheduled immediately after a TLB flush, there's no need for the
   thread to manually free a queue of pages later, as commit_zap_pages
   does.
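
A rough sketch of the resulting pattern (the names of the kvm_mmu_page
member and the RCU callback are assumptions here):

  /* Readers walk page table memory inside an RCU read-side section. */
  rcu_read_lock();
  /* ... read SPTEs via rcu_dereference() ... */
  rcu_read_unlock();

  /* Writers disconnect the page table, flush TLBs, then free it
   * asynchronously instead of keeping a deferred-free list. */
  call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);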

Fixes: 95fb5b0258b7 ("kvm: x86/mmu: Support MMIO in the TDP MMU")
Reviewed-by: Peter Feiner <pfeiner@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-18-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Clear dirtied pages mask bit before early break
Ben Gardon [Tue, 2 Feb 2021 18:57:22 +0000 (10:57 -0800)]
KVM: x86/mmu: Clear dirtied pages mask bit before early break

In clear_dirty_pt_masked, the loop is intended to exit early after
processing each of the GFNs with corresponding bits set in mask. This
does not work as intended if another thread has already cleared the
dirty bit or writable bit on the SPTE. In that case, the loop skips to
the next iteration early and the bit in mask is not cleared, so the
loop can never exit early and proceeds uselessly. Move the unsetting of
the mask bit before the check for a no-op SPTE change.

Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-17-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Skip no-op changes in TDP MMU functions
Ben Gardon [Tue, 2 Feb 2021 18:57:21 +0000 (10:57 -0800)]
KVM: x86/mmu: Skip no-op changes in TDP MMU functions

Skip setting SPTEs if no change is expected.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-16-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Yield in TDP MMU iter even if no SPTEs changed
Ben Gardon [Tue, 2 Feb 2021 18:57:20 +0000 (10:57 -0800)]
KVM: x86/mmu: Yield in TDP MMU iter even if no SPTEs changed

Given certain conditions, some TDP MMU functions may not yield
reliably / frequently enough. For example, if a paging structure was
very large but had few, if any, writable entries, wrprot_gfn_range
could traverse many entries before finding a writable entry and
yielding, because the check for yielding only happens after an SPTE is
modified.

Fix this issue by moving the yield to the beginning of the loop.

Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-15-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter
Ben Gardon [Tue, 2 Feb 2021 18:57:19 +0000 (10:57 -0800)]
KVM: x86/mmu: Ensure forward progress when yielding in TDP MMU iter

In some functions the TDP iter risks not making forward progress if two
threads livelock yielding to one another. This is possible if two threads
are trying to execute wrprot_gfn_range. Each could write protect an entry
and then yield. This would reset the tdp_iter's walk over the paging
structure and the loop would end up repeating the same entry over and
over, preventing either thread from making forward progress.

Fix this issue by only yielding if the loop has made forward progress
since the last yield.

Fixes: a6a0b05da9f3 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
Reviewed-by: Peter Feiner <pfeiner@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-14-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Rename goal_gfn to next_last_level_gfn
Ben Gardon [Tue, 2 Feb 2021 18:57:18 +0000 (10:57 -0800)]
KVM: x86/mmu: Rename goal_gfn to next_last_level_gfn

The goal_gfn field in tdp_iter can be misleading as it implies that it
is the iterator's final goal. It is really a target for the lowest gfn
mapped by the leaf level SPTE the iterator will traverse towards. Change
the field's name to be more precise.

Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-13-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched
Ben Gardon [Tue, 2 Feb 2021 18:57:17 +0000 (10:57 -0800)]
KVM: x86/mmu: Merge flush and non-flush tdp_mmu_iter_cond_resched

The flushing and non-flushing variants of tdp_mmu_iter_cond_resched have
almost identical implementations. Merge the two functions and add a
flush parameter.

Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-12-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Fix braces in kvm_recover_nx_lpages
Ben Gardon [Tue, 2 Feb 2021 18:57:15 +0000 (10:57 -0800)]
KVM: x86/mmu: Fix braces in kvm_recover_nx_lpages

No functional change intended.

Fixes: 29cf0f5007a2 ("kvm: x86/mmu: NX largepage recovery for TDP MMU")
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-10-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Factor out handling of removed page tables
Ben Gardon [Tue, 2 Feb 2021 18:57:11 +0000 (10:57 -0800)]
KVM: x86/mmu: Factor out handling of removed page tables

Factor out the code to handle a disconnected subtree of the TDP paging
structure from the code to handle the change to an individual SPTE.
Future commits will build on this to allow asynchronous page freeing.

No functional change intended.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-6-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Don't redundantly clear TDP MMU pt memory
Ben Gardon [Tue, 2 Feb 2021 18:57:10 +0000 (10:57 -0800)]
KVM: x86/mmu: Don't redundantly clear TDP MMU pt memory

The KVM MMU caches already guarantee that shadow page table memory will
be zeroed, so there is no reason to re-zero the page in the TDP MMU page
fault handler.

No functional change intended.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-5-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Add lockdep when setting a TDP MMU SPTE
Ben Gardon [Tue, 2 Feb 2021 18:57:09 +0000 (10:57 -0800)]
KVM: x86/mmu: Add lockdep when setting a TDP MMU SPTE

Add lockdep to __tdp_mmu_set_spte to ensure that SPTEs are only modified
under the MMU lock.

No functional change intended.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-4-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Add comment on __tdp_mmu_set_spte
Ben Gardon [Tue, 2 Feb 2021 18:57:08 +0000 (10:57 -0800)]
KVM: x86/mmu: Add comment on __tdp_mmu_set_spte

__tdp_mmu_set_spte is a very important function in the TDP MMU which
already accepts several arguments and will take more in future commits.
To offset this complexity, add a comment to the function describing each
of the arguments.

No functional change intended.

Reviewed-by: Peter Feiner <pfeiner@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-3-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: change TDP MMU yield function returns to match cond_resched
Ben Gardon [Tue, 2 Feb 2021 18:57:07 +0000 (10:57 -0800)]
KVM: x86/mmu: change TDP MMU yield function returns to match cond_resched

Currently the TDP MMU yield / cond_resched functions either return
nothing or return true if the TLBs were not flushed. These are confusing
semantics, especially when making control flow decisions in calling
functions.

To clean things up, change both functions to have the same
return value semantics as cond_resched: true if the thread yielded,
false if it did not. If the function yielded in the _flush_ version,
then the TLBs will have been flushed.
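
With the cond_resched-like convention, callers read naturally; a sketch
(iteration macro and arguments abbreviated):

  tdp_root_for_each_pte(iter, root, start, end) {
          /*
           * true == the walk yielded (and, for the _flush_ variant, the
           * TLBs were flushed); the iterator was restarted, so move on.
           */
          if (tdp_mmu_iter_cond_resched(kvm, &iter, false))
                  continue;
          /* ... modify the SPTE ... */
  }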

Reviewed-by: Peter Feiner <pfeiner@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-2-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86: move kvm_inject_gp up from kvm_set_xcr to callers
Paolo Bonzini [Mon, 14 Dec 2020 12:49:54 +0000 (07:49 -0500)]
KVM: x86: move kvm_inject_gp up from kvm_set_xcr to callers

Push the injection of #GP up to the callers, so that they can just use
kvm_complete_insn_gp.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: cleanup DR6/DR7 reserved bits checks
Paolo Bonzini [Fri, 13 Nov 2020 13:31:09 +0000 (08:31 -0500)]
KVM: cleanup DR6/DR7 reserved bits checks

kvm_dr6_valid and kvm_dr7_valid check that bits 63:32 are zero.  Using
them makes it easier to review the code for inconsistencies.
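
For reference, the helpers amount to something like this sketch:

  static inline bool kvm_dr6_valid(u64 data)
  {
          /* Bits 63:32 of DR6 are reserved and must be zero. */
          return !(data >> 32);
  }

  static inline bool kvm_dr7_valid(u64 data)
  {
          /* Bits 63:32 of DR7 are reserved and must be zero. */
          return !(data >> 32);
  }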

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: move EXIT_FASTPATH_REENTER_GUEST to common code
Paolo Bonzini [Tue, 2 Feb 2021 15:44:23 +0000 (10:44 -0500)]
KVM: move EXIT_FASTPATH_REENTER_GUEST to common code

Now that KVM is using static calls, calling vmx_vcpu_run and
vmx_sync_pir_to_irr no longer incurs the cost of a retpoline.

Therefore there is no need anymore to handle EXIT_FASTPATH_REENTER_GUEST
in vendor code.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoselftest: kvm: x86: test KVM_GET_CPUID2 and guest visible CPUIDs against KVM_GET_SUPP...
Vitaly Kuznetsov [Fri, 29 Jan 2021 16:18:21 +0000 (17:18 +0100)]
selftest: kvm: x86: test KVM_GET_CPUID2 and guest visible CPUIDs against KVM_GET_SUPPORTED_CPUID

Commit 181f494888d5 ("KVM: x86: fix CPUID entries returned by
KVM_GET_CPUID2 ioctl") revealed that we're not testing KVM_GET_CPUID2
ioctl at all. Add a test for it and also check that from inside the guest
visible CPUIDs are equal to it's output.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210129161821.74635-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Add '__func__' in rmap_printk()
Stephen Zhang [Wed, 27 Jan 2021 02:08:45 +0000 (10:08 +0800)]
KVM: x86/mmu: Add '__func__' in rmap_printk()

Given the common pattern:

rmap_printk("%s:"..., __func__,...)

we could improve this by adding '__func__' in rmap_printk().
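
One plausible shape of the change (the exact printk wrapper used by the
MMU debug macros may differ):

  /* before */
  rmap_printk("%s: rmap %p\n", __func__, rmap);

  /* after: the macro supplies __func__ itself */
  #define rmap_printk(fmt, args...) \
          do { if (dbg) printk("%s: " fmt, __func__, ## args); } while (0)

  rmap_printk("rmap %p\n", rmap);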

Signed-off-by: Stephen Zhang <stephenzhangzsd@gmail.com>
Message-Id: <1611713325-3591-1-git-send-email-stephenzhangzsd@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: Replace hard-coded value with #define
Krish Sadhukhan [Wed, 3 Feb 2021 01:28:40 +0000 (20:28 -0500)]
KVM: SVM: Replace hard-coded value with #define

Replace the hard-coded value for bit 1 in EFLAGS with the available
#define.
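
Bit 1 of EFLAGS is the architecturally always-one bit, for which
X86_EFLAGS_FIXED (0x00000002) already exists; a sketch of the kind of
change involved (the exact site, the VMCB save-area init, is assumed):

  /* before */
  save->rflags = 2;

  /* after */
  save->rflags = X86_EFLAGS_FIXED;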

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210203012842.101447-2-krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: use .prepare_guest_switch() to handle CPU register save/setup
Michael Roth [Tue, 2 Feb 2021 19:01:26 +0000 (13:01 -0600)]
KVM: SVM: use .prepare_guest_switch() to handle CPU register save/setup

Currently we save host state like user-visible host MSRs, and do some
initial guest register setup for MSR_TSC_AUX and MSR_AMD64_TSC_RATIO
in svm_vcpu_load(). Defer this until just before we enter the guest by
moving the handling to kvm_x86_ops.prepare_guest_switch() similarly to
how it is done for the VMX implementation.

Additionally, since handling of saving/restoring host user MSRs is the
same both with/without SEV-ES enabled, move that handling to common
code.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210202190126.2185715-4-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: remove uneeded fields from host_save_users_msrs
Michael Roth [Tue, 2 Feb 2021 19:01:25 +0000 (13:01 -0600)]
KVM: SVM: remove uneeded fields from host_save_users_msrs

Now that the set of host user MSRs that need to be individually
saved/restored are the same with/without SEV-ES, we can drop the
.sev_es_restored flag and just iterate through the list unconditionally
for both cases. A subsequent patch can then move these loops to a
common path.

Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210202190126.2185715-3-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: use vmsave/vmload for saving/restoring additional host state
Michael Roth [Tue, 2 Feb 2021 19:01:24 +0000 (13:01 -0600)]
KVM: SVM: use vmsave/vmload for saving/restoring additional host state

Using a guest workload which simply issues 'hlt' in a tight loop to
generate VMEXITs, it was observed (on a recent EPYC processor) that a
significant amount of the VMEXIT overhead measured on the host was the
result of MSR reads/writes in svm_vcpu_load/svm_vcpu_put according to
perf:

  67.49%--kvm_arch_vcpu_ioctl_run
          |
          |--23.13%--vcpu_put
          |          kvm_arch_vcpu_put
          |          |
          |          |--21.31%--native_write_msr
          |          |
          |           --1.27%--svm_set_cr4
          |
          |--16.11%--vcpu_load
          |          |
          |           --15.58%--kvm_arch_vcpu_load
          |                     |
          |                     |--13.97%--svm_set_cr4
          |                     |          |
          |                     |          |--12.64%--native_read_msr

Most of these MSRs relate to 'syscall'/'sysenter' and segment bases, and
can be saved/restored using 'vmsave'/'vmload' instructions rather than
explicit MSR reads/writes. In doing so there is a significant reduction
in the svm_vcpu_load/svm_vcpu_put overhead measured for the above
workload:

  50.92%--kvm_arch_vcpu_ioctl_run
          |
          |--19.28%--disable_nmi_singlestep
          |
          |--13.68%--vcpu_load
          |          kvm_arch_vcpu_load
          |          |
          |          |--9.19%--svm_set_cr4
          |          |          |
          |          |           --6.44%--native_read_msr
          |          |
          |           --3.55%--native_write_msr
          |
          |--6.05%--kvm_inject_nmi
          |--2.80%--kvm_sev_es_mmio_read
          |--2.19%--vcpu_put
          |          |
          |           --1.25%--kvm_arch_vcpu_put
          |                     native_write_msr

Quantifying this further, if we look at the raw cycle counts for a
normal iteration of the above workload (according to 'rdtscp'),
kvm_arch_vcpu_ioctl_run() takes ~4600 cycles from start to finish with
the current behavior. Using 'vmsave'/'vmload', this is reduced to
~2800 cycles, a savings of 39%.

While this approach doesn't seem to manifest in any noticeable
improvement for more realistic workloads like UnixBench, netperf, and
kernel builds, likely due to their exit paths generally involving IO
with comparatively high latencies, it does improve overall overhead
of KVM_RUN significantly, which may still be noticeable for certain
situations. It also simplifies some aspects of the code.

With this change, explicit save/restore is no longer needed for the
following host MSRs, since they are documented[1] as being part of the
VMCB State Save Area:

  MSR_STAR, MSR_LSTAR, MSR_CSTAR,
  MSR_SYSCALL_MASK, MSR_KERNEL_GS_BASE,
  MSR_IA32_SYSENTER_CS,
  MSR_IA32_SYSENTER_ESP,
  MSR_IA32_SYSENTER_EIP,
  MSR_FS_BASE, MSR_GS_BASE

and only the following MSR needs individual handling in
svm_vcpu_put/svm_vcpu_load:

  MSR_TSC_AUX

We could drop the host_save_user_msrs array/loop and instead handle
MSR read/write of MSR_TSC_AUX directly, but we leave that for now as
a potential follow-up.

Since 'vmsave'/'vmload' also handles the LDTR and FS/GS segment
registers (and associated hidden state)[2], some of the code
previously used to handle this is no longer needed, so we drop it
as well.

The first public release of the SVM spec[3] also documents the same
handling for the host state in question, so we make these changes
unconditionally.

Also worth noting is that we 'vmsave' to the same page that is
subsequently used by 'vmrun' to record some host additional state. This
is okay, since, in accordance with the spec[2], the additional state
written to the page by 'vmrun' does not overwrite any fields written by
'vmsave'. This has also been confirmed through testing (for the above
CPU, at least).

[1] AMD64 Architecture Programmer's Manual, Rev 3.33, Volume 2, Appendix B, Table B-2
[2] AMD64 Architecture Programmer's Manual, Rev 3.31, Volume 3, Chapter 4, VMSAVE/VMLOAD
[3] Secure Virtual Machine Architecture Reference Manual, Rev 3.01

Suggested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
Message-Id: <20210202190126.2185715-2-michael.roth@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions
Sean Christopherson [Thu, 31 Dec 2020 00:27:00 +0000 (16:27 -0800)]
KVM: SVM: Use asm goto to handle unexpected #UD on SVM instructions

Add svm_asm*() macros, a la the existing vmx_asm*() macros, to handle
faults on SVM instructions instead of using the generic __ex(), a.k.a.
__kvm_handle_fault_on_reboot().  Using asm goto generates slightly
better code as it eliminates the in-line JMP+CALL sequences that are
needed by __kvm_handle_fault_on_reboot() to avoid triggering BUG()
from fixup (which generates bad stack traces).

Using SVM specific macros also drops the last user of __ex() and the
last asm linkage to kvm_spurious_fault(), and adds a helper for
VMSAVE, which may gain an addition call site in the future (as part
of optimizing the SVM context switching).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: VMX: Use the kernel's version of VMXOFF
Sean Christopherson [Thu, 31 Dec 2020 00:26:59 +0000 (16:26 -0800)]
KVM: VMX: Use the kernel's version of VMXOFF

Drop kvm_cpu_vmxoff() in favor of the kernel's cpu_vmxoff().  Modify the
latter to return -EIO on fault so that KVM can invoke
kvm_spurious_fault() when appropriate.  In addition to the obvious code
reuse, dropping kvm_cpu_vmxoff() also eliminates VMX's last usage of the
__ex()/__kvm_handle_fault_on_reboot() macros, thus helping pave the way
toward dropping them entirely.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows
Sean Christopherson [Thu, 31 Dec 2020 00:26:58 +0000 (16:26 -0800)]
KVM: VMX: Move Intel PT shenanigans out of VMXON/VMXOFF flows

Move the Intel PT tracking outside of the VMXON/VMXOFF helpers so that
a future patch can drop KVM's kvm_cpu_vmxoff() in favor of the kernel's
cpu_vmxoff() without an associated PT functional change, and without
losing symmetry between the VMXON and VMXOFF flows.

Barring undocumented behavior, this should have no meaningful effects
as Intel PT behavior does not interact with CR4.VMXE.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw
Uros Bizjak [Thu, 31 Dec 2020 00:26:57 +0000 (16:26 -0800)]
KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw

Replace inline assembly in nested_vmx_check_vmentry_hw
with a call to __vmx_vcpu_run.  The function is not
performance critical, so (double) GPR save/restore
in __vmx_vcpu_run can be tolerated, as far as performance
effects are concerned.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Reviewed-and-tested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
[sean: dropped versioning info from changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agox86/virt: Mark flags and memory as clobbered by VMXOFF
David P. Reed [Thu, 31 Dec 2020 00:26:56 +0000 (16:26 -0800)]
x86/virt: Mark flags and memory as clobbered by VMXOFF

Explicitly tell the compiler that VMXOFF modifies flags (like all VMX
instructions), and mark memory as clobbered since VMXOFF must not be
reordered and also may have memory side effects (though the kernel
really shouldn't be accessing the root VMCS anyways).

Practically speaking, adding the clobbers is most likely a nop; the
primary motivation is to properly document VMXOFF's behavior.

For the flags clobber, both Clang and GCC automatically mark flags as
clobbered; this is noted in commit 4b1e54786e48 ("KVM/x86: Use assembly
instruction mnemonics instead of .byte streams"), which intentionally
removed the previous clobber.  But, neither Clang nor GCC documents
this behavior, and there's no downside to including the clobber.

For the memory clobber, the RFLAGS.IF and CR4.VMXE manipulations that
immediately follow VMXOFF have compiler barriers of their own, i.e.
VMXOFF can't get reordered after clearing CR4.VMXE, which is really
what's of interest.

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: David P. Reed <dpreed@deepplum.com>
[sean: rewrote changelog, dropped comment adjustments]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agox86/reboot: Force all cpus to exit VMX root if VMX is supported
Sean Christopherson [Thu, 31 Dec 2020 00:26:55 +0000 (16:26 -0800)]
x86/reboot: Force all cpus to exit VMX root if VMX is supported

Force all CPUs to do VMXOFF (via NMI shootdown) during an emergency
reboot if VMX is _supported_, as VMX being off on the current CPU does
not prevent other CPUs from being in VMX root (post-VMXON).  This fixes
a bug where a crash/panic reboot could leave other CPUs in VMX root and
prevent them from being woken via INIT-SIPI-SIPI in the new kernel.

Fixes: d176720d34c7 ("x86: disable VMX on all CPUs on reboot")
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: David P. Reed <dpreed@deepplum.com>
[sean: reworked changelog and further tweaked comment]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agox86/virt: Eat faults on VMXOFF in reboot flows
Sean Christopherson [Thu, 31 Dec 2020 00:26:54 +0000 (16:26 -0800)]
x86/virt: Eat faults on VMXOFF in reboot flows

Silently ignore all faults on VMXOFF in the reboot flows as such faults
are all but guaranteed to be due to the CPU not being in VMX root.
Because (a) VMXOFF may be executed in NMI context, e.g. after VMXOFF but
before CR4.VMXE is cleared, (b) there's no way to query the CPU's VMX
state without faulting, and (c) the whole point is to get out of VMX
root, eating faults is the simplest way to achieve the desired behavior.

Technically, VMXOFF can fault (or fail) for other reasons, but all other
fault and failure scenarios are mode related, i.e. the kernel would have
to magically end up in RM, V86, compat mode, at CPL>0, or running with
the SMI Transfer Monitor active.  The kernel is beyond hosed if any of
those scenarios are encountered; trying to do something fancy in the
error path to handle them cleanly is pointless.

Fixes: 1e9931146c74 ("x86: asm/virtext.h: add cpu_vmxoff() inline function")
Reported-by: David P. Reed <dpreed@deepplum.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20201231002702.2223707-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86: use static calls to reduce kvm_x86_ops overhead
Jason Baron [Fri, 15 Jan 2021 03:27:56 +0000 (22:27 -0500)]
KVM: x86: use static calls to reduce kvm_x86_ops overhead

Convert kvm_x86_ops to use static calls. Note that all kvm_x86_ops are
covered here except for 'pmu_ops' and 'nested ops'.

Here are some numbers running cpuid in a loop of 1 million calls averaged
over 5 runs, measured in the vm (lower is better).

Intel Xeon 3000MHz:

           |default    |mitigations=off
-------------------------------------
vanilla    |.671s      |.486s
static call|.573s(-15%)|.458s(-6%)

AMD EPYC 2500MHz:

           |default    |mitigations=off
-------------------------------------
vanilla    |.710s      |.609s
static call|.664s(-6%) |.609s(0%)

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Jason Baron <jbaron@akamai.com>
Message-Id: <e057bf1b8a7ad15652df6eeba3f907ae758d3399.1610680941.git.jbaron@akamai.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86: introduce definitions to support static calls for kvm_x86_ops
Jason Baron [Fri, 15 Jan 2021 03:27:55 +0000 (22:27 -0500)]
KVM: x86: introduce definitions to support static calls for kvm_x86_ops

Use static calls to improve kvm_x86_ops performance. Introduce the
definitions that will be used by a subsequent patch to actualize the
savings. Add a new kvm-x86-ops.h header that can be used for the
definition of static calls. This header is also intended to be
used to simplify the definition of svm_x86_ops and vmx_x86_ops.

Note that all functions in kvm_x86_ops are covered here except for
'pmu_ops' and 'nested ops'. I think they can be covered by static
calls in a similar manner, but were omitted from this series to
reduce scope and because I don't think they have as large of a
performance impact.
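
The definitions build on the kernel's generic static_call API, roughly
as sketched below (the actual macros in kvm-x86-ops.h generate this per
hook):

  #include <linux/static_call.h>

  /* One key per kvm_x86_ops hook, initially pointing nowhere. */
  DEFINE_STATIC_CALL_NULL(kvm_x86_vcpu_run,
                          *(((struct kvm_x86_ops *)0)->vcpu_run));

  /* At hardware setup, point each key at the vendor implementation. */
  static_call_update(kvm_x86_vcpu_run, kvm_x86_ops.vcpu_run);

  /* Call sites become direct calls patched at runtime. */
  static_call(kvm_x86_vcpu_run)(vcpu);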

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Jason Baron <jbaron@akamai.com>
Message-Id: <e5cc82ead7ab37b2dceb0837a514f3f8bea4f8d1.1610680941.git.jbaron@akamai.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: X86: prepend vmx/svm prefix to additional kvm_x86_ops functions
Jason Baron [Fri, 15 Jan 2021 03:27:54 +0000 (22:27 -0500)]
KVM: X86: prepend vmx/svm prefix to additional kvm_x86_ops functions

A subsequent patch introduces macros in preparation for simplifying the
definition for vmx_x86_ops and svm_x86_ops. Making the naming more uniform
expands the coverage of the macros. Add vmx/svm prefix to the following
functions: update_exception_bitmap(), enable_nmi_window(),
enable_irq_window(), update_cr8_intercept() and enable_smi_window().

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Jason Baron <jbaron@akamai.com>
Message-Id: <ed594696f8e2c2b2bfc747504cee9bbb2a269300.1610680941.git.jbaron@akamai.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: Stop using deprecated jump label APIs
Cun Li [Mon, 11 Jan 2021 15:24:35 +0000 (23:24 +0800)]
KVM: Stop using deprecated jump label APIs

The use of 'struct static_key' and 'static_key_false' is
deprecated. Use the new API.
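
i.e. switch from the open-coded key to the current helpers; a sketch
using the kvm_has_noapic_vcpu name mentioned in the note below:

  /* deprecated */
  static struct static_key kvm_no_apic_vcpu __read_mostly;
  /* ... if (static_key_false(&kvm_no_apic_vcpu)) ... */
  /* ... static_key_slow_inc(&kvm_no_apic_vcpu); ... */

  /* new API */
  static DEFINE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
  /* ... if (static_branch_unlikely(&kvm_has_noapic_vcpu)) ... */
  /* ... static_branch_inc(&kvm_has_noapic_vcpu); ... */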

Signed-off-by: Cun Li <cun.jia.li@gmail.com>
Message-Id: <20210111152435.50275-1-cun.jia.li@gmail.com>
[Make it compile.  While at it, rename kvm_no_apic_vcpu to
 kvm_has_noapic_vcpu; the former reads too much like "true if
 no vCPU has an APIC". - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: Fix #GP handling for doubly-nested virtualization
Wei Huang [Tue, 26 Jan 2021 08:18:31 +0000 (03:18 -0500)]
KVM: SVM: Fix #GP handling for doubly-nested virtualization

In the case of nested on nested (L0, L1, L2 are all hypervisors),
we do not support emulation of the vVMLOAD/VMSAVE feature. The L0
hypervisor can inject the proper #VMEXIT to inform L1 of what is
happening, and L1 can avoid invoking the #GP workaround.  For this
reason we turn on the guest VM's X86_FEATURE_SVME_ADDR_CHK bit for KVM
running inside a VM to receive the notification and change behavior.

Similarly we check if the vcpu is in guest mode before emulating the
vmware-backdoor instructions. For the case of nested on nested, we
let the guest handle it.

Co-developed-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Tested-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210126081831.570253-5-wei.huang2@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: Add support for SVM instruction address check change
Wei Huang [Tue, 26 Jan 2021 08:18:30 +0000 (03:18 -0500)]
KVM: SVM: Add support for SVM instruction address check change

New AMD CPUs have a change that checks the #VMEXIT intercept on special SVM
instructions before checking their EAX against the reserved memory region.
This change is indicated by CPUID_0x8000000A_EDX[28]. If it is 1, #VMEXIT
is triggered before #GP. KVM doesn't need to intercept and emulate #GP
faults in that case, since #VMEXIT rather than #GP is triggered.

Co-developed-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210126081831.570253-4-wei.huang2@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: SVM: Add emulation support for #GP triggered by SVM instructions
Bandan Das [Tue, 26 Jan 2021 08:18:29 +0000 (03:18 -0500)]
KVM: SVM: Add emulation support for #GP triggered by SVM instructions

While running SVM related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
before checking VMCB's instruction intercept. If EAX falls into such
memory areas, #GP is triggered before VMEXIT. This causes problems under
nested virtualization. To solve this problem, KVM needs to trap #GP and
check the instructions triggering #GP. For VM execution instructions,
KVM emulates these instructions.

Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Bandan Das <bsd@redhat.com>
Message-Id: <20210126081831.570253-3-wei.huang2@amd.com>
[Conditionally enable #GP intercept. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86: Factor out x86 instruction emulation with decoding
Wei Huang [Tue, 26 Jan 2021 08:18:28 +0000 (03:18 -0500)]
KVM: x86: Factor out x86 instruction emulation with decoding

Move the instruction decode part out of x86_emulate_instruction() so that
it can be used in other places. Also, kvm_clear_exception_queue() is moved
inside the if-statement, as it doesn't apply when KVM is coming back from
userspace.

Co-developed-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Bandan Das <bsd@redhat.com>
Signed-off-by: Wei Huang <wei.huang2@amd.com>
Message-Id: <20210126081831.570253-2-wei.huang2@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: X86: Rename DR6_INIT to DR6_ACTIVE_LOW
Chenyi Qiang [Tue, 2 Feb 2021 09:04:31 +0000 (17:04 +0800)]
KVM: X86: Rename DR6_INIT to DR6_ACTIVE_LOW

DR6_INIT contains the 1-reserved bits as well as the bit that is cleared
to 0 when the condition (e.g. RTM) happens. The value can be used to
initialize dr6 and also be the XOR mask between the #DB exit
qualification (or payload) and DR6.

Since DR6_INIT is used as an initial value only once, rename it
to DR6_ACTIVE_LOW and apply it in other places, which makes the
incoming changes for the bus lock debug exception simpler.
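
For reference, a sketch of the resulting definitions and of how the XOR
mask is used (bit values per the x86 DR6 definition):

  #define DR6_ACTIVE_LOW  0xffff0ff0ULL   /* was DR6_INIT */
  #define DR6_VOLATILE    0x0001e00fULL   /* B0-B3, BD, BS, BT, RTM */
  #define DR6_FIXED_1     (DR6_ACTIVE_LOW & ~DR6_VOLATILE)

  /* Translate a #DB exit qualification (or payload) into DR6, and back. */
  dr6     = exit_qualification ^ DR6_ACTIVE_LOW;
  payload = dr6 ^ DR6_ACTIVE_LOW;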

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210202090433.13441-2-chenyi.qiang@intel.com>
[Define DR6_FIXED_1 from DR6_ACTIVE_LOW and DR6_VOLATILE. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoselftests: kvm/x86: add test for pmu msr MSR_IA32_PERF_CAPABILITIES
Like Xu [Mon, 1 Feb 2021 05:10:39 +0000 (13:10 +0800)]
selftests: kvm/x86: add test for pmu msr MSR_IA32_PERF_CAPABILITIES

This test will check the effect of various CPUID settings on the
MSR_IA32_PERF_CAPABILITIES MSR, check that whatever user space writes
with KVM_SET_MSR is _not_ modified from the guest and can be retrieved
with KVM_GET_MSR, and check that invalid LBR formats are rejected.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210201051039.255478-12-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Expose LBR_FMT in the MSR_IA32_PERF_CAPABILITIES
Like Xu [Mon, 1 Feb 2021 05:10:38 +0000 (13:10 +0800)]
KVM: vmx/pmu: Expose LBR_FMT in the MSR_IA32_PERF_CAPABILITIES

Userspace can enable the guest LBR feature when the exact supported
LBR format value has been written to MSR_IA32_PERF_CAPABILITIES
and the LBR is also compatible with the vPMU version and host CPU model.

The LBR could be enabled on the guest if host perf supports LBR
(checked via x86_perf_get_lbr()) and the vcpu model is compatible
with the host one.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210201051039.255478-11-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Release guest LBR event via lazy release mechanism
Like Xu [Mon, 1 Feb 2021 05:10:37 +0000 (13:10 +0800)]
KVM: vmx/pmu: Release guest LBR event via lazy release mechanism

The vPMU uses GUEST_LBR_IN_USE_IDX (bit 58) in 'pmu->pmc_in_use' to
indicate whether a guest LBR event is still needed by the vcpu. If the
vcpu no longer accesses LBR related registers within a scheduling time
slice, and the enable bit of LBR has been unset, vPMU will treat the
guest LBR event as a bland event of a vPMC counter and release it
as usual. Also, the pass-through state of the LBR record MSRs is cancelled.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210201051039.255478-10-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Emulate legacy freezing LBRs on virtual PMI
Like Xu [Mon, 1 Feb 2021 05:10:36 +0000 (13:10 +0800)]
KVM: vmx/pmu: Emulate legacy freezing LBRs on virtual PMI

The current vPMU only supports Architecture Version 2. According to
Intel SDM "17.4.7 Freezing LBR and Performance Counters on PMI", if
IA32_DEBUGCTL.Freeze_LBR_On_PMI = 1, the LBR is frozen on the virtual
PMI and the KVM would emulate to clear the LBR bit (bit 0) in
IA32_DEBUGCTL. Also, guest needs to re-enable IA32_DEBUGCTL.LBR
to resume recording branches.
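
A sketch of the emulated freeze, using the existing DEBUGCTL bit
definitions (where exactly this lives in the vPMU code is simplified
here):

  u64 debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);

  /* Freeze_LBR_On_PMI: clear the LBR enable bit on a virtual PMI; the
   * guest must set DEBUGCTLMSR_LBR again to resume recording. */
  if (debugctl & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI)
          vmcs_write64(GUEST_IA32_DEBUGCTL, debugctl & ~DEBUGCTLMSR_LBR);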

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Message-Id: <20210201051039.255478-9-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Reduce the overhead of LBR pass-through or cancellation
Like Xu [Mon, 1 Feb 2021 05:10:35 +0000 (13:10 +0800)]
KVM: vmx/pmu: Reduce the overhead of LBR pass-through or cancellation

When the LBR record MSRs have already been passed through, there is no
need to call vmx_update_intercept_for_lbr_msrs() again and again, and
vice versa.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Message-Id: <20210201051039.255478-8-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Pass-through LBR msrs when the guest LBR event is ACTIVE
Like Xu [Mon, 1 Feb 2021 05:10:34 +0000 (13:10 +0800)]
KVM: vmx/pmu: Pass-through LBR msrs when the guest LBR event is ACTIVE

In addition to DEBUGCTLMSR_LBR, any KVM trap caused by an access to the
LBR MSRs will result in the creation of a guest LBR event per vcpu.

If the guest LBR event is scheduled in with the corresponding vcpu context,
KVM will pass through all LBR record MSRs to the guest. The LBR callstack
mechanism implemented in the host helps save/restore the guest LBR
records during the event context switches, which reduces a lot of overhead
if we save/restore tens of LBR MSRs (e.g. 32 LBR record entries) in the
much more frequent VMX transitions.

To avoid reclaiming LBR resources from any higher priority event on the
host, KVM always checks the existence of the guest LBR event and its state
before vm-entry, as late as possible. A negative result cancels the
pass-through state, and it also prevents real register accesses and
potential data leakage. If the host reclaims the LBR between the two checks,
the interception state and LBR records can be safely preserved thanks to the
native save/restore support of the guest LBR event.

KVM emits a pr_warn() when the LBR hardware is unavailable to the
guest LBR event. The administrator is supposed to remind users that the
guest result may be inaccurate if someone is using LBR to record the
hypervisor on the host side.

Suggested-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Message-Id: <20210201051039.255478-7-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Create a guest LBR event when vcpu sets DEBUGCTLMSR_LBR
Like Xu [Mon, 1 Feb 2021 05:10:33 +0000 (13:10 +0800)]
KVM: vmx/pmu: Create a guest LBR event when vcpu sets DEBUGCTLMSR_LBR

When the vcpu sets DEBUGCTLMSR_LBR in MSR_IA32_DEBUGCTLMSR, the KVM handler
creates a guest LBR event which enables the callstack mode and to which no
hardware counter is assigned. The host perf schedules and enables this
event as usual, but in an exclusive way.

The guest LBR event will be released when the vPMU is reset; before long,
the lazy release mechanism will be applied to this event just like to a vPMC.

Suggested-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Message-Id: <20210201051039.255478-6-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled
Like Xu [Mon, 1 Feb 2021 05:10:31 +0000 (13:10 +0800)]
KVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled

Userspace can set bits [0, 5] of the IA32_PERF_CAPABILITIES
MSR, which describe the record format stored in the LBR records.

The LBR will be enabled on the guest if host perf supports LBR
(checked via x86_perf_get_lbr()) and the vcpu model is compatible
with the host one.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210201051039.255478-4-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled
Paolo Bonzini [Tue, 2 Feb 2021 14:36:08 +0000 (09:36 -0500)]
KVM: vmx/pmu: Add PMU_CAP_LBR_FMT check when guest LBR is enabled

Userspace can set bits [0, 5] of the IA32_PERF_CAPABILITIES
MSR, which describe the record format stored in the LBR records.

The LBR will be enabled on the guest if host perf supports LBR
(checked via x86_perf_get_lbr()) and the vcpu model is compatible
with the host one.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210201051039.255478-4-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/pmu: preserve IA32_PERF_CAPABILITIES across CPUID refresh
Paolo Bonzini [Tue, 2 Feb 2021 14:32:35 +0000 (09:32 -0500)]
KVM: x86/pmu: preserve IA32_PERF_CAPABILITIES across CPUID refresh

Once MSR_IA32_PERF_CAPABILITIES is changed via vmx_set_msr(), the
value should not be changed by cpuid(). To ensure that the new value
is kept, the default initialization path is moved to intel_pmu_init().
The effective value of the MSR will be 0 if PDCM is clear, however.
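
In effect the value is only surfaced when the guest sees PDCM; a hedged
sketch of that gating (where the MSR value lives is assumed here, and
this is not the literal code from the patch):

  /* If CPUID does not advertise PDCM, the guest reads the MSR as 0. */
  if (!guest_cpuid_has(vcpu, X86_FEATURE_PDCM))
          return 0;
  return vcpu->arch.perf_capabilities;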

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/vmx: Make vmx_set_intercept_for_msr() non-static
Like Xu [Mon, 1 Feb 2021 05:10:29 +0000 (13:10 +0800)]
KVM: x86/vmx: Make vmx_set_intercept_for_msr() non-static

To make code responsibilities clear, we may reuse and invoke
vmx_set_intercept_for_msr() in other VMX-specific files (e.g.
pmu_intel.c), so expose it so that the LBR MSRs can be passed through
later.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Message-Id: <20210201051039.255478-2-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: VMX: read/write MSR_IA32_DEBUGCTLMSR from GUEST_IA32_DEBUGCTL
Like Xu [Fri, 8 Jan 2021 01:36:55 +0000 (09:36 +0800)]
KVM: VMX: read/write MSR_IA32_DEBUGCTLMSR from GUEST_IA32_DEBUGCTL

SVM already has specific handlers for MSR_IA32_DEBUGCTLMSR in
svm_get/set_msr(), so the x86 common part can be safely moved to VMX.
This allows KVM to store the bits it supports in GUEST_IA32_DEBUGCTL.

Add vmx_supported_debugctl() to refactor the #GP injection logic.
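
With the handlers in VMX, the MSR round-trips through the VMCS field;
roughly (get and set shown as separate fragments, with
debugctl_supported_bits standing in for the mask provided by the new
vmx_supported_debugctl() helper):

  /* vmx_get_msr() */
  case MSR_IA32_DEBUGCTLMSR:
          msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
          break;

  /* vmx_set_msr() */
  case MSR_IA32_DEBUGCTLMSR:
          if (data & ~debugctl_supported_bits)
                  return 1;                       /* unsupported bits -> #GP */
          vmcs_write64(GUEST_IA32_DEBUGCTL, data);
          break;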

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Message-Id: <20210108013704.134985-2-like.xu@linux.intel.com>
[Merge parts of Chenyi Qiang's "KVM: X86: Expose bus lock debug exception
 to guest". - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: VMX: Use x2apic_mode to avoid RDMSR when querying PI state
Sean Christopherson [Fri, 15 Jan 2021 22:03:54 +0000 (14:03 -0800)]
KVM: VMX: Use x2apic_mode to avoid RDMSR when querying PI state

Use x2apic_mode instead of x2apic_enabled() when adjusting the
destination ID during Posted Interrupt updates.  This avoids the costly
RDMSR that is hidden behind x2apic_enabled().
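
The pattern is a simple flag test instead of an MSR read; illustrative of
the shape only, the exact posted-interrupt descriptor handling may
differ:

  dest = cpu_physical_id(cpu);

  /* xAPIC destination IDs live in bits 15:8 of the PI descriptor's NDST. */
  if (x2apic_mode)
          new.ndst = dest;
  else
          new.ndst = (dest << 8) & 0xFF00;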

Reported-by: luferry <luferry@163.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210115220354.434807-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agox86/apic: Export x2apic_mode for use by KVM in "warm" path
Sean Christopherson [Fri, 15 Jan 2021 22:03:53 +0000 (14:03 -0800)]
x86/apic: Export x2apic_mode for use by KVM in "warm" path

Export x2apic_mode so that KVM can query whether x2APIC is active
without having to incur the RDMSR in x2apic_enabled().  When Posted
Interrupts are in use for a guest with an assigned device, KVM ends up
checking for x2APIC at least once every time a vCPU halts.  KVM could
obviously snapshot x2apic_enabled() to avoid the RDMSR, but that's
rather silly given that x2apic_mode holds the exact info needed by KVM.
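
The export itself is a one-liner next to the variable's definition in the
APIC code; the exact export flavor is assumed here:

  /* arch/x86/kernel/apic/apic.c */
  int x2apic_mode;
  EXPORT_SYMBOL_GPL(x2apic_mode);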

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210115220354.434807-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: X86: Add the Document for KVM_CAP_X86_BUS_LOCK_EXIT
Chenyi Qiang [Fri, 6 Nov 2020 09:03:15 +0000 (17:03 +0800)]
KVM: X86: Add the Document for KVM_CAP_X86_BUS_LOCK_EXIT

Introduce a new capability named KVM_CAP_X86_BUS_LOCK_EXIT, which is
used to handle bus locks detected in the guest. It allows userspace to
apply custom throttling policies to mitigate the 'noisy neighbour'
problem.
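
A hedged userspace sketch of enabling the capability on a VM fd; the
constant names follow this series and should be checked against the
merged uapi headers:

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int enable_bus_lock_exit(int vm_fd)
  {
          struct kvm_enable_cap cap = {
                  .cap = KVM_CAP_X86_BUS_LOCK_EXIT,
                  .args[0] = KVM_BUS_LOCK_DETECTION_EXIT,
          };

          /* Fails if the kernel or hardware lacks bus lock detection. */
          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }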

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20201106090315.18606-5-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: VMX: Enable bus lock VM exit
Chenyi Qiang [Fri, 6 Nov 2020 09:03:14 +0000 (17:03 +0800)]
KVM: VMX: Enable bus lock VM exit

A virtual machine can exploit bus locks to degrade the performance of
the system. A bus lock can be caused by a split locked access to
writeback (WB) memory or by using locks on uncacheable (UC) memory. A
bus lock is typically >1000 cycles slower than an atomic operation
within a cache line, and it also disrupts performance on other cores
(which must wait for the bus lock to be released before their memory
operations can complete).

To address the threat, bus lock VM exit is introduced to notify the VMM
when a bus lock was acquired, allowing it to enforce throttling or other
policy based mitigations.

A VMM can enable VM exits due to bus locks by setting a new "Bus Lock
Detection" VM-execution control (bit 30 of the Secondary Processor-based
VM-execution controls). If delivery of this VM exit was preempted by a
higher priority VM exit (e.g. EPT misconfiguration, EPT violation, APIC
access VM exit, APIC write VM exit, exception bitmap exiting), bit 26 of
the exit reason field in the VMCS is set to 1.

In the current implementation, KVM exposes this capability through
KVM_CAP_X86_BUS_LOCK_EXIT. Userspace can query the supported mode bitmap
(i.e. off and exit) and must enable the capability explicitly (it is
disabled by default). If KVM detects a bus lock in the guest, it exits
to user space even when the current exit reason is handled by KVM
internally, and sets a new flag, KVM_RUN_BUS_LOCK, in vcpu->run->flags
to inform user space that a bus lock was detected in the guest.
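
On the userspace side the flag can be checked after every KVM_RUN; a
sketch, where the flag is assumed to carry the mainline uapi spelling
KVM_RUN_X86_BUS_LOCK and throttle_vcpu() is a hypothetical policy hook:

  ioctl(vcpu_fd, KVM_RUN, 0);

  /*
   * A bus lock may be reported on its own exit or piggybacked on another
   * exit via run->flags, so check the flag regardless of exit_reason.
   */
  if (run->flags & KVM_RUN_X86_BUS_LOCK)
          throttle_vcpu();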

Documentation for the bus lock VM exit is available in the latest "Intel
Architecture Instruction Set Extensions Programming Reference".

Document Link:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Co-developed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20201106090315.18606-4-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: X86: Reset the vcpu->run->flags at the beginning of vcpu_run
Chenyi Qiang [Fri, 6 Nov 2020 09:03:13 +0000 (17:03 +0800)]
KVM: X86: Reset the vcpu->run->flags at the beginning of vcpu_run

Reset vcpu->run->flags at the beginning of kvm_arch_vcpu_ioctl_run.
This avoids requiring every chunk of code that sets a flag to also
clear it, which increases the odds of missing a case and ending up with
a flag in an undefined state.
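
The change itself is tiny; conceptually (placement within
kvm_arch_vcpu_ioctl_run() paraphrased):

  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
  {
          struct kvm_run *kvm_run = vcpu->run;

          /*
           * Start every run with a clean slate instead of relying on each
           * exit path that sets a flag to also clear it.
           */
          kvm_run->flags = 0;

          /* ... rest of the run loop ... */
  }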

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20201106090315.18606-3-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: VMX: Convert vcpu_vmx.exit_reason to a union
Sean Christopherson [Fri, 6 Nov 2020 09:03:12 +0000 (17:03 +0800)]
KVM: VMX: Convert vcpu_vmx.exit_reason to a union

Convert vcpu_vmx.exit_reason from a u32 to a union (of size u32).  The
full VM_EXIT_REASON field is comprised of a 16-bit basic exit reason in
bits 15:0, and single-bit modifiers in bits 31:16.

Historically, KVM has only had to worry about handling the "failed
VM-Entry" modifier, which could only be set in very specific flows and
required dedicated handling.  I.e. manually stripping the FAILED_VMENTRY
bit was a somewhat viable approach.  But even with only a single bit to
worry about, KVM has had several bugs related to comparing a basic exit
reason against the full exit reason stored in vcpu_vmx.

Upcoming Intel features, e.g. SGX, will add new modifier bits that can
be set on more or less any VM-Exit, as opposed to the significantly more
restricted FAILED_VMENTRY, i.e. correctly handling everything in one-off
flows isn't scalable.  Tracking exit reason in a union forces code to
explicitly choose between consuming the full exit reason and the basic
exit reason, and is a convenient way to document and access the
modifiers.
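
A simplified form of the union (only the basic field and one modifier
shown; the real definition carries more modifier bits):

  union vmx_exit_reason {
          struct {
                  u32 basic          : 16;
                  u32 reserved       : 15;
                  u32 failed_vmentry : 1;
          };
          u32 full;
  };

  /* Callers must now pick one view explicitly, e.g.: */
  if (exit_reason.failed_vmentry)
          return handle_failed_vmentry(vcpu);   /* illustrative handler name */
  switch (exit_reason.basic) {
          /* ... dispatch on the 16-bit basic exit reason ... */
  }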

No functional change intended.

Cc: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20201106090315.18606-2-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM/SVM: add support for SEV attestation command
Brijesh Singh [Mon, 4 Jan 2021 15:17:49 +0000 (09:17 -0600)]
KVM/SVM: add support for SEV attestation command

SEV FW version >= 0.23 added a new command that can be used to query the
attestation report containing the SHA-256 digest of the guest memory
encrypted through the KVM_SEV_LAUNCH_UPDATE_{DATA, VMSA} commands and to
sign the report with the Platform Endorsement Key (PEK).

See the SEV FW API spec section 6.8 for more details.

Note that there already exists a command (KVM_SEV_LAUNCH_MEASURE) that
can be used to get the SHA-256 digest. The main difference between
KVM_SEV_LAUNCH_MEASURE and KVM_SEV_ATTESTATION_REPORT is that the latter
can be called while the guest is running and the measurement value is
signed with the PEK.
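
A hedged sketch of how userspace drives the new command through the SEV
ioctl path; the report struct layout follows the SEV API description and
its field names should be verified against the merged uapi header:

  struct kvm_sev_attestation_report report = {
          .mnonce = { 0 },                        /* caller-chosen 16-byte nonce */
          .uaddr  = (__u64)(unsigned long)buf,    /* signed report lands here */
          .len    = buf_len,
  };
  struct kvm_sev_cmd cmd = {
          .id   = KVM_SEV_ATTESTATION_REPORT,
          .data = (__u64)(unsigned long)&report,
  };

  ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);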

Cc: James Bottomley <jejb@linux.ibm.com>
Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: John Allen <john.allen@amd.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: James Bottomley <jejb@linux.ibm.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20210104151749.30248-1-brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Disable dirty logging with vCPUs running
Ben Gardon [Tue, 2 Feb 2021 18:57:34 +0000 (10:57 -0800)]
KVM: selftests: Disable dirty logging with vCPUs running

Disabling dirty logging is much more interesting from a testing
perspective if the vCPUs are still running. This also exercises the
code path in which collapsible SPTEs must be faulted back in at a higher
level after disabling dirty logging.

To: linux-kselftest@vger.kernel.org
CC: Peter Xu <peterx@redhat.com>
CC: Andrew Jones <drjones@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-29-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Add backing src parameter to dirty_log_perf_test
Ben Gardon [Tue, 2 Feb 2021 18:57:33 +0000 (10:57 -0800)]
KVM: selftests: Add backing src parameter to dirty_log_perf_test

Add a parameter to control the backing memory type for
dirty_log_perf_test so that the test can be run with hugepages.

To: linux-kselftest@vger.kernel.org
CC: Peter Xu <peterx@redhat.com>
CC: Andrew Jones <drjones@redhat.com>
CC: Thomas Huth <thuth@redhat.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210202185734.1680553-28-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Add memslot modification stress test
Ben Gardon [Tue, 12 Jan 2021 21:42:53 +0000 (13:42 -0800)]
KVM: selftests: Add memslot modification stress test

Add a memslot modification stress test in which a memslot is repeatedly
created and removed while vCPUs access memory in another memslot. Most
userspaces do not create or remove memslots on running VMs, which makes
it hard to test races in adding and removing memslots without a
dedicated test. Adding and removing a memslot also has the effect of
tearing down the entire paging structure, which leads to more page
faults and pressure on the page fault handling path than a one-and-done
memory population test.

Reviewed-by: Jacob Xu <jacobhxu@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210112214253.463999-7-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Add option to overlap vCPU memory access
Ben Gardon [Tue, 12 Jan 2021 21:42:52 +0000 (13:42 -0800)]
KVM: selftests: Add option to overlap vCPU memory access

Add an option to overlap the ranges of memory each vCPU accesses instead
of partitioning them. This option will increase the probability of
multiple vCPUs faulting on the same page at the same time, causing
interesting races if there are bugs in the page fault handler or
elsewhere in the kernel.

Reviewed-by: Jacob Xu <jacobhxu@google.com>
Reviewed-by: Makarand Sonare <makarandsonare@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210112214253.463999-6-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Fix population stage in dirty_log_perf_test
Ben Gardon [Tue, 12 Jan 2021 21:42:51 +0000 (13:42 -0800)]
KVM: selftests: Fix population stage in dirty_log_perf_test

Currently the population stage in the dirty_log_perf_test does nothing
as the per-vCPU iteration counters are not initialized and the loop does
not wait for each vCPU. Remedy those errors.

Reviewed-by: Jacob Xu <jacobhxu@google.com>
Reviewed-by: Makarand Sonare <makarandsonare@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210112214253.463999-5-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Convert iterations to int in dirty_log_perf_test
Ben Gardon [Tue, 12 Jan 2021 21:42:50 +0000 (13:42 -0800)]
KVM: selftests: Convert iterations to int in dirty_log_perf_test

In order to add an iteration -1 to indicate that the memory population
phase has not yet completed, convert the iteration counters to ints.

No functional change intended.

Reviewed-by: Jacob Xu <jacobhxu@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210112214253.463999-4-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Avoid flooding debug log while populating memory
Ben Gardon [Tue, 12 Jan 2021 21:42:49 +0000 (13:42 -0800)]
KVM: selftests: Avoid flooding debug log while populating memory

Peter Xu pointed out that a log message printed while waiting for the
memory population phase of the dirty_log_perf_test will flood the debug
logs as there is no delay after printing the message. Since the message
does not provide much value anyway, remove it.

Reviewed-by: Jacob Xu <jacobhxu@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210112214253.463999-3-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Rename timespec_diff_now to timespec_elapsed
Ben Gardon [Tue, 12 Jan 2021 21:42:48 +0000 (13:42 -0800)]
KVM: selftests: Rename timespec_diff_now to timespec_elapsed

In response to some earlier comments from Peter Xu, rename
timespec_diff_now to the much more sensible timespec_elapsed.

No functional change intended.

Reviewed-by: Jacob Xu <jacobhxu@google.com>
Reviewed-by: Makarand Sonare <makarandsonare@google.com>
Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20210112214253.463999-2-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Remove the defunct update_pte() paging hook
Sean Christopherson [Fri, 15 Jan 2021 00:40:51 +0000 (16:40 -0800)]
KVM: x86/mmu: Remove the defunct update_pte() paging hook

Remove the update_pte() shadow paging logic, which was obsoleted by
commit 4731d4c7a077 ("KVM: MMU: out of sync shadow core"), but never
removed.  As pointed out by Yu, KVM never write protects leaf page
tables for the purposes of shadow paging, and instead marks their
associated shadow page as unsync so that the guest can write PTEs at
will.

The update_pte() path, which predates the unsync logic, optimizes COW
scenarios by refreshing leaf SPTEs when they are written, as opposed to
zapping the SPTE, restarting the guest, and installing the new SPTE on
the subsequent fault.  Since KVM no longer write-protects leaf page
tables, update_pte() is unreachable and can be dropped.

Reported-by: Yu Zhang <yu.c.zhang@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210115004051.4099250-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: selftests: Test IPI to halted vCPU in xAPIC while backing page moves
Peter Shier [Thu, 5 Nov 2020 22:38:23 +0000 (14:38 -0800)]
KVM: selftests: Test IPI to halted vCPU in xAPIC while backing page moves

When a guest is using xAPIC, KVM allocates a backing page for the
required EPT entry for the APIC access address set in the VMCS. If mm
decides to move that page, the KVM MMU notifier will update the VMCS
with the new HPA. This test induces a page move to verify that APIC
access continues to work correctly. It is a directed test for
commit e649b3f0188f "KVM: x86: Fix APIC page invalidation race".

Tested: ran for 1 hour on a skylake, migrating backing page every 1ms

Depends on patch "selftests: kvm: Add exception handling to selftests"
from aaronlewis@google.com that has not yet been queued.

Signed-off-by: Peter Shier <pshier@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Message-Id: <20201105223823.850068-1-pshier@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: Expose AVX_VNNI instruction to guest
Yang Zhong [Tue, 5 Jan 2021 00:49:09 +0000 (08:49 +0800)]
KVM: Expose AVX_VNNI instruction to guest

Expose AVX (VEX-encoded) versions of the Vector Neural Network
Instructions to the guest.

The bit definition:
CPUID.(EAX=7,ECX=1):EAX[bit 4] AVX_VNNI

The following instructions are available when this feature is
present in the guest.
  1. VPDPBUSD: Multiply and Add Unsigned and Signed Bytes
  2. VPDPBUSDS: Multiply and Add Unsigned and Signed Bytes with Saturation
  3. VPDPWSSD: Multiply and Add Signed Word Integers
  4. VPDPWSSDS: Multiply and Add Signed Integers with Saturation

These instructions are currently documented in the latest "extensions"
manual (ISE) and will appear in the "main" manual (SDM) in the future.
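
A small userspace check for the bit described above, using the
compiler's <cpuid.h> helper (illustrative only):

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          /* CPUID.(EAX=7,ECX=1):EAX[bit 4] == AVX_VNNI */
          if (__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx) &&
              (eax & (1u << 4)))
                  puts("AVX-VNNI enumerated");
          else
                  puts("AVX-VNNI not enumerated");
          return 0;
  }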

Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Message-Id: <20210105004909.42000-3-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoEnumerate AVX Vector Neural Network instructions
Kyung Min Park [Tue, 5 Jan 2021 00:49:08 +0000 (08:49 +0800)]
Enumerate AVX Vector Neural Network instructions

Add AVX version of the Vector Neural Network (VNNI) Instructions.

A processor supports AVX VNNI instructions if CPUID.0x07.0x1:EAX[4] is
set. The following instructions are available when this feature is
present.
  1. VPDPBUSD: Multiply and Add Unsigned and Signed Bytes
  2. VPDPBUSDS: Multiply and Add Unsigned and Signed Bytes with Saturation
  3. VPDPWSSD: Multiply and Add Signed Word Integers
  4. VPDPWSSDS: Multiply and Add Signed Integers with Saturation

The only in-kernel usage of this is kvm passthrough. The CPU feature
flag is shown as "avx_vnni" in /proc/cpuinfo.

These instructions are currently documented in the latest "extensions"
manual (ISE) and will appear in the "main" manual (SDM) in the future.

Signed-off-by: Kyung Min Park <kyung.min.park@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Message-Id: <20210105004909.42000-2-yang.zhong@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agox86: kvm: style: Simplify bool comparison
YANG LI [Mon, 11 Jan 2021 09:32:58 +0000 (17:32 +0800)]
x86: kvm: style: Simplify bool comparison

Fix the following coccicheck warning:
./arch/x86/kvm/x86.c:8012:5-48: WARNING: Comparison to bool
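
The cleanup is the usual form of this coccicheck class (the condition
name below is made up, not the literal x86.c line):

  /* before */
  if (vcpu->arch.some_feature_enabled == true)
          do_thing();

  /* after */
  if (vcpu->arch.some_feature_enabled)
          do_thing();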

Signed-off-by: YANG LI <abaci-bugfix@linux.alibaba.com>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Message-Id: <1610357578-66081-1-git-send-email-abaci-bugfix@linux.alibaba.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86: Zap the oldest MMU pages, not the newest
Sean Christopherson [Wed, 13 Jan 2021 20:50:30 +0000 (12:50 -0800)]
KVM: x86: Zap the oldest MMU pages, not the newest

Walk the list of MMU pages in reverse in kvm_mmu_zap_oldest_mmu_pages().
The list is FIFO, meaning new pages are inserted at the head and thus
the oldest pages are at the tail.  Using a "forward" iterator causes KVM
to zap MMU pages that were just added, which obliterates guest
performance once the max number of shadow MMU pages is reached.
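
The fix amounts to walking the list from the tail; schematically, with
nr_to_zap and invalid_list supplied by the caller and the batching and
root-page checks of the real function elided:

  struct kvm_mmu_page *sp, *tmp;
  unsigned long nr_zapped = 0;

  /* active_mmu_pages is FIFO: head == newest, tail == oldest. */
  list_for_each_entry_safe_reverse(sp, tmp,
                                   &kvm->arch.active_mmu_pages, link) {
          if (nr_zapped == nr_to_zap)
                  break;
          kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
          nr_zapped++;
  }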

Fixes: 6b82ef2c9cf1 ("KVM: x86/mmu: Batch zap MMU pages when recycling oldest pages")
Reported-by: Zdenek Kaspar <zkaspar82@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210113205030.3481307-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86/mmu: Use boolean returns for (S)PTE accessors
Sean Christopherson [Sat, 23 Jan 2021 00:30:03 +0000 (16:30 -0800)]
KVM: x86/mmu: Use boolean returns for (S)PTE accessors

Return a 'bool' instead of an 'int' for various PTE accessors that are
boolean in nature, e.g. is_shadow_present_pte().  Returning an int is
goofy and potentially dangerous, e.g. if a flag being checked is moved
into the upper 32 bits of a SPTE, then the compiler may silently squash
the entire check since casting to an int is guaranteed to yield a
return value of '0'.
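
The truncation hazard is easy to demonstrate with a made-up accessor
(the flag below is purely illustrative, not one of KVM's SPTE bits):

  #define SPTE_SPECIAL_FLAG       (1ULL << 40)    /* lives above bit 31 */

  static int is_special_spte_int(u64 spte)
  {
          return spte & SPTE_SPECIAL_FLAG;        /* truncated to 0: check lost */
  }

  static bool is_special_spte_bool(u64 spte)
  {
          return spte & SPTE_SPECIAL_FLAG;        /* any set bit -> true */
  }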

Opportunistically refactor is_last_spte() so that it naturally returns
a bool value instead of letting it implicitly cast 0/1 to false/true.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210123003003.3137525-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: X86: use vzalloc() instead of vmalloc/memset
Tian Tao [Mon, 25 Jan 2021 03:57:25 +0000 (11:57 +0800)]
KVM: X86: use vzalloc() instead of vmalloc/memset

Fix the following coccicheck warning:
/virt/kvm/dirty_ring.c:70:20-27: WARNING: vzalloc should be used for
ring -> dirty_gfns, instead of vmalloc/memset.
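
The transformation coccinelle suggests, in general form (size as
computed by the caller):

  /* before */
  ring->dirty_gfns = vmalloc(size);
  if (!ring->dirty_gfns)
          return -ENOMEM;
  memset(ring->dirty_gfns, 0, size);

  /* after */
  ring->dirty_gfns = vzalloc(size);
  if (!ring->dirty_gfns)
          return -ENOMEM;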

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Message-Id: <1611547045-13669-1-git-send-email-tiantao6@hisilicon.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years agoKVM: x86: Take KVM's SRCU lock only if steal time update is needed
Sean Christopherson [Sat, 23 Jan 2021 00:03:34 +0000 (16:03 -0800)]
KVM: x86: Take KVM's SRCU lock only if steal time update is needed

Enter a SRCU critical section for a memslots lookup during steal time
update if and only if a steal time update is actually needed.  Taking
the lock can be avoided if steal time is disabled by the guest, or if
KVM knows it has already flagged the vCPU as being preempted.

Reword the comment to be more precise as to exactly why memslots will
be queried.
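
The resulting shape is roughly as follows; steal_time_update_needed() is
a hypothetical stand-in for the "disabled by the guest or already
flagged preempted" checks described above:

  if (!steal_time_update_needed(vcpu))
          return;                               /* no memslots lookup needed */

  idx = srcu_read_lock(&vcpu->kvm->srcu);       /* memslots are SRCU-protected */
  kvm_steal_time_set_preempted(vcpu);           /* performs the gfn->hva lookup */
  srcu_read_unlock(&vcpu->kvm->srcu, idx);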

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210123000334.3123628-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>