.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows (a short sketch of the
resulting nesting follows the list):

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

- kvm->mn_active_invalidate_count ensures that pairs of
  invalidate_range_start() and invalidate_range_end() callbacks
  use the same memslots array.  kvm->slots_lock and kvm->slots_arch_lock
  are taken on the waiting side in install_new_memslots, so MMU notifiers
  must not take either kvm->slots_lock or kvm->slots_arch_lock.

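The list above translates directly into nesting: a path that needs several
of these mutexes must take the outer ones first and release in reverse
order. A minimal hypothetical sketch (not actual KVM code)::

    static void example_nested_locking(struct kvm *kvm)
    {
            mutex_lock(&kvm->lock);          /* outermost */
            mutex_lock(&kvm->slots_lock);    /* inside kvm->lock */
            mutex_lock(&kvm->irq_lock);      /* inside kvm->slots_lock */

            /* ... work needing a stable view under all three ... */

            mutex_unlock(&kvm->irq_lock);
            mutex_unlock(&kvm->slots_lock);
            mutex_unlock(&kvm->lock);
    }
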
For SRCU:

- ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
  for kvm->lock, vcpu->mutex and kvm->slots_lock.  These locks _cannot_
  be taken inside a kvm->srcu read-side critical section; that is, the
  following is broken::

      srcu_read_lock(&kvm->srcu);
      mutex_lock(&kvm->slots_lock);

- kvm->slots_arch_lock instead is released before the call to
  ``synchronize_srcu()``.  It _can_ therefore be taken inside a
  kvm->srcu read-side critical section, for example while processing
  a vmexit; a sketch of the allowed orderings follows.

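kvm->lock, vcpu->mutex and kvm->slots_lock go outside the SRCU read side,
while kvm->slots_arch_lock may go inside it. A minimal hypothetical
sketch::

    int idx;

    /* OK: kvm->slots_lock is taken outside the kvm->srcu read side. */
    mutex_lock(&kvm->slots_lock);
    idx = srcu_read_lock(&kvm->srcu);
    /* ... */
    srcu_read_unlock(&kvm->srcu, idx);
    mutex_unlock(&kvm->slots_lock);

    /* Also OK: kvm->slots_arch_lock is taken inside the read side. */
    idx = srcu_read_lock(&kvm->srcu);
    mutex_lock(&kvm->slots_arch_lock);
    /* ... */
    mutex_unlock(&kvm->slots_arch_lock);
    srcu_read_unlock(&kvm->srcu, idx);
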
On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock

- kvm->arch.mmu_lock is an rwlock.  kvm->arch.tdp_mmu_pages_lock and
  kvm->arch.mmu_unsync_pages_lock are taken inside kvm->arch.mmu_lock, and
  cannot be taken without already holding kvm->arch.mmu_lock (typically with
  ``read_lock`` for the TDP MMU, thus the need for additional spinlocks);
  a sketch of this nesting follows.

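Illustrative only, using the names from the bullet above; this is not a
complete fault path::

    read_lock(&kvm->arch.mmu_lock);           /* TDP MMU paths use read mode */

    /* The pages lock may only be taken with mmu_lock already held. */
    spin_lock(&kvm->arch.tdp_mmu_pages_lock);
    /* ... mutate the TDP MMU's page lists ... */
    spin_unlock(&kvm->arch.tdp_mmu_pages_lock);

    read_unlock(&kvm->arch.mmu_lock);
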
Everything else is a leaf: no other lock is taken inside the critical
sections.

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking. That means we need to restore the saved R/X bits. This is
   described in more detail below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protect. That means we just need to change the W bit of the spte.

What we use to avoid all the races are the Host-writable bit and the
MMU-writable bit on the spte (a sketch of the combined check follows the
list):

- Host-writable means the gfn is writable in the host kernel page tables
  and in its KVM memslot.

- MMU-writable means the gfn is writable in the guest's mmu and it is not
  write-protected by shadow page write-protection.

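Only when both bits are set may the fast path make the spte writable. A
sketch of the combined check (the mask names are illustrative stand-ins
for the real spte bit definitions)::

    static bool can_locklessly_set_writable(u64 spte)
    {
            /* Both bits must be set before W may be set out of mmu-lock. */
            return (spte & SPTE_HOST_WRITABLE_MASK) &&
                   (spte & SPTE_MMU_WRITABLE_MASK);
    }
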
On the fast page fault path, we use cmpxchg to atomically set the spte W
bit if spte.HOST_WRITEABLE = 1 and spte.WRITE_PROTECT = 1, to restore the
saved R/X bits for an access-tracked spte, or both. This is safe because
any concurrent change to these bits is detected by the cmpxchg.

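A minimal sketch of that fast-path update (helper names and the flag are
illustrative; the real logic lives in fast_page_fault())::

    u64 old_spte = READ_ONCE(*sptep);
    u64 new_spte = old_spte;

    if (fault_is_write)                       /* hypothetical flag */
            new_spte |= PT_WRITABLE_MASK;     /* case 2: set the W bit */
    else
            new_spte = restore_saved_rx_bits(new_spte);  /* case 1, hypothetical */

    /*
     * cmpxchg succeeds only if the spte did not change in the meantime, so
     * any concurrent modification of these bits is detected and the fault
     * is simply retried.
     */
    if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
            return false;                     /* lost the race: retry */
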
But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may be changed since we can only ensure the pfn
is not changed during cmpxchg. This is an ABA problem; for example, the
following case can happen:

+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                        |
|     gpte = gfn1                                                         |
|     gfn1 is mapped to pfn1 on host                                      |
|     spte is the shadow page table entry corresponding with gpte and     |
|     spte = pfn1                                                         |
+------------------------------------------------------------------------+
| On fast page fault path:                                                |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| ::                                 |                                   |
|                                    |                                   |
|   old_spte = *spte;                |                                   |
+------------------------------------+-----------------------------------+
|                                    | pfn1 is swapped out::             |
|                                    |                                   |
|                                    |    spte = 0;                      |
|                                    |                                   |
|                                    | pfn1 is re-alloced for gfn2.      |
|                                    |                                   |
|                                    | gpte is changed to point to       |
|                                    | gfn2 by the guest::               |
|                                    |                                   |
|                                    |    spte = pfn1;                   |
+------------------------------------+-----------------------------------+
| ::                                                                      |
|                                                                        |
|   if (cmpxchg(spte, old_spte, old_spte+W)                               |
|       mark_page_dirty(vcpu->kvm, gfn1)                                  |
|            OOPS!!!                                                      |
+------------------------------------------------------------------------+

We dirty-log for gfn1; that means gfn2 is lost in the dirty-bitmap.

For direct sp, we can easily avoid it since the spte of direct sp is fixed
to gfn.  For indirect sp, we disable fast page fault for simplicity.

A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:

- We have held the refcount of the pfn; that means the pfn can not be
  freed and reused for another gfn.

- The pfn is writable and therefore it cannot be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.

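A rough sketch of that idea (hypothetical; KVM does not implement it and
instead disables fast page fault for indirect sps)::

    kvm_pfn_t pfn;

    /* Pin the pfn so the gfn->pfn mapping cannot change under us. */
    pfn = kvm_vcpu_gfn_to_pfn_atomic(vcpu, gfn);
    if (is_error_pfn(pfn))
            return false;

    /* The refcount held above rules out the ABA reuse of pfn1. */
    if (cmpxchg64(sptep, old_spte, old_spte | PT_WRITABLE_MASK) == old_spte)
            mark_page_dirty(vcpu->kvm, gfn);

    kvm_release_pfn_clean(pfn);
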
2) Dirty bit tracking

In the original code, the spte can be fast updated (non-atomically) if the
spte is read-only and the Accessed bit has already been set, since the
Accessed bit and Dirty bit can not be lost.

But it is not true after fast page fault since the spte can be marked
writable between reading the spte and updating the spte, as in the
following case:

+------------------------------------------------------------------------+
| At the beginning::                                                      |
|                                                                        |
|     spte.W = 0                                                          |
|     spte.Accessed = 1                                                   |
+------------------------------------+-----------------------------------+
| CPU 0:                             | CPU 1:                            |
+------------------------------------+-----------------------------------+
| In mmu_spte_clear_track_bits():: |                                   |
|                                    |                                   |
|  old_spte = *spte;                 |                                   |
|                                    |                                   |
|  /* 'if' condition is satisfied. */|                                   |
|  if (old_spte.Accessed == 1 &&     |                                   |
|      old_spte.W == 0)              |                                   |
|     spte = 0ull;                   |                                   |
+------------------------------------+-----------------------------------+
|                                    | on fast page fault path::         |
|                                    |                                   |
|                                    |    spte.W = 1                     |
|                                    |                                   |
|                                    | memory write on the spte::        |
|                                    |                                   |
|                                    |    spte.Dirty = 1                 |
+------------------------------------+-----------------------------------+
|  else                                                                   |
|    ::                                                                  |
|                                                                        |
|      old_spte = xchg(spte, 0ull)                                        |
|      if (old_spte.Accessed == 1)                                        |
|         kvm_set_pfn_accessed(spte.pfn);                                 |
|      if (old_spte.Dirty == 1)                                           |
|         kvm_set_pfn_dirty(spte.pfn);                                    |
|            OOPS!!!                                                      |
+------------------------------------------------------------------------+

The Dirty bit is lost in this case.

To avoid this kind of issue, we always treat the spte as "volatile" if it
can be updated out of mmu-lock; see spte_has_volatile_bits(). This means
the spte is always atomically updated in this case.

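A condensed sketch of that rule in a clearing helper, modeled on the
behavior of mmu_spte_clear_track_bits() (simplified)::

    u64 old_spte = *sptep;

    if (!spte_has_volatile_bits(old_spte)) {
            /* Nothing can change under us; a plain write suffices. */
            WRITE_ONCE(*sptep, 0ull);
    } else {
            /*
             * The spte can be made writable and dirtied out of mmu-lock,
             * so Accessed/Dirty must be fetched together with the clear.
             */
            old_spte = xchg(sptep, 0ull);
    }

    if (old_spte & shadow_accessed_mask)
            kvm_set_pfn_accessed(spte_to_pfn(old_spte));
    if (old_spte & shadow_dirty_mask)
            kvm_set_pfn_dirty(spte_to_pfn(old_spte));
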
3) Flushing TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might be cached on a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check in
mmu_spte_update() whether TLBs need to be flushed for this reason, since it
is a common function to update the spte (present -> present).

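A condensed sketch of that audit point (the real mmu_spte_update() handles
more cases)::

    bool flush = false;

    /*
     * If the W bit is being cleared, a writable translation may still be
     * cached in some CPU's TLB; the caller must flush before relying on
     * the spte being read-only.
     */
    if (is_writable_pte(old_spte) && !is_writable_pte(new_spte))
            flush = true;

    return flush;
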
Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().

Lockless Access Tracking:

This is used for Intel CPUs that are using EPT but do not support the EPT A/D
bits. In this case, PTEs are tagged as A/D disabled (using ignored bits), and
when the KVM MMU notifier is called to track accesses to a page (via
kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
by clearing the RWX bits in the PTE and storing the original R & X bits in more
unused/ignored bits. When the VM tries to access the page later on, a fault is
generated and the fast page fault mechanism described above is used to
atomically restore the PTE to a Present state. The W bit is not saved when the
PTE is marked for access tracking, and during restoration to the Present state,
the W bit is set depending on whether or not it was a write access. If it
wasn't, then the W bit will remain clear until a write access happens, at which
time it will be set using the Dirty tracking mechanism described above.

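A condensed sketch of the mark/restore pair (the mask names and the flag
are illustrative stand-ins for the definitions in arch/x86/kvm/mmu/spte.h)::

    /*
     * Marking (cf. mark_spte_for_access_track()): save R/X into ignored
     * bits, then clear RWX so that the next access faults.
     */
    spte |= (spte & SAVED_RX_BITS) << SAVED_BITS_SHIFT;
    spte &= ~SHADOW_ACC_TRACK_MASK;           /* clears R, W and X */

    /*
     * Restoring on fast page fault (cf. restore_acc_track_spte()): bring
     * the saved R/X bits back; W is set only if the fault was a write.
     */
    spte |= (spte >> SAVED_BITS_SHIFT) & SAVED_RX_BITS;
    if (fault_is_write)                       /* hypothetical flag */
            spte |= PT_WRITABLE_MASK;
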
3. Reference
------------

``kvm_count_lock``
^^^^^^^^^^^^^^^^^^

:Type: raw_spinlock_t
:Arch: any
:Protects: - hardware virtualization enable/disable
:Comment: 'raw' because hardware enabling/disabling must be atomic /wrt
          migration.

``kvm->mn_invalidate_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type: spinlock_t
:Arch: any
:Protects: mn_active_invalidate_count, mn_memslots_update_rcuwait

``kvm_arch::tsc_write_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type: raw_spinlock_t
:Arch: x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment: 'raw' because updating the tsc offsets must not be preempted.

``kvm->mmu_lock``
^^^^^^^^^^^^^^^^^

:Type: spinlock_t or rwlock_t
:Arch: any
:Protects: - shadow page/shadow tlb entry
:Comment: it is a spinlock since it is used in the mmu notifier.

``kvm->srcu``
^^^^^^^^^^^^^

:Type: srcu lock
:Arch: any
:Protects: - kvm->memslots
           - kvm->buses
:Comment: The srcu read lock must be held while accessing memslots (e.g.
          when using gfn_to_* functions) and while accessing in-kernel
          MMIO/PIO address->device structure mappings (kvm->buses).
          The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
          if it is needed by multiple functions.

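For example, a minimal sketch of the usual read-side pattern (the wrapper
function is hypothetical)::

    static int example_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa,
                                  void *data, unsigned long len)
    {
            int idx, r;

            /* gfn_to_* based accessors must run under the srcu read lock. */
            idx = srcu_read_lock(&vcpu->kvm->srcu);
            r = kvm_read_guest(vcpu->kvm, gpa, data, len);
            srcu_read_unlock(&vcpu->kvm->srcu, idx);

            return r;
    }
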
``kvm->slots_arch_lock``
^^^^^^^^^^^^^^^^^^^^^^^^

:Type: mutex
:Arch: any (only needed on x86 though)
:Protects: any arch-specific fields of memslots that have to be modified
           in a ``kvm->srcu`` read-side critical section.
:Comment: must be held before reading the pointer to the current memslots,
          until after all changes to the memslots are complete

``wakeup_vcpus_on_cpu_lock``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:Type: spinlock_t
:Arch: x86
:Protects: wakeup_vcpus_on_cpu
:Comment: This is a per-CPU lock and it is used for VT-d posted-interrupts.
          When VT-d posted-interrupts are supported and the VM has assigned
          devices, we put the blocked vCPU on the list blocked_vcpu_on_cpu
          protected by blocked_vcpu_on_cpu_lock. When the VT-d hardware
          issues a wakeup notification event because external interrupts
          from the assigned devices arrive, we find the vCPU on the list
          and wake it up.