Yury Norov [Thu, 19 May 2022 17:15:04 +0000 (10:15 -0700)]
KVM: x86: hyper-v: fix type of valid_bank_mask
In kvm_hv_flush_tlb(), valid_bank_mask is declared as unsigned long,
but is used as u64, which is wrong for i386, and has been spotted by
LKP after applying "KVM: x86: hyper-v: replace bitmap_weight() with
hweight64()"
https://lore.kernel.org/lkml/20220510154750.212913-12-yury.norov@gmail.com/
But it's wrong even without that patch because now bitmap_weight()
dereferences a word after valid_bank_mask on i386.
>> include/asm-generic/bitops/const_hweight.h:21:76: warning: right shift count >= width of type [-Wshift-count-overflow]
21 | #define __const_hweight64(w) (__const_hweight32(w) + __const_hweight32((w) >> 32))
| ^~
include/asm-generic/bitops/const_hweight.h:10:16: note: in definition of macro '__const_hweight8'
10 | ((!!((w) & (1ULL << 0))) + \
| ^
include/asm-generic/bitops/const_hweight.h:20:31: note: in expansion of macro '__const_hweight16'
20 | #define __const_hweight32(w) (__const_hweight16(w) + __const_hweight16((w) >> 16))
| ^~~~~~~~~~~~~~~~~
include/asm-generic/bitops/const_hweight.h:21:54: note: in expansion of macro '__const_hweight32'
21 | #define __const_hweight64(w) (__const_hweight32(w) + __const_hweight32((w) >> 32))
| ^~~~~~~~~~~~~~~~~
include/asm-generic/bitops/const_hweight.h:29:49: note: in expansion of macro '__const_hweight64'
29 | #define hweight64(w) (__builtin_constant_p(w) ? __const_hweight64(w) : __arch_hweight64(w))
| ^~~~~~~~~~~~~~~~~
arch/x86/kvm/hyperv.c:1983:36: note: in expansion of macro 'hweight64'
1983 | if (hc->var_cnt != hweight64(valid_bank_mask))
| ^~~~~~~~~
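The fix itself is a one-line type change in kvm_hv_flush_tlb(); reconstructed
from the description above, so treat it as illustrative rather than the
verbatim patch:

	-	unsigned long valid_bank_mask;
	+	u64 valid_bank_mask;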
CC: Borislav Petkov <bp@alien8.de>
CC: Dave Hansen <dave.hansen@linux.intel.com>
CC: H. Peter Anvin <hpa@zytor.com>
CC: Ingo Molnar <mingo@redhat.com>
CC: Jim Mattson <jmattson@google.com>
CC: Joerg Roedel <joro@8bytes.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Sean Christopherson <seanjc@google.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Vitaly Kuznetsov <vkuznets@redhat.com>
CC: Wanpeng Li <wanpengli@tencent.com>
CC: kvm@vger.kernel.org
CC: linux-kernel@vger.kernel.org
CC: x86@kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Message-Id: <20220519171504.1238724-1-yury.norov@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 18 May 2022 00:38:42 +0000 (00:38 +0000)]
KVM: Free new dirty bitmap if creating a new memslot fails
Fix a goof in kvm_prepare_memory_region() where KVM fails to free the
new memslot's dirty bitmap during a CREATE action if
kvm_arch_prepare_memory_region() fails. The logic is supposed to detect
if the bitmap was allocated and thus needs to be freed, versus if the
bitmap was inherited from the old memslot and thus needs to be kept. If
there is no old memslot, then obviously the bitmap can't have been
inherited.
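The fix boils down to correcting the freeing condition in
kvm_prepare_memory_region(); roughly (reconstructed from the description,
not the verbatim patch):

	-	if (r && new && new->dirty_bitmap && old && !old->dirty_bitmap)
	+	/* Free iff the bitmap was allocated above, not inherited. */
	+	if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
	 		kvm_destroy_dirty_bitmap(new);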
The bug was exposed by commit 86931ff7207b ("KVM: x86/mmu: Do not create
SPTEs for GFNs that exceed host.MAXPHYADDR"), which made it trivially easy
for syzkaller to trigger failure during kvm_arch_prepare_memory_region(),
but the bug can be hit other ways too, e.g. due to -ENOMEM when
allocating x86's memslot metadata.
The backtrace from kmemleak:
__vmalloc_node_range+0xb40/0xbd0 mm/vmalloc.c:3195
__vmalloc_node mm/vmalloc.c:3232 [inline]
__vmalloc+0x49/0x50 mm/vmalloc.c:3246
__vmalloc_array mm/util.c:671 [inline]
__vcalloc+0x49/0x70 mm/util.c:694
kvm_alloc_dirty_bitmap virt/kvm/kvm_main.c:1319
kvm_prepare_memory_region virt/kvm/kvm_main.c:1551
kvm_set_memslot+0x1bd/0x690 virt/kvm/kvm_main.c:1782
__kvm_set_memory_region+0x689/0x750 virt/kvm/kvm_main.c:1949
kvm_set_memory_region virt/kvm/kvm_main.c:1962
kvm_vm_ioctl_set_memory_region virt/kvm/kvm_main.c:1974
kvm_vm_ioctl+0x377/0x13a0 virt/kvm/kvm_main.c:4528
vfs_ioctl fs/ioctl.c:51
__do_sys_ioctl fs/ioctl.c:870
__se_sys_ioctl fs/ioctl.c:856
__x64_sys_ioctl+0xfc/0x140 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:50
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
And the relevant sequence of KVM events:
ioctl(3, KVM_CREATE_VM, 0) = 4
ioctl(4, KVM_SET_USER_MEMORY_REGION, {slot=0,
flags=KVM_MEM_LOG_DIRTY_PAGES,
guest_phys_addr=0x10000000000000,
memory_size=4096,
userspace_addr=0x20fe8000}
) = -1 EINVAL (Invalid argument)
Fixes: 244893fa2859 ("KVM: Dynamically allocate "new" memslots from the get-go")
Cc: stable@vger.kernel.org
Reported-by: syzbot+8606b8a9cc97a63f1c87@syzkaller.appspotmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220518003842.1341782-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Thu, 19 May 2022 08:49:13 +0000 (01:49 -0700)]
KVM: eventfd: Fix false positive RCU usage warning
The splat below can be seen when running kvm-unit-test:
=============================
WARNING: suspicious RCU usage
5.18.0-rc7 #5 Tainted: G IOE
-----------------------------
/home/kernel/linux/arch/x86/kvm/../../../virt/kvm/eventfd.c:80 RCU-list traversed in non-reader section!!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
4 locks held by qemu-system-x86/35124:
#0: ffff9725391d80b8 (&vcpu->mutex){+.+.}-{4:4}, at: kvm_vcpu_ioctl+0x77/0x710 [kvm]
#1: ffffbd25cfb2a0b8 (&kvm->srcu){....}-{0:0}, at: vcpu_enter_guest+0xdeb/0x1900 [kvm]
#2: ffffbd25cfb2b920 (&kvm->irq_srcu){....}-{0:0}, at: kvm_hv_notify_acked_sint+0x79/0x1e0 [kvm]
#3: ffffbd25cfb2b920 (&kvm->irq_srcu){....}-{0:0}, at: irqfd_resampler_ack+0x5/0x110 [kvm]
stack backtrace:
CPU: 2 PID: 35124 Comm: qemu-system-x86 Tainted: G IOE 5.18.0-rc7 #5
Call Trace:
<TASK>
dump_stack_lvl+0x6c/0x9b
irqfd_resampler_ack+0xfd/0x110 [kvm]
kvm_notify_acked_gsi+0x32/0x90 [kvm]
kvm_hv_notify_acked_sint+0xc5/0x1e0 [kvm]
kvm_hv_set_msr_common+0xec1/0x1160 [kvm]
kvm_set_msr_common+0x7c3/0xf60 [kvm]
vmx_set_msr+0x394/0x1240 [kvm_intel]
kvm_set_msr_ignored_check+0x86/0x200 [kvm]
kvm_emulate_wrmsr+0x4f/0x1f0 [kvm]
vmx_handle_exit+0x6fb/0x7e0 [kvm_intel]
vcpu_enter_guest+0xe5a/0x1900 [kvm]
kvm_arch_vcpu_ioctl_run+0x16e/0xac0 [kvm]
kvm_vcpu_ioctl+0x279/0x710 [kvm]
__x64_sys_ioctl+0x83/0xb0
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
resampler-list is protected by irq_srcu (see kvm_irqfd_assign), so fix
the false positive by using list_for_each_entry_srcu().
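Concretely, the traversal in irqfd_resampler_ack() switches to the
SRCU-aware iterator; a sketch (the lockdep expression may differ from the
actual patch):

	-	list_for_each_entry_rcu(irqfd, &resampler->list, resampler_link)
	+	list_for_each_entry_srcu(irqfd, &resampler->list, resampler_link,
	+			srcu_read_lock_held(&resampler->kvm->irq_srcu))
	 		eventfd_signal(irqfd->resamplefd, 1);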
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1652950153-12489-1-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Aaron Lewis [Tue, 17 May 2022 05:12:38 +0000 (05:12 +0000)]
selftests: kvm/x86: Verify the pmu event filter matches the correct event
Add a test to demonstrate that when the guest programs an event select
it is matched correctly in the pmu event filter and not inadvertently
filtered. This could happen on AMD if the high nybble[1] in the event
select gets truncated away, leaving only the bottom byte[2] for
matching.
This is a contrived example used for the convenience of demonstrating
this issue; however, it can be applied to event selects 0x28A (OC
Mode Switch) and 0x08A (L1 BTB Correction), where 0x08A could end up
being denied when the event select was only set up to deny 0x28A.
[1] bits 35:32 in the event select register and bits 11:8 in the event
select.
[2] bits 7:0 in the event select register and bits 7:0 in the event
select.
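For reference, a hypothetical helper (EVENT_SELECT is an illustrative name,
not from the patch) that assembles the full 12-bit event select from the
register layout described above:

	/* Event select bits 7:0 live in reg[7:0], bits 11:8 in reg[35:32]. */
	#define EVENT_SELECT(reg)  (((((reg) >> 32) & 0xf) << 8) | ((reg) & 0xff))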
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Message-Id: <20220517051238.2566934-3-aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Aaron Lewis [Tue, 17 May 2022 05:12:37 +0000 (05:12 +0000)]
selftests: kvm/x86: Add the helper function create_pmu_event_filter
Add a helper function that creates a pmu event filter given an event
list. Currently, a pmu event filter can only be created with the same
hard coded event list. Add a way to create one given a different event
list.
Also, rename make_pmu_event_filter to alloc_pmu_event_filter to clarify
its purpose given the introduction of create_pmu_event_filter.
No functional changes intended.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Message-Id: <20220517051238.2566934-2-aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Aaron Lewis [Tue, 17 May 2022 05:12:36 +0000 (05:12 +0000)]
kvm: x86/pmu: Fix the compare function used by the pmu event filter
When returning from the compare function the u64 is truncated to an
int. This results in a loss of the high nybble[1] in the event select
and its sign if that nybble is in use. Switch from using a result that
can end up being truncated to a result that can only be: 1, 0, -1.
[1] bits 35:32 in the event select register and bits 11:8 in the event
select.
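A comparator of the safe shape described above (naming may differ slightly
from the patch):

	static int cmp_u64(const void *pa, const void *pb)
	{
		u64 a = *(u64 *)pa;
		u64 b = *(u64 *)pb;

		/* Never truncate a u64 difference to int: only -1, 0 or 1. */
		return (a > b) - (a < b);
	}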
Fixes: 7ff775aca48ad ("KVM: x86/pmu: Use binary search to check filtered events")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220517051238.2566934-1-aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 17 May 2022 17:26:33 +0000 (13:26 -0400)]
Merge tag 'kvmarm-fixes-5.18-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm64 fixes for 5.18, take #3
- Correctly expose GICv3 support even if no irqchip is created
so that userspace doesn't observe it changing pointlessly
(fixing a regression with QEMU)
- Don't issue a hypercall to set the id-mapped vectors when
protected mode is enabled (fix for pKVM in combination with
CPUs affected by Spectre-v3a)
Quentin Perret [Fri, 13 May 2022 09:26:07 +0000 (09:26 +0000)]
KVM: arm64: Don't hypercall before EL2 init
Will reported the following splat when running with Protected KVM
enabled:
[ 2.427181] ------------[ cut here ]------------
[ 2.427668] WARNING: CPU: 3 PID: 1 at arch/arm64/kvm/mmu.c:489 __create_hyp_private_mapping+0x118/0x1ac
[ 2.428424] Modules linked in:
[ 2.429040] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.18.0-rc2-00084-g8635adc4efc7 #1
[ 2.429589] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[ 2.430286] pstate: 80000005 (Nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 2.430734] pc : __create_hyp_private_mapping+0x118/0x1ac
[ 2.431091] lr : create_hyp_exec_mappings+0x40/0x80
[ 2.431377] sp : ffff80000803baf0
[ 2.431597] x29: ffff80000803bb00 x28: 0000000000000000 x27: 0000000000000000
[ 2.432156] x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
[ 2.432561] x23: ffffcd96c343b000 x22: 0000000000000000 x21: ffff80000803bb40
[ 2.433004] x20: 0000000000000004 x19: 0000000000001800 x18: 0000000000000000
[ 2.433343] x17: 0003e68cf7efdd70 x16: 0000000000000004 x15: fffffc81f602a2c8
[ 2.434053] x14: ffffdf8380000000 x13: ffffcd9573200000 x12: ffffcd96c343b000
[ 2.434401] x11: 0000000000000004 x10: ffffcd96c1738000 x9 : 0000000000000004
[ 2.434812] x8 : ffff80000803bb40 x7 : 7f7f7f7f7f7f7f7f x6 : 544f422effff306b
[ 2.435136] x5 : 000000008020001e x4 : ffff207d80a88c00 x3 : 0000000000000005
[ 2.435480] x2 : 0000000000001800 x1 : 000000014f4ab800 x0 : 000000000badca11
[ 2.436149] Call trace:
[ 2.436600] __create_hyp_private_mapping+0x118/0x1ac
[ 2.437576] create_hyp_exec_mappings+0x40/0x80
[ 2.438180] kvm_init_vector_slots+0x180/0x194
[ 2.458941] kvm_arch_init+0x80/0x274
[ 2.459220] kvm_init+0x48/0x354
[ 2.459416] arm_init+0x20/0x2c
[ 2.459601] do_one_initcall+0xbc/0x238
[ 2.459809] do_initcall_level+0x94/0xb4
[ 2.460043] do_initcalls+0x54/0x94
[ 2.460228] do_basic_setup+0x1c/0x28
[ 2.460407] kernel_init_freeable+0x110/0x178
[ 2.460610] kernel_init+0x20/0x1a0
[ 2.460817] ret_from_fork+0x10/0x20
[ 2.461274] ---[ end trace 0000000000000000 ]---
Indeed, the Protected KVM mode promotes __create_hyp_private_mapping()
to a hypercall as EL1 no longer has access to the hypervisor's stage-1
page-table. However, the call from kvm_init_vector_slots() happens after
pKVM has been initialized on the primary CPU, but before it has been
initialized on secondaries. As such, if the KVM initcall procedure is
migrated from one CPU to another in this window, the hypercall may end up
running on a CPU for which EL2 has not been initialized.
Fortunately, the pKVM hypervisor doesn't rely on the host to re-map the
vectors in the private range, so the hypercall in question is in fact
superfluous. Skip it when pKVM is enabled.
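The shape of the fix in kvm_init_vector_slots() is roughly the following
(reconstructed; the exact condition in the merged patch may differ):

	if (kvm_system_needs_idmapped_vectors() &&
	    !is_protected_kvm_enabled()) {
		err = create_hyp_exec_mappings(vect_pa, size,
					       &kvm_ksym_ref(__bp_harden_hyp_vecs),
					       &base);
		if (err)
			return err;
	}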
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
[maz: simplified the checks slightly]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220513092607.35233-1-qperret@google.com
Marc Zyngier [Tue, 3 May 2022 21:14:24 +0000 (22:14 +0100)]
KVM: arm64: vgic-v3: Consistently populate ID_AA64PFR0_EL1.GIC
When adding support for the slightly wonky Apple M1, we had to
populate ID_AA64PFR0_EL1.GIC==1 to present something to the guest,
as the HW itself doesn't advertise the feature.
However, we gated this on the in-kernel irqchip being created.
This causes some trouble for QEMU, which snapshots the state of
the registers before creating a virtual GIC, and then tries to
restore these registers once the GIC has been created. Obviously,
between the two stages, ID_AA64PFR0_EL1.GIC has changed value,
and the write fails.
The fix is to actually emulate the HW, and always populate the
field if the HW is capable of it.
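In sys_regs.c, the gating moves from the per-VM irqchip state to the global
GIC state; roughly (reconstructed from the description, not the verbatim
patch):

	-	if (irqchip_in_kernel(vcpu->kvm) &&
	-	    vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
	+	if (kvm_vgic_global_state.type == VGIC_V3) {
	 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC);
	 		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), 1);
	 	}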
Fixes: 562e530fd770 ("KVM: arm64: Force ID_AA64PFR0_EL1.GIC=1 when exposing a virtual GICv3")
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Oliver Upton <oupton@google.com>
Link: https://lore.kernel.org/r/20220503211424.3375263-1-maz@kernel.org
Sean Christopherson [Wed, 11 May 2022 14:51:22 +0000 (14:51 +0000)]
KVM: x86/mmu: Update number of zapped pages even if page list is stable
When zapping obsolete pages, update the running count of zapped pages
regardless of whether or not the list has become unstable due to zapping
a shadow page with its own child shadow pages. If the VM is backed by
mostly 4kb pages, KVM can zap an absurd number of SPTEs without bumping
the batch count and thus without yielding. In the worst case scenario,
this can cause a soft lockup.
watchdog: BUG: soft lockup - CPU#12 stuck for 22s! [dirty_log_perf_:13020]
RIP: 0010:workingset_activation+0x19/0x130
mark_page_accessed+0x266/0x2e0
kvm_set_pfn_accessed+0x31/0x40
mmu_spte_clear_track_bits+0x136/0x1c0
drop_spte+0x1a/0xc0
mmu_page_zap_pte+0xef/0x120
__kvm_mmu_prepare_zap_page+0x205/0x5e0
kvm_mmu_zap_all_fast+0xd7/0x190
kvm_mmu_invalidate_zap_pages_in_memslot+0xe/0x10
kvm_page_track_flush_slot+0x5c/0x80
kvm_arch_flush_shadow_memslot+0xe/0x10
kvm_set_memslot+0x1a8/0x5d0
__kvm_set_memory_region+0x337/0x590
kvm_vm_ioctl+0xb08/0x1040
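The fix restructures the zap loop in kvm_zap_obsolete_pages() so the batch
count is bumped unconditionally; roughly:

	-		if (__kvm_mmu_prepare_zap_page(kvm, sp,
	-				&kvm->arch.zapped_obsolete_pages, &nr_zapped)) {
	-			batch += nr_zapped;
	-			goto restart;
	-		}
	+		unstable = __kvm_mmu_prepare_zap_page(kvm, sp,
	+				&kvm->arch.zapped_obsolete_pages, &nr_zapped);
	+		batch += nr_zapped;
	+		if (unstable)
	+			goto restart;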
Fixes: fbb158cb88b6 ("KVM: x86/mmu: Revert "Revert "KVM: MMU: zap pages in batch""")
Reported-by: David Matlack <dmatlack@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220511145122.3133334-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Mon, 2 May 2022 22:18:50 +0000 (22:18 +0000)]
KVM: VMX: Exit to userspace if vCPU has injected exception and invalid state
Exit to userspace with an emulation error if KVM encounters an injected
exception with invalid guest state, in addition to the existing check of
bailing if there's a pending exception (KVM doesn't support emulating
exceptions except when emulating real mode via vm86).
In theory, KVM should never get to such a situation as KVM is supposed to
exit to userspace before injecting an exception with invalid guest state.
But in practice, userspace can intervene and manually inject an exception
and/or stuff registers to force invalid guest state while a previously
injected exception is awaiting reinjection.
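The check in vmx_vcpu_run() grows to cover injected exceptions as well;
approximately (only the condition is shown, reconstructed from the
description):

	 	if (vmx->emulation_required && !vmx->rmode.vm86_active &&
	-	    vcpu->arch.exception.pending) {
	+	    (vcpu->arch.exception.pending ||
	+	     vcpu->arch.exception.injected)) {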
Fixes: fc4fad79fc3d ("KVM: VMX: Reject KVM_RUN if emulation is required with pending exception")
Reported-by: syzbot+cfafed3bb76d3e37581b@syzkaller.appspotmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220502221850.131873-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter Gonda [Mon, 2 May 2022 16:58:07 +0000 (09:58 -0700)]
KVM: SEV: Mark nested locking of vcpu->lock
svm_vm_migrate_from() uses sev_lock_vcpus_for_migration() to lock all
source and target vcpu->locks. Unfortunately there is an 8 subclass
limit, so a new subclass cannot be used for each vCPU. Instead maintain
ownership of the first vcpu's mutex.dep_map using a role specific
subclass: source vs target. Release the other vcpu's mutex.dep_maps.
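The core of the locking loop, in rough outline (details and naming may
differ from the merged patch):

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (mutex_lock_killable_nested(&vcpu->mutex, role))
			goto out_unlock;

		if (first) {
			/*
			 * Keep lockdep ownership of only the first vCPU's
			 * mutex; use a spare subclass for the rest and
			 * immediately release their dep_maps.
			 */
			first = false;
			role = SEV_NR_MIGRATION_ROLES;
		} else {
			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
		}
	}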
Fixes: b56639318bb2b ("KVM: SEV: Add support for SEV intra host migration")
Reported-by: John Sperbeck <jsperbeck@google.com>
Suggested-by: David Rientjes <rientjes@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Peter Gonda <pgonda@google.com>
Message-Id: <20220502165807.529624-1-pgonda@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 3 May 2022 11:57:40 +0000 (07:57 -0400)]
Merge branch 'kvm-amd-pmu-fixes' into HEAD
Sandipan Das [Wed, 27 Apr 2022 11:31:49 +0000 (17:01 +0530)]
kvm: x86/cpuid: Only provide CPUID leaf 0xA if host has architectural PMU
On some x86 processors, CPUID leaf 0xA provides information
on Architectural Performance Monitoring features. It
advertises a PMU version which Qemu uses to determine the
availability of additional MSRs to manage the PMCs.
Upon receiving a KVM_GET_SUPPORTED_CPUID ioctl request for
the same, the kernel constructs return values based on the
x86_pmu_capability irrespective of the vendor.
This leaf and the additional MSRs are not supported on AMD
and Hygon processors. If AMD PerfMonV2 is detected, the PMU
version is set to 2 and guest startup breaks because of an
attempt to access a non-existent MSR. Return zeros to avoid
this.
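The fix amounts to an early-out in the leaf 0xA handler of KVM's CPUID
code; roughly:

	 	case 0xa: { /* Architectural Performance Monitoring */
	+		if (!static_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
	+			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
	+			break;
	+		}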
Fixes: a6c06ed1a60a ("KVM: Expose the architectural performance monitoring CPUID leaf")
Reported-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Sandipan Das <sandipan.das@amd.com>
Message-Id: <3fef83d9c2b2f7516e8ff50d60851f29a4bcb716.1651058600.git.sandipan.das@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Kyle Huey [Tue, 3 May 2022 05:01:36 +0000 (22:01 -0700)]
KVM: x86/svm: Account for family 17h event renumberings in amd_pmc_perf_hw_id
Zen renumbered some of the performance counters that correspond to the
well known events in perf_hw_id. This code in KVM was never updated for
that, so guest that attempt to use counters on Zen that correspond to the
pre-Zen perf_hw_id values will silently receive the wrong values.
This has been observed in the wild with rr[0] when running in Zen 3
guests. rr uses the retired conditional branch counter 00d1 which is
incorrectly recognized by KVM as PERF_COUNT_HW_STALLED_CYCLES_BACKEND.
[0] https://rr-project.org/
Signed-off-by: Kyle Huey <me@kylehuey.com>
Message-Id: <20220503050136.86298-1-khuey@kylehuey.com>
Cc: stable@vger.kernel.org
[Check guest family, not host. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 3 May 2022 11:23:08 +0000 (07:23 -0400)]
Merge branch 'kvm-tdp-mmu-atomicity-fix' into HEAD
We are dropping A/D bits (and W bits) in the TDP MMU even if mmu_lock
is held for write, as volatile SPTEs can be written by other tasks/vCPUs
outside of mmu_lock.
Attempting to prove that bug exposed another notable goof, which has been
lurking for a decade, give or take: KVM treats _all_ MMU-writable SPTEs
as volatile, even though KVM never clears WRITABLE outside of MMU lock.
As a result, the legacy MMU (and the TDP MMU if not fixed) uses XCHG to
update writable SPTEs.
The fix does not seem to have an easily-measurable effect on performance;
page faults are so slow that wasting even a few hundred cycles is dwarfed
by the base cost.
Sean Christopherson [Sat, 23 Apr 2022 03:47:43 +0000 (03:47 +0000)]
KVM: x86/mmu: Use atomic XCHG to write TDP MMU SPTEs with volatile bits
Use an atomic XCHG to write TDP MMU SPTEs that have volatile bits, even
if mmu_lock is held for write, as volatile SPTEs can be written by other
tasks/vCPUs outside of mmu_lock. If a vCPU uses the to-be-modified SPTE
to write a page, the CPU can cache the translation as WRITABLE in the TLB
despite it being seen by KVM as !WRITABLE, and/or KVM can clobber the
Accessed/Dirty bits and not properly tag the backing page.
Exempt non-leaf SPTEs from atomic updates as KVM itself doesn't modify
non-leaf SPTEs without holding mmu_lock, they do not have Dirty bits, and
KVM doesn't consume the Accessed bit of non-leaf SPTEs.
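The resulting TDP MMU write path looks roughly like this (condensed; helper
names follow the patch description):

	static inline u64 kvm_tdp_mmu_write_spte_atomic(tdp_ptep_t sptep,
							u64 new_spte)
	{
		return xchg(rcu_dereference(sptep), new_spte);
	}

	static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
						 u64 new_spte, int level)
	{
		/*
		 * Use an atomic XCHG iff the SPTE is shadow-present, leaf,
		 * and has volatile bits; otherwise a plain write is safe.
		 */
		if (is_shadow_present_pte(old_spte) &&
		    is_last_spte(old_spte, level) &&
		    spte_has_volatile_bits(old_spte))
			return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte);

		WRITE_ONCE(*rcu_dereference(sptep), new_spte);
		return old_spte;
	}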
Dropping the Dirty and/or Writable bits is most problematic for dirty
logging, as doing so can result in a missed TLB flush and eventually a
missed dirty page. In the unlikely event that the only dirty page(s) is
a clobbered SPTE, clear_dirty_gfn_range() will see the SPTE as not dirty
(based on the Dirty or Writable bit depending on the method) and so not
update the SPTE and ultimately not flush. If the SPTE is cached in the
TLB as writable before it is clobbered, the guest can continue writing
the associated page without ever taking a write-protect fault.
For most (all?) file-backed memory, dropping the Dirty bit is a non-issue.
The primary MMU write-protects its PTEs on writeback, i.e. KVM's dirty
bit is effectively ignored because the primary MMU will mark that page
dirty when the write-protection is lifted, e.g. when KVM faults the page
back in for write.
The Accessed bit is a complete non-issue. Aside from being unused for
non-leaf SPTEs, KVM doesn't do a TLB flush when aging SPTEs, i.e. the
Accessed bit may be dropped anyways.
Lastly, the Writable bit is also problematic as an extension of the Dirty
bit, as KVM (correctly) treats the Dirty bit as volatile iff the SPTE is
!DIRTY && WRITABLE. If KVM fixes an MMU-writable, but !WRITABLE, SPTE
out of mmu_lock, then it can allow the CPU to set the Dirty bit despite
the SPTE being !WRITABLE when it is checked by KVM. But that all depends
on the Dirty bit being problematic in the first place.
Fixes: 2f2fad0897cb ("kvm: x86/mmu: Add functions to handle changed TDP SPTEs")
Cc: stable@vger.kernel.org
Cc: Ben Gardon <bgardon@google.com>
Cc: David Matlack <dmatlack@google.com>
Cc: Venkatesh Srinivas <venkateshs@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220423034752.1161007-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Sat, 23 Apr 2022 03:47:42 +0000 (03:47 +0000)]
KVM: x86/mmu: Move shadow-present check out of spte_has_volatile_bits()
Move the is_shadow_present_pte() check out of spte_has_volatile_bits()
and into its callers. Well, caller, since only one of its two callers
doesn't already do the shadow-present check.
Opportunistically move the helper to spte.c/h so that it can be used by
the TDP MMU, which is also the primary motivation for the shadow-present
change. Unlike the legacy MMU, the TDP MMU uses a single path for clearing
leaf and non-leaf SPTEs, and to avoid unnecessary atomic updates, the TDP
MMU will need to check is_last_spte() prior to calling
spte_has_volatile_bits(), and calling is_last_spte() without first
calling is_shadow_present_spte() is at best odd, and at worst a violation
of KVM's loosely defined SPTE rules.
Note, mmu_spte_clear_track_bits() could likely skip the write entirely
for SPTEs that are not shadow-present. Leave that cleanup for a future
patch to avoid introducing a functional change, and because the
shadow-present check can likely be moved further up the stack, e.g.
drop_large_spte() appears to be the only path that doesn't already
explicitly check for a shadow-present SPTE.
No functional change intended.
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220423034752.1161007-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Sat, 23 Apr 2022 03:47:41 +0000 (03:47 +0000)]
KVM: x86/mmu: Don't treat fully writable SPTEs as volatile (modulo A/D)
Don't treat SPTEs that are truly writable, i.e. writable in hardware, as
being volatile (unless they're volatile for other reasons, e.g. A/D bits).
KVM _sets_ the WRITABLE bit out of mmu_lock, but never _clears_ the bit
out of mmu_lock, so if the WRITABLE bit is set, it cannot magically get
cleared just because the SPTE is MMU-writable.
Rename the wrapper of MMU-writable to be more literal; the previous name
of spte_can_locklessly_be_made_writable() is wrong and misleading.
Fixes: c7ba5b48cc8d ("KVM: MMU: fast path of handling guest page fault")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220423034752.1161007-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 29 Apr 2022 18:43:04 +0000 (14:43 -0400)]
KVM: x86: work around QEMU issue with synthetic CPUID leaves
Synthesizing AMD leaves up to 0x80000021 caused problems with QEMU,
which assumes the *host* CPUID[0x80000000].EAX is higher or equal
to what KVM_GET_SUPPORTED_CPUID reports.
This causes QEMU to issue bogus host CPUIDs when preparing the input
to KVM_SET_CPUID2. It can even get into an infinite loop, which is
only terminated by an abort():
cpuid_data is full, no space for cpuid(eax:0x8000001d,ecx:0x3e)
To work around this, only synthesize those leaves if 0x8000001d exists
on the host. The synthetic 0x80000021 leaf is mostly useful on Zen2,
which satisfies the condition.
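The workaround is a host-capability check in the 0x80000000 handler;
approximately:

	case 0x80000000:
		entry->eax = min(entry->eax, 0x80000021);
		/*
		 * Synthesize leaf 0x80000021 only if the host's own CPUID
		 * range reaches at least 0x8000001d, so QEMU's assumption
		 * about the host range holds.
		 */
		if (entry->eax >= 0x8000001d &&
		    (static_cpu_has(X86_FEATURE_LFENCE_RDTSC) ||
		     !static_cpu_has_bug(X86_BUG_NULL_SEG)))
			entry->eax = max(entry->eax, 0x80000021);
		break;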
Fixes: f144c49e8c39 ("KVM: x86: synthesize CPUID leaf 0x80000021h if useful")
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 29 Apr 2022 14:57:53 +0000 (07:57 -0700)]
Revert "x86/mm: Introduce lookup_address_in_mm()"
Drop lookup_address_in_mm() now that KVM is providing its own variant
of lookup_address_in_pgd() that is safe for use with user addresses, e.g.
guards against page tables being torn down. A variant that provides a
non-init mm is inherently dangerous and flawed, as the only reason to use
an mm other than init_mm is to walk a userspace mapping, and
lookup_address_in_pgd() does not play nice with userspace mappings, e.g.
doesn't disable IRQs to block TLB shootdowns and doesn't use READ_ONCE()
to ensure an upper level entry isn't converted to a huge page between
checking the PAGE_SIZE bit and grabbing the address of the next level
down.
This reverts commit 13c72c060f1ba6f4eddd7b1c4f52a8aded43d6d9.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <YmwIi3bXr/1yhYV/@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 29 Apr 2022 10:38:56 +0000 (06:38 -0400)]
Merge branch 'kvm-fixes-for-5.18-rc5' into HEAD
Fixes for (relatively) old bugs, to be merged in both the -rc and next
development trees:
* Fix potential races when walking host page table
* Fix bad user ABI for KVM_EXIT_SYSTEM_EVENT
* Fix shadow page table leak when KVM runs nested
Mingwei Zhang [Fri, 29 Apr 2022 03:17:57 +0000 (03:17 +0000)]
KVM: x86/mmu: fix potential races when walking host page table
KVM uses lookup_address_in_mm() to detect the hugepage size that the host
uses to map a pfn. The function suffers from several issues:
- no usage of READ_ONCE(*). This allows multiple dereferences of the same
page table entry. The TOCTOU problem because of that may cause KVM to
incorrectly treat a newly generated leaf entry as a nonleaf one, and
dereference the content by using its pfn value.
- the information returned does not match what KVM needs; for non-present
entries it returns the level at which the walk was terminated, as long
as the entry is not 'none'. KVM needs level information of only 'present'
entries, otherwise it may regard a non-present PXE entry as a present
large page mapping.
- the function is not safe for mappings that can be torn down, because it
does not disable IRQs and because it returns a PTE pointer which is never
safe to dereference after the function returns.
So implement the logic for walking host page tables directly in KVM, and
stop using lookup_address_in_mm().
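The replacement walker follows this pattern; a condensed sketch (the real
code lives in host_pfn_mapping_level() and also covers the PMD level):

	pgd_t pgd;
	p4d_t p4d;
	pud_t pud;
	int level = PG_LEVEL_4K;
	unsigned long flags;

	/*
	 * Disable IRQs to block TLB shootdowns, preventing the page tables
	 * from being freed mid-walk.
	 */
	local_irq_save(flags);

	/* READ_ONCE() at each level: dereference every entry exactly once. */
	pgd = READ_ONCE(*pgd_offset(kvm->mm, hva));
	if (pgd_none(pgd))
		goto out;

	p4d = READ_ONCE(*p4d_offset(&pgd, hva));
	if (p4d_none(p4d) || !p4d_present(p4d))
		goto out;

	pud = READ_ONCE(*pud_offset(&p4d, hva));
	if (pud_none(pud) || !pud_present(pud))
		goto out;

	if (pud_large(pud)) {
		level = PG_LEVEL_1G;
		goto out;
	}
	/* ... PMD level handled the same way ... */
out:
	local_irq_restore(flags);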
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220429031757.2042406-1-mizhang@google.com>
[Inline in host_pfn_mapping_level, ensure no semantic change for its
callers. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 22 Apr 2022 10:30:13 +0000 (12:30 +0200)]
KVM: fix bad user ABI for KVM_EXIT_SYSTEM_EVENT
When KVM_EXIT_SYSTEM_EVENT was introduced, it included a flags
member that at the time was unused. Unfortunately this extensibility
mechanism has several issues:
- x86 is not writing the member, so it would not be possible to use it
on x86 except for new events
- the member is not aligned to 64 bits, so the definition of the
uAPI struct is incorrect for 32- on 64-bit userspace. This is a
problem for RISC-V, which supports CONFIG_KVM_COMPAT, but fortunately
usage of flags was only introduced in 5.18.
Since padding has to be introduced, place a new field in there
that tells if the flags field is valid. To allow further extensibility,
in fact, change flags to an array of 16 values, and store how many
of the values are valid. The availability of the new ndata field
is tied to a system capability; all architectures are changed to
fill in the field.
To avoid breaking compilation of userspace that was using the flags
field, provide a userspace-only union to overlap flags with data[0].
The new field is placed at the same offset for both 32- and 64-bit
userspace.
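The resulting uAPI layout, as described above:

	/* KVM_EXIT_SYSTEM_EVENT */
	struct {
		__u32 type;
		__u32 ndata;	/* number of valid entries in data[] */
		union {
	#ifndef __KERNEL__
			__u64 flags;	/* legacy userspace alias of data[0] */
	#endif
			__u64 data[16];
		};
	} system_event;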
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Peter Gonda <pgonda@google.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: kernel test robot <lkp@intel.com>
Message-Id: <20220422103013.34832-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Thu, 28 Apr 2022 23:34:16 +0000 (23:34 +0000)]
KVM: x86/mmu: Do not create SPTEs for GFNs that exceed host.MAXPHYADDR
Disallow memslots and MMIO SPTEs whose gpa range would exceed the host's
MAXPHYADDR, i.e. don't create SPTEs for gfns that exceed host.MAXPHYADDR.
The TDP MMU bounds its zapping based on host.MAXPHYADDR, and so if the
guest, possibly with help from userspace, manages to coerce KVM into
creating a SPTE for an "impossible" gfn, KVM will leak the associated
shadow pages (page tables):
WARNING: CPU: 10 PID: 1122 at arch/x86/kvm/mmu/tdp_mmu.c:57 kvm_mmu_uninit_tdp_mmu+0x4b/0x60 [kvm]
Modules linked in: kvm_intel kvm irqbypass
CPU: 10 PID: 1122 Comm: set_memory_regi Tainted: G W 5.18.0-rc1+ #293
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:kvm_mmu_uninit_tdp_mmu+0x4b/0x60 [kvm]
Call Trace:
<TASK>
kvm_arch_destroy_vm+0x130/0x1b0 [kvm]
kvm_destroy_vm+0x162/0x2d0 [kvm]
kvm_vm_release+0x1d/0x30 [kvm]
__fput+0x82/0x240
task_work_run+0x5b/0x90
exit_to_user_mode_prepare+0xd2/0xe0
syscall_exit_to_user_mode+0x1d/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xae
</TASK>
On bare metal, encountering an impossible gpa in the page fault path is
well and truly impossible, barring CPU bugs, as the CPU will signal #PF
during the gva=>gpa translation (or a similar failure when stuffing a
physical address into e.g. the VMCS/VMCB). But if KVM is running as a VM
itself, the MAXPHYADDR enumerated to KVM may not be the actual MAXPHYADDR
of the underlying hardware, in which case the hardware will not fault on
the illegal-from-KVM's-perspective gpa.
Alternatively, KVM could continue allowing the dodgy behavior and simply
zap the max possible range. But, for hosts with MAXPHYADDR < 52, that's
a (minor) waste of cycles, and more importantly, KVM can't reasonably
support impossible memslots when running on bare metal (or with an
accurate MAXPHYADDR as a VM). Note, limiting the overhead by checking if
KVM is running as a guest is not a safe option as the host isn't required
to announce itself to the guest in any way, e.g. doesn't need to set the
HYPERVISOR CPUID bit.
A second alternative to disallowing the memslot behavior would be to
disallow creating a VM with guest.MAXPHYADDR > host.MAXPHYADDR. That
restriction is undesirable as there are legitimate use cases for doing
so, e.g. using the highest host.MAXPHYADDR out of a pool of heterogeneous
systems so that VMs can be migrated between hosts with different
MAXPHYADDRs without running afoul of the allow_smaller_maxphyaddr mess.
Note that any guest.MAXPHYADDR is valid with shadow paging, and it is
even useful in order to test KVM with MAXPHYADDR=52 (i.e. without
any reserved physical address bits).
The now common kvm_mmu_max_gfn() is inclusive instead of exclusive.
The memslot and TDP MMU code want an exclusive value, but the name
implies the returned value is inclusive, and the MMIO path needs an
inclusive check.
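The shared helper ends up looking roughly like:

	static inline gfn_t kvm_mmu_max_gfn(void)
	{
		/*
		 * Inclusive: the returned gfn is the last valid gfn. The
		 * memslot and TDP MMU code, which want an exclusive bound,
		 * add 1. Shadow paging can handle any guest.MAXPHYADDR,
		 * hence the 52-bit limit when TDP is disabled.
		 */
		int max_gpa_bits = likely(tdp_enabled) ? shadow_phys_bits : 52;

		return (1ULL << (max_gpa_bits - PAGE_SHIFT)) - 1;
	}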
Fixes: faaf05b00aec ("kvm: x86/mmu: Support zapping SPTEs in the TDP MMU")
Fixes: 524a1e4e381f ("KVM: x86/mmu: Don't leak non-leaf SPTEs when zapping all SPTEs")
Cc: stable@vger.kernel.org
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc: Ben Gardon <bgardon@google.com>
Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220428233416.2446833-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 29 Apr 2022 16:32:14 +0000 (12:32 -0400)]
Merge tag 'kvmarm-fixes-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm64 fixes for 5.18, take #2
- Take care of faults occurring between the PARange and
IPA range by injecting an exception
- Fix S2 faults taken from a host EL0 in protected mode
- Work around Oops caused by a PMU access from a 32bit
  guest when no PMU has been created. This is a temporary
bodge until we fix it for good.
Marc Zyngier [Thu, 21 Apr 2022 14:38:10 +0000 (15:38 +0100)]
KVM: arm64: Inject exception on out-of-IPA-range translation fault
When taking a translation fault for an IPA that is outside of
the range defined by the hypervisor (between the HW PARange and
the IPA range), we stupidly treat it as an IO and forward the access
to userspace. Of course, userspace can't do much with it, and things
end badly.
Arguably, the guest is braindead, but we should at least catch the
case and inject an exception.
Check the faulting IPA against:
- the sanitised PARange: inject an address size fault
- the IPA size: inject an abort
Reported-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Alexandru Elisei [Mon, 25 Apr 2022 14:55:30 +0000 (15:55 +0100)]
KVM/arm64: Don't emulate a PMU for 32-bit guests if feature not set
kvm->arch.arm_pmu is set when userspace attempts to set the first PMU
attribute. As certain attributes are mandatory, arm_pmu ends up always
being set to a valid arm_pmu, otherwise KVM will refuse to run the VCPU.
However, this only happens if the VCPU has the PMU feature. If the VCPU
doesn't have the feature bit set, kvm->arch.arm_pmu will be left
uninitialized and equal to NULL.
KVM doesn't do ID register emulation for 32-bit guests and accesses to the
PMU registers aren't gated by the pmu_visibility() function. This is done
to prevent injecting unexpected undefined exceptions in guests which have
detected the presence of a hardware PMU. But even though the VCPU feature
is missing, KVM still attempts to emulate certain aspects of the PMU when
PMU registers are accessed. This leads to a NULL pointer dereference like
this one, which happens on an odroid-c4 board when running the
kvm-unit-tests pmu-cycle-counter test with kvmtool and without the PMU
feature being set:
[ 454.402699] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000150
[ 454.405865] Mem abort info:
[ 454.408596] ESR = 0x96000004
[ 454.411638] EC = 0x25: DABT (current EL), IL = 32 bits
[ 454.416901] SET = 0, FnV = 0
[ 454.419909] EA = 0, S1PTW = 0
[ 454.423010] FSC = 0x04: level 0 translation fault
[ 454.427841] Data abort info:
[ 454.430687] ISV = 0, ISS = 0x00000004
[ 454.434484] CM = 0, WnR = 0
[ 454.437404] user pgtable: 4k pages, 48-bit VAs, pgdp=000000000c924000
[ 454.443800] [0000000000000150] pgd=0000000000000000, p4d=0000000000000000
[ 454.450528] Internal error: Oops: 96000004 [#1] PREEMPT SMP
[ 454.456036] Modules linked in:
[ 454.459053] CPU: 1 PID: 267 Comm: kvm-vcpu-0 Not tainted 5.18.0-rc4 #113
[ 454.465697] Hardware name: Hardkernel ODROID-C4 (DT)
[ 454.470612] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 454.477512] pc : kvm_pmu_event_mask.isra.0+0x14/0x74
[ 454.482427] lr : kvm_pmu_set_counter_event_type+0x2c/0x80
[ 454.487775] sp : ffff80000a9839c0
[ 454.491050] x29: ffff80000a9839c0 x28: ffff000000a83a00 x27: 0000000000000000
[ 454.498127] x26: 0000000000000000 x25: 0000000000000000 x24: ffff00000a510000
[ 454.505198] x23: ffff000000a83a00 x22: ffff000003b01000 x21: 0000000000000000
[ 454.512271] x20: 000000000000001f x19: 00000000000003ff x18: 0000000000000000
[ 454.519343] x17: 000000008003fe98 x16: 0000000000000000 x15: 0000000000000000
[ 454.526416] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
[ 454.533489] x11: 000000008003fdbc x10: 0000000000009d20 x9 : 000000000000001b
[ 454.540561] x8 : 0000000000000000 x7 : 0000000000000d00 x6 : 0000000000009d00
[ 454.547633] x5 : 0000000000000037 x4 : 0000000000009d00 x3 : 0d09000000000000
[ 454.554705] x2 : 000000000000001f x1 : 0000000000000000 x0 : 0000000000000000
[ 454.561779] Call trace:
[ 454.564191] kvm_pmu_event_mask.isra.0+0x14/0x74
[ 454.568764] kvm_pmu_set_counter_event_type+0x2c/0x80
[ 454.573766] access_pmu_evtyper+0x128/0x170
[ 454.577905] perform_access+0x34/0x80
[ 454.581527] kvm_handle_cp_32+0x13c/0x160
[ 454.585495] kvm_handle_cp15_32+0x1c/0x30
[ 454.589462] handle_exit+0x70/0x180
[ 454.592912] kvm_arch_vcpu_ioctl_run+0x1c4/0x5e0
[ 454.597485] kvm_vcpu_ioctl+0x23c/0x940
[ 454.601280] __arm64_sys_ioctl+0xa8/0xf0
[ 454.605160] invoke_syscall+0x48/0x114
[ 454.608869] el0_svc_common.constprop.0+0xd4/0xfc
[ 454.613527] do_el0_svc+0x28/0x90
[ 454.616803] el0_svc+0x34/0xb0
[ 454.619822] el0t_64_sync_handler+0xa4/0x130
[ 454.624049] el0t_64_sync+0x18c/0x190
[ 454.627675] Code: a9be7bfd 910003fd f9000bf3 52807ff3 (b9415001)
[ 454.633714] ---[ end trace 0000000000000000 ]---
In this particular case, Linux hasn't detected the presence of a hardware
PMU because the PMU node is missing from the DTB, so userspace would have
been unable to set the VCPU PMU feature even if it attempted it. What
happens is that the 32-bit guest reads ID_DFR0, which advertises the
presence of the PMU, and when it tries to program a counter, it triggers
the NULL pointer dereference because kvm->arch.arm_pmu is NULL.
kvm->arch.arm_pmu was introduced by commit 46b187821472 ("KVM: arm64:
Keep a per-VM pointer to the default PMU"). Until that commit, this
error would be triggered instead:
[ 73.388140] ------------[ cut here ]------------
[ 73.388189] Unknown PMU version 0
[ 73.390420] WARNING: CPU: 1 PID: 264 at arch/arm64/kvm/pmu-emul.c:36 kvm_pmu_event_mask.isra.0+0x6c/0x74
[ 73.399821] Modules linked in:
[ 73.402835] CPU: 1 PID: 264 Comm: kvm-vcpu-0 Not tainted 5.17.0 #114
[ 73.409132] Hardware name: Hardkernel ODROID-C4 (DT)
[ 73.414048] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 73.420948] pc : kvm_pmu_event_mask.isra.0+0x6c/0x74
[ 73.425863] lr : kvm_pmu_event_mask.isra.0+0x6c/0x74
[ 73.430779] sp : ffff80000a8db9b0
[ 73.434055] x29: ffff80000a8db9b0 x28: ffff000000dbaac0 x27: 0000000000000000
[ 73.441131] x26: ffff000000dbaac0 x25: 00000000c600000d x24: 0000000000180720
[ 73.448203] x23: ffff800009ffbe10 x22: ffff00000b612000 x21: 0000000000000000
[ 73.455276] x20: 000000000000001f x19: 0000000000000000 x18: ffffffffffffffff
[ 73.462348] x17: 000000008003fe98 x16: 0000000000000000 x15: 0720072007200720
[ 73.469420] x14: 0720072007200720 x13: ffff800009d32488 x12: 00000000000004e6
[ 73.476493] x11: 00000000000001a2 x10: ffff800009d32488 x9 : ffff800009d32488
[ 73.483565] x8 : 00000000ffffefff x7 : ffff800009d8a488 x6 : ffff800009d8a488
[ 73.490638] x5 : ffff0000f461a9d8 x4 : 0000000000000000 x3 : 0000000000000001
[ 73.497710] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff000000dbaac0
[ 73.504784] Call trace:
[ 73.507195] kvm_pmu_event_mask.isra.0+0x6c/0x74
[ 73.511768] kvm_pmu_set_counter_event_type+0x2c/0x80
[ 73.516770] access_pmu_evtyper+0x128/0x16c
[ 73.520910] perform_access+0x34/0x80
[ 73.524532] kvm_handle_cp_32+0x13c/0x160
[ 73.528500] kvm_handle_cp15_32+0x1c/0x30
[ 73.532467] handle_exit+0x70/0x180
[ 73.535917] kvm_arch_vcpu_ioctl_run+0x20c/0x6e0
[ 73.540489] kvm_vcpu_ioctl+0x2b8/0x9e0
[ 73.544283] __arm64_sys_ioctl+0xa8/0xf0
[ 73.548165] invoke_syscall+0x48/0x114
[ 73.551874] el0_svc_common.constprop.0+0xd4/0xfc
[ 73.556531] do_el0_svc+0x28/0x90
[ 73.559808] el0_svc+0x28/0x80
[ 73.562826] el0t_64_sync_handler+0xa4/0x130
[ 73.567054] el0t_64_sync+0x1a0/0x1a4
[ 73.570676] ---[ end trace 0000000000000000 ]---
[ 73.575382] kvm: pmu event creation failed -2
The root cause remains the same: kvm->arch.pmuver was never set to
something sensible because the VCPU feature itself was never set.
The odroid-c4 is somewhat of a special case, because Linux doesn't probe
the PMU. But the above errors can easily be reproduced on any hardware,
with or without a PMU driver, as long as userspace doesn't set the PMU
feature.
Work around the fact that KVM advertises a PMU even when the VCPU feature
is not set by gating all PMU emulation on the feature. The guest can still
access the registers without KVM injecting an undefined exception.
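In practice this means early returns keyed off the vCPU feature in the PMU
emulation paths; sketch (using one of the functions from the backtrace
above):

	 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
	 				    u64 select_idx)
	 {
	+	/* Without the vCPU PMU feature, the registers are RAZ/WI. */
	+	if (!kvm_vcpu_has_pmu(vcpu))
	+		return;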
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220425145530.723858-1-alexandru.elisei@arm.com
Will Deacon [Wed, 27 Apr 2022 17:13:32 +0000 (18:13 +0100)]
KVM: arm64: Handle host stage-2 faults from 32-bit EL0
When pKVM is enabled, host memory accesses are translated by an identity
mapping at stage-2, which is populated lazily in response to synchronous
exceptions from 64-bit EL1 and EL0.
Extend this handling to cover exceptions originating from 32-bit EL0 as
well. Although these are very unlikely to occur in practice, as the
kernel typically ensures that user pages are initialised before mapping
them in, drivers could still map previously untouched device pages into
userspace and expect things to work rather than panic the system.
Cc: Quentin Perret <qperret@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220427171332.13635-1-will@kernel.org
Paolo Bonzini [Wed, 20 Apr 2022 10:27:27 +0000 (06:27 -0400)]
kvm: selftests: introduce and use more page size-related constants
Clean up code that was hardcoding masks for various fields,
now that the masks are included in processor.h.
For more cleanup, define PAGE_SIZE and PAGE_MASK just like in Linux.
PAGE_SIZE in particular was defined by several tests.
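A sketch of the kind of definitions added, mirroring Linux's own:

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
	#define PAGE_MASK	(~(PAGE_SIZE - 1))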
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 20 Apr 2022 10:27:27 +0000 (06:27 -0400)]
kvm: selftests: do not use bitfields larger than 32-bits for PTEs
Red Hat's QE team reported test failure on access_tracking_perf_test:
Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
guest physical test memory offset: 0x3fffbffff000
Populating memory : 0.684014577s
Writing to populated memory : 0.006230175s
Reading from populated memory : 0.004557805s
==== Test Assertion Failure ====
lib/kvm_util.c:1411: false
pid=125806 tid=125809 errno=4 - Interrupted system call
1 0x0000000000402f7c: addr_gpa2hva at kvm_util.c:1411
2 (inlined by) addr_gpa2hva at kvm_util.c:1405
3 0x0000000000401f52: lookup_pfn at access_tracking_perf_test.c:98
4 (inlined by) mark_vcpu_memory_idle at access_tracking_perf_test.c:152
5 (inlined by) vcpu_thread_main at access_tracking_perf_test.c:232
6 0x00007fefe9ff81ce: ?? ??:0
7 0x00007fefe9c64d82: ?? ??:0
No vm physical memory at 0xffbffff000
I can easily reproduce it with an Intel(R) Xeon(R) CPU E5-2630 with a
46-bit PA.
It turns out that the address translation for clearing idle page tracking
returned a wrong result; addr_gva2gpa()'s last step, which is based on
"pte[index[0]].pfn", did the calculation with 40 bits length and the
high 12 bits got truncated. In above case the GPA address to be returned
should be 0x3fffbffff000 for GVA 0xc0000000, but it got truncated into
0xffbffff000 and the subsequent gpa2hva lookup failed.
The width of operations on bit fields greater than 32-bit is
implementation defined, and differs between GCC (which uses the bitfield
precision) and clang (which uses 64-bit arithmetic), so this is a
potential minefield. Remove the bit fields and use manual masking
instead.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2075036
Reported-by: Nana Liu <nanliu@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Mingwei Zhang [Thu, 21 Apr 2022 03:14:07 +0000 (03:14 +0000)]
KVM: SEV: add cache flush to solve SEV cache incoherency issues
Flush the CPU caches when memory is reclaimed from an SEV guest (where
reclaim also includes it being unmapped from KVM's memslots). Due to lack
of coherency for SEV encrypted memory, failure to flush results in silent
data corruption if userspace is malicious/broken and doesn't ensure SEV
guest memory is properly pinned and unpinned.
Cache coherency is not enforced across the VM boundary in SEV (AMD APM
vol.2 Section 15.34.7). Confidential cachelines, generated by confidential
VM guests, have to be explicitly flushed on the host side. If a memory page
containing dirty confidential cachelines was released by VM and reallocated
to another user, the cachelines may corrupt the new user at a later time.
KVM takes a shortcut by assuming all confidential memory remains pinned
until the end of VM lifetime. Therefore, KVM does not flush cache at
mmu_notifier invalidation events. Because of this incorrect assumption and
the lack of cache flushing, malicious userspace can crash the host kernel
by creating a malicious VM and continuously allocating/releasing unpinned
confidential memory pages while the VM is running.
Add cache flush operations to mmu_notifier operations to ensure that any
physical memory leaving the guest VM gets flushed. In particular, hook
mmu_notifier_invalidate_range_start and mmu_notifier_release events and
flush cache accordingly. The flush happens after releasing the mmu lock to
avoid contention with other vCPUs.
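On the SVM side, the new hook boils down to a targeted WBINVD for SEV
guests; a sketch (naming may differ from the patch):

	static void sev_guest_memory_reclaimed(struct kvm *kvm)
	{
		if (!sev_guest(kvm))
			return;

		/*
		 * Lacking a way to flush only the reclaimed range, write
		 * back and invalidate all caches to purge any dirty
		 * encrypted cachelines.
		 */
		wbinvd_on_all_cpus();
	}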
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson <seanjc@google.com>
Reported-by: Mingwei Zhang <mizhang@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220421031407.2516575-4-mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Mingwei Zhang [Thu, 21 Apr 2022 03:14:06 +0000 (03:14 +0000)]
KVM: SVM: Flush when freeing encrypted pages even on SME_COHERENT CPUs
Use clflush_cache_range() to flush the confidential memory when
SME_COHERENT is supported in the AMD CPU. A cache flush is still needed
since SME_COHERENT only supports cache invalidation on the CPU side. All
confidential cache lines are still incoherent with DMA devices.
Cc: stable@vger.kernel.org
Fixes: add5e2f04541 ("KVM: SVM: Add support for the SEV-ES VMSA")
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220421031407.2516575-3-mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Thu, 21 Apr 2022 03:14:05 +0000 (03:14 +0000)]
KVM: SVM: Simplify and harden helper to flush SEV guest page(s)
Rework sev_flush_guest_memory() to explicitly handle only a single page,
and harden it to fall back to WBINVD if VM_PAGE_FLUSH fails. Per-page
flushing is currently used only to flush the VMSA, and in its current
form, the helper is completely broken with respect to flushing actual
guest memory, i.e. won't work correctly for an arbitrary memory range.
VM_PAGE_FLUSH takes a host virtual address, and is subject to normal page
walks, i.e. will fault if the address is not present in the host page
tables or does not have the correct permissions. Current AMD CPUs also
do not honor SMAP overrides (undocumented in kernel versions of the APM),
so passing in a userspace address is completely out of the question. In
other words, KVM would need to manually walk the host page tables to get
the pfn, ensure the pfn is stable, and then use the direct map to invoke
VM_PAGE_FLUSH. And the latter might not even work, e.g. if userspace is
particularly evil/clever and backs the guest with Secret Memory (which
unmaps memory from the direct map).
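The reworked single-page helper, approximately (reconstructed from the
description):

	static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
	{
		/*
		 * Must be a kernel (direct map) address: VM_PAGE_FLUSH does
		 * a normal page walk and cannot safely take a user address.
		 */
		unsigned long addr = (unsigned long)va;

		if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH,
					     addr | sev_me_mask))) {
			/* Fall back to the big hammer on failure. */
			wbinvd_on_all_cpus();
		}
	}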
Signed-off-by: Sean Christopherson <seanjc@google.com>
Fixes: add5e2f04541 ("KVM: SVM: Add support for the SEV-ES VMSA")
Reported-by: Mingwei Zhang <mizhang@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220421031407.2516575-2-mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 14 Apr 2022 10:30:31 +0000 (12:30 +0200)]
KVM: selftests: Silence compiler warning in the kvm_page_table_test
When compiling kvm_page_table_test.c, I get this compiler warning
with gcc 11.2:
kvm_page_table_test.c: In function 'pre_init_before_test':
../../../../tools/include/linux/kernel.h:44:24: warning: comparison of
distinct pointer types lacks a cast
44 | (void) (&_max1 == &_max2); \
| ^~
kvm_page_table_test.c:281:21: note: in expansion of macro 'max'
281 | alignment = max(0x100000, alignment);
| ^~~
Fix it by adjusting the type of the absolute value.
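Presumably the one-liner (reconstructed from the warning above, so treat it
as illustrative):

	-	alignment = max(0x100000, alignment);
	+	alignment = max(0x100000UL, alignment);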
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20220414103031.565037-1-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Like Xu [Sat, 9 Apr 2022 01:52:26 +0000 (09:52 +0800)]
KVM: x86/pmu: Update AMD PMC sample period to fix guest NMI-watchdog
NMI-watchdog is one of the favorite features of kernel developers,
but it does not work in AMD guests even with vPMU enabled and, worse,
the system misrepresents this capability via /proc.
This is a PMC emulation error. KVM does not pass the latest valid
value to perf_event in time when guest NMI-watchdog is running, thus
the perf_event corresponding to the watchdog counter will enter the
old state at some point after the first guest NMI injection, forcing
the hardware register PMC0 to be constantly written to 0x800000000001.
Meanwhile, the running counter should accurately reflect its new value
based on the latest coordinated pmc->counter (from vPMC's point of view)
rather than the value written directly by the guest.
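The fix funnels guest counter writes through a helper that refreshes the
perf sample period; roughly (naming may differ from the patch):

	static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
	{
		if (!pmc->perf_event || pmc->is_paused)
			return;

		/* Propagate the just-written counter value to perf. */
		perf_event_period(pmc->perf_event,
				  get_sample_period(pmc, pmc->counter));
	}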
Fixes: 168d918f2643 ("KVM: x86: Adjust counter sample period after a wrmsr")
Reported-by: Dongli Cao <caodongli@kingsoft.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Tested-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Message-Id: <20220409015226.38619-1-likexu@tencent.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Mon, 18 Apr 2022 07:42:32 +0000 (00:42 -0700)]
x86/kvm: Preserve BSP MSR_KVM_POLL_CONTROL across suspend/resume
MSR_KVM_POLL_CONTROL is cleared on reset, thus reverting guests to
host-side polling after suspend/resume. Non-bootstrap CPUs are
restored correctly by the haltpoll driver because they are hot-unplugged
during suspend and hot-plugged during resume; however, the BSP
is not hotpluggable and remains in host-side polling mode after
the guest resume. This makes the guest pay for the cost of vmexits
every time the guest enters idle.
Fix it by recording BSP's haltpoll state and resuming it during guest
resume.
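Condensed, the fix registers syscore ops that save and restore the BSP's
poll-control MSR across suspend/resume:

	static bool has_guest_poll;

	static int kvm_suspend(void)
	{
		u64 val = 0;

		if (kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL))
			rdmsrl(MSR_KVM_POLL_CONTROL, val);
		has_guest_poll = !(val & 1);
		return 0;
	}

	static void kvm_resume(void)
	{
		if (kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL) &&
		    has_guest_poll)
			wrmsrl(MSR_KVM_POLL_CONTROL, 0);
	}

	static struct syscore_ops kvm_syscore_ops = {
		.suspend	= kvm_suspend,
		.resume		= kvm_resume,
	};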
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1650267752-46796-1-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tom Rix [Sun, 10 Apr 2022 15:38:40 +0000 (11:38 -0400)]
KVM: SPDX style and spelling fixes
SPDX comments use /* */ style comments in headers and
// style comments in .c files. Also fix two spelling mistakes.
Signed-off-by: Tom Rix <trix@redhat.com>
Message-Id: <20220410153840.55506-1-trix@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 20 Apr 2022 01:37:32 +0000 (01:37 +0000)]
KVM: x86: Skip KVM_GUESTDBG_BLOCKIRQ APICv update if APICv is disabled
Skip the APICv inhibit update for KVM_GUESTDBG_BLOCKIRQ if APICv is
disabled at the module level to avoid having to acquire the mutex and
potentially process all vCPUs. The DISABLE inhibit will (barring bugs)
never be lifted, so piling on more inhibits is unnecessary.
Fixes: cae72dcc3b21 ("KVM: x86: inhibit APICv when KVM_GUESTDBG_BLOCKIRQ active")
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220420013732.3308816-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 20 Apr 2022 01:37:31 +0000 (01:37 +0000)]
KVM: x86: Pend KVM_REQ_APICV_UPDATE during vCPU creation to fix a race
Make a KVM_REQ_APICV_UPDATE request when creating a vCPU with an
in-kernel local APIC and APICv enabled at the module level. Consuming
kvm_apicv_activated() and stuffing vcpu->arch.apicv_active directly can
race with __kvm_set_or_clear_apicv_inhibit(), as vCPU creation happens
before the vCPU is fully onlined, i.e. it won't get the request made to
"all" vCPUs. If APICv is globally inhibited between setting apicv_active
and onlining the vCPU, the vCPU will end up running with APICv enabled
and trigger KVM's sanity check.
Mark APICv as active during vCPU creation if APICv is enabled at the
module level, both to be optimistic about its final state, e.g. to avoid
additional VMWRITEs on VMX, and because there are likely bugs lurking
since KVM checks apicv_active in multiple vCPU creation paths. While
keeping the current behavior of consuming kvm_apicv_activated() is
arguably safer from a regression perspective, force apicv_active so that
vCPU creation runs with deterministic state and so that if there are bugs,
they are found sooner than later, i.e. not when some crazy race condition
is hit.
WARNING: CPU: 0 PID: 484 at arch/x86/kvm/x86.c:9877 vcpu_enter_guest+0x2ae3/0x3ee0 arch/x86/kvm/x86.c:9877
Modules linked in:
CPU: 0 PID: 484 Comm: syz-executor361 Not tainted 5.16.13 #2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1~cloud0 04/01/2014
RIP: 0010:vcpu_enter_guest+0x2ae3/0x3ee0 arch/x86/kvm/x86.c:9877
Call Trace:
<TASK>
vcpu_run arch/x86/kvm/x86.c:10039 [inline]
kvm_arch_vcpu_ioctl_run+0x337/0x15e0 arch/x86/kvm/x86.c:10234
kvm_vcpu_ioctl+0x4d2/0xc80 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3727
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:874 [inline]
__se_sys_ioctl fs/ioctl.c:860 [inline]
__x64_sys_ioctl+0x16d/0x1d0 fs/ioctl.c:860
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x38/0x90 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
The bug was hit by a syzkaller spamming VM creation with 2 vCPUs and a
call to KVM_SET_GUEST_DEBUG.
r0 = openat$kvm(0xffffffffffffff9c, &(0x7f0000000000), 0x0, 0x0)
r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
ioctl$KVM_CAP_SPLIT_IRQCHIP(r1, 0x4068aea3, &(0x7f0000000000)) (async)
r2 = ioctl$KVM_CREATE_VCPU(r1, 0xae41, 0x0) (async)
r3 = ioctl$KVM_CREATE_VCPU(r1, 0xae41, 0x400000000000002)
ioctl$KVM_SET_GUEST_DEBUG(r3, 0x4048ae9b, &(0x7f00000000c0)={0x5dda9c14aa95f5c5})
ioctl$KVM_RUN(r2, 0xae80, 0x0)
Reported-by: Gaoning Pan <pgn@zju.edu.cn>
Reported-by: Yongkang Jia <kangel@zju.edu.cn>
Fixes: 8df14af42f00 ("kvm: x86: Add support for dynamic APICv activation")
Cc: stable@vger.kernel.org
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220420013732.3308816-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 20 Apr 2022 01:37:30 +0000 (01:37 +0000)]
KVM: nVMX: Defer APICv updates while L2 is active until L1 is active
Defer APICv updates that occur while L2 is active until nested VM-Exit,
i.e. until L1 regains control. vmx_refresh_apicv_exec_ctrl() assumes L1
is active and (a) stomps all over vmcs02 and (b) neglects to ever update
vmcs01. E.g. if vmcs12 doesn't enable the TPR shadow for L2 (and thus no
APICv controls), L1 performs nested VM-Enter with APICv inhibited, and APICv
becomes uninhibited while L2 is active, KVM will set various APICv controls
in vmcs02 and trigger a failed VM-Entry. The kicker is that, unless
running with nested_early_check=1, KVM blames L1 and chaos ensues.
In all cases, ignoring vmcs02 and always deferring the inhibition change
to vmcs01 is correct (or at least acceptable). The ABSENT and DISABLE
inhibitions cannot truly change while L2 is active (see below).
IRQ_BLOCKING can change, but it is firmly a best effort debug feature.
Furthermore, only L2's APIC is accelerated/virtualized to the full extent
possible, e.g. even if L1 passes through its APIC to L2, normal MMIO/MSR
interception will apply to the virtual APIC managed by KVM.
The exception is the SELF_IPI register when x2APIC is enabled, but that's
an acceptable hole.
Lastly, Hyper-V's Auto EOI can technically be toggled if L1 exposes the
MSRs to L2, but for that to work in any sane capacity, L1 would need to
pass through IRQs to L2 as well, and IRQs must be intercepted to enable
virtual interrupt delivery. I.e. exposing Auto EOI to L2 and enabling
VID for L2 are, for all intents and purposes, mutually exclusive.
Lack of dynamic toggling is also why this scenario is all but impossible
to encounter in KVM's current form. But a future patch will pend an
APICv update request _during_ vCPU creation to plug a race where a vCPU
that's being created doesn't get included in the "all vCPUs request"
because it's not yet visible to other vCPUs. If userspace restores L2
after VM creation (hello, KVM selftests), the first KVM_RUN will occur
while L2 is active and thus service the APICv update request made during
VM creation.
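A sketch of the deferral (the flag name below is illustrative; the exact
plumbing may differ):
  /* in vmx_refresh_apicv_exec_ctrl() (sketch) */
  if (is_guest_mode(vcpu)) {
          /* L2 active: don't touch vmcs02, apply to vmcs01 on nested VM-Exit */
          to_vmx(vcpu)->nested.update_vmcs01_apicv_status = true;
          return;
  }
On nested VM-Exit, if the flag is set, the refresh is re-run with vmcs01 loaded.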
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220420013732.3308816-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 20 Apr 2022 01:37:29 +0000 (01:37 +0000)]
KVM: x86: Tag APICv DISABLE inhibit, not ABSENT, if APICv is disabled
Set the DISABLE inhibit, not the ABSENT inhibit, if APICv is disabled via
module param. A recent refactoring to add a wrapper for setting/clearing
inhibits unintentionally changed the flag, probably due to a copy+paste
goof.
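The likely shape of the one-liner, assuming the wrapper added by that
refactoring:
  /* APICv disabled via module param => DISABLE, not ABSENT */
  kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_DISABLE);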
Fixes: 4f4c4a3ee53c ("KVM: x86: Trace all APICv inhibit changes and capture overall status")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220420013732.3308816-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Apr 2022 00:46:22 +0000 (00:46 +0000)]
KVM: Initialize debugfs_dentry when a VM is created to avoid NULL deref
Initialize debugfs_dentry to its semi-magical -ENOENT value when the VM
is created. KVM's teardown when VM creation fails is kludgy and calls
kvm_uevent_notify_change() and kvm_destroy_vm_debugfs() even if KVM never
attempted kvm_create_vm_debugfs(). Because debugfs_dentry is zero
initialized, the IS_ERR() checks pass and KVM derefs a NULL pointer.
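A sketch of the fix, assuming the initialization goes in kvm_create_vm()
before any failure path can run:
  /* IS_ERR(ERR_PTR(-ENOENT)) is true, so teardown skips the dentry */
  kvm->debugfs_dentry = ERR_PTR(-ENOENT);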
BUG: kernel NULL pointer dereference, address: 0000000000000018
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 1068b1067 P4D 1068b1067 PUD 1068b0067 PMD 0
Oops: 0000 [#1] SMP
CPU: 0 PID: 871 Comm: repro Not tainted 5.18.0-rc1+ #825
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:__dentry_path+0x7b/0x130
Call Trace:
<TASK>
dentry_path_raw+0x42/0x70
kvm_uevent_notify_change.part.0+0x10c/0x200 [kvm]
kvm_put_kvm+0x63/0x2b0 [kvm]
kvm_dev_ioctl+0x43a/0x920 [kvm]
__x64_sys_ioctl+0x83/0xb0
do_syscall_64+0x31/0x50
entry_SYSCALL_64_after_hwframe+0x44/0xae
</TASK>
Modules linked in: kvm_intel kvm irqbypass
Fixes: a44a4cc1c969 ("KVM: Don't create VM debugfs files outside of the VM directory")
Cc: stable@vger.kernel.org
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oupton@google.com>
Reported-by: syzbot+df6fbbd2ee39f21289ef@syzkaller.appspotmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Message-Id: <20220415004622.2207751-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Apr 2022 00:43:43 +0000 (00:43 +0000)]
KVM: Add helpers to wrap vcpu->srcu_idx and yell if it's abused
Add wrappers to acquire/release KVM's SRCU lock when stashing the index
in vcpu->srcu_idx, along with rudimentary detection of illegal usage,
e.g. re-acquiring SRCU and thus overwriting vcpu->srcu_idx. Because the
SRCU index is (currently) either 0 or 1, illegal nesting bugs can go
unnoticed for quite some time and only cause problems when the nested
lock happens to get a different index.
Wrap the WARNs in PROVE_RCU=y, and make them ONCE, otherwise KVM will
likely yell so loudly that it will bring the kernel to its knees.
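A sketch of the wrappers (the srcu_depth counter is illustrative of the
rudimentary detection):
  static inline void kvm_vcpu_srcu_read_lock(struct kvm_vcpu *vcpu)
  {
  #ifdef CONFIG_PROVE_RCU
          WARN_ONCE(vcpu->srcu_depth++,
                    "KVM: Illegal vCPU srcu_idx LOCK, depth=%d", vcpu->srcu_depth - 1);
  #endif
          vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  }

  static inline void kvm_vcpu_srcu_read_unlock(struct kvm_vcpu *vcpu)
  {
          srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
  #ifdef CONFIG_PROVE_RCU
          WARN_ONCE(--vcpu->srcu_depth,
                    "KVM: Illegal vCPU srcu_idx UNLOCK, depth=%d", vcpu->srcu_depth);
  #endif
  }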
Signed-off-by: Sean Christopherson <seanjc@google.com>
Tested-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220415004343.2203171-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Apr 2022 00:43:42 +0000 (00:43 +0000)]
KVM: RISC-V: Use kvm_vcpu.srcu_idx, drop RISC-V's unnecessary copy
Use the generic kvm_vcpu's srcu_idx instead of using an identical field
in RISC-V's version of kvm_vcpu_arch. Generic KVM very intentionally
does not touch vcpu->srcu_idx, i.e. there's zero chance of running afoul
of common code.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220415004343.2203171-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Fri, 15 Apr 2022 00:43:41 +0000 (00:43 +0000)]
KVM: x86: Don't re-acquire SRCU lock in complete_emulated_io()
Don't re-acquire SRCU in complete_emulated_io() now that KVM acquires the
lock in kvm_arch_vcpu_ioctl_run(). More importantly, don't overwrite
vcpu->srcu_idx. If the index acquired by complete_emulated_io() differs
from the one acquired by kvm_arch_vcpu_ioctl_run(), KVM will effectively
leak a lock and hang if/when synchronize_srcu() is invoked for the
relevant grace period.
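With the lock held by kvm_arch_vcpu_ioctl_run(), the helper plausibly
reduces to something like:
  /* sketch: no SRCU acquire/release left in the completion helper */
  static inline int complete_emulated_io(struct kvm_vcpu *vcpu)
  {
          return kvm_emulate_instruction(vcpu, EMULTYPE_NO_DECODE);
  }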
Fixes: 8d25b7beca7e ("KVM: x86: pull kvm->srcu read-side to kvm_arch_vcpu_ioctl_run")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20220415004343.2203171-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 21 Apr 2022 15:58:57 +0000 (11:58 -0400)]
Merge tag 'kvm-riscv-fixes-5.18-2' of https://github.com/kvm-riscv/linux into HEAD
KVM/riscv fixes for 5.18, take #2
- Remove 's' & 'u' as valid ISA extension
- Do not allow disabling the base extensions 'i'/'m'/'a'/'c'
Atish Patra [Wed, 20 Apr 2022 01:32:58 +0000 (18:32 -0700)]
RISC-V: KVM: Restrict the extensions that can be disabled
Currently, the config ISA register allows us to disable all allowed
single-letter ISA extensions. It shouldn't be the case, as the VMM
shouldn't be able to disable the base extensions (imac).
These extensions should always be enabled as long as they are enabled
in the host ISA.
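A sketch of the restriction, reusing the existing riscv_isa_extension_mask()
helper (the macro name below is illustrative):
  /* single-letter extensions that may never be disabled by the VMM */
  #define KVM_RISCV_ISA_DISABLE_NOT_ALLOWED  (riscv_isa_extension_mask(a) | \
                                              riscv_isa_extension_mask(c) | \
                                              riscv_isa_extension_mask(i) | \
                                              riscv_isa_extension_mask(m))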
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Fixes: 92ad82002c39 ("RISC-V: KVM: Implement KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls")
Atish Patra [Wed, 20 Apr 2022 01:32:57 +0000 (18:32 -0700)]
RISC-V: KVM: Remove 's' & 'u' as valid ISA extension
There are no ISA extensions defined as 's' & 'u' in the RISC-V
specifications. The misa register defines the 's' & 'u' bits as
Supervisor/User privilege mode enabled, but they should not appear in the
ISA extension list in the device tree. Remove those from the allowed ISA
extensions for KVM.
Fixes: a33c72faf2d7 ("RISC-V: KVM: Implement VCPU create, init and destroy functions")
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
Linus Torvalds [Sun, 17 Apr 2022 20:57:31 +0000 (13:57 -0700)]
Linux 5.18-rc3
Linus Torvalds [Sun, 17 Apr 2022 17:29:10 +0000 (10:29 -0700)]
Merge tag 'for-linus-5.18-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen fixlet from Juergen Gross:
"A single cleanup patch for the Xen balloon driver"
* tag 'for-linus-5.18-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/balloon: don't use PV mode extra memory for zone device allocations
Linus Torvalds [Sun, 17 Apr 2022 16:55:59 +0000 (09:55 -0700)]
Merge tag 'x86-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
"Two x86 fixes related to TSX:
- Use either MSR_TSX_FORCE_ABORT or MSR_IA32_TSX_CTRL to disable TSX
to cover all CPUs which allow to disable it.
- Disable TSX development mode at boot so that a microcode update
which provides TSX development mode does not suddenly make the
system vulnerable to TSX Asynchronous Abort"
* tag 'x86-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/tsx: Disable TSX development mode at boot
x86/tsx: Use MSR_TSX_CTRL to clear CPUID bits
Linus Torvalds [Sun, 17 Apr 2022 16:53:01 +0000 (09:53 -0700)]
Merge tag 'timers-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fixes from Thomas Gleixner:
"A small set of fixes for the timers core:
- Fix the warning condition in __run_timers() which does not take
into account that a CPU base (especially the deferrable base) never
has a timer armed on it and therefore the next_expiry value can
become stale.
- Replace a WARN_ON() in the NOHZ code with a WARN_ON_ONCE() to
prevent endless spam in dmesg.
- Remove the double star from a comment which is not meant to be in
kernel-doc format"
* tag 'timers-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
tick/sched: Fix non-kernel-doc comment
tick/nohz: Use WARN_ON_ONCE() to prevent console saturation
timers: Fix warning condition in __run_timers()
Linus Torvalds [Sun, 17 Apr 2022 16:46:15 +0000 (09:46 -0700)]
Merge tag 'smp-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull SMP fixes from Thomas Gleixner:
"Two fixes for the SMP core:
- Make the warning condition in flush_smp_call_function_queue()
correct, which checked a just emptied list head for being empty
instead of validating that there was no pending entry on the
offlined CPU at all.
- The @cpu member of struct cpuhp_cpu_state is initialized when the
CPU hotplug thread for the upcoming CPU is created. That's too late
because the creation of the thread can fail and then the following
rollback operates on CPU0. Get rid of the CPU member and hand the
CPU number to the involved functions directly"
* tag 'smp-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
cpu/hotplug: Remove the 'cpu' member of cpuhp_cpu_state
smp: Fix offline cpu check in flush_smp_call_function_queue()
Linus Torvalds [Sun, 17 Apr 2022 16:42:03 +0000 (09:42 -0700)]
Merge tag 'irq-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fix from Thomas Gleixner:
"A single fix for the interrupt affinity spreading logic to take into
account that there can be an imbalance between present and possible
CPUs, which causes already assigned bits to be overwritten"
* tag 'irq-urgent-2022-04-17' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
genirq/affinity: Consider that CPUs on nodes can be unbalanced
Linus Torvalds [Sun, 17 Apr 2022 16:36:27 +0000 (09:36 -0700)]
Merge tag 'for-v5.18-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply
Pull power supply fixes from Sebastian Reichel:
- Fix a regression with battery data failing to load from DT
* tag 'for-v5.18-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/sre/linux-power-supply:
power: supply: Reset err after not finding static battery
power: supply: samsung-sdi-battery: Add missing charge restart voltages
Linus Torvalds [Sun, 17 Apr 2022 16:31:27 +0000 (09:31 -0700)]
Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Pull i2c fixes from Wolfram Sang:
"Regular set of fixes for drivers and the dev-interface"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: ismt: Fix undefined behavior due to shift overflowing the constant
i2c: dev: Force cast user pointers in compat_i2cdev_ioctl()
i2c: dev: check return value when calling dev_set_name()
i2c: qcom-geni: Use dev_err_probe() for GPI DMA error
i2c: imx: Implement errata ERR007805 or e7805 bus frequency limit
i2c: pasemi: Wait for write xfers to finish
Linus Torvalds [Sun, 17 Apr 2022 00:07:50 +0000 (17:07 -0700)]
Merge tag 'devicetree-fixes-for-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull devicetree fixes from Rob Herring:
- Fix scalar property schemas with array constraints
- Fix 'enum' lists with duplicate entries
- Fix incomplete if/then/else schemas
- Add Renesas RZ/V2L SoC support to Mali Bifrost binding
- Maintainers update for Marvell irqchip
* tag 'devicetree-fixes-for-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
dt-bindings: display: panel-timing: Define a single type for properties
dt-bindings: Fix array constraints on scalar properties
dt-bindings: gpu: mali-bifrost: Document RZ/V2L SoC
dt-bindings: net: snps: remove duplicate name
dt-bindings: Fix 'enum' lists with duplicate entries
dt-bindings: irqchip: mrvl,intc: refresh maintainers
dt-bindings: Fix incomplete if/then/else schemas
dt-bindings: power: renesas,apmu: Fix cpus property limits
dt-bindings: extcon: maxim,max77843: fix ports type
Linus Torvalds [Sun, 17 Apr 2022 00:01:43 +0000 (17:01 -0700)]
Merge tag 'gpio-fixes-for-v5.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux
Pull gpio fixes from Bartosz Golaszewski:
"A single fix for gpio-sim and two patches for GPIO ACPI pulled from
Andy:
- fix the set/get_multiple() callbacks in gpio-sim
- use correct format characters in gpiolib-acpi
- use an unsigned type for pins in gpiolib-acpi"
* tag 'gpio-fixes-for-v5.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
gpio: sim: fix setting and getting multiple lines
gpiolib: acpi: Convert type for pin to be unsigned
gpiolib: acpi: use correct format characters
Linus Torvalds [Sat, 16 Apr 2022 23:51:39 +0000 (16:51 -0700)]
Merge tag 'soc-fixes-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull ARM SoC fixes from Arnd Bergmann:
"There are a number of SoC bugfixes that came in since the merge
window, and more of them are already pending.
This batch includes:
- A boot time regression fix for davinci that triggered on
multi_v5_defconfig when booting any platform
- Defconfig updates to address removed features, changed symbol names
or dependencies, for gemini, ux500, and pxa
- Email address changes for Krzysztof Kozlowski
- Build warning fixes for ep93xx and iop32x
- Devicetree warning fixes across many platforms
- Minor bugfixes for the reset controller, memory controller and SCMI
firmware subsystems plus the versatile-express board"
* tag 'soc-fixes-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (34 commits)
ARM: config: Update Gemini defconfig
arm64: dts: qcom/sdm845-shift-axolotl: Fix boolean properties with values
ARM: dts: align SPI NOR node name with dtschema
ARM: dts: Fix more boolean properties with values
arm/arm64: dts: qcom: Fix boolean properties with values
arm64: dts: imx: Fix imx8*-var-som touchscreen property sizes
arm: dts: imx: Fix boolean properties with values
arm64: dts: tegra: Fix boolean properties with values
arm: dts: at91: Fix boolean properties with values
arm: configs: imote2: Drop defconfig as board support dropped.
ep93xx: clock: Don't use plain integer as NULL pointer
ep93xx: clock: Fix UAF in ep93xx_clk_register_gate()
ARM: vexpress/spc: Fix all the kernel-doc build warnings
ARM: vexpress/spc: Fix kernel-doc build warning for ve_spc_cpu_in_wfi
ARM: config: u8500: Re-enable AB8500 battery charging
ARM: config: u8500: Add some common hardware
memory: fsl_ifc: populate child nodes of buses and mfd devices
ARM: config: Refresh U8500 defconfig
firmware: arm_scmi: Fix sparse warnings in OPTEE transport driver
firmware: arm_scmi: Replace zero-length array with flexible-array member
...
Linus Torvalds [Sat, 16 Apr 2022 23:42:53 +0000 (16:42 -0700)]
Merge tag 'random-5.18-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator fixes from Jason Donenfeld:
- Per your suggestion, random reads now won't fail if there's a page
fault after some non-zero amount of data has been read, which makes
the behavior consistent with all other reads in the kernel.
- Rather than an inconsistent mix of random_get_entropy() returning an
unsigned long or a cycles_t, now it just returns an unsigned long.
- A memcpy() was replaced with an memmove(), because the addresses are
sometimes overlapping. In practice the destination is always before
the source, so not really an issue, but better to be correct than
not.
* tag 'random-5.18-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
random: use memmove instead of memcpy for remaining 32 bytes
random: make random_get_entropy() return an unsigned long
random: allow partial reads if later user copies fail
Linus Torvalds [Sat, 16 Apr 2022 20:38:26 +0000 (13:38 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"13 fixes, all in drivers.
The most extensive changes are in the iscsi series (affecting drivers
qedi, cxgbi and bnx2i), the next most is scsi_debug, but that's just a
simple revert and then minor updates to pm80xx"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
scsi: iscsi: MAINTAINERS: Add Mike Christie as co-maintainer
scsi: qedi: Fix failed disconnect handling
scsi: iscsi: Fix NOP handling during conn recovery
scsi: iscsi: Merge suspend fields
scsi: iscsi: Fix unbound endpoint error handling
scsi: iscsi: Fix conn cleanup and stop race during iscsid restart
scsi: iscsi: Fix endpoint reuse regression
scsi: iscsi: Release endpoint ID when it's freed
scsi: iscsi: Fix offload conn cleanup when iscsid restarts
scsi: iscsi: Move iscsi_ep_disconnect()
scsi: pm80xx: Enable upper inbound, outbound queues
scsi: pm80xx: Mask and unmask upper interrupt vectors 32-63
Revert "scsi: scsi_debug: Address races following module load"
Bartosz Golaszewski [Sat, 16 Apr 2022 19:57:00 +0000 (21:57 +0200)]
Merge tag 'intel-gpio-v5.18-2' of gitolite.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel into gpio/for-current
intel-gpio for v5.18-2
* Couple of fixes related to handling unsigned value of the pin from ACPI
gpiolib:
- acpi: Convert type for pin to be unsigned
- acpi: use correct format characters
Linus Torvalds [Sat, 16 Apr 2022 18:20:21 +0000 (11:20 -0700)]
Merge tag 'dma-mapping-5.18-2' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fix from Christoph Hellwig:
- avoid a double memory copy for swiotlb (Chao Gao)
* tag 'dma-mapping-5.18-2' of git://git.infradead.org/users/hch/dma-mapping:
dma-direct: avoid redundant memory sync for swiotlb
Jason A. Donenfeld [Wed, 13 Apr 2022 23:50:38 +0000 (01:50 +0200)]
random: use memmove instead of memcpy for remaining 32 bytes
In order to immediately overwrite the old key on the stack, before
servicing a userspace request for bytes, we use the remaining 32 bytes
of block 0 as the key. This means moving indices 8,9,a,b,c,d,e,f ->
4,5,6,7,8,9,a,b. Since 4 < 8, for the kernel implementations of
memcpy(), this doesn't actually appear to be a problem in practice. But
relying on that characteristic seems a bit brittle. So let's change that
to a proper memmove(), which is the by-the-books way of handling
overlapping memory copies.
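Illustratively (the array name here is hypothetical; the point is the
overlapping ranges):
  u32 block[16];                          /* one expanded chacha block */
  /* 32-bit words 8..15 move down onto words 4..11; src and dst overlap */
  memmove(&block[4], &block[8], 8 * sizeof(u32));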
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Linus Torvalds [Fri, 15 Apr 2022 22:57:18 +0000 (15:57 -0700)]
Merge branch 'akpm' (patches from Andrew)
Merge misc fixes from Andrew Morton:
"14 patches.
Subsystems affected by this patch series: MAINTAINERS, binfmt, and
mm (tmpfs, secretmem, kasan, kfence, pagealloc, zram, compaction,
hugetlb, vmalloc, and kmemleak)"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm: kmemleak: take a full lowmem check in kmemleak_*_phys()
mm/vmalloc: fix spinning drain_vmap_work after reading from /proc/vmcore
revert "fs/binfmt_elf: use PT_LOAD p_align values for static PIE"
revert "fs/binfmt_elf: fix PT_LOAD p_align values for loaders"
hugetlb: do not demote poisoned hugetlb pages
mm: compaction: fix compiler warning when CONFIG_COMPACTION=n
mm: fix unexpected zeroed page mapping with zram swap
mm, page_alloc: fix build_zonerefs_node()
mm, kfence: support kmem_dump_obj() for KFENCE objects
kasan: fix hw tags enablement when KUNIT tests are disabled
irq_work: use kasan_record_aux_stack_noalloc() record callstack
mm/secretmem: fix panic when growing a memfd_secret
tmpfs: fix regressions from wider use of ZERO_PAGE
MAINTAINERS: Broadcom internal lists aren't maintainers
Linus Torvalds [Fri, 15 Apr 2022 22:20:59 +0000 (15:20 -0700)]
Merge tag 'for-5.18/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
- Fix memory corruption in DM integrity target when tag_size is less
than digest size.
- Fix DM multipath's historical-service-time path selector to not use
sched_clock() and ktime_get_ns(); only use ktime_get_ns().
- Fix dm_io->orig_bio NULL pointer dereference in dm_zone_map_bio() due
to 5.18 changes that overlooked DM zone's use of ->orig_bio.
- Fix for regression that broke the use of dm_accept_partial_bio() for
"abnormal" IO (e.g. WRITE ZEROES) that does not need duplicate bios.
- Fix DM's issuing of the empty flush bio so that its size is 0.
* tag 'for-5.18/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm: fix bio length of empty flush
dm: allow dm_accept_partial_bio() for dm_io without duplicate bios
dm zone: fix NULL pointer dereference in dm_zone_map_bio
dm mpath: only use ktime_get_ns() in historical selector
dm integrity: fix memory corruption when tag_size is less than digest size
Patrick Wang [Fri, 15 Apr 2022 02:14:04 +0000 (19:14 -0700)]
mm: kmemleak: take a full lowmem check in kmemleak_*_phys()
The kmemleak_*_phys() APIs do not check the address against lowmem's min
boundary, while the caller may pass an address below lowmem, which will
trigger an oops:
# echo scan > /sys/kernel/debug/kmemleak
Unable to handle kernel paging request at virtual address ff5fffffffe00000
Oops [#1]
Modules linked in:
CPU: 2 PID: 134 Comm: bash Not tainted 5.18.0-rc1-next-20220407 #33
Hardware name: riscv-virtio,qemu (DT)
epc : scan_block+0x74/0x15c
ra : scan_block+0x72/0x15c
epc : ffffffff801e5806 ra : ffffffff801e5804 sp : ff200000104abc30
gp : ffffffff815cd4e8 tp : ff60000004cfa340 t0 : 0000000000000200
t1 : 00aaaaaac23954cc t2 : 00000000000003ff s0 : ff200000104abc90
s1 : ffffffff81b0ff28 a0 : 0000000000000000 a1 : ff5fffffffe01000
a2 : ffffffff81b0ff28 a3 : 0000000000000002 a4 : 0000000000000001
a5 : 0000000000000000 a6 : ff200000104abd7c a7 : 0000000000000005
s2 : ff5fffffffe00ff9 s3 : ffffffff815cd998 s4 : ffffffff815d0e90
s5 : ffffffff81b0ff28 s6 : 0000000000000020 s7 : ffffffff815d0eb0
s8 : ffffffffffffffff s9 : ff5fffffffe00000 s10: ff5fffffffe01000
s11: 0000000000000022 t3 : 00ffffffaa17db4c t4 : 000000000000000f
t5 : 0000000000000001 t6 : 0000000000000000
status: 0000000000000100 badaddr: ff5fffffffe00000 cause: 000000000000000d
scan_gray_list+0x12e/0x1a6
kmemleak_scan+0x2aa/0x57e
kmemleak_write+0x32a/0x40c
full_proxy_write+0x56/0x82
vfs_write+0xa6/0x2a6
ksys_write+0x6c/0xe2
sys_write+0x22/0x2a
ret_from_syscall+0x0/0x2
The callers may not quite know the actual address they pass (e.g. from
devicetree). So the kmemleak_*_phys() APIs should guarantee that the
address they finally use is in the lowmem range; check the address
against lowmem's min boundary.
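A sketch of the guard, e.g. inside kmemleak_alloc_phys():
  /* only track the object if it lies in lowmem */
  if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
          kmemleak_alloc(__va(phys), size, min_count, gfp);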
Link: https://lkml.kernel.org/r/20220413122925.33856-1-patrick.wang.shcn@gmail.com
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Omar Sandoval [Fri, 15 Apr 2022 02:14:01 +0000 (19:14 -0700)]
mm/vmalloc: fix spinning drain_vmap_work after reading from /proc/vmcore
Commit 3ee48b6af49c ("mm, x86: Saving vmcore with non-lazy freeing of
vmas") introduced set_iounmap_nonlazy(), which sets vmap_lazy_nr to
lazy_max_pages() + 1, ensuring that any future vunmaps() immediately
purge the vmap areas instead of doing it lazily.
Commit 690467c81b1a ("mm/vmalloc: Move draining areas out of caller
context") moved the purging from the vunmap() caller to a worker thread.
Unfortunately, set_iounmap_nonlazy() can cause the worker thread to spin
(possibly forever). For example, consider the following scenario:
1. Thread reads from /proc/vmcore. This eventually calls
__copy_oldmem_page() -> set_iounmap_nonlazy(), which sets
vmap_lazy_nr to lazy_max_pages() + 1.
2. Then it calls free_vmap_area_noflush() (via iounmap()), which adds 2
pages (one page plus the guard page) to the purge list and
vmap_lazy_nr. vmap_lazy_nr is now lazy_max_pages() + 3, so the
drain_vmap_work is scheduled.
3. Thread returns from the kernel and is scheduled out.
4. Worker thread is scheduled in and calls drain_vmap_area_work(). It
frees the 2 pages on the purge list. vmap_lazy_nr is now
lazy_max_pages() + 1.
5. This is still over the threshold, so it tries to purge areas again,
but doesn't find anything.
6. Repeat 5.
If the system is running with only one CPU (which is typical for kdump)
and preemption is disabled, then this will never make forward progress:
there aren't any more pages to purge, so it hangs. If there is more
than one CPU or preemption is enabled, then the worker thread will spin
forever in the background. (Note that if there were already pages to be
purged at the time that set_iounmap_nonlazy() was called, this bug is
avoided.)
This can be reproduced with anything that reads from /proc/vmcore
multiple times. E.g., vmcore-dmesg /proc/vmcore.
It turns out that improvements to vmap() over the years have obsoleted
the need for this "optimization". I benchmarked `dd if=/proc/vmcore
of=/dev/null` with 4k and 1M read sizes on a system with a 32GB vmcore.
The test was run on 5.17, 5.18-rc1 with a fix that avoided the hang, and
5.18-rc1 with set_iounmap_nonlazy() removed entirely:
       | 5.17   | 5.18+fix | 5.18+removal
    4k | 40.86s | 40.09s   | 26.73s
    1M | 24.47s | 23.98s   | 21.84s
The removal was the fastest (by a wide margin with 4k reads). This
patch removes set_iounmap_nonlazy().
Link: https://lkml.kernel.org/r/52f819991051f9b865e9ce25605509bfdbacadcd.1649277321.git.osandov@fb.com
Fixes: 690467c81b1a ("mm/vmalloc: Move draining areas out of caller context")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Fri, 15 Apr 2022 02:13:58 +0000 (19:13 -0700)]
revert "fs/binfmt_elf: use PT_LOAD p_align values for static PIE"
Despite Mike's attempted fix (925346c129da117122), regression reports
continue:
https://lore.kernel.org/lkml/cb5b81bd-9882-e5dc-cd22-54bdbaaefbbc@leemhuis.info/
https://bugzilla.kernel.org/show_bug.cgi?id=215720
https://lkml.kernel.org/r/b685f3d0-da34-531d-1aa9-479accd3e21b@leemhuis.info
So revert this patch.
Fixes: 9630f0d60fec ("fs/binfmt_elf: use PT_LOAD p_align values for static PIE")
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Chris Kennelly <ckennelly@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Fangrui Song <maskray@google.com>
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Song Liu <songliubraving@fb.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thorsten Leemhuis <regressions@leemhuis.info>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton [Fri, 15 Apr 2022 02:13:55 +0000 (19:13 -0700)]
revert "fs/binfmt_elf: fix PT_LOAD p_align values for loaders"
Commit 925346c129da11 ("fs/binfmt_elf: fix PT_LOAD p_align values for
loaders") was an attempt to fix regressions due to 9630f0d60fec5f
("fs/binfmt_elf: use PT_LOAD p_align values for static PIE").
But regression reports continue:
https://lore.kernel.org/lkml/cb5b81bd-9882-e5dc-cd22-54bdbaaefbbc@leemhuis.info/
https://bugzilla.kernel.org/show_bug.cgi?id=215720
https://lkml.kernel.org/r/b685f3d0-da34-531d-1aa9-479accd3e21b@leemhuis.info
This patch reverts the fix, so the original can also be reverted.
Fixes: 925346c129da11 ("fs/binfmt_elf: fix PT_LOAD p_align values for loaders")
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Chris Kennelly <ckennelly@google.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Fangrui Song <maskray@google.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thorsten Leemhuis <regressions@leemhuis.info>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mike Kravetz [Fri, 15 Apr 2022 02:13:52 +0000 (19:13 -0700)]
hugetlb: do not demote poisoned hugetlb pages
It is possible for poisoned hugetlb pages to reside on the free lists.
The huge page allocation routines which dequeue entries from the free
lists make a point of avoiding poisoned pages. There is no such check
and avoidance in the demote code path.
If a hugetlb page is on a free list, poison will only be set in the head
page rather than in the page with the actual error. If such a page is
demoted, the poison flag may follow the wrong page: a page without the
error could have poison set, and the page with the error could lack it.
Check for poison before attempting to demote a hugetlb page. Also,
return -EBUSY to the caller if only poisoned pages are on the free list.
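A sketch of the demote path with the check (demote_free_huge_page() as used
by the existing demote code):
  list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
          if (PageHWPoison(page))
                  continue;               /* never demote a poisoned page */
          return demote_free_huge_page(h, page);
  }
  return -EBUSY;                          /* only poisoned pages were found */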
Link: https://lkml.kernel.org/r/20220307215707.50916-1-mike.kravetz@oracle.com
Fixes: 8531fc6f52f5 ("hugetlb: add hugetlb demote page support")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Charan Teja Kalla [Fri, 15 Apr 2022 02:13:49 +0000 (19:13 -0700)]
mm: compaction: fix compiler warning when CONFIG_COMPACTION=n
The below warning is reported when CONFIG_COMPACTION=n:
mm/compaction.c:56:27: warning: 'HPAGE_FRAG_CHECK_INTERVAL_MSEC' defined but not used [-Wunused-const-variable=]
56 | static const unsigned int HPAGE_FRAG_CHECK_INTERVAL_MSEC = 500;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fix it by moving 'HPAGE_FRAG_CHECK_INTERVAL_MSEC' under #ifdef
CONFIG_COMPACTION.
Also, since this is just a 'static const' integer, use #define for it.
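Sketch of the result:
  #ifdef CONFIG_COMPACTION
  /* was: static const unsigned int HPAGE_FRAG_CHECK_INTERVAL_MSEC = 500; */
  #define HPAGE_FRAG_CHECK_INTERVAL_MSEC  500
  #endif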
Link: https://lkml.kernel.org/r/1647608518-20924-1-git-send-email-quic_charante@quicinc.com
Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
Reported-by: kernel test robot <lkp@intel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Nitin Gupta <nigupta@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minchan Kim [Fri, 15 Apr 2022 02:13:46 +0000 (19:13 -0700)]
mm: fix unexpected zeroed page mapping with zram swap
With two processes sharing the address space via CLONE_VM, a user process
can be corrupted by unexpectedly seeing a zeroed page.
CPU A                                 CPU B
do_swap_page                          do_swap_page
SWP_SYNCHRONOUS_IO path               SWP_SYNCHRONOUS_IO path
swap_readpage valid data
  swap_slot_free_notify
    delete zram entry
                                      swap_readpage zeroed(invalid) data
                                      pte_lock
                                      map the *zero data* to userspace
                                      pte_unlock
pte_lock
if (!pte_same)
  goto out_nomap;
pte_unlock
return and next refault will
read zeroed data
The swap_slot_free_notify path is bogus for the CLONE_VM case, since
CLONE_VM doesn't increase the refcount of the swap slot at copy_mm time,
so it cannot tell whether it is safe to discard data from the backing
device. In this case, the only lock it could rely on to synchronize swap
slot freeing is the page table lock. Thus, this patch gets rid of the
swap_slot_free_notify function. With this patch, CPU A will see correct data.
CPU A                                 CPU B
do_swap_page                          do_swap_page
SWP_SYNCHRONOUS_IO path               SWP_SYNCHRONOUS_IO path
                                      swap_readpage original data
                                      pte_lock
                                      map the original data
                                      swap_free
                                        swap_range_free
                                          bd_disk->fops->swap_slot_free_notify
swap_readpage read zeroed data
                                      pte_unlock
pte_lock
if (!pte_same)
  goto out_nomap;
pte_unlock
return
on next refault will see mapped data by CPU B
The concern with the patch is that it could increase memory consumption,
since it can keep wasted memory in compressed form in zram as well as in
uncompressed form in the address space. However, most uses of zram have no
readahead, and do_swap_page is followed by swap_free, so the compressed
form in zram is freed quickly.
Link: https://lkml.kernel.org/r/YjTVVxIAsnKAXjTd@google.com
Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
Reported-by: Ivan Babrou <ivan@cloudflare.com>
Tested-by: Ivan Babrou <ivan@cloudflare.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: David Hildenbrand <david@redhat.com>
Cc: <stable@vger.kernel.org> [4.14+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Juergen Gross [Fri, 15 Apr 2022 02:13:43 +0000 (19:13 -0700)]
mm, page_alloc: fix build_zonerefs_node()
Since commit 6aa303defb74 ("mm, vmscan: only allocate and reclaim from
zones with pages managed by the buddy allocator") only zones with free
memory are included in a built zonelist. This is problematic when e.g.
all memory of a zone has been ballooned out when zonelists are being
rebuilt.
The decision whether to rebuild the zonelists when onlining new memory
is done based on populated_zone() returning 0 for the zone the memory
will be added to. The new zone is added to the zonelists only, if it
has free memory pages (managed_zone() returns a non-zero value) after
the memory has been onlined. This implies, that onlining memory will
always free the added pages to the allocator immediately, but this is
not true in all cases: when e.g. running as a Xen guest the onlined new
memory will be added only to the ballooned memory list, it will be freed
only when the guest is being ballooned up afterwards.
Another problem with using managed_zone() for the decision whether a
zone is being added to the zonelists is, that a zone with all memory
used will in fact be removed from all zonelists in case the zonelists
happen to be rebuilt.
Use populated_zone() when building a zonelist as it has been done before
that commit.
There was a report that QubesOS (based on Xen) is hitting this problem.
Xen has switched to use the zone device functionality in kernel 5.9 and
QubesOS wants to use memory hotplugging for guests in order to be able
to start a guest with minimal memory and expand it as needed. This was
the report leading to the patch.
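A sketch of build_zonerefs_node() with the older check restored:
  static int build_zonerefs_node(pg_data_t *pgdat, struct zoneref *zonerefs)
  {
          struct zone *zone;
          enum zone_type zone_type = MAX_NR_ZONES;
          int nr_zones = 0;

          do {
                  zone_type--;
                  zone = pgdat->node_zones + zone_type;
                  /* present pages, not managed pages, decide membership */
                  if (populated_zone(zone)) {
                          zoneref_set_zone(zone, &zonerefs[nr_zones++]);
                          check_highest_zone(zone_type);
                  }
          } while (zone_type);

          return nr_zones;
  }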
Link: https://lkml.kernel.org/r/20220407120637.9035-1-jgross@suse.com
Fixes: 6aa303defb74 ("mm, vmscan: only allocate and reclaim from zones with pages managed by the buddy allocator")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Marco Elver [Fri, 15 Apr 2022 02:13:40 +0000 (19:13 -0700)]
mm, kfence: support kmem_dump_obj() for KFENCE objects
Calling kmem_obj_info() via kmem_dump_obj() on KFENCE objects has been
producing garbage data due to the object not actually being maintained
by SLAB or SLUB.
Fix this by implementing __kfence_obj_info() that copies relevant
information to struct kmem_obj_info when the object was allocated by
KFENCE; this is called by a common kmem_obj_info(), which also calls the
slab/slub/slob specific variant now called __kmem_obj_info().
For completeness, kmem_dump_obj() now displays if the object was
allocated by KFENCE.
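A sketch of the resulting dispatch in the common helper, assuming the names
above:
  void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
  {
          if (__kfence_obj_info(kpp, object, slab))
                  return;                       /* KFENCE-managed object */
          __kmem_obj_info(kpp, object, slab);   /* slab/slub/slob variant */
  }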
Link: https://lore.kernel.org/all/20220323090520.GG16885@xsang-OptiPlex-9020/
Link: https://lkml.kernel.org/r/20220406131558.3558585-1-elver@google.com
Fixes: b89fb5ef0ce6 ("mm, kfence: insert KFENCE hooks for SLUB")
Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz> [slab]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Vincenzo Frascino [Fri, 15 Apr 2022 02:13:37 +0000 (19:13 -0700)]
kasan: fix hw tags enablement when KUNIT tests are disabled
KASAN enables HW tags via kasan_enable_tagging(), which selects the correct
HW backend based on the mode passed via the kernel command line.
kasan_enable_tagging() is meant to be invoked indirectly via the cpu
features framework of the architectures that support these backends.
Currently the invocation of this function is guarded by
CONFIG_KASAN_KUNIT_TEST which allows the enablement of the correct backend
only when KUNIT tests are enabled in the kernel.
This inconsistency was introduced in commit ed6d74446cbf ("kasan: test:
support async (again) and asymm modes for HW_TAGS") and prevents enabling
MTE on arm64 when KUNIT tests for kasan hw_tags are disabled.
Fix the issue by making sure that the CONFIG_KASAN_KUNIT_TEST guard does
not prevent the correct invocation of kasan_enable_tagging().
Link: https://lkml.kernel.org/r/20220408124323.10028-1-vincenzo.frascino@arm.com
Fixes: ed6d74446cbf ("kasan: test: support async (again) and asymm modes for HW_TAGS")
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Zqiang [Fri, 15 Apr 2022 02:13:34 +0000 (19:13 -0700)]
irq_work: use kasan_record_aux_stack_noalloc() record callstack
On a PREEMPT_RT kernel with KASAN enabled, kasan_record_aux_stack()
may call alloc_pages(), which acquires the rt-spinlock; if this happens
in atomic context, it will trigger the warning:
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:46
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 239, name: bootlogd
Preemption disabled at:
[<ffffffffbab1a531>] rt_mutex_slowunlock+0xa1/0x4e0
CPU: 3 PID: 239 Comm: bootlogd Tainted: G W 5.17.1-rt17-yocto-preempt-rt+ #105
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
Call Trace:
__might_resched.cold+0x13b/0x173
rt_spin_lock+0x5b/0xf0
get_page_from_freelist+0x20c/0x1610
__alloc_pages+0x25e/0x5e0
__stack_depot_save+0x3c0/0x4a0
kasan_save_stack+0x3a/0x50
__kasan_record_aux_stack+0xb6/0xc0
kasan_record_aux_stack+0xe/0x10
irq_work_queue_on+0x6a/0x1c0
pull_rt_task+0x631/0x6b0
do_balance_callbacks+0x56/0x80
__balance_callbacks+0x63/0x90
rt_mutex_setprio+0x349/0x880
rt_mutex_slowunlock+0x22a/0x4e0
rt_spin_unlock+0x49/0x80
uart_write+0x186/0x2b0
do_output_char+0x2e9/0x3a0
n_tty_write+0x306/0x800
file_tty_write.isra.0+0x2af/0x450
tty_write+0x22/0x30
new_sync_write+0x27c/0x3a0
vfs_write+0x3f7/0x5d0
ksys_write+0xd9/0x180
__x64_sys_write+0x43/0x50
do_syscall_64+0x44/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
Fix it by using kasan_record_aux_stack_noalloc() to avoid the call to
alloc_pages().
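The shape of the fix is a one-line substitution in irq_work's queueing path:
  /* record the aux call stack without entering the page allocator */
  kasan_record_aux_stack_noalloc(work);  /* was: kasan_record_aux_stack(work) */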
Link: https://lkml.kernel.org/r/20220402142555.2699582-1-qiang1.zhang@intel.com
Signed-off-by: Zqiang <qiang1.zhang@intel.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Axel Rasmussen [Fri, 15 Apr 2022 02:13:31 +0000 (19:13 -0700)]
mm/secretmem: fix panic when growing a memfd_secret
When one tries to grow an existing memfd_secret with ftruncate, one gets
a panic [1]. For example, doing the following reliably induces the
panic:
fd = memfd_secret();
ftruncate(fd, 10);
ptr = mmap(NULL, 10, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
strcpy(ptr, "123456789");
munmap(ptr, 10);
ftruncate(fd, 20);
The basic reason for this is, when we grow with ftruncate, we call down
into simple_setattr, and then truncate_inode_pages_range, and eventually
we try to zero part of the memory. The normal truncation code does this
via the direct map (i.e., it calls page_address() and hands that to
memset()).
For memfd_secret though, we specifically don't map our pages via the
direct map (i.e. we call set_direct_map_invalid_noflush() on every
fault). So the address returned by page_address() isn't useful, and
when we try to memset() with it we panic.
This patch avoids the panic by implementing a custom setattr for
memfd_secret, which detects resizes specifically (setting the size for
the first time works just fine, since there are no existing pages to try
to zero), and rejects them with EINVAL.
One could argue growing should be supported, but I think that will
require a significantly more lengthy change. So, I propose a minimal
fix for the benefit of stable kernels, and then perhaps to extend
memfd_secret to support growing in a separate patch.
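A sketch of the minimal fix (signature per the 5.18-era inode_operations):
  static int secretmem_setattr(struct user_namespace *mnt_userns,
                               struct dentry *dentry, struct iattr *iattr)
  {
          struct inode *inode = d_inode(dentry);

          /* resizing an already-sized file is rejected; first set is fine */
          if ((iattr->ia_valid & ATTR_SIZE) && inode->i_size)
                  return -EINVAL;

          return simple_setattr(mnt_userns, dentry, iattr);
  }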
[1]:
BUG: unable to handle page fault for address: ffffa0a889277028
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD afa01067 P4D afa01067 PUD 83f909067 PMD 83f8bf067 PTE 800ffffef6d88060
Oops: 0002 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
CPU: 0 PID: 281 Comm: repro Not tainted 5.17.0-dbg-DEV #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
RIP: 0010:memset_erms+0x9/0x10
Code: c1 e9 03 40 0f b6 f6 48 b8 01 01 01 01 01 01 01 01 48 0f af c6 f3 48 ab 89 d1 f3 aa 4c 89 c8 c3 90 49 89 f9 40 88 f0 48 89 d1 <f3> aa 4c 89 c8 c3 90 49 89 fa 40 0f b6 ce 48 b8 01 01 01 01 01 01
RSP: 0018:ffffb932c09afbf0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffffda63c4249dc0 RCX: 0000000000000fd8
RDX: 0000000000000fd8 RSI: 0000000000000000 RDI: ffffa0a889277028
RBP: ffffb932c09afc00 R08: 0000000000001000 R09: ffffa0a889277028
R10: 0000000000020023 R11: 0000000000000000 R12: ffffda63c4249dc0
R13: ffffa0a890d70d98 R14: 0000000000000028 R15: 0000000000000fd8
FS: 00007f7294899580(0000) GS:ffffa0af9bc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffa0a889277028 CR3: 0000000107ef6006 CR4: 0000000000370ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
? zero_user_segments+0x82/0x190
truncate_inode_partial_folio+0xd4/0x2a0
truncate_inode_pages_range+0x380/0x830
truncate_setsize+0x63/0x80
simple_setattr+0x37/0x60
notify_change+0x3d8/0x4d0
do_sys_ftruncate+0x162/0x1d0
__x64_sys_ftruncate+0x1c/0x20
do_syscall_64+0x44/0xa0
entry_SYSCALL_64_after_hwframe+0x44/0xae
Modules linked in: xhci_pci xhci_hcd virtio_net net_failover failover virtio_blk virtio_balloon uhci_hcd ohci_pci ohci_hcd evdev ehci_pci ehci_hcd 9pnet_virtio 9p netfs 9pnet
CR2: ffffa0a889277028
[lkp@intel.com: secretmem_iops can be static]
Signed-off-by: kernel test robot <lkp@intel.com>
[axelrasmussen@google.com: return EINVAL]
Link: https://lkml.kernel.org/r/20220324210909.1843814-1-axelrasmussen@google.com
Link: https://lkml.kernel.org/r/20220412193023.279320-1-axelrasmussen@google.com
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: kernel test robot <lkp@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Fri, 15 Apr 2022 02:13:27 +0000 (19:13 -0700)]
tmpfs: fix regressions from wider use of ZERO_PAGE
Chuck Lever reported fsx-based xfstests generic 075 091 112 127 failing
when 5.18-rc1 NFS server exports tmpfs: bisected to recent tmpfs change.
Whilst nfsd_splice_action() does contain some questionable handling of
repeated pages, and Chuck was able to work around there, history from
Mark Hemment makes clear that there might be similar dangers elsewhere:
it was not a good idea for me to pass ZERO_PAGE down to unknown actors.
Revert shmem_file_read_iter() to using ZERO_PAGE for holes only when
iter_is_iovec(); in other cases, use the more natural iov_iter_zero()
instead of copy_page_to_iter().
We would use iov_iter_zero() throughout, but the x86 clear_user() is not
nearly so well optimized as copy to user (dd of 1T sparse tmpfs file
takes 57 seconds rather than 44 seconds).
And now pagecache_init() does not need to SetPageUptodate(ZERO_PAGE(0)),
which had caused boot failure on arm noMMU STM32F7 and STM32H7 boards.
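A sketch of the resulting hole-read path in shmem_file_read_iter():
  if (iter_is_iovec(to))
          /* user iovecs: keep the well-optimized ZERO_PAGE copy */
          ret = copy_page_to_iter(ZERO_PAGE(0), offset, nr, to);
  else
          /* pipes, splice, kernel iters: zero via the iterator */
          ret = iov_iter_zero(nr, to);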
Link: https://lkml.kernel.org/r/9a978571-8648-e830-5735-1f4748ce2e30@google.com
Fixes: 56a8c8eb1eaf ("tmpfs: do not allocate pages on read")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Patrice CHOTARD <patrice.chotard@foss.st.com>
Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Tested-by: Chuck Lever III <chuck.lever@oracle.com>
Cc: Mark Hemment <markhemm@googlemail.com>
Cc: Patrice CHOTARD <patrice.chotard@foss.st.com>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Lukas Czerner <lczerner@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Joe Perches [Fri, 15 Apr 2022 02:13:24 +0000 (19:13 -0700)]
MAINTAINERS: Broadcom internal lists aren't maintainers
Convert the broadcom internal list M: and L: entries to R: as exploder
email addresses are neither maintainers nor mailing lists.
Reorder the entries as necessary.
Link: https://lkml.kernel.org/r/04eb301f5b3adbefdd78e76657eff0acb3e3d87f.camel@perches.com
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Borislav Petkov [Tue, 5 Apr 2022 15:15:11 +0000 (17:15 +0200)]
i2c: ismt: Fix undefined behavior due to shift overflowing the constant
Fix:
drivers/i2c/busses/i2c-ismt.c: In function ‘ismt_hw_init’:
drivers/i2c/busses/i2c-ismt.c:770:2: error: case label does not reduce to an integer constant
case ISMT_SPGT_SPD_400K:
^~~~
drivers/i2c/busses/i2c-ismt.c:773:2: error: case label does not reduce to an integer constant
case ISMT_SPGT_SPD_1M:
^~~~
See https://lore.kernel.org/r/YkwQ6%2BtIH8GQpuct@zn.tnic for the gory
details as to why it triggers with older gccs only.
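The usual cure for this class of warning, which is presumably what the patch
does, is to make the shifted constants unsigned so the shift stays in range:
  #define ISMT_SPGT_SPD_400K  (0x2U << 30)  /* was 0x2 << 30: signed overflow */
  #define ISMT_SPGT_SPD_1M    (0x3U << 30)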
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Seth Heasley <seth.heasley@intel.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Andy Shevchenko [Mon, 11 Apr 2022 18:07:52 +0000 (21:07 +0300)]
i2c: dev: Force cast user pointers in compat_i2cdev_ioctl()
Sparse has warned us about wrong address space for user pointers:
i2c-dev.c:561:50: warning: incorrect type in initializer (different address spaces)
i2c-dev.c:561:50: expected unsigned char [usertype] *buf
i2c-dev.c:561:50: got void [noderef] __user *
Force cast the pointer to (__u8 *) that is used by I²C core code.
Note, this is an additional fix to the previously addressed similar issue
in the I2C_RDWR case in the same function.
Fixes: 3265a7e6b41b ("i2c: dev: Add __user annotation")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Andy Shevchenko [Mon, 11 Apr 2022 18:07:51 +0000 (21:07 +0300)]
i2c: dev: check return value when calling dev_set_name()
If dev_set_name() fails, the dev_name() is null, check the return
value of dev_set_name() to avoid the null-ptr-deref.
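A sketch of the check (the cleanup on failure is illustrative):
  res = dev_set_name(&i2c_dev->dev, "i2c-%d", adap->nr);
  if (res) {
          put_i2c_dev(i2c_dev, false);  /* undo the earlier allocation */
          return res;
  }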
Fixes: 1413ef638aba ("i2c: dev: Fix the race between the release of i2c_dev and cdev")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Bjorn Andersson [Tue, 12 Apr 2022 21:26:01 +0000 (14:26 -0700)]
i2c: qcom-geni: Use dev_err_probe() for GPI DMA error
The GPI DMA engine driver can be compiled as a module, in which case the
likely probe deferral "error" shows up in the kernel log. Switch to
using dev_err_probe() to silence this warning and to ensure that
"devices_deferred" in debugfs carries this information.
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Reviewed-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Marek Vasut [Fri, 8 Apr 2022 17:15:24 +0000 (19:15 +0200)]
i2c: imx: Implement errata ERR007805 or e7805 bus frequency limit
The i.MX8MP Mask Set Errata for Mask 1P33A, Rev. 2.0 describes errata
ERR007805 as below. This errata is found on all MX8M{M,N,P,Q},
MX7{S,D}, MX6{UL{,L,Z},S{,LL,X},S,D,DL,Q,DP,QP} . MX7ULP, MX8Q, MX8X
are not affected. MX53 and older status is unknown, as the errata
first appears in MX6 errata sheets from 2016 and the latest errata
sheet for MX53 is from 2015. Older SoC errata sheets predate the
MX53 errata sheet. MX8ULP and MX9 status is unknown as the errata
sheet is not available yet.
"
ERR007805 I2C: When the I2C clock speed is configured for 400 kHz,
the SCL low period violates the I2C spec of 1.3 uS min
Description: When the I2C module is programmed to operate at the
maximum clock speed of 400 kHz (as defined by the I2C spec), the SCL
clock low period violates the I2C spec of 1.3 uS min. The user must
reduce the clock speed to obtain the SCL low time to meet the 1.3us
I2C minimum required. This behavior means the SoC is not compliant
to the I2C spec at 400kHz.
Workaround: To meet the clock low period requirement in fast speed
mode, SCL must be configured to 384KHz or less.
"
Implement the workaround by matching on the affected SoC specific
compatible strings and by limiting the maximum bus frequency in case
the SoC is affected.
Signed-off-by: Marek Vasut <marex@denx.de>
To: linux-i2c@vger.kernel.org
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Martin Povišer [Tue, 29 Mar 2022 18:38:17 +0000 (20:38 +0200)]
i2c: pasemi: Wait for write xfers to finish
Wait for completion of write transfers before returning from the driver.
At first sight it may seem advantageous to leave write transfers queued
for the controller to carry out on its own time, but there's a couple of
issues with it:
* Driver doesn't check for FIFO space.
* The queued writes can complete while the driver is in its I2C read
transfer path which means it will get confused by the raising of
XEN (the 'transaction ended' signal). This can cause a spurious
ENODATA error due to premature reading of the MRXFIFO register.
Adding the wait fixes some unreliability issues with the driver. There's
some efficiency cost to it (especially with pasemi_smb_waitready doing
its polling), but that will be alleviated once the driver receives
interrupt support.
Fixes: beb58aa39e6e ("i2c: PA Semi SMBus driver")
Signed-off-by: Martin Povišer <povik+lin@cutebit.org>
Reviewed-by: Sven Peter <sven@svenpeter.dev>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Shin'ichiro Kawasaki [Fri, 15 Apr 2022 08:45:13 +0000 (17:45 +0900)]
dm: fix bio length of empty flush
The commit 92986f6b4c8a ("dm: use bio_clone_fast in alloc_io/alloc_tio")
removed the bio_clone_fast() call from alloc_tio() when ci->io->tio is
available. In this case, ci->bio is not copied to ci->io->tio.clone.
This is fine since init_clone_info() sets same values to ci->bio and
ci->io->tio.clone.
However, when incoming bios have the REQ_PREFLUSH flag, __send_empty_flush()
prepares a zero-length bio on the stack and sets it as ci->bio. At this time,
ci->io->tio.clone still keeps a non-zero length. When alloc_tio() chooses
this ci->io->tio.clone as the bio to map, it is passed to targets as
non-empty flush bio. It causes bio length check failure in dm-zoned and
unexpected operation such as dm_accept_partial_bio() call.
To avoid the non-empty flush bio, set a zero length on ci->io->tio.clone
in __send_empty_flush().
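Per the description above, the fix is essentially one assignment:
  /* in __send_empty_flush() (sketch): the cached clone must look empty too */
  ci.io->tio.clone.bi_iter.bi_size = 0;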
Fixes: 92986f6b4c8a ("dm: use bio_clone_fast in alloc_io/alloc_tio")
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Linus Torvalds [Fri, 15 Apr 2022 18:38:55 +0000 (11:38 -0700)]
Merge tag 'block-5.18-2022-04-15' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
- Moving of lower_48_bits() to the block layer and a fix for the
unaligned_be48 added with that originally (Alexander, Keith)
- Fix a bad WARN_ON() for trim size checking (Ming)
- A polled IO timeout fix for null_blk (Ming)
- Silence IO error printing for dead disks (Christoph)
- Compat mode range fix (Khazhismel)
- NVMe pull request via Christoph:
- Tone down the error logging added this merge window a bit
(Chaitanya Kulkarni)
- Quirk devices with non-unique unique identifiers (Christoph)
* tag 'block-5.18-2022-04-15' of git://git.kernel.dk/linux-block:
block: don't print I/O error warning for dead disks
block/compat_ioctl: fix range check in BLKGETSIZE
nvme-pci: disable namespace identifiers for Qemu controllers
nvme-pci: disable namespace identifiers for the MAXIO MAP1002/1202
nvme: add a quirk to disable namespace identifiers
nvme: don't print verbose errors for internal passthrough requests
block: null_blk: end timed out poll request
block: fix offset/size check in bio_trim()
asm-generic: fix __get_unaligned_be48() on 32 bit platforms
block: move lower_48_bits() to block
Linus Torvalds [Fri, 15 Apr 2022 18:33:20 +0000 (11:33 -0700)]
Merge tag 'io_uring-5.18-2022-04-14' of git://git.kernel.dk/linux-block
Pull io_uring fixes from Jens Axboe:
- Ensure we check and -EINVAL any use of reserved or struct padding.
Although we generally always do that, it's missed in two spots for
resource updates, one for the ring fd registration from this merge
window, and one for the extended arg. Make sure we have all of them
handled. (Dylan)
- A few fixes for the deferred file assignment (me, Pavel)
- Add a feature flag for the deferred file assignment so apps can tell
we handle it correctly (me)
- Fix a small perf regression with the current file position fix in
this merge window (me)
* tag 'io_uring-5.18-2022-04-14' of git://git.kernel.dk/linux-block:
io_uring: abort file assignment prior to assigning creds
io_uring: fix poll error reporting
io_uring: fix poll file assign deadlock
io_uring: use right issue_flags for splice/tee
io_uring: verify pad field is 0 in io_get_ext_arg
io_uring: verify resv is 0 in ringfd register/unregister
io_uring: verify that resv2 is 0 in io_uring_rsrc_update2
io_uring: move io_uring_rsrc_update2 validation
io_uring: fix assign file locking issue
io_uring: stop using io_wq_work as an fd placeholder
io_uring: move apoll->events cache
io_uring: io_kiocb_update_pos() should not touch file for non -1 offset
io_uring: flag the fact that linked file assignment is sane
Linus Torvalds [Fri, 15 Apr 2022 18:24:32 +0000 (11:24 -0700)]
Merge tag 'linux-kselftest-fixes-5.18-rc3' of git://git./linux/kernel/git/shuah/linux-kselftest
Pull Kselftest fixes from Shuah Khan:
"A mqueue perf test memory leak bug fix.
mq_perf_tests failed to call CPU_FREE to free memory allocated by
CPU_SET"
* tag 'linux-kselftest-fixes-5.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
testing/selftests/mqueue: Fix mq_perf_tests to free the allocated cpu set
Linus Torvalds [Fri, 15 Apr 2022 18:15:02 +0000 (11:15 -0700)]
Merge tag 'perf-tools-fixes-for-v5.18-2022-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tools fixes from Arnaldo Carvalho de Melo:
- 'perf record --per-thread' mode doesn't have the CPU mask set up, so
it can't use it to figure out the number of mmaps; fix it.
- Fix segfault accessing the sample_id xyarray out of bounds, noticed
while using Intel PT, where we have a dummy event to capture text poke
perf metadata events and we mix up the set of CPUs specified by the
user with the all-CPUs map needed for text poke.
- Fix 'perf bench numa' to check if CPU used to bind task is online.
- Fix 'perf bench numa' usage of affinity for machines with more than
1000 CPUs.
- Fix misleading add event PMU debug message, noticed while using the
'intel_pt' PMU.
- Fix the error check on hashmap__new()'s return value in 'perf stat';
it must use IS_ERR(), as shown in the sketch after this list.
* tag 'perf-tools-fixes-for-v5.18-2022-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
perf bench: Fix numa bench to fix usage of affinity for machines with #CPUs > 1K
perf bench: Fix numa testcase to check if CPU used to bind task is online
perf record: Fix per-thread option
perf tools: Fix segfault accessing sample_id xyarray
perf stat: Fix error check return value of hashmap__new(), must use IS_ERR()
perf tools: Fix misleading add event PMU debug message
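The hashmap__new() fix follows the usual ERR_PTR convention: the
constructor returns an encoded error, never NULL, on failure. A minimal
sketch (the hash_fn/equal_fn callbacks and the surrounding 'perf stat'
context are placeholders):

	struct hashmap *counts;

	counts = hashmap__new(hash_fn, equal_fn, NULL);
	if (IS_ERR(counts))	/* a NULL check here would miss the error */
		return PTR_ERR(counts);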
Jens Axboe [Fri, 15 Apr 2022 12:33:49 +0000 (06:33 -0600)]
Merge tag 'nvme-5.18-2022-04-15' of git://git.infradead.org/nvme into block-5.18
Pull NVMe fixes from Christoph:
"nvme fixes for Linux 5.18
- tone down the error logging added this merge window a bit
(Chaitanya Kulkarni)
- quirk devices with non-unique unique identifiers (me)"
* tag 'nvme-5.18-2022-04-15' of git://git.infradead.org/nvme:
nvme-pci: disable namespace identifiers for Qemu controllers
nvme-pci: disable namespace identifiers for the MAXIO MAP1002/1202
nvme: add a quirk to disable namespace identifiers
nvme: don't print verbose errors for internal passthrough requests
Christoph Hellwig [Wed, 23 Mar 2022 16:38:15 +0000 (17:38 +0100)]
block: don't print I/O error warning for dead disks
When a disk has been marked dead, don't print warnings for I/O errors
as they are very much expected.
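A minimal sketch of the check (GD_DEAD is the existing gendisk state
bit; the exact completion-path call site is assumed):

	/* suppress the "I/O error, dev ..." message once the disk is
	 * known dead -- further errors are expected, not actionable
	 */
	if (!test_bit(GD_DEAD, &req->q->disk->state))
		blk_print_req_error(req, error);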
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220323163815.1526998-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Khazhismel Kumykov [Thu, 14 Apr 2022 22:40:56 +0000 (15:40 -0700)]
block/compat_ioctl: fix range check in BLKGETSIZE
The kernel's unsigned long and compat_ulong_t may not be the same
width. Use the compat type directly to eliminate the mismatch.
Previously, this resulted in truncation rather than EFBIG for 32-bit
mode on large disks.
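A sketch of the corrected check (bdev_nr_sectors() is the real block
helper; the compat_put_ulong() helper and exact handler context are
assumed to match block/ioctl.c):

	case BLKGETSIZE:
		/* compare against the width of the userspace type, not
		 * the kernel's unsigned long, which is 64-bit here
		 */
		if (bdev_nr_sectors(bdev) > ~(compat_ulong_t)0)
			return -EFBIG;
		return compat_put_ulong(argp, bdev_nr_sectors(bdev));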
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20220414224056.2875681-1-khazhy@google.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Tue, 12 Apr 2022 05:07:56 +0000 (07:07 +0200)]
nvme-pci: disable namespace identifiers for Qemu controllers
Qemu unconditionally reports a UUID, which depending on the qemu version
is either all-null (which is incorrect but harmless) or contains a single
bit set for all controllers. In addition it can also optionally report
an eui64, which needs to be manually set. Disable namespace identifiers
for Qemu controllers entirely, even though in some cases they could be
set correctly through manual intervention.
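The wiring is a single PCI ID table entry; a sketch (the quirk flag is
introduced by the 'nvme: add a quirk' patch below; the Red Hat/Qemu
device ID and exact table position are assumptions):

	{ PCI_VDEVICE(REDHAT, 0x0010),	/* Qemu emulated controller */
		.driver_data = NVME_QUIRK_BOGUS_NID, },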
Reported-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Christoph Hellwig [Mon, 11 Apr 2022 06:05:27 +0000 (08:05 +0200)]
nvme-pci: disable namespace identifiers for the MAXIO MAP1002/1202
The MAXIO MAP1002/1202 controllers report completely bogus namespace
identifiers that even change across suspend cycles. Disable use of the
identifiers entirely.
Reported-by: 金韬 <me@kingtous.cn>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Tested-by: 金韬 <me@kingtous.cn>
Christoph Hellwig [Mon, 11 Apr 2022 06:05:27 +0000 (08:05 +0200)]
nvme: add a quirk to disable namespace identifiers
Add a quirk to disable using and exporting namespace identifiers for
controllers where they are broken beyond repair.
The most directly visible problem with non-unique namespace identifiers
is that they break the /dev/disk/by-id/ links, with the link for a
supposedly unique identifier now pointing to one of multiple possible
namespaces that share the same ID, and a somewhat random selection of
which one actually shows up.
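In effect the quirk is a controller flag consulted before namespace
identifiers are read or exported; a minimal sketch (the flag name
matches this commit, but its bit value and the guard site are
assumptions):

	/* controller quirk bit, in the nvme quirks enum */
	NVME_QUIRK_BOGUS_NID = (1 << 18),

	/* in the namespace-identifier probing path: behave as if the
	 * controller reported no EUI-64/NGUID/UUID at all
	 */
	if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
		return 0;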
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Chaitanya Kulkarni [Mon, 11 Apr 2022 03:12:49 +0000 (20:12 -0700)]
nvme: don't print verbose errors for internal passthrough requests
Use the RQF_QUIET flag to skip the newly added verbose error reporting,
and set the flag in __nvme_submit_sync_cmd, which is used for most
internal passthrough requests where we do expect errors (e.g. due to
probing for optional functionality). This is similar to what the SCSI
verbose error logging does.
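A sketch of both halves (RQF_QUIET is the existing block-layer request
flag; the exact call sites in the submission and completion paths are
assumptions):

	/* __nvme_submit_sync_cmd(): these commands often probe for
	 * optional features, so failures are expected -- stay quiet
	 */
	req->rq_flags |= RQF_QUIET;

	/* completion path: skip the verbose error report */
	if (req->rq_flags & RQF_QUIET)
		return;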
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Alan Adamson <alan.adamson@oracle.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Alan Adamson <alan.adamson@oracle.com>
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Fri, 15 Apr 2022 02:23:40 +0000 (20:23 -0600)]
io_uring: abort file assignment prior to assigning creds
We need to either restore creds properly if we fail on the file
assignment, or just do the file assignment first instead. Let's do the
latter, as it's simpler; it should make no difference here for file
assignment.
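A minimal sketch of the reordering (function names follow io_uring of
this era; the surrounding issue path is assumed):

	/* assign the file first: if it fails, there are no overridden
	 * creds to restore on the error path
	 */
	if (unlikely(!io_assign_file(req, issue_flags)))
		return -EBADF;

	if (req->creds)
		creds = override_creds(req->creds);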
Link: https://lore.kernel.org/lkml/000000000000a7edb305dca75a50@google.com/
Reported-by: syzbot+60c52ca98513a8760a91@syzkaller.appspotmail.com
Fixes: 6bf9c47a3989 ("io_uring: defer file assignment")
Signed-off-by: Jens Axboe <axboe@kernel.dk>