Uros Bizjak [Sun, 2 Jun 2019 19:11:56 +0000 (21:11 +0200)]
KVM: VMX: remove unneeded 'asm volatile ("")' from vmcs_write64
__vmcs_writel uses volatile asm, so there is no need to insert another
barrier between the first and the second call to __vmcs_writel in order
to prevent unwanted code movement for 32-bit targets.
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Gustavo A. R. Silva [Fri, 31 May 2019 19:24:53 +0000 (14:24 -0500)]
KVM: irqchip: Use struct_size() in kzalloc()
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:
struct foo {
        int stuff;
        struct boo entry[];
};
instance = kzalloc(sizeof(struct foo) + count * sizeof(struct boo), GFP_KERNEL);
Instead of leaving these open-coded and prone to type mistakes, we can
now use the new struct_size() helper:
instance = kzalloc(struct_size(instance, entry, count), GFP_KERNEL);
This code was detected with the help of Coccinelle.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jan Beulich [Mon, 27 May 2019 08:45:44 +0000 (02:45 -0600)]
x86/kvm/VMX: drop bad asm() clobber from nested_vmx_check_vmentry_hw()
While upstream gcc doesn't detect conflicts on cc (yet), it really
should, and hence "cc" should not be specified for asm()-s also having
"=@cc<cond>" outputs. (It is quite pointless anyway to specify a "cc"
clobber in x86 inline assembly, since the compiler assumes it to be
always clobbered, and has no means [yet] to suppress this behavior.)
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Fixes: bbc0b8239257 ("KVM: nVMX: Capture VM-Fail via CC_{SET,OUT} in nested early checks")
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Andrew Jones [Mon, 27 May 2019 14:31:41 +0000 (16:31 +0200)]
kvm: selftests: introduce aarch64_vcpu_add_default
This is the same as vm_vcpu_add_default, but it also takes a
kvm_vcpu_init struct pointer.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Andrew Jones [Mon, 27 May 2019 14:31:40 +0000 (16:31 +0200)]
kvm: selftests: introduce aarch64_vcpu_setup
This allows aarch64 tests to run on more targets, such as the Arm
simulator that doesn't like KVM_ARM_TARGET_GENERIC_V8. And it also
allows aarch64 tests to provide vcpu features in struct kvm_vcpu_init.
Additionally it drops the unused memslot parameters.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Tue, 4 Jun 2019 17:13:46 +0000 (19:13 +0200)]
kvm: selftests: hide vcpu_setup in processor code
This removes the processor-dependent arguments from vm_vcpu_add.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Andrew Jones [Mon, 27 May 2019 12:30:06 +0000 (14:30 +0200)]
kvm: selftests: ucall improvements
Make sure we complete the I/O after determining we have a ucall,
which is I/O. Also allow the *uc parameter to optionally be NULL.
It's quite possible that a test case will only care about the
return value, like for example when looping on a check for
UCALL_DONE.
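A typical use of the NULL form, as a rough sketch (assuming the selftests' vcpu_run()/get_ucall() helpers and a test-defined VCPU_ID):
        /* run the guest until it signals completion; no struct ucall needed */
        do {
                vcpu_run(vm, VCPU_ID);
        } while (get_ucall(vm, VCPU_ID, NULL) != UCALL_DONE);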
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Tue, 21 May 2019 06:06:54 +0000 (14:06 +0800)]
KVM: X86: Emulate MSR_IA32_MISC_ENABLE MWAIT bit
MSR IA32_MISC_ENABLE bit 18, according to SDM:
| When this bit is set to 0, the MONITOR feature flag is not set (CPUID.01H:ECX[bit 3] = 0).
| This indicates that MONITOR/MWAIT are not supported.
|
| Software attempts to execute MONITOR/MWAIT will cause #UD when this bit is 0.
|
| When this bit is set to 1 (default), MONITOR/MWAIT are supported (CPUID.01H:ECX[bit 3] = 1).
CPUID.01H:ECX[bit 3] ought to mirror the value of the MSR bit; it is
a better guard than kvm_mwait_in_guest(), because kvm_mwait_in_guest()
affects the behavior of MONITOR/MWAIT but not its guest visibility.
This patch implements toggling of the CPUID bit based on guest writes
to the MSR.
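The idea, as a rough sketch rather than the exact KVM code (the helper name is illustrative):
        static void mirror_mwait_into_cpuid(struct kvm_vcpu *vcpu, u64 data)
        {
                struct kvm_cpuid_entry2 *best = kvm_find_cpuid_entry(vcpu, 0x1, 0);

                if (!best)
                        return;
                if (data & MSR_IA32_MISC_ENABLE_MWAIT)  /* MSR bit 18 set */
                        best->ecx |= BIT(3);            /* CPUID.01H:ECX[3] = 1 */
                else
                        best->ecx &= ~BIT(3);           /* CPUID.01H:ECX[3] = 0 */
        }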
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Fixes for backwards compatibility - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Tue, 21 May 2019 06:06:53 +0000 (14:06 +0800)]
KVM: X86: Provide a capability to disable cstate msr read intercepts
Allow the guest to read CORE cstate information when exposing host CPU power
management capabilities to the guest. PKG cstate remains restricted so that a
guest cannot obtain whole-package information in a multi-tenant scenario.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Tue, 21 May 2019 06:06:52 +0000 (14:06 +0800)]
KVM: Documentation: Add disable pause exits to KVM_CAP_X86_DISABLE_EXITS
Commit b31c114b ("KVM: X86: Provide a capability to disable PAUSE intercepts")
forgot to add KVM_X86_DISABLE_EXITS_PAUSE to the API documentation. This
patch adds it.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Xiaoyao Li [Fri, 19 Apr 2019 02:16:24 +0000 (10:16 +0800)]
kvm: x86: refine kvm_get_arch_capabilities()
1. Use X86_FEATURE_ARCH_CAPABILITIES to enumerate the existence of
MSR_IA32_ARCH_CAPABILITIES, which avoids rdmsrl_safe().
2. Since kvm_get_arch_capabilities() is only used in this file, make
it static.
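A rough sketch of the reworked lookup (the real function also adjusts further software-controlled bits):
        static u64 kvm_get_arch_capabilities(void)
        {
                u64 data = 0;

                /* the CPUID feature bit tells us whether the MSR exists at all */
                if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                        rdmsrl(MSR_IA32_ARCH_CAPABILITIES, data);

                return data;
        }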
Signed-off-by: Xiaoyao Li <xiaoyao.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Sat, 20 Apr 2019 05:18:17 +0000 (22:18 -0700)]
KVM: Directly return result from kvm_arch_check_processor_compat()
Add a wrapper to invoke kvm_arch_check_processor_compat() so that the
boilerplate ugliness of checking virtualization support on all CPUs is
hidden from the arch specific code. x86's implementation in particular
is quite heinous, as it unnecessarily propagates the out-param pattern
into kvm_x86_ops.
While the x86 specific issue could be resolved solely by changing
kvm_x86_ops, make the change for all architectures as returning a value
directly is prettier and technically more robust, e.g. s390 doesn't set
the out param, which could lead to subtle breakage in the (highly
unlikely) scenario where the out-param was not pre-initialized by the
caller.
Opportunistically annotate svm_check_processor_compat() with __init.
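Roughly, the x86 arch hook goes from the out-param form to a direct return (a sketch with details simplified, not the literal diff):
        /* before: the out-param pattern is propagated into kvm_x86_ops */
        void kvm_arch_check_processor_compat(void *rtn)
        {
                kvm_x86_ops->check_processor_compatibility(rtn);
        }

        /* after: just return the result */
        int kvm_arch_check_processor_compat(void)
        {
                return kvm_x86_ops->check_processor_compatibility();
        }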
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Suthikulpanit, Suravee [Fri, 3 May 2019 13:38:53 +0000 (06:38 -0700)]
kvm: svm/avic: Do not send AVIC doorbell to self
The AVIC doorbell is used to notify a running vCPU that interrupts
have been injected into its AVIC backing page. The current logic
checks only whether the vCPU is running before sending a doorbell.
However, the doorbell is not necessary if the destination CPU is the
CPU we are currently running on.
Add logic to check the currently running CPU before sending the doorbell.
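The check boils down to something like this (a sketch; helper names follow the existing AVIC code and may not match the final patch exactly):
        if (avic_vcpu_is_running(vcpu)) {
                int cpu = READ_ONCE(vcpu->cpu);

                /* only ring the doorbell if the target runs on another CPU */
                if (cpu != get_cpu())
                        wrmsrl(SVM_AVIC_DOORBELL, kvm_cpu_get_apicid(cpu));
                put_cpu();
        } else {
                kvm_vcpu_wake_up(vcpu);
        }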
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Alexander Graf <graf@amazon.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Mon, 20 May 2019 08:18:09 +0000 (16:18 +0800)]
KVM: LAPIC: Optimize timer latency further
The lapic timer advancement tries to hide the hypervisor overhead between
the time the host emulated timer fires and the time the guest becomes aware
that the timer has fired. However, it only hides the time between
apic_timer_fn/handle_preemption_timer -> wait_lapic_expire, not the real
position of vmentry mentioned in the original commit d0659d946be0 ("KVM:
x86: add option to advance tscdeadline hrtimer expiration"). There are
700+ cpu cycles between the end of wait_lapic_expire and the world switch
on my Haswell desktop.
This patch tries to narrow that last gap (wait_lapic_expire -> world switch):
it takes the real overhead time between apic_timer_fn/handle_preemption_timer
and the world switch into consideration when adaptively tuning the timer
advancement. The patch reduces latency by 40% (~1600+ cycles to ~1000+ cycles
on a Haswell desktop) for kvm-unit-tests/tscdeadline_latency when testing
busy waits.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Mon, 20 May 2019 08:18:08 +0000 (16:18 +0800)]
KVM: LAPIC: Delay trace_kvm_wait_lapic_expire tracepoint to after vmexit
The wait_lapic_expire() call was moved above guest_enter_irqoff() because of
its tracepoint, which violated the RCU extended quiescent state invoked
by guest_enter_irqoff() [1][2]. This patch simply moves the tracepoint
below guest_exit_irqoff() in vcpu_enter_guest(): snapshot the delta before
VM-Enter, but trace it after VM-Exit. This will help us move
wait_lapic_expire() to just before vmentry in a later patch.
[1] Commit 8b89fe1f6c43 ("kvm: x86: move tracepoints outside extended quiescent state")
[2] https://patchwork.kernel.org/patch/7821111/
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Liran Alon <liran.alon@oracle.com>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Track whether wait_lapic_expire was called, and do not invoke the tracepoint
if not. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Mon, 20 May 2019 08:18:05 +0000 (16:18 +0800)]
KVM: LAPIC: Extract adaptive tune timer advancement logic
Extract adaptive tune timer advancement logic to a single function.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
[Rename new function. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 4 Jun 2019 16:09:39 +0000 (18:09 +0200)]
KVM/nSVM: properly map nested VMCB
Commit 8c5fbf1a7231 ("KVM/nSVM: Use the new mapping API for mapping guest
memory") broke nested SVM completely: kvm_vcpu_map()'s second parameter is
a GFN, so vmcb_gpa needs to be converted with gpa_to_gfn(), not the other
way around.
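In other words, the mapping call needs to pass the converted frame number, roughly (sketch):
        /* kvm_vcpu_map() wants a GFN, not a GPA */
        rc = kvm_vcpu_map(&svm->vcpu, gpa_to_gfn(vmcb_gpa), &map);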
Fixes: 8c5fbf1a7231 ("KVM/nSVM: Use the new mapping API for mapping guest memory")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Kai Huang [Fri, 3 May 2019 10:08:53 +0000 (03:08 -0700)]
kvm: x86: Fix reserved bits related calculation errors caused by MKTME
Intel MKTME repurposes several high bits of the physical address as a 'keyID'
for memory encryption, thus effectively reducing the platform's maximum
physical address bits. Exactly how many bits are reduced is configured by
BIOS. To honor this HW behavior, the repurposed bits are subtracted from
cpuinfo_x86->x86_phys_bits when MKTME is detected during CPU detection.
Similarly, AMD SME/SEV also reduces physical address bits for memory
encryption, and cpuinfo->x86_phys_bits is reduced too when SME/SEV is
detected, so for both MKTME and SME/SEV, boot_cpu_data.x86_phys_bits
no longer holds the physical address bits reported by CPUID.
Currently KVM treats bits from boot_cpu_data.x86_phys_bits to 51 as
reserved bits, but that is no longer true for MKTME, since MKTME treats
those reduced bits as 'keyID' bits, not reserved bits. Therefore
boot_cpu_data.x86_phys_bits cannot be used to calculate reserved bits
anymore, although we can still use it for AMD SME/SEV, since SME/SEV
treats the reduced bits differently -- they are treated as reserved
bits, the same as other reserved bits in the page table entry [1].
Fix this by introducing a new 'shadow_phys_bits' variable in the KVM x86 MMU
code to store the effective physical bits w/o reserved bits -- for MKTME,
it equals the physical address width reported by CPUID, and for SME/SEV, it
is boot_cpu_data.x86_phys_bits.
Note that the physical address bits reported to the guest should remain
unchanged -- KVM should report the physical address width from CPUID to the
guest, not boot_cpu_data.x86_phys_bits, because for Intel MKTME
there's no harm if the guest sets up 'keyID' bits in its page tables (since
MKTME only works at the physical address level), and KVM doesn't even expose
MKTME to the guest. Arguably, for AMD SME/SEV, the guest is aware of SEV and
should adjust boot_cpu_data.x86_phys_bits itself when it detects SEV;
therefore KVM should still report the physical address width from CPUID to
the guest.
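A sketch of the idea (names and checks are illustrative, not necessarily the exact code):
        static u8 kvm_get_shadow_phys_bits(void)
        {
                /*
                 * With MKTME, boot_cpu_data.x86_phys_bits was already reduced
                 * by the keyID bits, which are not reserved in PTEs, so go
                 * back to the unadjusted CPUID value.
                 */
                if (boot_cpu_has(X86_FEATURE_TME))
                        return cpuid_eax(0x80000008) & 0xff;

                /* AMD SME/SEV: the reduced bits really are reserved bits */
                return boot_cpu_data.x86_phys_bits;
        }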
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Kai Huang [Fri, 3 May 2019 10:08:52 +0000 (03:08 -0700)]
kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c
This is a prerequisite for fixing several SPTE reserved-bit calculation
errors caused by MKTME, which requires kvm_set_mmio_spte_mask() to use a
static variable local to mmu.c.
Also move the call site of kvm_set_mmio_spte_mask() from kvm_arch_init() to
kvm_mmu_module_init() so that kvm_set_mmio_spte_mask() can be static.
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 31 May 2019 22:49:02 +0000 (00:49 +0200)]
Merge tag 'kvm-s390-master-5.2-2' of git://git./linux/kernel/git/kvms390/linux into kvm-master
KVM: s390: Fixes
- fix compilation for !CONFIG_PCI
- fix the output of KVM_CAP_MAX_VCPU_ID
Paolo Bonzini [Fri, 31 May 2019 22:48:45 +0000 (00:48 +0200)]
Merge tag 'kvm-ppc-fixes-5.2-1' of git://git./linux/kernel/git/paulus/powerpc into kvm-master
PPC KVM fixes for 5.2
- Several bug fixes for the new XIVE-native code.
- Replace kvm->lock by other mutexes in several places where we hold a
vcpu mutex, to avoid lock order inversions.
- Fix a lockdep warning on guest entry for radix-mode guests.
- Fix a bug causing user-visible corruption of SPRG3 on the host.
Suraj Jitindar Singh [Thu, 30 May 2019 02:17:18 +0000 (12:17 +1000)]
KVM: PPC: Book3S HV: Restore SPRG3 in kvmhv_p9_guest_entry()
The SPRGs are a set of 4 general purpose SPRs provided for software use.
SPRG3 is special in that it can also be read from userspace. Thus it is
used on Linux to store the CPU and NUMA id of the process, to speed up
syscall access to this information.
This register is overwritten with the guest value on kvm guest entry,
and so needs to be restored on exit again. Thus restore the value on
the guest exit path in kvmhv_p9_guest_entry().
Cc: stable@vger.kernel.org # v4.20+
Fixes: 95a6432ce9038 ("KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Tue, 28 May 2019 05:01:59 +0000 (15:01 +1000)]
KVM: PPC: Book3S HV: Fix lockdep warning when entering guest on POWER9
Commit 3309bec85e60 ("KVM: PPC: Book3S HV: Fix lockdep warning when
entering the guest") moved calls to trace_hardirqs_{on,off} in the
entry path used for HPT guests. Similar code exists in the new
streamlined entry path used for radix guests on POWER9. This makes
the same change there, so as to avoid lockdep warnings such as this:
[ 228.686461] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
[ 228.686480] WARNING: CPU: 116 PID: 3803 at ../kernel/locking/lockdep.c:4219 check_flags.part.23+0x21c/0x270
[ 228.686544] Modules linked in: vhost_net vhost xt_CHECKSUM iptable_mangle xt_MASQUERADE iptable_nat nf_nat
+xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ipt_REJECT nf_reject_ipv4 tun bridge stp llc ebtable_filter
+ebtables ip6table_filter ip6_tables iptable_filter fuse kvm_hv kvm at24 ipmi_powernv regmap_i2c ipmi_devintf
+uio_pdrv_genirq ofpart ipmi_msghandler uio powernv_flash mtd ibmpowernv opal_prd ip_tables ext4 mbcache jbd2 btrfs
+zstd_decompress zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx libcrc32c xor
+raid6_pq raid1 raid0 ses sd_mod enclosure scsi_transport_sas ast i2c_opal i2c_algo_bit drm_kms_helper syscopyarea
+sysfillrect sysimgblt fb_sys_fops ttm drm i40e e1000e cxl aacraid tg3 drm_panel_orientation_quirks i2c_core
[ 228.686859] CPU: 116 PID: 3803 Comm: qemu-system-ppc Kdump: loaded Not tainted 5.2.0-rc1-xive+ #42
[ 228.686911] NIP: c0000000001b394c LR: c0000000001b3948 CTR: c000000000bfad20
[ 228.686963] REGS: c000200cdb50f570 TRAP: 0700 Not tainted (5.2.0-rc1-xive+)
[ 228.687001] MSR: 9000000002823033 <SF,HV,VEC,VSX,FP,ME,IR,DR,RI,LE> CR: 48222222 XER: 20040000
[ 228.687060] CFAR: c000000000116db0 IRQMASK: 1
[ 228.687060] GPR00: c0000000001b3948 c000200cdb50f800 c0000000015e7600 000000000000002e
[ 228.687060] GPR04: 0000000000000001 c0000000001c71a0 000000006e655f73 72727563284e4f5f
[ 228.687060] GPR08: 0000200e60680000 0000000000000000 c000200cdb486180 0000000000000000
[ 228.687060] GPR12: 0000000000002000 c000200fff61a680 0000000000000000 00007fffb75c0000
[ 228.687060] GPR16: 0000000000000000 0000000000000000 c0000000017d6900 c000000001124900
[ 228.687060] GPR20: 0000000000000074 c008000006916f68 0000000000000074 0000000000000074
[ 228.687060] GPR24: ffffffffffffffff ffffffffffffffff 0000000000000003 c000200d4b600000
[ 228.687060] GPR28: c000000001627e58 c000000001489908 c000000001627e58 c000000002304de0
[ 228.687377] NIP [c0000000001b394c] check_flags.part.23+0x21c/0x270
[ 228.687415] LR [c0000000001b3948] check_flags.part.23+0x218/0x270
[ 228.687466] Call Trace:
[ 228.687488] [c000200cdb50f800] [c0000000001b3948] check_flags.part.23+0x218/0x270 (unreliable)
[ 228.687542] [c000200cdb50f870] [c0000000001b6548] lock_is_held_type+0x188/0x1c0
[ 228.687595] [c000200cdb50f8d0] [c0000000001d939c] rcu_read_lock_sched_held+0xdc/0x100
[ 228.687646] [c000200cdb50f900] [c0000000001dd704] rcu_note_context_switch+0x304/0x340
[ 228.687701] [c000200cdb50f940] [c0080000068fcc58] kvmhv_run_single_vcpu+0xdb0/0x1120 [kvm_hv]
[ 228.687756] [c000200cdb50fa20] [c0080000068fd5b0] kvmppc_vcpu_run_hv+0x5e8/0xe40 [kvm_hv]
[ 228.687816] [c000200cdb50faf0] [c0080000071797dc] kvmppc_vcpu_run+0x34/0x48 [kvm]
[ 228.687863] [c000200cdb50fb10] [c0080000071755dc] kvm_arch_vcpu_ioctl_run+0x244/0x420 [kvm]
[ 228.687916] [c000200cdb50fba0] [c008000007165ccc] kvm_vcpu_ioctl+0x424/0x838 [kvm]
[ 228.687957] [c000200cdb50fd10] [c000000000433a24] do_vfs_ioctl+0xd4/0xcd0
[ 228.687995] [c000200cdb50fdb0] [c000000000434724] ksys_ioctl+0x104/0x120
[ 228.688033] [c000200cdb50fe00] [c000000000434768] sys_ioctl+0x28/0x80
[ 228.688072] [c000200cdb50fe20] [c00000000000b888] system_call+0x5c/0x70
[ 228.688109] Instruction dump:
[ 228.688142] 4bf6342d 60000000 0fe00000 e8010080 7c0803a6 4bfffe60 3c82ff87 3c62ff87
[ 228.688196] 388472d0 3863d738 4bf63405 60000000 <0fe00000> 4bffff4c 3c82ff87 3c62ff87
[ 228.688251] irq event stamp: 205
[ 228.688287] hardirqs last enabled at (205): [<c0080000068fc1b4>] kvmhv_run_single_vcpu+0x30c/0x1120 [kvm_hv]
[ 228.688344] hardirqs last disabled at (204): [<c0080000068fbff0>] kvmhv_run_single_vcpu+0x148/0x1120 [kvm_hv]
[ 228.688412] softirqs last enabled at (180): [<c000000000c0b2ac>] __do_softirq+0x4ac/0x5d4
[ 228.688464] softirqs last disabled at (169): [<c000000000122aa8>] irq_exit+0x1f8/0x210
[ 228.688513] ---[ end trace eb16f6260022a812 ]---
[ 228.688548] possible reason: unannotated irqs-off.
[ 228.688571] irq event stamp: 205
[ 228.688607] hardirqs last enabled at (205): [<c0080000068fc1b4>] kvmhv_run_single_vcpu+0x30c/0x1120 [kvm_hv]
[ 228.688664] hardirqs last disabled at (204): [<c0080000068fbff0>] kvmhv_run_single_vcpu+0x148/0x1120 [kvm_hv]
[ 228.688719] softirqs last enabled at (180): [<c000000000c0b2ac>] __do_softirq+0x4ac/0x5d4
[ 228.688758] softirqs last disabled at (169): [<c000000000122aa8>] irq_exit+0x1f8/0x210
Cc: stable@vger.kernel.org # v4.20+
Fixes: 95a6432ce903 ("KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Tue, 28 May 2019 21:13:24 +0000 (23:13 +0200)]
KVM: PPC: Book3S HV: XIVE: Fix page offset when clearing ESB pages
Under XIVE, the ESB pages of an interrupt are used for interrupt
management (EOI) and triggering. They are made available to guests
through a mapping of the XIVE KVM device.
When a device is passed-through, the passthru_irq helpers,
kvmppc_xive_set_mapped() and kvmppc_xive_clr_mapped(), clear the ESB
pages of the guest IRQ number being mapped and let the VM fault
handler repopulate with the correct page.
The ESB pages are mapped at offset 4 (KVM_XIVE_ESB_PAGE_OFFSET) in the
KVM device mapping. Unfortunately, this offset was not taken into
account when clearing the pages. This led to issues with the
passthrough devices for which the interrupts were not functional under
some guest configuration (tg3 and single CPU) or in any configuration
(e1000e adapter).
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Tue, 28 May 2019 12:17:16 +0000 (14:17 +0200)]
KVM: PPC: Book3S HV: XIVE: Take the srcu read lock when accessing memslots
According to Documentation/virtual/kvm/locking.txt, the srcu read lock
should be taken when accessing the memslots of the VM. The XIVE KVM
device needs to do so when configuring the page of the OS event queue
of vCPU for a given priority and when marking the same page dirty
before migration.
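The usual pattern for such accesses is roughly (a generic sketch, not the patch itself):
        idx = srcu_read_lock(&kvm->srcu);
        page = gfn_to_page(kvm, gfn);   /* any memslot-based lookup */
        srcu_read_unlock(&kvm->srcu, idx);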
This avoids warnings such as:
[ 208.224882] =============================
[ 208.224884] WARNING: suspicious RCU usage
[ 208.224889] 5.2.0-rc2-xive+ #47 Not tainted
[ 208.224890] -----------------------------
[ 208.224894] ../include/linux/kvm_host.h:633 suspicious rcu_dereference_check() usage!
[ 208.224896]
other info that might help us debug this:
[ 208.224898]
rcu_scheduler_active = 2, debug_locks = 1
[ 208.224901] no locks held by qemu-system-ppc/3923.
[ 208.224902]
stack backtrace:
[ 208.224907] CPU: 64 PID: 3923 Comm: qemu-system-ppc Kdump: loaded Not tainted 5.2.0-rc2-xive+ #47
[ 208.224909] Call Trace:
[ 208.224918] [c000200cdd98fa30] [c000000000be1934] dump_stack+0xe8/0x164 (unreliable)
[ 208.224924] [c000200cdd98fa80] [c0000000001aec80] lockdep_rcu_suspicious+0x110/0x180
[ 208.224935] [c000200cdd98fb00] [c0080000075933a0] gfn_to_memslot+0x1c8/0x200 [kvm]
[ 208.224943] [c000200cdd98fb40] [c008000007599600] gfn_to_pfn+0x28/0x60 [kvm]
[ 208.224951] [c000200cdd98fb70] [c008000007599658] gfn_to_page+0x20/0x40 [kvm]
[ 208.224959] [c000200cdd98fb90] [c0080000075b495c] kvmppc_xive_native_set_attr+0x8b4/0x1480 [kvm]
[ 208.224967] [c000200cdd98fca0] [c00800000759261c] kvm_device_ioctl_attr+0x64/0xb0 [kvm]
[ 208.224974] [c000200cdd98fcf0] [c008000007592730] kvm_device_ioctl+0xc8/0x110 [kvm]
[ 208.224979] [c000200cdd98fd10] [c000000000433a24] do_vfs_ioctl+0xd4/0xcd0
[ 208.224981] [c000200cdd98fdb0] [c000000000434724] ksys_ioctl+0x104/0x120
[ 208.224984] [c000200cdd98fe00] [c000000000434768] sys_ioctl+0x28/0x80
[ 208.224988] [c000200cdd98fe20] [c00000000000b888] system_call+0x5c/0x70
Fixes: 13ce3297c576 ("KVM: PPC: Book3S HV: XIVE: Add controls for the EQ configuration")
Fixes: e6714bd1671d ("KVM: PPC: Book3S HV: XIVE: Add a control to dirty the XIVE EQ pages")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Tue, 28 May 2019 12:17:15 +0000 (14:17 +0200)]
KVM: PPC: Book3S HV: XIVE: Do not clear IRQ data of passthrough interrupts
The passthrough interrupts are defined at the host level and their IRQ
data should not be cleared unless specifically deconfigured (shut down)
by the host. They differ from the IPI interrupts, which are allocated
by the XIVE KVM device and reserved for guest usage only.
This fixes a host crash when destroying a VM in which a PCI adapter
was passed through. In this case, the interrupt is cleared and freed
by the KVM device and then shut down by VFIO at the host level.
[ 1007.360265] BUG: Kernel NULL pointer dereference at 0x00000d00
[ 1007.360285] Faulting instruction address: 0xc00000000009da34
[ 1007.360296] Oops: Kernel access of bad area, sig: 7 [#1]
[ 1007.360303] LE PAGE_SIZE=64K MMU=Radix MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
[ 1007.360314] Modules linked in: vhost_net vhost iptable_mangle ipt_MASQUERADE iptable_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv4 ipt_REJECT nf_reject_ipv4 tun bridge stp llc kvm_hv kvm xt_tcpudp iptable_filter squashfs fuse binfmt_misc vmx_crypto ib_iser rdma_cm iw_cm ib_cm libiscsi scsi_transport_iscsi nfsd ip_tables x_tables autofs4 btrfs zstd_decompress zstd_compress lzo_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq multipath mlx5_ib ib_uverbs ib_core crc32c_vpmsum mlx5_core
[ 1007.360425] CPU: 9 PID: 15576 Comm: CPU 18/KVM Kdump: loaded Not tainted 5.1.0-gad7e7d0ef #4
[ 1007.360454] NIP: c00000000009da34 LR: c00000000009e50c CTR: c00000000009e5d0
[ 1007.360482] REGS: c000007f24ccf330 TRAP: 0300 Not tainted (5.1.0-gad7e7d0ef)
[ 1007.360500] MSR: 900000000280b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24002484 XER: 00000000
[ 1007.360532] CFAR: c00000000009da10 DAR: 0000000000000d00 DSISR: 00080000 IRQMASK: 1
[ 1007.360532] GPR00: c00000000009e62c c000007f24ccf5c0 c000000001510600 c000007fe7f947c0
[ 1007.360532] GPR04: 0000000000000d00 0000000000000000 0000000000000000 c000005eff02d200
[ 1007.360532] GPR08: 0000000000400000 0000000000000000 0000000000000000 fffffffffffffffd
[ 1007.360532] GPR12: c00000000009e5d0 c000007fffff7b00 0000000000000031 000000012c345718
[ 1007.360532] GPR16: 0000000000000000 0000000000000008 0000000000418004 0000000000040100
[ 1007.360532] GPR20: 0000000000000000 0000000008430000 00000000003c0000 0000000000000027
[ 1007.360532] GPR24: 00000000000000ff 0000000000000000 00000000000000ff c000007faa90d98c
[ 1007.360532] GPR28: c000007faa90da40 00000000000fe040 ffffffffffffffff c000007fe7f947c0
[ 1007.360689] NIP [c00000000009da34] xive_esb_read+0x34/0x120
[ 1007.360706] LR [c00000000009e50c] xive_do_source_set_mask.part.0+0x2c/0x50
[ 1007.360732] Call Trace:
[ 1007.360738] [c000007f24ccf5c0] [c000000000a6383c] snooze_loop+0x15c/0x270 (unreliable)
[ 1007.360775] [c000007f24ccf5f0] [c00000000009e62c] xive_irq_shutdown+0x5c/0xe0
[ 1007.360795] [c000007f24ccf630] [c00000000019e4a0] irq_shutdown+0x60/0xe0
[ 1007.360813] [c000007f24ccf660] [c000000000198c44] __free_irq+0x3a4/0x420
[ 1007.360831] [c000007f24ccf700] [c000000000198dc8] free_irq+0x78/0xe0
[ 1007.360849] [c000007f24ccf730] [c00000000096c5a8] vfio_msi_set_vector_signal+0xa8/0x350
[ 1007.360878] [c000007f24ccf7f0] [c00000000096c938] vfio_msi_set_block+0xe8/0x1e0
[ 1007.360899] [c000007f24ccf850] [c00000000096cae0] vfio_msi_disable+0xb0/0x110
[ 1007.360912] [c000007f24ccf8a0] [c00000000096cd04] vfio_pci_set_msi_trigger+0x1c4/0x3d0
[ 1007.360922] [c000007f24ccf910] [c00000000096d910] vfio_pci_set_irqs_ioctl+0xa0/0x170
[ 1007.360941] [c000007f24ccf930] [c00000000096b400] vfio_pci_disable+0x80/0x5e0
[ 1007.360963] [c000007f24ccfa10] [c00000000096b9bc] vfio_pci_release+0x5c/0x90
[ 1007.360991] [c000007f24ccfa40] [c000000000963a9c] vfio_device_fops_release+0x3c/0x70
[ 1007.361012] [c000007f24ccfa70] [c0000000003b5668] __fput+0xc8/0x2b0
[ 1007.361040] [c000007f24ccfac0] [c0000000001409b0] task_work_run+0x140/0x1b0
[ 1007.361059] [c000007f24ccfb20] [c000000000118f8c] do_exit+0x3ac/0xd00
[ 1007.361076] [c000007f24ccfc00] [c0000000001199b0] do_group_exit+0x60/0x100
[ 1007.361094] [c000007f24ccfc40] [c00000000012b514] get_signal+0x1a4/0x8f0
[ 1007.361112] [c000007f24ccfd30] [c000000000021cc8] do_notify_resume+0x1a8/0x430
[ 1007.361141] [c000007f24ccfe20] [c00000000000e444] ret_from_except_lite+0x70/0x74
[ 1007.361159] Instruction dump:
[ 1007.361175] 38422c00 e9230000 712a0004 41820010 548a2036 7d442378 78840020 71290020
[ 1007.361194] 4082004c e9230010 7c892214 7c0004ac <e9240000> 0c090000 4c00012c 792a0022
Cc: stable@vger.kernel.org # v4.12+
Fixes: 5af50993850a ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller")
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Fri, 24 May 2019 13:20:30 +0000 (15:20 +0200)]
KVM: PPC: Book3S HV: XIVE: Introduce a new mutex for the XIVE device
The XICS-on-XIVE KVM device needs to allocate XIVE event queues when a
priority is used by the OS. This is referred to as EQ provisioning and it
is done under the hood when:
1. a CPU is hot-plugged in the VM
2. the "set-xive" is called at VM startup
3. sources are restored at VM restore
The kvm->lock mutex is used to protect the different XIVE structures
being modified but in some contexts, kvm->lock is taken under the
vcpu->mutex which is not permitted by the KVM locking rules.
Introduce a new mutex 'lock' for the KVM devices for them to
synchronize accesses to the XIVE device structures.
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Mon, 20 May 2019 07:15:14 +0000 (09:15 +0200)]
KVM: PPC: Book3S HV: XIVE: Fix the enforced limit on the vCPU identifier
When a vCPU is connected to the KVM device, it is done using its vCPU
identifier in the guest. Fix the enforced limit on the vCPU identifier
by taking into account the SMT mode.
Reported-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Mon, 20 May 2019 07:15:13 +0000 (09:15 +0200)]
KVM: PPC: Book3S HV: XIVE: Do not test the EQ flag validity when resetting
When a CPU is hot-unplugged, the EQ is deconfigured using a zero size
and a zero address. In this case, there is no need to check the flag
and queue size validity. Move the checks after the queue reset code
section to fix CPU hot-unplug.
Reported-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Cédric Le Goater [Mon, 20 May 2019 07:15:12 +0000 (09:15 +0200)]
KVM: PPC: Book3S HV: XIVE: Clear file mapping when device is released
Improve the release of the XIVE KVM device by clearing the file
address_space, which is used to unmap the interrupt ESB pages when a
device is passed-through.
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 23 May 2019 06:36:32 +0000 (16:36 +1000)]
KVM: PPC: Book3S HV: Don't take kvm->lock around kvm_for_each_vcpu
Currently the HV KVM code takes the kvm->lock around calls to
kvm_for_each_vcpu() and kvm_get_vcpu_by_id() (which can call
kvm_for_each_vcpu() internally). However, that leads to a lock
order inversion problem, because these are called in contexts where
the vcpu mutex is held, but the vcpu mutexes nest within kvm->lock
according to Documentation/virtual/kvm/locking.txt. Hence there
is a possibility of deadlock.
To fix this, we simply don't take the kvm->lock mutex around these
calls. This is safe because the implementations of kvm_for_each_vcpu()
and kvm_get_vcpu_by_id() have been designed to be able to be called
locklessly.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Wed, 29 May 2019 01:54:00 +0000 (11:54 +1000)]
KVM: PPC: Book3S: Use new mutex to synchronize access to rtas token list
Currently the Book 3S KVM code uses kvm->lock to synchronize access
to the kvm->arch.rtas_tokens list. Because this list is scanned
inside kvmppc_rtas_hcall(), which is called with the vcpu mutex held,
taking kvm->lock causes a lock inversion problem, which could lead to
a deadlock.
To fix this, we add a new mutex, kvm->arch.rtas_token_lock, which nests
inside the vcpu mutexes, and use that instead of kvm->lock when
accessing the rtas token list.
This removes the lockdep_assert_held() in kvmppc_rtas_tokens_free().
At this point we don't hold the new mutex, but that is OK because
kvmppc_rtas_tokens_free() is only called when the whole VM is being
destroyed, and at that point nothing can be looking up a token in
the list.
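Accesses to the list then become, roughly (sketch):
        mutex_lock(&kvm->arch.rtas_token_lock);
        /* walk or modify kvm->arch.rtas_tokens here */
        mutex_unlock(&kvm->arch.rtas_token_lock);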
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 23 May 2019 06:35:34 +0000 (16:35 +1000)]
KVM: PPC: Book3S HV: Use new mutex to synchronize MMU setup
Currently the HV KVM code uses kvm->lock in conjunction with a flag,
kvm->arch.mmu_ready, to synchronize MMU setup and hold off vcpu
execution until the MMU-related data structures are ready. However,
this means that kvm->lock is being taken inside vcpu->mutex, which
is contrary to Documentation/virtual/kvm/locking.txt and results in
lockdep warnings.
To fix this, we add a new mutex, kvm->arch.mmu_setup_lock, which nests
inside the vcpu mutexes, and is taken in the places where kvm->lock
was taken that are related to MMU setup.
Additionally we take the new mutex in the vcpu creation code at the
point where we are creating a new vcore, in order to provide mutual
exclusion with kvmppc_update_lpcr() and ensure that an update to
kvm->arch.lpcr doesn't get missed, which could otherwise lead to a
stale vcore->lpcr value.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 23 May 2019 06:35:07 +0000 (16:35 +1000)]
KVM: PPC: Book3S HV: Avoid touching arch.mmu_ready in XIVE release functions
Currently, kvmppc_xive_release() and kvmppc_xive_native_release() clear
kvm->arch.mmu_ready and call kick_all_cpus_sync() as a way of ensuring
that no vcpus are executing in the guest. However, future patches will
change the mutex associated with kvm->arch.mmu_ready to a new mutex that
nests inside the vcpu mutexes, making it difficult to continue to use
this method.
In fact, taking the vcpu mutex for a vcpu excludes execution of that
vcpu, and we already take the vcpu mutex around the call to
kvmppc_xive_[native_]cleanup_vcpu(). Once the cleanup function is
done and we release the vcpu mutex, the vcpu can execute once again,
but because we have cleared vcpu->arch.xive_vcpu, vcpu->arch.irq_type,
vcpu->arch.xive_esc_vaddr and vcpu->arch.xive_esc_raddr, that vcpu will
not be going into XIVE code any more. Thus, once we have cleaned up
all of the vcpus, we are safe to clean up the rest of the XIVE state,
and we don't need to use kvm->arch.mmu_ready to hold off vcpu execution.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Thomas Huth [Thu, 23 May 2019 16:43:08 +0000 (18:43 +0200)]
KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID
KVM_CAP_MAX_VCPU_ID currently always reports KVM_MAX_VCPU_ID on all
architectures. However, on s390x, the number of usable CPUs is determined
at runtime - it depends on the features of the machine the code
is running on. Since we are using the vcpu_id as an index into the SCA
structures that are defined by the hardware (see e.g. the sca_add_vcpu()
function), it is not only the number of CPUs that is limited by the
hardware, but also the range of IDs that we can use.
Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too.
So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
code into the architecture-specific code, and on s390x we have to return
the same value here as for KVM_CAP_MAX_VCPUS.
This problem has been discovered with the kvm_create_max_vcpus selftest.
With this change applied, the selftest now passes on s390x, too.
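On s390x the capability ends up being computed at runtime, along the lines of this sketch (the exact SCLP checks may differ):
        case KVM_CAP_MAX_VCPU_ID:
        case KVM_CAP_MAX_VCPUS:
                r = KVM_S390_BSCA_CPU_SLOTS;
                if (!kvm_s390_use_sca_entries())
                        r = KVM_MAX_VCPUS;
                else if (sclp.has_esca && sclp.has_64bscao)
                        r = KVM_S390_ESCA_CPU_SLOTS;
                break;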
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190523164309.13345-9-thuth@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Christian Borntraeger [Mon, 27 May 2019 08:28:25 +0000 (10:28 +0200)]
kvm: fix compile on s390 part 2
We also need to fence the memunmap part.
Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
Fixes: d30b214d1d0a ("kvm: fix compilation on s390")
Cc: Michal Kubecek <mkubecek@suse.cz>
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Linus Torvalds [Sun, 26 May 2019 23:49:19 +0000 (16:49 -0700)]
Linux 5.2-rc2
Linus Torvalds [Sun, 26 May 2019 20:49:40 +0000 (13:49 -0700)]
Merge tag 'trace-v5.2-rc1-2' of git://git./linux/kernel/git/rostedt/linux-trace
Pull tracing warning fix from Steven Rostedt:
"Make the GCC 9 warning for sub struct memset go away.
GCC 9 now warns about calling memset() on partial structures when it
goes across multiple fields. This adds a helper for the place in
tracing that does this type of clearing of a structure"
* tag 'trace-v5.2-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Silence GCC 9 array bounds warning
Linus Torvalds [Sun, 26 May 2019 20:45:15 +0000 (13:45 -0700)]
Merge tag 'for-linus' of git://git./virt/kvm/kvm
Pull KVM fixes from Paolo Bonzini:
"The usual smattering of fixes and tunings that came in too late for
the merge window, but should not wait four months before they appear
in a release.
I also travelled a bit more than usual in the first part of May, which
didn't help with picking up patches and reports promptly"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (33 commits)
KVM: x86: fix return value for reserved EFER
tools/kvm_stat: fix fields filter for child events
KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86 guard
kvm: selftests: aarch64: compile with warnings on
kvm: selftests: aarch64: fix default vm mode
kvm: selftests: aarch64: dirty_log_test: fix unaligned memslot size
KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGION
KVM: x86/pmu: do not mask the value that is written to fixed PMUs
KVM: x86/pmu: mask the result of rdpmc according to the width of the counters
x86/kvm/pmu: Set AMD's virt PMU version to 1
KVM: x86: do not spam dmesg with VMCS/VMCB dumps
kvm: Check irqchip mode before assign irqfd
kvm: svm/avic: fix off-by-one in checking host APIC ID
KVM: selftests: do not blindly clobber registers in guest asm
KVM: selftests: Remove duplicated TEST_ASSERT in hyperv_cpuid.c
KVM: LAPIC: Expose per-vCPU timer_advance_ns to userspace
KVM: LAPIC: Fix lapic_timer_advance_ns parameter overflow
kvm: vmx: Fix -Wmissing-prototypes warnings
KVM: nVMX: Fix using __this_cpu_read() in preemptible context
kvm: fix compilation on s390
...
Linus Torvalds [Sun, 26 May 2019 15:30:16 +0000 (08:30 -0700)]
Merge tag 'random_for_linus_stable' of git://git./linux/kernel/git/tytso/random
Pull /dev/random fix from Ted Ts'o:
"Fix a soft lockup regression when reading from /dev/random in early
boot"
* tag 'random_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
random: fix soft lockup when trying to read from an uninitialized blocking pool
Theodore Ts'o [Wed, 22 May 2019 16:02:16 +0000 (12:02 -0400)]
random: fix soft lockup when trying to read from an uninitialized blocking pool
Fixes: eb9d1bf079bb ("random: only read from /dev/random after its pool has received 128 bits")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Miguel Ojeda [Thu, 23 May 2019 12:45:35 +0000 (14:45 +0200)]
tracing: Silence GCC 9 array bounds warning
Starting with GCC 9, -Warray-bounds detects cases when memset is called
starting on a member of a struct but the size to be cleared ends up
writing over further members.
Such a call happens in the trace code to clear, at once, all members
after and including `seq` on struct trace_iterator:
In function 'memset',
inlined from 'ftrace_dump' at kernel/trace/trace.c:8914:3:
./include/linux/string.h:344:9: warning: '__builtin_memset' offset
[8505, 8560] from the object at 'iter' is out of the bounds of
referenced subobject 'seq' with type 'struct trace_seq' at offset
4368 [-Warray-bounds]
344 | return __builtin_memset(p, c, size);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to avoid GCC complaining about it, we compute the address
ourselves by adding the offsetof distance instead of referring
directly to the member.
Since there are two places doing this clear (trace.c and trace_kdb.c),
take the chance to move the workaround into a single place in
the internal header.
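The workaround boils down to roughly this (a sketch; the kernel wraps it in a small helper in the internal header):
        /* clear everything from 'seq' onward without upsetting -Warray-bounds */
        memset((char *)iter + offsetof(struct trace_iterator, seq), 0,
               sizeof(struct trace_iterator) -
               offsetof(struct trace_iterator, seq));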
Link: http://lkml.kernel.org/r/20190523124535.GA12931@gmail.com
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
[ Removed unnecessary parenthesis around "iter" ]
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Linus Torvalds [Sat, 25 May 2019 22:03:12 +0000 (15:03 -0700)]
Merge tag 'ext4_for_linus_stable' of git://git./linux/kernel/git/tytso/ext4
Pull ext4 fixes from Ted Ts'o:
"Bug fixes (including a regression fix) for ext4"
* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: fix dcache lookup of !casefolded directories
ext4: do not delete unlinked inode from orphan list on failed truncate
ext4: wait for outstanding dio during truncate in nojournal mode
ext4: don't perform block validity checks on the journal inode
Linus Torvalds [Sat, 25 May 2019 17:11:23 +0000 (10:11 -0700)]
Merge tag 'libnvdimm-fixes-5.2-rc2' of git://git./linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm fixes from Dan Williams:
- Fix a regression that disabled device-mapper dax support
- Remove unnecessary hardened-user-copy overhead (>30%) for dax
read(2)/write(2).
- Fix some compilation warnings.
* tag 'libnvdimm-fixes-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
libnvdimm/pmem: Bypass CONFIG_HARDENED_USERCOPY overhead
dax: Arrange for dax_supported check to span multiple devices
libnvdimm: Fix compilation warnings with W=1
Linus Torvalds [Sat, 25 May 2019 17:08:14 +0000 (10:08 -0700)]
Merge tag 'trace-v5.2-rc1' of git://git./linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"Tom Zanussi sent me some small fixes and cleanups to the histogram
code and I forgot to incorporate them.
I also added a small clean up patch that was sent to me a while ago
and I just noticed it"
* tag 'trace-v5.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
kernel/trace/trace.h: Remove duplicate header of trace_seq.h
tracing: Add a check_val() check before updating cond_snapshot() track_val
tracing: Check keys for variable references in expressions too
tracing: Prevent hist_field_var_ref() from accessing NULL tracing_map_elts
Gabriel Krisman Bertazi [Sat, 25 May 2019 03:48:23 +0000 (23:48 -0400)]
ext4: fix dcache lookup of !casefolded directories
Found by visual inspection; this wasn't caught by my xfstest, since its
effect is to ignore positive dentries in the cache, and the fallback just
goes to the disk. It was introduced in the last iteration of the
case-insensitive patch.
d_compare should return 0 when the entries match, so make sure we are
correctly comparing the entire string if the encoding feature is set and
we are on a case-INsensitive directory.
Fixes: b886ee3e778e ("ext4: Support case-insensitive file name lookups")
Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Linus Torvalds [Sat, 25 May 2019 00:30:28 +0000 (17:30 -0700)]
Merge tag 'scsi-fixes' of git://git./linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"This is the same set of patches sent in the merge window as the final
pull except that Martin's read only rework is replaced with a simple
revert of the original change that caused the regression.
Everything else is an obvious fix or small cleanup"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
Revert "scsi: sd: Keep disk read-only when re-reading partition"
scsi: bnx2fc: fix incorrect cast to u64 on shift operation
scsi: smartpqi: Reporting unhandled SCSI errors
scsi: myrs: Fix uninitialized variable
scsi: lpfc: Update lpfc version to 12.2.0.2
scsi: lpfc: add check for loss of ndlp when sending RRQ
scsi: lpfc: correct rcu unlock issue in lpfc_nvme_info_show
scsi: lpfc: resolve lockdep warnings
scsi: qedi: remove set but not used variables 'cdev' and 'udev'
scsi: qedi: remove memset/memcpy to nfunc and use func instead
scsi: qla2xxx: Add cleanup for PCI EEH recovery
Linus Torvalds [Fri, 24 May 2019 23:02:14 +0000 (16:02 -0700)]
Merge tag 'for-linus-20190524' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
- NVMe pull request from Keith, with fixes from a few folks.
- bio and sbitmap before atomic barrier fixes (Andrea)
- Hang fix for blk-mq freeze and unfreeze (Bob)
- Single segment count regression fix (Christoph)
- AoE now has a new maintainer
- tools/io_uring/ Makefile fix, and sync with liburing (me)
* tag 'for-linus-20190524' of git://git.kernel.dk/linux-block: (23 commits)
tools/io_uring: sync with liburing
tools/io_uring: fix Makefile for pthread library link
blk-mq: fix hang caused by freeze/unfreeze sequence
block: remove the bi_seg_{front,back}_size fields in struct bio
block: remove the segment size check in bio_will_gap
block: force an unlimited segment size on queues with a virt boundary
block: don't decrement nr_phys_segments for physically contigous segments
sbitmap: fix improper use of smp_mb__before_atomic()
bio: fix improper use of smp_mb__before_atomic()
aoe: list new maintainer for aoe driver
nvme-pci: use blk-mq mapping for unmanaged irqs
nvme: update MAINTAINERS
nvme: copy MTFA field from identify controller
nvme: fix memory leak for power latency tolerance
nvme: release namespace SRCU protection before performing controller ioctls
nvme: merge nvme_ns_ioctl into nvme_ioctl
nvme: remove the ifdef around nvme_nvm_ioctl
nvme: fix srcu locking on error return in nvme_get_ns_from_disk
nvme: Fix known effects
nvme-pci: Sync queues on reset
...
Linus Torvalds [Fri, 24 May 2019 22:21:05 +0000 (15:21 -0700)]
Merge tag 'linux-kselftest-5.2-rc2' of git://git./linux/kernel/git/shuah/linux-kselftest
Pull Kselftest fixes from Shuah Khan:
- Two fixes to regressions introduced in kselftest Makefile test run
output refactoring work (Kees Cook)
- Adding Atom support to syscall_arg_fault test (Tong Bo)
* tag 'linux-kselftest-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
selftests/timers: Add missing fflush(stdout) calls
selftests: Remove forced unbuffering for test running
selftests/x86: Support Atom for syscall_arg_fault test
Linus Torvalds [Fri, 24 May 2019 22:16:46 +0000 (15:16 -0700)]
Merge tag 'devicetree-fixes-for-5.2' of git://git./linux/kernel/git/robh/linux
Pull Devicetree fixes from Rob Herring:
- Update checkpatch.pl to use DT vendor-prefixes.yaml
- Fix DT binding references to files converted to DT schema
- Clean-up Arm CPU binding examples to match schema
- Add Sifive block versioning scheme documentation
- Pass binding directory base to validation tools for reference lookups
* tag 'devicetree-fixes-for-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
checkpatch.pl: Update DT vendor prefix check
dt: bindings: mtd: replace references to nand.txt with nand-controller.yaml
dt-bindings: interrupt-controller: arm,gic: Fix schema errors in example
dt-bindings: arm: Clean up CPU binding examples
dt: fix refs that were renamed to json with the same file name
dt-bindings: Pass binding directory to validation tools
dt-bindings: sifive: describe sifive-blocks versioning
Linus Torvalds [Fri, 24 May 2019 21:31:58 +0000 (14:31 -0700)]
Merge tag 'spdx-5.2-rc2-2' of git://git./linux/kernel/git/gregkh/driver-core
Pull more SPDX updates from Greg KH:
"Here is another set of reviewed patches that adds SPDX tags to
different kernel files, based on a set of rules that are being used to
parse the comments to try to determine that the license of the file is
"GPL-2.0-or-later".
Only the "obvious" versions of these matches are included here, a
number of "non-obvious" variants of text have been found but those
have been postponed for later review and analysis.
These patches have been out for review on the linux-spdx@vger mailing
list, and while they were created by automatic tools, they were
hand-verified by a bunch of different people, all of whose names are on
the patches as reviewers"
* tag 'spdx-5.2-rc2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (85 commits)
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 125
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 123
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 122
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 121
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 120
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 119
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 118
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 116
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 114
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 113
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 112
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 111
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 110
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 106
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 105
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 104
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 103
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 102
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 101
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 98
...
Waiman Long [Fri, 24 May 2019 19:42:22 +0000 (15:42 -0400)]
locking/lock_events: Use this_cpu_add() when necessary
The kernel test robot has reported that the use of __this_cpu_add()
causes bug messages like:
BUG: using __this_cpu_add() in preemptible [00000000] code: ...
Given the imprecise nature of the count and the possibility of resetting
the count and doing the measurement again, it is not really a big
problem to use the unprotected __this_cpu_*() functions.
To make the preemption checking code happy, the this_cpu_*() functions
will be used if CONFIG_DEBUG_PREEMPT is defined.
The imprecise nature of the locking counts is also documented with
the suggestion that we should run the measurement a few times with the
counts reset in between to get a better picture of what is going on
under the hood.
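Roughly, the idea is (sketch; macro names illustrative):
        #ifdef CONFIG_DEBUG_PREEMPT
        #define lockevent_percpu_inc(x)         this_cpu_inc(x)
        #define lockevent_percpu_add(x, v)      this_cpu_add(x, v)
        #else
        #define lockevent_percpu_inc(x)         __this_cpu_inc(x)
        #define lockevent_percpu_add(x, v)      __this_cpu_add(x, v)
        #endif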
Fixes: a8654596f0371 ("locking/rwsem: Enable lock event counting")
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Paolo Bonzini [Fri, 24 May 2019 19:52:46 +0000 (21:52 +0200)]
KVM: x86: fix return value for reserved EFER
Commit 11988499e62b ("KVM: x86: Skip EFER vs. guest CPUID checks for
host-initiated writes", 2019-04-02) introduced a "return false" in a
function returning int; moreover, set_efer has a "nonzero on error"
convention, so it should be returning 1.
Reported-by: Pavel Machek <pavel@denx.de>
Fixes: 11988499e62b ("KVM: x86: Skip EFER vs. guest CPUID checks for host-initiated writes")
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Stefan Raspl [Sun, 21 Apr 2019 13:26:24 +0000 (15:26 +0200)]
tools/kvm_stat: fix fields filter for child events
The fields filter would not work with child fields, as the respective
parents would not be included. No parents displayed == no children displayed.
To reproduce, run on s390 (would work on other platforms, too, but would
require a different filter name):
- Run 'kvm_stat -d'
- Press 'f'
- Enter 'instruct'
Notice that events like instruction_diag_44 or instruction_diag_500 are not
displayed - the output remains empty.
With this patch, we will filter by matching events and their parents.
However, consider the following example where we filter by
instruction_diag_44:
kvm statistics - summary
regex filter: instruction_diag_44
Event Total %Total CurAvg/s
exit_instruction 276 100.0 12
instruction_diag_44 256 92.8 11
Total 276 12
Note that the parent ('exit_instruction') displays the total events, but
the children listed do not match its total (256 instead of 276). This is
intended (since we're filtering all but one child), but might be confusing
at first sight.
Signed-off-by: Stefan Raspl <raspl@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Thu, 23 May 2019 09:31:14 +0000 (11:31 +0200)]
KVM: selftests: Wrap vcpu_nested_state_get/set functions with x86 guard
struct kvm_nested_state is only available on x86 so far. To be able
to compile the code on other architectures as well, we need to wrap
the related code with #ifdefs.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Andrew Jones [Thu, 23 May 2019 10:16:34 +0000 (12:16 +0200)]
kvm: selftests: aarch64: compile with warnings on
aarch64 fixups needed to compile with warnings as errors.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Andrew Jones [Thu, 23 May 2019 11:05:46 +0000 (13:05 +0200)]
kvm: selftests: aarch64: fix default vm mode
VM_MODE_P52V48_4K is not a valid mode for AArch64. Replace its
use in vm_create_default() with a mode that works and represents
a good AArch64 default. (We didn't ever see a problem with this
because we don't have any unit tests using vm_create_default(),
but it's good to get it fixed in advance.)
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Andrew Jones [Thu, 23 May 2019 09:34:05 +0000 (11:34 +0200)]
kvm: selftests: aarch64: dirty_log_test: fix unaligned memslot size
The memory slot size must be aligned to the host's page size. When
testing a guest with a 4k page size on a host with a 64k page size,
3 guest pages are not host page size aligned. Since we just need
a nearly arbitrary number of extra pages to ensure the memslot is not
aligned to a 64 host-page boundary for this test, we can use
16, as that's 64k aligned, but not 64 * 64k aligned.
Fixes: 76d58e0f07ec ("KVM: fix KVM_CLEAR_DIRTY_LOG for memory slots of unaligned size", 2019-04-17)
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Christian Borntraeger [Fri, 24 May 2019 14:06:23 +0000 (16:06 +0200)]
KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGION
kselftests exposed a problem in the s390 handling for memory slots.
Right now we only do proper memory slot handling for creation of new
memory slots. Neither MOVE nor DELETION is handled properly. Let us
implement those.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 15:34:30 +0000 (17:34 +0200)]
KVM: x86/pmu: do not mask the value that is written to fixed PMUs
According to the SDM, for MSR_IA32_PERFCTR0/1 "the lower-order 32 bits of
each MSR may be written with any value, and the high-order 8 bits are
sign-extended according to the value of bit 31", but the fixed counters
in real hardware are limited to the width of the fixed counters ("bits
beyond the width of the fixed-function counter are reserved and must be
written as zeros"). Fix KVM to do the same.
Reported-by: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 15:20:40 +0000 (17:20 +0200)]
KVM: x86/pmu: mask the result of rdpmc according to the width of the counters
This patch simplifies the changes in the next one by enforcing the
masking of the counters for both RDPMC and RDMSR.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Borislav Petkov [Wed, 8 May 2019 17:02:48 +0000 (19:02 +0200)]
x86/kvm/pmu: Set AMD's virt PMU version to 1
After commit:
672ff6cff80c ("KVM: x86: Raise #GP when guest vCPU do not support PMU")
my AMD guests started #GPing like this:
general protection fault: 0000 [#1] PREEMPT SMP
CPU: 1 PID: 4355 Comm: bash Not tainted 5.1.0-rc6+ #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
RIP: 0010:x86_perf_event_update+0x3b/0xa0
with Code: pointing to RDPMC. It is RDPMC because the guest has the
hardware watchdog CONFIG_HARDLOCKUP_DETECTOR_PERF enabled, which uses
perf. Instrumenting kvm_pmu_rdpmc() a bit showed that it fails due to:
    if (!pmu->version)
        return 1;
which the above commit added. Since AMD's PMU leaves the version at 0,
that causes the #GP injection into the guest.
Set pmu->version arbitrarily to 1 and move it above the non-applicable
struct kvm_pmu members.
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: kvm@vger.kernel.org
Cc: Liran Alon <liran.alon@oracle.com>
Cc: Mihai Carabas <mihai.carabas@oracle.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: stable@vger.kernel.org
Fixes:
672ff6cff80c ("KVM: x86: Raise #GP when guest vCPU do not support PMU")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 13:34:35 +0000 (15:34 +0200)]
KVM: x86: do not spam dmesg with VMCS/VMCB dumps
Userspace can easily set up invalid processor state in such a way that
dmesg will be filled with VMCS or VMCB dumps. Disable this by default
using a module parameter.
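A hedged sketch of such a module parameter on the VMX side (the
parameter name and message here are assumptions for illustration):

    static bool __read_mostly dump_invalid_vmcs;
    module_param(dump_invalid_vmcs, bool, 0644);

    void dump_vmcs(void)
    {
            if (!dump_invalid_vmcs) {
                    pr_warn_ratelimited("set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.\n");
                    return;
            }
            /* ... existing VMCS dump ... */
    }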
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter Xu [Sun, 5 May 2019 08:56:42 +0000 (16:56 +0800)]
kvm: Check irqchip mode before assign irqfd
When assigning a kvm irqfd we didn't check the irqchip mode, so
KVM_IRQFD was allowed to succeed with all the irqchip modes. However it
does not make much sense to create an irqfd without an in-kernel
irqchip. Let's provide an arch-dependent helper to check whether a
specific irqfd is allowed by the arch. At least for x86, it makes sense
to check that:
- when the irqchip mode is NONE, all irqfds should be disallowed, and,
- when the irqchip mode is SPLIT, irqfds with a resamplefd should
be disallowed.
In either case, previously we would silently ignore the irq or the irq
ack event if the irqchip mode was incorrect. However that can cause
mysterious guest behaviors and can be hard to triage. Let's fail
KVM_IRQFD even earlier to detect these incorrect configurations.
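A hedged sketch of what such an x86 helper could look like under these
rules (the helper and flag names are assumptions for illustration):

    bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)
    {
            bool resample = args->flags & KVM_IRQFD_FLAG_RESAMPLE;

            /* NONE: no irqfds at all; SPLIT: no resampling irqfds. */
            return resample ? irqchip_kernel(kvm) : irqchip_in_kernel(kvm);
    }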
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Radim Krčmář <rkrcmar@redhat.com>
CC: Alex Williamson <alex.williamson@redhat.com>
CC: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Suthikulpanit, Suravee [Tue, 14 May 2019 15:49:52 +0000 (15:49 +0000)]
kvm: svm/avic: fix off-by-one in checking host APIC ID
Current logic does not allow a VCPU to be loaded onto a CPU with
APIC ID 255. This should be allowed, since the host physical APIC ID
field in the AVIC Physical APIC table entry is an 8-bit value,
and APIC ID 255 is valid in a system with x2APIC enabled.
Instead, do not allow the VCPU to be loaded if the host APIC ID cannot
be represented by an 8-bit value.
Also, use the more appropriate AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK
instead of AVIC_MAX_PHYSICAL_ID_COUNT.
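A hedged sketch of the described check at vcpu-load time (the APIC ID
accessor is an assumption):

    u32 h_physical_id = kvm_cpu_get_apicid(cpu);

    /* Refuse the load only if the host APIC ID does not fit the 8-bit field. */
    if (WARN_ON(h_physical_id & ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
            return;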
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 11:31:02 +0000 (13:31 +0200)]
KVM: selftests: do not blindly clobber registers in guest asm
The guest_code of sync_regs_test assumes that the compiler will not
touch %r11 outside the asm that increments it, which is a bit brittle.
Instead, we can increment a variable and use a dummy asm to ensure the
increment is not optimized away. However, we also need to use a
callee-save register or the compiler will insert a save/restore around
the vmexit, breaking the whole idea behind the test.
(Yes, "if it ain't broken...", but I would like the test to be clean
before it is copied into the upcoming s390 selftests).
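A hedged sketch of the approach described above (GUEST_SYNC comes from
the selftests' ucall helpers; the register choice and types are
illustrative, not the exact test code):

    static void guest_code(void)
    {
            /* rbx is callee-saved, so its value survives the vmexit. */
            register uint32_t stage asm("rbx") = 0;

            for (;;) {
                    GUEST_SYNC(0);
                    stage++;
                    /* The empty asm keeps the increment from being optimized away. */
                    asm volatile("" : : "r" (stage));
            }
    }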
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Mon, 20 May 2019 10:55:11 +0000 (12:55 +0200)]
KVM: selftests: Remove duplicated TEST_ASSERT in hyperv_cpuid.c
The check for entry->index == 0 is done twice. One time should
be sufficient.
Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Mon, 20 May 2019 08:18:07 +0000 (16:18 +0800)]
KVM: LAPIC: Expose per-vCPU timer_advance_ns to userspace
Expose per-vCPU timer_advance_ns to userspace, so it is able to
query the auto-adjusted value.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Mon, 20 May 2019 08:18:06 +0000 (16:18 +0800)]
KVM: LAPIC: Fix lapic_timer_advance_ns parameter overflow
After commit
c3941d9e0 (KVM: lapic: Allow user to disable adaptive tuning of
timer advancement), '-1' enables adaptive tuning starting from the
default advancement of 1000ns. However, we should expose an int module
parameter instead of a uint one that overflows.
Before patch:
/sys/module/kvm/parameters/lapic_timer_advance_ns:4294967295
After patch:
/sys/module/kvm/parameters/lapic_timer_advance_ns:-1
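A hedged sketch of the resulting parameter definition (attributes are
illustrative):

    /* Signed, so that -1 survives the round trip through sysfs. */
    static int __read_mostly lapic_timer_advance_ns = -1;
    module_param(lapic_timer_advance_ns, int, 0644);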
Fixes:
c3941d9e0 (KVM: lapic: Allow user to disable adaptive tuning of timer advancement)
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Yi Wang [Mon, 20 May 2019 04:27:47 +0000 (12:27 +0800)]
kvm: vmx: Fix -Wmissing-prototypes warnings
We get a warning when building the kernel with W=1:
arch/x86/kvm/vmx/vmx.c:6365:6: warning: no previous prototype for ‘vmx_update_host_rsp’ [-Wmissing-prototypes]
void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
Add the missing declaration to fix this.
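The declaration itself can be read off the warning above; where it lives
(e.g. a VMX header) is an assumption:

    void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp);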
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Fri, 17 May 2019 08:49:50 +0000 (16:49 +0800)]
KVM: nVMX: Fix using __this_cpu_read() in preemptible context
BUG: using __this_cpu_read() in preemptible [00000000] code: qemu-system-x86/4590
caller is nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel]
CPU: 4 PID: 4590 Comm: qemu-system-x86 Tainted: G OE 5.1.0-rc4+ #1
Call Trace:
dump_stack+0x67/0x95
__this_cpu_preempt_check+0xd2/0xe0
nested_vmx_enter_non_root_mode+0xebd/0x1790 [kvm_intel]
nested_vmx_run+0xda/0x2b0 [kvm_intel]
handle_vmlaunch+0x13/0x20 [kvm_intel]
vmx_handle_exit+0xbd/0x660 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xa2c/0x1e50 [kvm]
kvm_vcpu_ioctl+0x3ad/0x6d0 [kvm]
do_vfs_ioctl+0xa5/0x6e0
ksys_ioctl+0x6d/0x80
__x64_sys_ioctl+0x1a/0x20
do_syscall_64+0x6f/0x6c0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Accessing a per-cpu variable requires preemption to be disabled; this
patch extends the preemption-disabled region to cover the
__this_cpu_read().
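A hedged sketch of the general pattern only, not the exact diff (the
per-cpu variable is illustrative):

    struct vmcs *vmcs;

    preempt_disable();
    vmcs = __this_cpu_read(current_vmcs);
    /* ... work that must observe the same CPU's per-cpu state ... */
    preempt_enable();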
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Fixes:
52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W")
Cc: stable@vger.kernel.org
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 10:06:36 +0000 (12:06 +0200)]
kvm: fix compilation on s390
s390 does not have memremap, even though in this particular case it
would be useful.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Wed, 27 Mar 2019 20:15:37 +0000 (13:15 -0700)]
kvm: x86: Include CPUID leaf 0x8000001e in kvm's supported CPUID
KVM now supports extended CPUID functions through 0x8000001f. CPUID
leaf 0x8000001e is AMD's Processor Topology Information leaf. This
contains similar information to CPUID leaf 0xb (Intel's Extended
Topology Enumeration leaf), and should be included in the output of
KVM_GET_SUPPORTED_CPUID, even though userspace is likely to override
some of this information based upon the configuration of the
particular VM.
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Borislav Petkov <bp@suse.de>
Fixes:
8765d75329a38 ("KVM: X86: Extend CPUID range to include new leaf")
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Wed, 27 Mar 2019 20:15:36 +0000 (13:15 -0700)]
kvm: x86: Include multiple indices with CPUID leaf 0x8000001d
Per the APM, "CPUID Fn8000_001D_E[D,C,B,A]X reports cache topology
information for the cache enumerated by the value passed to the
instruction in ECX, referred to as Cache n in the following
description. To gather information for all cache levels, software must
repeatedly execute CPUID with 8000_001Dh in EAX and ECX set to
increasing values beginning with 0 until a value of 00h is returned in
the field CacheType (EAX[4:0]) indicating no more cache descriptions
are available for this processor."
The termination condition is the same as leaf 4, so we can reuse that
code block for leaf 0x8000001d.
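The same termination rule can be demonstrated from userspace; a small
hedged example using GCC's cpuid.h:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx, index;

            for (index = 0; ; index++) {
                    if (!__get_cpuid_count(0x8000001d, index, &eax, &ebx, &ecx, &edx))
                            break;
                    if ((eax & 0x1f) == 0)   /* CacheType == 0: no more caches */
                            break;
                    printf("cache %u: type %u, level %u\n",
                           index, eax & 0x1f, (eax >> 5) & 0x7);
            }
            return 0;
    }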
Fixes:
8765d75329a38 ("KVM: X86: Extend CPUID range to include new leaf")
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Borislav Petkov <bp@suse.de>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Thomas Huth [Fri, 17 May 2019 09:04:45 +0000 (11:04 +0200)]
KVM: selftests: Compile code with warnings enabled
So far the KVM selftests are compiled without any compiler warnings
enabled. That's quite bad, since we miss a lot of possible bugs this
way. Let's enable at least "-Wall" and some other useful warning flags
now, and fix at least the trivial problems in the code (like unused
variables).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 10:02:16 +0000 (12:02 +0200)]
kvm: selftests: avoid type punning
Avoid warnings from -Wstrict-aliasing by using memcpy.
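A minimal illustration of the pattern (not the selftests code itself):

    #include <stdint.h>
    #include <string.h>

    /* Instead of *(uint64_t *)buf, which trips -Wstrict-aliasing: */
    static uint64_t read_u64(const void *buf)
    {
            uint64_t val;

            memcpy(&val, buf, sizeof(val));
            return val;
    }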
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Dan Carpenter [Tue, 14 May 2019 10:34:51 +0000 (13:34 +0300)]
KVM: selftests: Fix a condition in test_hv_cpuid()
The code is trying to check that all the padding is zeroed out and it
does this:
entry->padding[0] == entry->padding[1] == entry->padding[2] == 0
The expression parses as ((padding[0] == padding[1]) == padding[2]) == 0.
Assuming everything is zeroed correctly, the first comparison is true,
the next comparison (true == 0) is false, and false is equal to zero, so
the overall condition is true. This bug doesn't affect run time very
badly, but the code should instead just check that all three padding
fields are zero individually.
Also the error message was copy and pasted from an earlier error and it
wasn't correct.
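A hedged sketch of the corrected check, written with the selftests'
TEST_ASSERT helper:

    TEST_ASSERT(entry->padding[0] == 0 && entry->padding[1] == 0 &&
                entry->padding[2] == 0,
                "padding of struct kvm_cpuid_entry2 should be zero");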
Fixes:
7edcb7343327 ("KVM: selftests: Add hyperv_cpuid test")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wanpeng Li [Fri, 17 May 2019 08:49:49 +0000 (16:49 +0800)]
KVM: Fix spinlock taken warning during host resume
WARNING: CPU: 0 PID: 13554 at kvm/arch/x86/kvm//../../../virt/kvm/kvm_main.c:4183 kvm_resume+0x3c/0x40 [kvm]
CPU: 0 PID: 13554 Comm: step_after_susp Tainted: G OE 5.1.0-rc4+ #1
RIP: 0010:kvm_resume+0x3c/0x40 [kvm]
Call Trace:
syscore_resume+0x63/0x2d0
suspend_devices_and_enter+0x9d1/0xa40
pm_suspend+0x33a/0x3b0
state_store+0x82/0xf0
kobj_attr_store+0x12/0x20
sysfs_kf_write+0x4b/0x60
kernfs_fop_write+0x120/0x1a0
__vfs_write+0x1b/0x40
vfs_write+0xcd/0x1d0
ksys_write+0x5f/0xe0
__x64_sys_write+0x1a/0x20
do_syscall_64+0x6f/0x6c0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Commit
ca84d1a24 (KVM: x86: Add clock sync request to hardware enable) mentioned
that "we always hold kvm_lock when hardware_enable is called. The one place that
doesn't need to worry about it is resume, as resuming a frozen CPU, the spinlock
won't be taken." However, commit
6706dae9 (virt/kvm: Replace spin_is_locked() with
lockdep) introduced a bug: it asserts when the lock is not held, which
is contrary to the original goal.
This patch fixes it by warning (WARN_ON) when the lock is held instead.
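A hedged sketch of the resulting check in kvm_resume() (the lock name is
an assumption):

    #ifdef CONFIG_LOCKDEP
            WARN_ON(lockdep_is_held(&kvm_count_lock));
    #endif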
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Fixes:
6706dae9 ("virt/kvm: Replace spin_is_locked() with lockdep")
[Wrap with #ifdef CONFIG_LOCKDEP - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 8 May 2019 18:04:32 +0000 (11:04 -0700)]
KVM: nVMX: Clear nested_run_pending if setting nested state fails
VMX's nested_run_pending flag is subtly consumed when stuffing state to
enter guest mode, i.e. it needs to be set accordingly before KVM knows if
setting guest state is successful. If setting guest state fails, clear
the flag, as a nested run is obviously not pending.
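A hedged sketch of the error path described above (function and field
names are assumptions based on the text):

    ret = nested_vmx_enter_non_root_mode(vcpu, false);
    if (ret) {
            /* Setting guest state failed, so no nested run can be pending. */
            vmx->nested.nested_run_pending = 0;
            return ret;
    }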
Reported-by: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 20 May 2019 09:55:36 +0000 (11:55 +0200)]
KVM: nVMX: really fix the size checks on KVM_SET_NESTED_STATE
The offset for reading the shadow VMCS is sizeof(*kvm_state)+VMCS12_SIZE,
so the correct size must be that plus sizeof(*vmcs12). This could lead
to KVM reading garbage data from userspace and not reporting an error,
but is otherwise not sensitive.
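A hedged sketch of the corrected bound (error handling around it
omitted):

    /* The shadow vmcs12 sits at offset sizeof(*kvm_state) + VMCS12_SIZE,
     * so the buffer must be large enough to hold it in full. */
    if (kvm_state->size < sizeof(*kvm_state) + VMCS12_SIZE + sizeof(*vmcs12))
            return -EINVAL;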
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 24 May 2019 19:27:00 +0000 (21:27 +0200)]
Merge tag 'kvmarm-fixes-for-5.2' of git://git./linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm updates for 5.2-rc2
- Correctly annotate HYP-callable code to be non-traceable
- Remove Christoffer from the MAINTAINERS file at his request
Linus Torvalds [Fri, 24 May 2019 18:03:26 +0000 (11:03 -0700)]
Merge tag 'arm64-fixes' of git://git./linux/kernel/git/arm64/linux
Pull more arm64 fixes from Will Deacon:
- Fix incorrect LDADD instruction encoding in our disassembly macros
- Disable the broken ARM64_PSEUDO_NMI support for now
- Add workaround for Cortex-A76 CPU erratum #1463225
- Handle Cortex-A76/Neoverse-N1 erratum #1418040 w/ existing workaround
- Fix IORT build failure if IOMMU_SUPPORT=n
- Fix place-relative module relocation range checking and its
interaction with KASLR
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: insn: Add BUILD_BUG_ON() for invalid masks
arm64: insn: Fix ldadd instruction encoding
arm64: Kconfig: Make ARM64_PSEUDO_NMI depend on BROKEN for now
arm64: Handle erratum 1418040 as a superset of erratum 1188873
arm64/module: deal with ambiguity in PRELxx relocation ranges
ACPI/IORT: Fix build error when IOMMU_SUPPORT is disabled
arm64/kernel: kaslr: reduce module randomization range to 2 GB
arm64: errata: Add workaround for Cortex-A76 erratum #1463225
arm64: Remove useless message during oops
Linus Torvalds [Fri, 24 May 2019 17:19:26 +0000 (10:19 -0700)]
Merge tag 'platform-drivers-x86-v5.2-2' of git://git.infradead.org/linux-platform-drivers-x86
Pull x86 platform driver fixes from Andy Shevchenko:
"Some of Intel Cherrytrail based platforms depend on PMC clock to be
always on. Here are a couple of quirks to the driver to support
affected hardware"
* tag 'platform-drivers-x86-v5.2-2' of git://git.infradead.org/linux-platform-drivers-x86:
platform/x86: pmc_atom: Add several Beckhoff Automation boards to critclk_systems DMI table
platform/x86: pmc_atom: Add Lex 3I380D industrial PC to critclk_systems DMI table
Linus Torvalds [Fri, 24 May 2019 17:04:17 +0000 (10:04 -0700)]
Merge branch 'fixes' of git://git./linux/kernel/git/evalenti/linux-soc-thermal
Pull thermal SoC fixes from Eduardo Valentin:
- revert pinctrl settings on rockchip which causes boot failure on
rk3288. The proper follow-up patch is being discussed, meanwhile
the revert gets those booting again.
- minor fixes on rcar and tegra
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux-soc-thermal:
thermal: rcar_gen3_thermal: Update temperature conversion method
thermal: rcar_gen3_thermal: Update calculation formula of IRQTEMP
thermal: rcar_gen3_thermal: Update value of Tj_1
thermal: tegra: Make tegra210_tsensor_thermtrips static
Revert "thermal: rockchip: fix up the tsadc pinctrl setting error"
Linus Torvalds [Fri, 24 May 2019 16:57:11 +0000 (09:57 -0700)]
Merge tag 'mmc-v5.2-2' of git://git./linux/kernel/git/ulfh/mmc
Pull MMC fixes from Ulf Hansson:
"Fix HS50 data hold time problem for a few variants of sdhci-iproc"
* tag 'mmc-v5.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc:
mmc: sdhci-iproc: Set NO_HISPD bit to fix HS50 data hold time problem
mmc: sdhci-iproc: cygnus: Set NO_HISPD bit to fix HS50 data hold time problem
Linus Torvalds [Fri, 24 May 2019 16:12:46 +0000 (09:12 -0700)]
Merge tag 'drm-fixes-2019-05-24-1' of git://anongit.freedesktop.org/drm/drm
Pull drm fixes from Dave Airlie:
"Nothing too unusual here for rc2. Except the amdgpu DMCU firmware
loading fix caused build breakage with a different set of Kconfig
options. I've just reverted it for now until the AMD folks can rewrite
it to avoid that problem.
i915:
- boosting fix
- bump ready task fixes
- GVT - reset fix, error return, TRTT handling fix
amdgpu:
- DMCU firmware loading fix
- Polaris 10 pci id for kfd
- picasso screen corruption fix
- SR-IOV fixes
- vega driver reload fixes
- SMU locking fix
- compute profile fix for kfd
vmwgfx:
- integer overflow fixes
- dma sg fix
sun4i:
- HDMI phy fixes
gma500:
- LVDS detection fix
panfrost:
- devfreq selection fix"
* tag 'drm-fixes-2019-05-24-1' of git://anongit.freedesktop.org/drm/drm: (32 commits)
Revert "drm/amd/display: Don't load DMCU for Raven 1"
drm/panfrost: Select devfreq
drm/gma500/cdv: Check vbt config bits when detecting lvds panels
drm/vmwgfx: integer underflow in vmw_cmd_dx_set_shader() leading to an invalid read
drm/vmwgfx: NULL pointer dereference from vmw_cmd_dx_view_define()
drm/vmwgfx: Use the dma scatter-gather iterator to get dma addresses
drm/vmwgfx: Fix compat mode shader operation
drm/vmwgfx: Fix user space handle equal to zero
drm/vmwgfx: Don't send drm sysfs hotplug events on initial master set
drm/i915/gvt: Fix an error code in ppgtt_populate_spt_by_guest_entry()
drm/i915/gvt: do not let TRTTE and 0x4dfc write passthrough to hardware
drm/i915/gvt: add 0x4dfc to gen9 save-restore list
drm/i915/gvt: Tiled Resources mmios are in-context mmios for gen9+
drm/i915/gvt: use cmd to restore in-context mmios to hw for gen9 platform
drm/i915/gvt: emit init breadcrumb for gvt request
drm/amdkfd: Fix compute profile switching
drm/amdgpu: skip fw pri bo alloc for SRIOV
drm/amd/powerplay: fix locking in smu_feature_set_supported()
drm/amdgpu/gmc9: set vram_width properly for SR-IOV
drm/amdgpu/soc15: skip reset on init
...
Thomas Gleixner [Thu, 23 May 2019 09:15:02 +0000 (11:15 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 125
Based on 1 normalized pattern(s):
osl gpl code release authorized by [jalil] [fadavi] this program is
free software you can redistribute it and or modify it under the
terms of the gnu general public license as published by the free
software foundation either version 2 or at your option any later
version this program is distributed in the hope that it will be
useful but without any warranty without even the implied warranty of
merchantability or fitness for a particular purpose see the gnu
general public license for more details you should have received a
copy of the gnu general public license along with this program see
the file copying if not write to the free software foundation 675
mass ave cambridge ma 02139 usa
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.689335553@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:15:00 +0000 (11:15 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 123
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms and conditions of the gnu general public license
version 2 or later as published by the free software foundation this
program is distributed in the hope that it will be useful but
without any warranty without even the implied warranty of
merchantability or fitness for a particular purpose see the gnu
general public license for more details you should have received a
copy of the gnu general public license along with this program if
not see http www gnu org licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 7 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.504392586@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:59 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 122
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 2 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.414247666@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:58 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 121
Based on 1 normalized pattern(s):
licensed under the gplv2 or at your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.323272658@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:57 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 120
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program if not see the file copying or write to the free
software foundation inc
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 12 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.231300438@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:56 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 119
Based on 1 normalized pattern(s):
released under the gpl this program is free software you can
redistribute it and or modify it under the terms of the gnu general
public license as published by the free software foundation either
version 2 of the license or at your option any later version this
program is distributed in the hope that it will be useful but
without any warranty without even the implied warranty of
merchantability or fitness for a particular purpose see the gnu
general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 2 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.124582774@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:55 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 118
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 or at your option any
later version this program is distributed in the hope that it will
be useful but without any warranty without even the implied warranty
of merchantability or fitness for a particular purpose see the gnu
general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 44 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091651.032047323@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:53 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 116
Based on 1 normalized pattern(s):
gpl license this program is free software you can redistribute it
and or modify it under the terms of the gnu general public license
as published by the free software foundation either version 2 of the
license or at your option any later version this program is
distributed in the hope that it will be useful but without any
warranty without even the implied warranty of merchantability or
fitness for a particular purpose see the gnu general public license
for more details you should have received a copy of the gnu general
public license along with this program if not write to the free
software foundation inc 59 temple place suite 330 boston ma 02111
1307 usa
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091650.847878191@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:51 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 114
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with this program
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 8 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091650.663497195@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:50 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 113
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this program is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details to
obtain the license point your browser to http www gnu org copyleft
gpl html
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 26 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091650.572604764@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:49 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 112
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or
later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 4 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091650.480557885@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:48 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 111
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
i t under the terms of the gnu general public license as published
by th e free software foundation either version 2 of the license or
at you r option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091650.375638818@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:47 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 110
Based on 1 normalized pattern(s):
this file is free software you can redistribute it and or modify it
under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this file is distributed in the hope
that it will be useful but without any warranty without even the
implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091650.284757242@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Thu, 23 May 2019 09:14:43 +0000 (11:14 +0200)]
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 106
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version this software is distributed in the
hope that it will be useful but without any warranty without even
the implied warranty of merchantability or fitness for a particular
purpose see the gnu general public license for more details you
should have received a copy of the gnu general public license along
with atmel wireless lan drivers if not see http www gnu org licenses
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 2 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190523091649.881590905@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>