linux-2.6-microblaze.git
3 years agoKVM: arm64: Filter out v8.1+ events on v8.0 HW
Marc Zyngier [Thu, 21 Jan 2021 10:56:36 +0000 (10:56 +0000)]
KVM: arm64: Filter out v8.1+ events on v8.0 HW

When running on v8.0 HW, make sure we don't try to advertise
events in the 0x4000-0x403f range.

Cc: stable@vger.kernel.org
Fixes: 88865beca9062 ("KVM: arm64: Mask out filtered events in PCMEID{0,1}_EL1")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210121105636.1478491-1-maz@kernel.org
3 years agoKVM: arm64: Compute TPIDR_EL2 ignoring MTE tag
Steven Price [Fri, 8 Jan 2021 16:12:54 +0000 (16:12 +0000)]
KVM: arm64: Compute TPIDR_EL2 ignoring MTE tag

KASAN in HW_TAGS mode will store MTE tags in the top byte of the
pointer. When computing the offset for TPIDR_EL2 we don't want anything
in the top byte, so remove the tag to ensure the computation is correct
no matter what the tag is.

Fixes: 94ab5b61ee16 ("kasan, arm64: enable CONFIG_KASAN_HW_TAGS")
Signed-off-by: Steven Price <steven.price@arm.com>
[maz: added comment]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210108161254.53674-1-steven.price@arm.com
3 years agoKVM: arm64: Use the reg_to_encoding() macro instead of sys_reg()
Alexandru Elisei [Wed, 6 Jan 2021 14:42:18 +0000 (14:42 +0000)]
KVM: arm64: Use the reg_to_encoding() macro instead of sys_reg()

The reg_to_encoding() macro is a wrapper over sys_reg() that conveniently
takes a sys_reg_desc or sys_reg_params argument and returns the 32-bit
register encoding. Use it instead of calling sys_reg() directly.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210106144218.110665-1-alexandru.elisei@arm.com
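
For reference, a minimal sketch of that pattern; the shift values and struct
layout below are illustrative stand-ins, not the kernel's actual definitions:

  #include <stdint.h>

  /* sys_reg() packs the Op0/Op1/CRn/CRm/Op2 fields into one encoding;
   * reg_to_encoding() simply feeds it the fields of a register descriptor.
   * The shift values here are placeholders, not the arm64 ones. */
  #define sys_reg(op0, op1, crn, crm, op2) \
          ((uint32_t)(((op0) << 16) | ((op1) << 12) | \
                      ((crn) << 8) | ((crm) << 4) | (op2)))

  struct sys_reg_desc {
          uint8_t Op0, Op1, CRn, CRm, Op2;
  };

  #define reg_to_encoding(r) \
          sys_reg((r)->Op0, (r)->Op1, (r)->CRn, (r)->CRm, (r)->Op2)
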
3 years agoKVM: arm64: Allow PSCI SYSTEM_OFF/RESET to return
David Brazdil [Tue, 29 Dec 2020 16:00:59 +0000 (16:00 +0000)]
KVM: arm64: Allow PSCI SYSTEM_OFF/RESET to return

The KVM/arm64 PSCI relay assumes that SYSTEM_OFF and SYSTEM_RESET should
not return, as dictated by the PSCI spec. However, there is firmware out
there which breaks this assumption, leading to a hyp panic. Make KVM
more robust to broken firmware by allowing these to return.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201229160059.64135-1-dbrazdil@google.com
3 years agoKVM: arm64: Simplify handling of absent PMU system registers
Marc Zyngier [Wed, 6 Jan 2021 17:22:28 +0000 (17:22 +0000)]
KVM: arm64: Simplify handling of absent PMU system registers

Now that all PMU registers are gated behind a .visibility callback,
remove the other checks against an absent PMU.

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Hide PMU registers from userspace when not available
Marc Zyngier [Wed, 6 Jan 2021 17:22:27 +0000 (17:22 +0000)]
KVM: arm64: Hide PMU registers from userspace when not available

While we are now able to properly hide PMU registers from the guest
when a PMU isn't available (either because none has been configured,
because the host doesn't have PMU support compiled in, or because the
HW doesn't have one at all), we are still exposing more than we should
to userspace.

Introduce a visibility callback gating all the PMU registers, which
covers both userspace and guest.

Signed-off-by: Marc Zyngier <maz@kernel.org>
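
A minimal sketch of the visibility-callback pattern described above, with
hypothetical names standing in for the kernel's sys_reg_desc machinery:

  #include <stdbool.h>

  #define REG_HIDDEN 0x1U             /* hidden from guest and userspace */

  struct vcpu;                        /* opaque for this sketch */

  struct reg_desc {
          const char *name;
          unsigned int (*visibility)(const struct vcpu *vcpu,
                                     const struct reg_desc *rd);
  };

  /* Stand-in predicate: a real implementation would check the vcpu's
   * PMU feature flag. */
  static bool vcpu_has_pmu(const struct vcpu *vcpu)
  {
          (void)vcpu;
          return false;
  }

  /* One callback gates every PMU register for both guest and userspace. */
  static unsigned int pmu_visibility(const struct vcpu *vcpu,
                                     const struct reg_desc *rd)
  {
          (void)rd;
          return vcpu_has_pmu(vcpu) ? 0 : REG_HIDDEN;
  }
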
3 years agoarm64: cpufeature: remove non-existent CONFIG_KVM_ARM_HOST
Shannon Zhao [Mon, 4 Jan 2021 11:38:44 +0000 (19:38 +0800)]
arm64: cpufeature: remove non-existent CONFIG_KVM_ARM_HOST

Commit d82755b2e781 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") deleted
the CONFIG_KVM_ARM_HOST option; CONFIG_KVM should be used instead.

Just remove CONFIG_KVM_ARM_HOST here.

Fixes: d82755b2e781 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST")
Signed-off-by: Shannon Zhao <shannon.zhao@linux.alibaba.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1609760324-92271-1-git-send-email-shannon.zhao@linux.alibaba.com
3 years agoKVM: arm64: Replace KVM_ARM_PMU with HW_PERF_EVENTS
Marc Zyngier [Mon, 4 Jan 2021 16:50:16 +0000 (16:50 +0000)]
KVM: arm64: Replace KVM_ARM_PMU with HW_PERF_EVENTS

KVM_ARM_PMU only existed for the benefit of 32bit ARM hosts,
and makes no sense now that we are 64bit only. Get rid of it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Remove spurious semicolon in reg_to_encoding()
Marc Zyngier [Thu, 31 Dec 2020 15:05:46 +0000 (15:05 +0000)]
KVM: arm64: Remove spurious semicolon in reg_to_encoding()

Although not a problem right now, it flared up while working
on some other aspects of the code-base. Remove the useless
semicolon.

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Fix hyp_cpu_pm_{init,exit} __init annotation
Marc Zyngier [Wed, 23 Dec 2020 12:08:54 +0000 (12:08 +0000)]
KVM: arm64: Fix hyp_cpu_pm_{init,exit} __init annotation

The __init annotations on hyp_cpu_pm_{init,exit} are obviously incorrect,
and the build system shouts at you if you enable DEBUG_SECTION_MISMATCH.

Nothing really bad happens as we never execute that code outside of the
init context, but we can't label the callers as __init either, as kvm_init
isn't __init itself. Oh well.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lore.kernel.org/r/20201223120854.255347-1-maz@kernel.org
3 years agoKVM: arm64: Consolidate dist->ready setting into kvm_vgic_map_resources()
Marc Zyngier [Sun, 27 Dec 2020 14:28:34 +0000 (14:28 +0000)]
KVM: arm64: Consolidate dist->ready setting into kvm_vgic_map_resources()

dist->ready setting is pointlessly spread across the two vgic
backends, while it could be consolidated in kvm_vgic_map_resources().

Move it there, and slightly simplify the flows in both backends.

Suggested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Remove redundant call to kvm_pmu_vcpu_reset()
Alexandru Elisei [Tue, 1 Dec 2020 15:01:57 +0000 (15:01 +0000)]
KVM: arm64: Remove redundant call to kvm_pmu_vcpu_reset()

KVM_ARM_VCPU_INIT ioctl calls kvm_reset_vcpu(), which in turn resets the
PMU with a call to kvm_pmu_vcpu_reset(). The function zeroes the PMU
chained counters bitmap and stops all the counters with a perf event
attached. Because it is called before the VCPU has had the chance to run,
no perf events are in use and none are released.

kvm_arm_pmu_v3_enable(), called by kvm_vcpu_first_run_init() only if the
VCPU has been initialized, also resets the PMU. kvm_pmu_vcpu_reset() in
this case does the exact same thing as the previous call, so remove it.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201201150157.223625-6-alexandru.elisei@arm.com
3 years agoKVM: arm64: Update comment in kvm_vgic_map_resources()
Alexandru Elisei [Tue, 1 Dec 2020 15:01:56 +0000 (15:01 +0000)]
KVM: arm64: Update comment in kvm_vgic_map_resources()

vgic_v3_map_resources() returns -EBUSY if the VGIC isn't initialized;
update the comment in kvm_vgic_map_resources() to match what the function
does.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201201150157.223625-5-alexandru.elisei@arm.com
3 years agoKVM: arm64: Move double-checked lock to kvm_vgic_map_resources()
Alexandru Elisei [Tue, 1 Dec 2020 15:01:55 +0000 (15:01 +0000)]
KVM: arm64: Move double-checked lock to kvm_vgic_map_resources()

kvm_vgic_map_resources() is called when a VCPU is first run and it maps all
the VGIC MMIO regions. To prevent double-initialization, the VGIC uses the
ready variable to keep track of the state of resources and the global KVM
mutex to protect against concurrent accesses. After the lock is taken, the
variable is checked again in case another VCPU took the lock between the
current VCPU reading ready equals false and taking the lock.

The double-checked lock pattern is spread across four different functions:
in kvm_vcpu_first_run_init(), in kvm_vgic_map_resources() and in
vgic_{v2,v3}_map_resources(), which makes it hard to reason about and
introduces minor code duplication. Consolidate the checks in
kvm_vgic_map_resources(), where the lock is taken.

No functional change intended.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201201150157.223625-4-alexandru.elisei@arm.com
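
The double-checked locking pattern being consolidated looks roughly like the
userspace sketch below; names are hypothetical and this is not the kernel
code itself:

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
  static bool ready;

  /* Stand-in for the expensive one-time VGIC MMIO mapping work. */
  static int do_map_resources(void)
  {
          return 0;
  }

  int map_resources_once(void)
  {
          int ret = 0;

          if (ready)                      /* cheap check outside the lock */
                  return 0;

          pthread_mutex_lock(&map_lock);
          if (!ready) {                   /* re-check: another caller may
                                           * have done the work meanwhile */
                  ret = do_map_resources();
                  if (!ret)
                          ready = true;
          }
          pthread_mutex_unlock(&map_lock);
          return ret;
  }
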
3 years agoKVM: arm64: arch_timer: Remove VGIC initialization check
Alexandru Elisei [Tue, 1 Dec 2020 15:01:54 +0000 (15:01 +0000)]
KVM: arm64: arch_timer: Remove VGIC initialization check

kvm_timer_enable() is called in kvm_vcpu_first_run_init() after
kvm_vgic_map_resources() if the VGIC wasn't ready. kvm_vgic_map_resources()
is the only place where kvm->arch.vgic.ready is set to true.

For a v2 VGIC, kvm_vgic_map_resources() will attempt to initialize the VGIC
and set the initialized flag.

For a v3 VGIC, kvm_vgic_map_resources() will return an error code if the
VGIC isn't already initialized.

The end result is that if we've reached kvm_timer_enable(), the VGIC is
initialized and ready and vgic_initialized() will always be true, so remove
this check.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
[maz: added comment about vgic initialisation, as suggested by Eric]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201201150157.223625-3-alexandru.elisei@arm.com
3 years agoKVM: Documentation: Add arm64 KVM_RUN error codes
Alexandru Elisei [Tue, 1 Dec 2020 15:01:53 +0000 (15:01 +0000)]
KVM: Documentation: Add arm64 KVM_RUN error codes

The API documentation states that general error codes are not detailed, but
errors with specific meanings are. On arm64, KVM_RUN can return error
numbers with a different meaning than what is described by POSIX or the C99
standard (as taken from man 3 errno).

Absent from the newly documented error codes is ERANGE which can be
returned when making a change to the EL2 stage 1 tables if the address is
larger than the largest supported input address. Assuming no bugs in the
implementation, that is not possible because the input addresses which are
mapped are the result of applying the macro kern_hyp_va() on kernel virtual
addresses.

CC: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201201150157.223625-2-alexandru.elisei@arm.com
3 years agoKVM: arm64: Declutter host PSCI 0.1 handling
Marc Zyngier [Tue, 22 Dec 2020 12:46:41 +0000 (12:46 +0000)]
KVM: arm64: Declutter host PSCI 0.1 handling

Although there is nothing wrong with the current host PSCI relay
implementation, we can clean it up and remove some of the helpers
that do not improve the overall readability of the legacy PSCI 0.1
handling.

Opportunity is taken to turn the bitmap into a set of booleans,
and creative use of preprocessor macros makes init and check
more concise/readable.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Move skip_host_instruction to adjust_pc.h
David Brazdil [Tue, 8 Dec 2020 14:24:52 +0000 (14:24 +0000)]
KVM: arm64: Move skip_host_instruction to adjust_pc.h

Move the function for skipping a host instruction in the host trap handler
to a header file containing analogous helpers for guests.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201208142452.87237-7-dbrazdil@google.com
3 years agoKVM: arm64: Remove unused includes in psci-relay.c
David Brazdil [Tue, 8 Dec 2020 14:24:51 +0000 (14:24 +0000)]
KVM: arm64: Remove unused includes in psci-relay.c

Minor cleanup removing unused includes.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201208142452.87237-6-dbrazdil@google.com
3 years agoKVM: arm64: Minor cleanup of hyp variables used in host
David Brazdil [Tue, 8 Dec 2020 14:24:50 +0000 (14:24 +0000)]
KVM: arm64: Minor cleanup of hyp variables used in host

Small cleanup moving declarations of hyp-exported variables to
kvm_host.h and using macros to avoid having to refer to them with
kvm_nvhe_sym() in the host.

No functional change intended.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201208142452.87237-5-dbrazdil@google.com
3 years agoKVM: arm64: Skip computing hyp VA layout for VHE
David Brazdil [Tue, 8 Dec 2020 14:24:49 +0000 (14:24 +0000)]
KVM: arm64: Skip computing hyp VA layout for VHE

Computing the hyp VA layout is redundant when the kernel runs in EL2 and
hyp shares its VA mappings. Make calling kvm_compute_layout()
conditional on not just CONFIG_KVM but also !is_kernel_in_hyp_mode().

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201208142452.87237-4-dbrazdil@google.com
3 years agoKVM: arm64: Use lm_alias in nVHE-only VA conversion
David Brazdil [Tue, 8 Dec 2020 14:24:48 +0000 (14:24 +0000)]
KVM: arm64: Use lm_alias in nVHE-only VA conversion

init_hyp_physvirt_offset() computes a PA from a kernel VA. Conversion to
the kernel linear map is required first, but the code used kvm_ksym_ref()
for this purpose. Under VHE that is a NOP and resulted in a runtime
warning. Replace kvm_ksym_ref() with lm_alias().

Reported-by: Qian Cai <qcai@redhat.com>
Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201208142452.87237-3-dbrazdil@google.com
3 years agoKVM: arm64: Prevent use of invalid PSCI v0.1 function IDs
David Brazdil [Tue, 8 Dec 2020 14:24:47 +0000 (14:24 +0000)]
KVM: arm64: Prevent use of invalid PSCI v0.1 function IDs

The PSCI driver exposes a struct containing the PSCI v0.1 function IDs
configured in the DT. However, the struct does not convey whether these
were set from the DT or contain the default value of zero. This could be
a problem for the PSCI proxy in KVM protected mode.

Extend the config passed to KVM with a bit mask whose individual bits are
set depending on whether the corresponding function pointer in psci_ops is
set, e.g. the bit for PSCI_CPU_SUSPEND if psci_ops.cpu_suspend != NULL.

Previously the config was split across multiple global variables. Put
everything into a single struct for convenience.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201208142452.87237-2-dbrazdil@google.com
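
A sketch of that idea under assumed names: derive a validity bitmask from
which psci_ops callbacks are populated, so a default-zero function ID is
never mistaken for a real one:

  #include <stdint.h>

  enum psci_fn { PSCI_FN_CPU_SUSPEND, PSCI_FN_CPU_ON, PSCI_FN_CPU_OFF };

  struct psci_ops_like {
          int (*cpu_suspend)(uint32_t state, unsigned long entry);
          int (*cpu_on)(unsigned long target, unsigned long entry);
          int (*cpu_off)(uint32_t state);
  };

  /* One bit per v0.1 function: set only if the driver populated the hook. */
  static unsigned long psci_valid_mask(const struct psci_ops_like *ops)
  {
          unsigned long mask = 0;

          if (ops->cpu_suspend)
                  mask |= 1UL << PSCI_FN_CPU_SUSPEND;
          if (ops->cpu_on)
                  mask |= 1UL << PSCI_FN_CPU_ON;
          if (ops->cpu_off)
                  mask |= 1UL << PSCI_FN_CPU_OFF;
          return mask;
  }
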
3 years agoKVM: arm64: Don't access PMCR_EL0 when no PMU is available
Marc Zyngier [Thu, 10 Dec 2020 08:30:59 +0000 (08:30 +0000)]
KVM: arm64: Don't access PMCR_EL0 when no PMU is available

We reset the guest's view of PMCR_EL0 unconditionally, based on
the host's view of this register. It is however legal for an
implementation not to provide any PMU, resulting in an UNDEF.

The obvious fix is to skip the reset of this shadow register
when no PMU is available, sidestepping the issue entirely.
If no PMU is available, the guest is not able to request
a virtual PMU anyway, so doing nothing is the right thing
to do!

It is unlikely that this bug can hit any HW implementation
though, as they all provide a PMU. It has been found using nested
virt with the host KVM not implementing the PMU itself.

Fixes: ab9468340d2bc ("arm64: KVM: Add access handler for PMCR register")
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201210083059.1277162-1-maz@kernel.org
3 years agoMerge remote-tracking branch 'origin/kvm-arm64/psci-relay' into kvmarm-master/next
Marc Zyngier [Wed, 9 Dec 2020 10:00:24 +0000 (10:00 +0000)]
Merge remote-tracking branch 'origin/kvm-arm64/psci-relay' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Fix nVHE boot on VHE systems
Marc Zyngier [Tue, 8 Dec 2020 19:51:49 +0000 (19:51 +0000)]
KVM: arm64: Fix nVHE boot on VHE systems

Conflict resolution gone astray results in the kernel not booting
on VHE-capable HW when VHE support is disabled. Thankfully spotted
by David.

Reported-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge remote-tracking branch 'origin/kvm-arm64/misc-5.11' into kvmarm-master/queue
Marc Zyngier [Fri, 4 Dec 2020 10:12:55 +0000 (10:12 +0000)]
Merge remote-tracking branch 'origin/kvm-arm64/misc-5.11' into kvmarm-master/queue

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Fix EL2 mode availability checks
David Brazdil [Wed, 2 Dec 2020 18:41:22 +0000 (18:41 +0000)]
KVM: arm64: Fix EL2 mode availability checks

With protected nVHE hyp code intercepting the host's PSCI SMCs, the host
starts seeing new CPUs boot in EL1 instead of EL2. The kernel logic
that keeps track of the boot mode needs to be adjusted.

Add a static key enabled if KVM protected mode initialization is
successful.

When the key is enabled, is_hyp_mode_available continues to report
`true` because its users either treat it as a check whether KVM will be
/ was initialized, or whether stub HVCs can be made (e.g. hibernate).

is_hyp_mode_mismatched is changed to report `false` when the key is
enabled. That's because all cores' modes matched at the point of KVM
init and KVM will not allow cores not present at init to boot. That
said, the function is never used after KVM is initialized.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-27-dbrazdil@google.com
3 years agoKVM: arm64: Trap host SMCs in protected mode
David Brazdil [Wed, 2 Dec 2020 18:41:21 +0000 (18:41 +0000)]
KVM: arm64: Trap host SMCs in protected mode

While protected KVM is installed, start trapping all host SMCs.
For now these are simply forwarded to EL3, except PSCI
CPU_ON/CPU_SUSPEND/SYSTEM_SUSPEND which are intercepted and the
hypervisor installed on newly booted cores.

Create a new constant, HCR_HOST_NVHE_PROTECTED_FLAGS, with the set of HCR
flags to use while the nVHE vector is installed when the kernel was
booted with the protected flag enabled. Switch back to the default HCR
flags when switching back to the stub vector.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-26-dbrazdil@google.com
3 years agoKVM: arm64: Keep nVHE EL2 vector installed
David Brazdil [Wed, 2 Dec 2020 18:41:20 +0000 (18:41 +0000)]
KVM: arm64: Keep nVHE EL2 vector installed

KVM by default keeps the stub vector installed and installs the nVHE
vector only briefly for init and later on demand. Change this policy
to install the vector at init and then never uninstall it if the kernel
was given the protected KVM command line parameter.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-25-dbrazdil@google.com
3 years agoKVM: arm64: Intercept host's SYSTEM_SUSPEND PSCI SMCs
David Brazdil [Wed, 2 Dec 2020 18:41:19 +0000 (18:41 +0000)]
KVM: arm64: Intercept host's SYSTEM_SUSPEND PSCI SMCs

Add a handler of SYSTEM_SUSPEND host PSCI SMCs. The semantics are
equivalent to CPU_SUSPEND, typically called on the last online CPU.
Reuse the same entry point and boot args struct as CPU_SUSPEND.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-24-dbrazdil@google.com
3 years agoKVM: arm64: Intercept host's CPU_SUSPEND PSCI SMCs
David Brazdil [Wed, 2 Dec 2020 18:41:18 +0000 (18:41 +0000)]
KVM: arm64: Intercept host's CPU_SUSPEND PSCI SMCs

Add a handler of CPU_SUSPEND host PSCI SMCs. The SMC can either enter
a sleep state indistinguishable from a WFI or a deeper sleep state that
behaves like a CPU_OFF+CPU_ON except that the core is still considered
online while asleep.

The handler saves r0,pc of the host and makes the same call to EL3 with
the hyp CPU entry point. It either returns back to the handler and then
back to the host, or wakes up into the entry point and initializes EL2
state before dropping back to EL1. No EL2 state needs to be
saved/restored for this purpose.

CPU_ON and CPU_SUSPEND are both implemented using struct psci_boot_args
to store the state upon powerup, with each CPU having separate structs
for CPU_ON and CPU_SUSPEND so that CPU_SUSPEND can operate locklessly
and so that a CPU_ON call targeting a CPU cannot interfere with
a concurrent CPU_SUSPEND call on that CPU.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-23-dbrazdil@google.com
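
An illustrative shape for the per-CPU boot-args state described above;
struct and field names are assumptions, not the kernel's definitions:

  /* One slot for CPU_ON (written by a remote caller) and a separate slot
   * for CPU_SUSPEND (written locally before suspending), so the two cannot
   * interfere on the same CPU. */
  struct boot_args {
          unsigned long pc;       /* EL1 address to return to after init */
          unsigned long r0;       /* context argument handed back in x0 */
  };

  struct cpu_psci_state {
          struct boot_args on_args;
          struct boot_args suspend_args;
  };
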
3 years agoKVM: arm64: Intercept host's CPU_ON SMCs
David Brazdil [Wed, 2 Dec 2020 18:41:17 +0000 (18:41 +0000)]
KVM: arm64: Intercept host's CPU_ON SMCs

Add a handler of the CPU_ON PSCI call from the host. When invoked, it looks
up the logical CPU ID corresponding to the provided MPIDR and populates
the state struct of the target CPU with the provided x0, pc. It then
calls CPU_ON itself, with an entry point in hyp that initializes EL2
state before ERETing to the provided PC in EL1.

There is a simple atomic lock around the boot args struct. If it is
already locked, CPU_ON will return the PENDING_ON error code.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-22-dbrazdil@google.com
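
A minimal sketch of the "already locked, return PENDING_ON" behaviour,
using C11 atomics and an illustrative return value (names assumed):

  #include <stdatomic.h>

  #define PSCI_RET_ON_PENDING   (-5)    /* illustrative; see the PSCI spec */

  struct boot_slot {
          atomic_flag busy;
          unsigned long pc;
          unsigned long r0;
  };

  /* Claim the target CPU's boot-args slot; fail fast if a CPU_ON for the
   * same target is already in flight. */
  static int claim_boot_slot(struct boot_slot *slot,
                             unsigned long pc, unsigned long r0)
  {
          if (atomic_flag_test_and_set(&slot->busy))
                  return PSCI_RET_ON_PENDING;

          slot->pc = pc;
          slot->r0 = r0;
          return 0;
  }
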
3 years agoKVM: arm64: Add function to enter host from KVM nVHE hyp code
David Brazdil [Wed, 2 Dec 2020 18:41:16 +0000 (18:41 +0000)]
KVM: arm64: Add function to enter host from KVM nVHE hyp code

All nVHE hyp code is currently executed as handlers of host's HVCs. This
will change as nVHE starts intercepting host's PSCI CPU_ON SMCs. The
newly booted CPU will need to initialize EL2 state and then enter the
host. Add __host_enter function that branches into the existing
host state-restoring code after the trap handler would have returned.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-21-dbrazdil@google.com
3 years agoKVM: arm64: Extract __do_hyp_init into a helper function
David Brazdil [Wed, 2 Dec 2020 18:41:15 +0000 (18:41 +0000)]
KVM: arm64: Extract __do_hyp_init into a helper function

In preparation for adding a CPU entry point in nVHE hyp code, extract
most of __do_hyp_init hypervisor initialization code into a common
helper function. This will be invoked by the entry point to install KVM
on the newly booted CPU.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-20-dbrazdil@google.com
3 years agoKVM: arm64: Forward safe PSCI SMCs coming from host
David Brazdil [Wed, 2 Dec 2020 18:41:14 +0000 (18:41 +0000)]
KVM: arm64: Forward safe PSCI SMCs coming from host

Forward the following PSCI SMCs issued by host to EL3 as they do not
require the hypervisor's intervention. This assumes that EL3 correctly
implements the PSCI specification.

Only function IDs implemented in Linux are included.

Where both 32-bit and 64-bit variants exist, it is assumed that the host
will always use the 64-bit variant.

 * SMCs that only return information about the system
   * PSCI_VERSION        - PSCI version implemented by EL3
   * PSCI_FEATURES       - optional features supported by EL3
   * AFFINITY_INFO       - power state of core/cluster
   * MIGRATE_INFO_TYPE   - whether Trusted OS can be migrated
   * MIGRATE_INFO_UP_CPU - resident core of Trusted OS
 * operations which do not affect the hypervisor
   * MIGRATE             - migrate Trusted OS to a different core
   * SET_SUSPEND_MODE    - toggle OS-initiated mode
 * system shutdown/reset
   * SYSTEM_OFF
   * SYSTEM_RESET
   * SYSTEM_RESET2

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-19-dbrazdil@google.com
3 years agoKVM: arm64: Add offset for hyp VA <-> PA conversion
David Brazdil [Wed, 2 Dec 2020 18:41:13 +0000 (18:41 +0000)]
KVM: arm64: Add offset for hyp VA <-> PA conversion

Add a host-initialized constant to KVM nVHE hyp code for converting
between EL2 linear map virtual addresses and physical addresses.
Also add `__hyp_pa` macro that performs the conversion.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-18-dbrazdil@google.com
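
The conversion itself amounts to adding a host-initialized offset; a sketch
under assumed names (the exact definition lives in the nVHE hyp code):

  #include <stdint.h>

  /* Filled in once by the host at init time: PA minus VA for the EL2
   * linear map. */
  extern int64_t hyp_physvirt_offset;

  #define __hyp_pa(va)  ((uint64_t)(uintptr_t)(va) + hyp_physvirt_offset)
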
3 years agoKVM: arm64: Bootstrap PSCI SMC handler in nVHE EL2
David Brazdil [Wed, 2 Dec 2020 18:41:12 +0000 (18:41 +0000)]
KVM: arm64: Bootstrap PSCI SMC handler in nVHE EL2

Add a handler of PSCI SMCs in nVHE hyp code. The handler is initialized
with the version used by the host's PSCI driver and the function IDs it
was configured with. If the SMC function ID matches one of the
configured PSCI calls (for v0.1) or falls into the PSCI function ID
range (for v0.2+), the SMC is handled by the PSCI handler. For now, all
SMCs return PSCI_RET_NOT_SUPPORTED.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-17-dbrazdil@google.com
3 years agoKVM: arm64: Add SMC handler in nVHE EL2
David Brazdil [Wed, 2 Dec 2020 18:41:11 +0000 (18:41 +0000)]
KVM: arm64: Add SMC handler in nVHE EL2

Add handler of host SMCs in KVM nVHE trap handler. Forward all SMCs to
EL3 and propagate the result back to EL1. This is done in preparation
for validating host SMCs in KVM protected mode.

The implementation assumes that firmware uses SMCCC v1.2 or older. That
means x0-x17 can be used both for arguments and results, other GPRs are
preserved.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-16-dbrazdil@google.com
3 years agoKVM: arm64: Create nVHE copy of cpu_logical_map
David Brazdil [Wed, 2 Dec 2020 18:41:10 +0000 (18:41 +0000)]
KVM: arm64: Create nVHE copy of cpu_logical_map

When KVM starts validating host's PSCI requests, it will need to map
MPIDR back to the CPU ID. To this end, copy cpu_logical_map into nVHE
hyp memory when KVM is initialized.

Only copy the information for CPUs that are online at the point of KVM
initialization so that KVM rejects CPUs whose features were not checked
against the finalized capabilities.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-15-dbrazdil@google.com
3 years agoKVM: arm64: Support per_cpu_ptr in nVHE hyp code
David Brazdil [Wed, 2 Dec 2020 18:41:09 +0000 (18:41 +0000)]
KVM: arm64: Support per_cpu_ptr in nVHE hyp code

When compiling with __KVM_NVHE_HYPERVISOR__, redefine per_cpu_offset()
to __hyp_per_cpu_offset() which looks up the base of the nVHE per-CPU
region of the given cpu and computes its offset from the
.hyp.data..percpu section.

This enables use of per_cpu_ptr() helpers in nVHE hyp code. Until now
only this_cpu_ptr() was supported by setting TPIDR_EL2.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-14-dbrazdil@google.com
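
A rough sketch of the offset computation, with assumed names and layout:
each CPU's per-CPU copy lives at some base address, and its offset is that
base minus the start of the per-CPU template section:

  #define NR_CPUS 8

  extern char hyp_percpu_section_start[];        /* start of the template */
  extern unsigned long hyp_percpu_base[NR_CPUS]; /* base of each CPU's copy */

  static inline unsigned long hyp_per_cpu_offset(unsigned int cpu)
  {
          return hyp_percpu_base[cpu] -
                 (unsigned long)hyp_percpu_section_start;
  }

  /* per_cpu_ptr(ptr, cpu): shift a pointer into the template section to
   * the given CPU's copy of that variable. */
  #define per_cpu_ptr_sketch(ptr, cpu) \
          ((void *)((char *)(ptr) + hyp_per_cpu_offset(cpu)))
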
3 years agoKVM: arm64: Add .hyp.data..ro_after_init ELF section
David Brazdil [Wed, 2 Dec 2020 18:41:08 +0000 (18:41 +0000)]
KVM: arm64: Add .hyp.data..ro_after_init ELF section

Add rules for renaming the .data..ro_after_init ELF section in KVM nVHE
object files to .hyp.data..ro_after_init, linking it into the kernel
and mapping it in hyp at runtime.

The section is RW to the host, then mapped RO in hyp. The expectation is
that the host populates the variables in the section and they are never
changed by hyp afterwards.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-13-dbrazdil@google.com
3 years agoKVM: arm64: Init MAIR/TCR_EL2 from params struct
David Brazdil [Wed, 2 Dec 2020 18:41:07 +0000 (18:41 +0000)]
KVM: arm64: Init MAIR/TCR_EL2 from params struct

MAIR_EL2 and TCR_EL2 are currently initialized from their _EL1 values.
This will not work once KVM starts intercepting PSCI ON/SUSPEND SMCs
and initializing EL2 state before EL1 state.

Obtain the EL1 values during KVM init and store them in the init params
struct. The struct will stay in memory and can be used when booting new
cores.

Take the opportunity to copy the T0SZ value from idmap_t0sz during
KVM init rather than in .hyp.idmap.text. This avoids the need for the
idmap_t0sz symbol alias.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-12-dbrazdil@google.com
3 years agoKVM: arm64: Move hyp-init params to a per-CPU struct
David Brazdil [Wed, 2 Dec 2020 18:41:06 +0000 (18:41 +0000)]
KVM: arm64: Move hyp-init params to a per-CPU struct

Once we start initializing KVM on newly booted cores before the rest of
the kernel, parameters to __do_hyp_init will need to be provided by EL2
rather than EL1. At that point it will not be possible to pass its three
arguments directly because PSCI_CPU_ON only supports one context
argument.

Refactor __do_hyp_init to accept its parameters in a struct. This
prepares the code for KVM booting cores as well as removes any limits on
the number of __do_hyp_init arguments.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-11-dbrazdil@google.com
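
An illustrative shape for such a parameter block; field names are
hypothetical, the point being that a single physical address to the struct
fits PSCI_CPU_ON's one context argument:

  #include <stdint.h>

  struct hyp_init_params {
          uint64_t tpidr_el2;     /* per-CPU offset for EL2 */
          uint64_t stack_hyp_va;  /* hyp VA of this CPU's stack */
          uint64_t pgd_pa;        /* PA of the hyp page tables */
  };
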
3 years agoKVM: arm64: Remove vector_ptr param of hyp-init
David Brazdil [Wed, 2 Dec 2020 18:41:05 +0000 (18:41 +0000)]
KVM: arm64: Remove vector_ptr param of hyp-init

KVM precomputes the hyp VA of __kvm_hyp_host_vector, essentially a
constant (minus ASLR), before passing it to __kvm_hyp_init.
Now that we have alternatives for converting kimg VA to hyp VA, replace
this with computing the constant inside __kvm_hyp_init, thus removing
the need for an argument.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-10-dbrazdil@google.com
3 years agoarm64: Extract parts of el2_setup into a macro
David Brazdil [Wed, 2 Dec 2020 18:41:04 +0000 (18:41 +0000)]
arm64: Extract parts of el2_setup into a macro

When a CPU is booted in EL2, the kernel checks for VHE support and
initializes the CPU core accordingly. For nVHE it also installs the stub
vectors and drops down to EL1.

Once KVM gains the ability to boot cores without going through the
kernel entry point, it will need to initialize the CPU the same way.
Extract the relevant bits of el2_setup into an init_el2_state macro
with an argument specifying whether to initialize for VHE or nVHE.

The following ifdefs are removed:
 * CONFIG_ARM_GIC_V3 - always selected on arm64
 * CONFIG_COMPAT - hstr_el2 can be set even without 32-bit support

No functional change intended. Size of el2_setup increased by
148 bytes due to duplication.

Signed-off-by: David Brazdil <dbrazdil@google.com>
[maz: reworked to fit the new PSTATE initial setup code]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-9-dbrazdil@google.com
3 years agoarm64: Make cpu_logical_map() take unsigned int
David Brazdil [Wed, 2 Dec 2020 18:41:03 +0000 (18:41 +0000)]
arm64: Make cpu_logical_map() take unsigned int

CPU index should never be negative. Change the signature of
(set_)cpu_logical_map to take an unsigned int.

This still works even if the users treat the CPU index as an int,
and will allow the hypervisor's implementation to check that the index
is valid with a single upper-bound check.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-8-dbrazdil@google.com
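
With an unsigned index, a negative value passed by a careless caller wraps
to a large number, so a single upper-bound comparison covers both error
cases; a trivial sketch with an assumed limit:

  #include <stdbool.h>

  #define NR_CPUS 256U

  static inline bool cpu_index_valid(unsigned int cpu)
  {
          /* Also rejects "negative" inputs, which wrap to huge values. */
          return cpu < NR_CPUS;
  }
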
3 years agopsci: Add accessor for psci_0_1_function_ids
David Brazdil [Wed, 2 Dec 2020 18:41:02 +0000 (18:41 +0000)]
psci: Add accessor for psci_0_1_function_ids

Make it possible to retrieve a copy of the psci_0_1_function_ids struct.
This is useful for KVM if it is configured to intercept host's PSCI SMCs.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20201202184122.26046-7-dbrazdil@google.com
3 years agopsci: Replace psci_function_id array with a struct
David Brazdil [Wed, 2 Dec 2020 18:41:01 +0000 (18:41 +0000)]
psci: Replace psci_function_id array with a struct

Small refactor that replaces array of v0.1 function IDs indexed by an
enum of function-name constants with a struct of function IDs "indexed"
by field names. This is done in preparation for exposing the IDs to
other parts of the kernel. Exposing a struct avoids the need for
bounds checking.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20201202184122.26046-6-dbrazdil@google.com
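
A sketch of the shape described above; the real definition lives in the
PSCI headers, so treat the layout here as illustrative:

  #include <stdint.h>

  /* Fields "indexed" by name instead of an array indexed by an enum,
   * so consumers need no bounds checking. */
  struct psci_0_1_function_ids {
          uint32_t cpu_suspend;
          uint32_t cpu_on;
          uint32_t cpu_off;
          uint32_t migrate;
  };
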
3 years agopsci: Split functions to v0.1 and v0.2+ variants
David Brazdil [Wed, 2 Dec 2020 18:41:00 +0000 (18:41 +0000)]
psci: Split functions to v0.1 and v0.2+ variants

Refactor implementation of v0.1+ functions (CPU_SUSPEND, CPU_OFF,
CPU_ON, MIGRATE) to have two functions psci_0_1_foo / psci_0_2_foo that
select the function ID and call a common helper __psci_foo.

This is a small cleanup so that the function ID array is only used for
v0.1 configurations.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20201202184122.26046-5-dbrazdil@google.com
3 years agopsci: Support psci_ops.get_version for v0.1
David Brazdil [Wed, 2 Dec 2020 18:40:59 +0000 (18:40 +0000)]
psci: Support psci_ops.get_version for v0.1

KVM's host PSCI SMC filter needs to be aware of the PSCI version of the
system but currently it is impossible to distinguish between v0.1 and
PSCI disabled because both have get_version == NULL.

Populate get_version for v0.1 with a function that returns a constant.

psci_ops.get_version is currently unused so this has no effect on
existing functionality.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20201202184122.26046-4-dbrazdil@google.com
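
The v0.1 hook can be as simple as returning a constant; a minimal sketch,
with the version-encoding macro assumed rather than taken from the kernel:

  #include <stdint.h>

  /* Assumed encoding: major version in the upper half, minor in the lower. */
  #define PSCI_VERSION(maj, min)  ((uint32_t)(((maj) << 16) | (min)))

  static uint32_t psci_0_1_get_version(void)
  {
          return PSCI_VERSION(0, 1);
  }
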
3 years agoKVM: arm64: Add ARM64_KVM_PROTECTED_MODE CPU capability
David Brazdil [Wed, 2 Dec 2020 18:40:58 +0000 (18:40 +0000)]
KVM: arm64: Add ARM64_KVM_PROTECTED_MODE CPU capability

Expose the boolean value whether the system is running with KVM in
protected mode (nVHE + kernel param). CPU capability was selected over
a global variable to allow use in alternatives.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-3-dbrazdil@google.com
3 years agoKVM: arm64: Add kvm-arm.mode early kernel parameter
David Brazdil [Wed, 2 Dec 2020 18:40:57 +0000 (18:40 +0000)]
KVM: arm64: Add kvm-arm.mode early kernel parameter

Add an early parameter that allows users to select the mode of operation
for KVM/arm64.

For now, the only supported value is "protected". By passing this flag
users opt into the hypervisor placing additional restrictions on the
host kernel. These allow the hypervisor to spawn guests whose state is
kept private from the host. Restrictions will include stage-2 address
translation to prevent the host from accessing guest memory, filtering its
SMC calls, etc.

Without this parameter, the default behaviour remains selecting VHE/nVHE
based on hardware support and CONFIG_ARM64_VHE.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201202184122.26046-2-dbrazdil@google.com
3 years agoMerge remote-tracking branch 'arm64/for-next/uaccess' into HEAD
Marc Zyngier [Fri, 4 Dec 2020 08:43:37 +0000 (08:43 +0000)]
Merge remote-tracking branch 'arm64/for-next/uaccess' into HEAD

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge remote-tracking branch 'origin/kvm-arm64/csv3' into kvmarm-master/queue
Marc Zyngier [Thu, 3 Dec 2020 19:12:24 +0000 (19:12 +0000)]
Merge remote-tracking branch 'origin/kvm-arm64/csv3' into kvmarm-master/queue

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Use kvm_write_guest_lock when init stolen time
Keqian Zhu [Mon, 17 Aug 2020 11:07:28 +0000 (19:07 +0800)]
KVM: arm64: Use kvm_write_guest_lock when init stolen time

There is a locked version of kvm_write_guest(). Use it to simplify the code.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20200817110728.12196-3-zhukeqian1@huawei.com
3 years agoKVM: arm64: Some fixes of PV-time interface document
Keqian Zhu [Mon, 17 Aug 2020 11:07:27 +0000 (19:07 +0800)]
KVM: arm64: Some fixes of PV-time interface document

Rename PV_FEATURES to PV_TIME_FEATURES.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20200817110728.12196-2-zhukeqian1@huawei.com
3 years agoarm64: mark __system_matches_cap as __maybe_unused
Mark Rutland [Thu, 3 Dec 2020 15:24:03 +0000 (15:24 +0000)]
arm64: mark __system_matches_cap as __maybe_unused

Now that the PAN toggling has been removed, the only user of
__system_matches_cap() is has_generic_auth(), which is only built when
CONFIG_ARM64_PTR_AUTH is selected, and Qian reports that this results in
a build-time warning when CONFIG_ARM64_PTR_AUTH is not selected:

| arch/arm64/kernel/cpufeature.c:2649:13: warning: '__system_matches_cap' defined but not used [-Wunused-function]
|  static bool __system_matches_cap(unsigned int n)
|              ^~~~~~~~~~~~~~~~~~~~

It's tricky to restructure things to prevent this, so let's mark
__system_matches_cap() as __maybe_unused, as we used to do for the other
user of __system_matches_cap() which we just removed.

Reported-by: Qian Cai <qcai@redhat.com>
Suggested-by: Qian Cai <qcai@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20201203152403.26100-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: remove vestigial UAO support
Mark Rutland [Wed, 2 Dec 2020 13:15:58 +0000 (13:15 +0000)]
arm64: uaccess: remove vestigial UAO support

Now that arm64 no longer uses UAO, remove the vestigial feature detection
code and Kconfig text.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: remove redundant PAN toggling
Mark Rutland [Wed, 2 Dec 2020 13:15:57 +0000 (13:15 +0000)]
arm64: uaccess: remove redundant PAN toggling

Some code (e.g. futex) needs to make privileged accesses to userspace
memory, and uses uaccess_{enable,disable}_privileged() in order to
permit this. All other uaccess primitives use LDTR/STTR, and never need
to toggle PAN.

Remove the redundant PAN toggling.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: remove addr_limit_user_check()
Mark Rutland [Wed, 2 Dec 2020 13:15:56 +0000 (13:15 +0000)]
arm64: uaccess: remove addr_limit_user_check()

Now that set_fs() is gone, addr_limit_user_check() is redundant. Remove
the checks and associated thread flag.

To ensure that _TIF_WORK_MASK can be used as an immediate value in an
AND instruction (as it is in `ret_to_user`), TIF_MTE_ASYNC_FAULT is
renumbered to keep the constituent bits of _TIF_WORK_MASK contiguous.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-11-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: remove set_fs()
Mark Rutland [Wed, 2 Dec 2020 13:15:55 +0000 (13:15 +0000)]
arm64: uaccess: remove set_fs()

Now that the uaccess primitives don't take addr_limit into account, we
have no need to manipulate this via set_fs() and get_fs(). Remove
support for these, along with some infrastructure this renders
redundant.

We no longer need to flip UAO to access kernel memory under KERNEL_DS,
and head.S unconditionally clears UAO for all kernel configurations via
an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
nor do we need to context-switch it. However, we still need to adjust
PAN during SDEI entry.

Masking of __user pointers no longer needs to use the dynamic value of
addr_limit, and can use a constant derived from the maximum possible
userspace task size. A new TASK_SIZE_MAX constant is introduced for
this, which is also used by core code. In configurations supporting
52-bit VAs, this may include a region of unusable VA space above a
48-bit TTBR0 limit, but never includes any portion of TTBR1.

Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
KERNEL_DS were inclusive limits, and is converted to a mask by
subtracting one.

As the SDEI entry code repurposes the otherwise unnecessary
pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted
context, for now we rename that to pt_regs::sdei_ttbr1. In future we can
consider factoring that out.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess cleanup macro naming
Mark Rutland [Wed, 2 Dec 2020 13:15:54 +0000 (13:15 +0000)]
arm64: uaccess cleanup macro naming

Now the uaccess primitives use LDTR/STTR unconditionally, the
uao_{ldp,stp,user_alternative} asm macros are misnamed, and have a
redundant argument. Let's remove the redundant argument and rename these
to user_{ldp,stp,ldst} respectively to clean this up.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: split user/kernel routines
Mark Rutland [Wed, 2 Dec 2020 13:15:53 +0000 (13:15 +0000)]
arm64: uaccess: split user/kernel routines

This patch separates arm64's user and kernel memory access primitives
into distinct routines, adding new __{get,put}_kernel_nofault() helpers
to access kernel memory, upon which core code builds larger copy
routines.

The kernel access routines (using LDR/STR) are not affected by PAN (when
legitimately accessing kernel memory), nor are they affected by UAO.
Switching to KERNEL_DS may set UAO, but this does not adversely affect
the kernel access routines.

The user access routines (using LDTR/STTR) are not affected by PAN (when
legitimately accessing user memory), but are affected by UAO. As these
are only legitimate to use under USER_DS with UAO clear, this should not
be problematic.

Routines performing atomics to user memory (futex and deprecated
instruction emulation) still need to transiently clear PAN, and these
are left as-is. These are never used on kernel memory.

Subsequent patches will refactor the uaccess helpers to remove redundant
code, and will also remove the redundant PAN/UAO manipulation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-8-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: refactor __{get,put}_user
Mark Rutland [Wed, 2 Dec 2020 13:15:52 +0000 (13:15 +0000)]
arm64: uaccess: refactor __{get,put}_user

As a step towards implementing __{get,put}_kernel_nofault(), this patch
splits most user-memory specific logic out of __{get,put}_user(), with
the memory access and fault handling in new __{raw_get,put}_mem()
helpers.

For now the LDR/LDTR patching is left within the *get_mem() helpers, and
will be removed in a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: simplify __copy_user_flushcache()
Mark Rutland [Wed, 2 Dec 2020 13:15:51 +0000 (13:15 +0000)]
arm64: uaccess: simplify __copy_user_flushcache()

Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and
doesn't use uaccess_mask_ptr() on the user address. Let's have it call
raw_copy_from_user(), which is both a simplification and ensures that
user pointers are masked under speculation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: uaccess: rename privileged uaccess routines
Mark Rutland [Wed, 2 Dec 2020 13:15:50 +0000 (13:15 +0000)]
arm64: uaccess: rename privileged uaccess routines

We currently have many uaccess_*{enable,disable}*() variants, which
subsequent patches will cut down as part of removing set_fs() and
friends. Once this simplification is made, most uaccess routines will
only need to ensure that the user page tables are mapped in TTBR0, as is
currently dealt with by uaccess_ttbr0_{enable,disable}().

The existing uaccess_{enable,disable}() routines ensure that user page
tables are mapped in TTBR0, and also disable PAN protections, which is
necessary to be able to use atomics on user memory, but also permit
unrelated privileged accesses to access user memory.

As preparatory step, let's rename uaccess_{enable,disable}() to
uaccess_{enable,disable}_privileged(), highlighting this caveat and
discouraging wider misuse. Subsequent patches can reuse the
uaccess_{enable,disable}() naming for the common case of ensuring the
user page tables are mapped in TTBR0.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: sdei: explicitly simulate PAN/UAO entry
Mark Rutland [Wed, 2 Dec 2020 13:15:48 +0000 (13:15 +0000)]
arm64: sdei: explicitly simulate PAN/UAO entry

In preparation for removing addr_limit and set_fs() we must decouple the
SDEI PAN/UAO manipulation from the uaccess code, and explicitly
reinitialize these as required.

SDEI enters the kernel with a non-architectural exception, and prior to
the most recent revision of the specification (ARM DEN 0054B), PSTATE
bits (e.g. PAN, UAO) are not manipulated in the same way as for
architectural exceptions. Notably, older versions of the spec can be
read ambiguously as to whether PSTATE bits are inherited unchanged from
the interrupted context or whether they are generated from scratch, with
TF-A doing the latter.

We have three cases to consider:

1) The existing TF-A implementation of SDEI will clear PAN and clear UAO
   (along with other bits in PSTATE) when delivering an SDEI exception.

2) In theory, implementations of SDEI prior to revision B could inherit
   PAN and UAO (along with other bits in PSTATE) unchanged from the
   interrupted context. However, in practice such implementations do not
   exist.

3) Going forward, new implementations of SDEI must clear UAO, and
   depending on SCTLR_ELx.SPAN must either inherit or set PAN.

As we can ignore (2) we can assume that upon SDEI entry, UAO is always
clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN.
Therefore, we must explicitly initialize PAN, but do not need to do
anything for UAO.

Considering what we need to do:

* When set_fs() is removed, force_uaccess_begin() will have no HW
  side-effects. As this only clears UAO, which we can assume has already
  been cleared upon entry, this is not a problem. We do not need to add
  code to manipulate UAO explicitly.

* PAN may be cleared upon entry (in case 1 above), so where a kernel is
  built to use PAN and this is supported by all CPUs, the kernel must
  set PAN upon entry to ensure expected behaviour.

* PAN may be inherited from the interrupted context (in case 3 above),
  and so where a kernel is not built to use PAN or where PAN support is
  not uniform across CPUs, the kernel must clear PAN to ensure expected
  behaviour.

This patch reworks the SDEI code accordingly, explicitly setting PAN to
the expected state in all cases. To cater for the cases where the kernel
does not use PAN or this is not uniformly supported by hardware we add a
new cpu_has_pan() helper which can be used regardless of whether the
kernel is built to use PAN.

The existing system_uses_ttbr0_pan() is redefined in terms of
system_uses_hw_pan() both for clarity and as a minor optimization when
HW PAN is not selected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: sdei: move uaccess logic to arch/arm64/
Mark Rutland [Wed, 2 Dec 2020 13:15:47 +0000 (13:15 +0000)]
arm64: sdei: move uaccess logic to arch/arm64/

The SDEI support code is split across arch/arm64/ and drivers/firmware/,
largely so that the arch-specific portions are under arch/arm64/ and the
management logic is under drivers/firmware/. However, exception entry
fixups are currently under drivers/firmware/.

Let's move the exception entry fixups under arch/arm64/. This
de-clutters the management logic, and puts all the arch-specific
portions in one place. Doing this also allows the fixups to be applied
earlier, so things like PAN and UAO will be in a known good state before
we run other logic. This will also make subsequent refactoring easier.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: head.S: always initialize PSTATE
Mark Rutland [Fri, 13 Nov 2020 12:49:25 +0000 (12:49 +0000)]
arm64: head.S: always initialize PSTATE

As with SCTLR_ELx and other control registers, some PSTATE bits are
UNKNOWN out-of-reset, and we may not be able to rely on hardware or
firmware to initialize them to our liking prior to entry to the kernel,
e.g. in the primary/secondary boot paths and return from idle/suspend.

It would be more robust (and easier to reason about) if we consistently
initialized PSTATE to a default value, as we do with control registers.
This will ensure that the kernel is not adversely affected by bits it is
not aware of, e.g. when support for a feature such as PAN/UAO is
disabled.

This patch ensures that PSTATE is consistently initialized at boot time
via an ERET. This is not intended to relax the existing requirements
(e.g. DAIF bits must still be set prior to entering the kernel). For
features detected dynamically (which may require system-wide support),
it is still necessary to subsequently modify PSTATE.

As ERET is not always a Context Synchronization Event, an ISB is placed
before each exception return to ensure updates to control registers have
taken effect. This handles the kernel being entered with SCTLR_ELx.EOS
clear (or any future control bits being in an UNKNOWN state).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: head.S: cleanup SCTLR_ELx initialization
Mark Rutland [Fri, 13 Nov 2020 12:49:24 +0000 (12:49 +0000)]
arm64: head.S: cleanup SCTLR_ELx initialization

Let's make SCTLR_ELx initialization a bit clearer by using meaningful
names for the initialization values, following the same scheme for
SCTLR_EL1 and SCTLR_EL2.

These definitions will be used more widely in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: head.S: rename el2_setup -> init_kernel_el
Mark Rutland [Fri, 13 Nov 2020 12:49:23 +0000 (12:49 +0000)]
arm64: head.S: rename el2_setup -> init_kernel_el

For a while now el2_setup has performed some basic initialization of EL1
even when the kernel is booted at EL1, so the name is a little
misleading. Further, some comments are stale as with VHE it doesn't drop
the CPU to EL1.

To clarify things, rename el2_setup to init_kernel_el, and update
comments to be clearer as to the function's purpose.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: add C wrappers for SET_PSTATE_*()
Mark Rutland [Fri, 13 Nov 2020 12:49:22 +0000 (12:49 +0000)]
arm64: add C wrappers for SET_PSTATE_*()

To make callsites easier to read, add trivial C wrappers for the
SET_PSTATE_*() helpers, and convert trivial uses over to these. The new
wrappers will be used further in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: ensure ERET from kthread is illegal
Mark Rutland [Fri, 13 Nov 2020 12:49:21 +0000 (12:49 +0000)]
arm64: ensure ERET from kthread is illegal

For consistency, all tasks have a pt_regs reserved at the highest
portion of their task stack. Among other things, this ensures that a
task's SP is always pointing within its stack rather than pointing
immediately past the end.

While it is never legitimate to ERET from a kthread, we take pains to
initialize pt_regs for kthreads as if this were legitimate. As this is
never legitimate, the effects of an erroneous return are rarely tested.

Let's simplify things by initializing a kthread's pt_regs such that an
ERET is caught as an illegal exception return, and removing the explicit
initialization of other exception context. Note that as
spectre_v4_enable_task_mitigation() only manipulates the PSTATE within
the unused regs, this is safe to remove.

As user tasks will have their exception context initialized via
start_thread() or start_compat_thread(), this should only impact cases
where something has gone very wrong and we'd like that to be clearly
indicated.
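
A minimal sketch of the idea in copy_thread(), assuming the usual pt_regs
handling for kthreads (not the verbatim patch):

  struct pt_regs *childregs = task_pt_regs(p);

  if (unlikely(p->flags & PF_KTHREAD)) {
          /*
           * A kthread has no valid context to ERET to; make any such
           * attempt be caught as an illegal exception return.
           */
          memset(childregs, 0, sizeof(struct pt_regs));
          childregs->pstate = PSR_MODE_EL1h | PSR_IL;
  }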

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113124937.20574-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoKVM: arm64: Advertise ID_AA64PFR0_EL1.CSV3=1 if the CPUs are Meltdown-safe
Marc Zyngier [Thu, 26 Nov 2020 17:27:13 +0000 (17:27 +0000)]
KVM: arm64: Advertise ID_AA64PFR0_EL1.CSV3=1 if the CPUs are Meltdown-safe

Cores that predate the introduction of ID_AA64PFR0_EL1.CSV3 to
the ARMv8 architecture have this field set to 0, even if some of
them are not affected by the vulnerability.

The kernel maintains a list of unaffected cores (A53, A55 and a few
others) so that it doesn't impose an expensive mitigation unnecessarily.

As we do for CSV2, let's expose the CSV3 property to guests that run
on HW that is effectively not vulnerable. This can be reset to zero
by writing to the ID register from userspace, ensuring that VMs can
be migrated despite the new property being set.
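
Sketched in terms of the ID register emulation (names such as
ID_AA64PFR0_CSV3_SHIFT come from <asm/sysreg.h>; the exact patch, and the
userspace write path that clears the field, may differ):

  if (id == SYS_ID_AA64PFR0_EL1 &&
      arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
          val |= (1UL << ID_AA64PFR0_CSV3_SHIFT);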

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Delay the polling of the GICR_VPENDBASER.Dirty bit
Shenming Lu [Sat, 28 Nov 2020 14:18:57 +0000 (22:18 +0800)]
KVM: arm64: Delay the polling of the GICR_VPENDBASER.Dirty bit

In order to reduce the impact of the VPT parsing happening on the GIC,
we can split the vcpu residency in two phases:

- programming GICR_VPENDBASER: this still happens in vcpu_load()
- checking for the VPT parsing to be complete: this can happen
  on vcpu entry (in kvm_vgic_flush_hwstate())

This allows the GIC and the CPU to work in parallel, removing some
of the entry overhead.
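
Roughly, the second phase boils down to a poll of the Dirty bit (a hedged
sketch; the helper name and the exact polling primitive are assumptions,
not necessarily what the patch uses):

  static void its_wait_vpt_parse_complete(void)
  {
          void __iomem *vlpi_base = gic_data_rdist_vlpi_base();

          /* Spin until the GIC has finished parsing the VPT */
          while (readq_relaxed(vlpi_base + GICR_VPENDBASER) &
                 GICR_VPENDBASER_Dirty)
                  cpu_relax();
  }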

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201128141857.983-3-lushenming@huawei.com
3 years agoarm64: Make the Meltdown mitigation state available
Marc Zyngier [Thu, 26 Nov 2020 17:25:30 +0000 (17:25 +0000)]
arm64: Make the Meltdown mitigation state available

Our Meltdown mitigation state isn't exposed outside of cpufeature.c,
contrary to the rest of the Spectre mitigation state. As we are going
to use it in KVM, expose an arm64_get_meltdown_state() helper which
returns the same possible values as arm64_get_spectre_v?_state().
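
A sketch of what such a helper can look like, reusing the existing
__meltdown_safe tracking in cpufeature.c (details may differ from the
actual patch):

  enum mitigation_state arm64_get_meltdown_state(void)
  {
          if (__meltdown_safe)
                  return SPECTRE_UNAFFECTED;

          if (arm64_kernel_unmapped_at_el0())
                  return SPECTRE_MITIGATED;

          return SPECTRE_VULNERABLE;
  }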

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge branch 'kvm-arm64/misc-5.11' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 19:48:24 +0000 (19:48 +0000)]
Merge branch 'kvm-arm64/misc-5.11' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge branch 'kvm-arm64/cache-demux' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 19:48:12 +0000 (19:48 +0000)]
Merge branch 'kvm-arm64/cache-demux' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: selftests: Filter out DEMUX registers
Andrew Jones [Thu, 26 Nov 2020 13:46:41 +0000 (14:46 +0100)]
KVM: arm64: selftests: Filter out DEMUX registers

DEMUX register presence depends on the host's hardware (the
CLIDR_EL1 register to be precise). This means there's no set
of them that we can bless and that it's possible to encounter
new ones when running on different hardware (which would
generate "Consider adding them ..." messages, but we'll never
want to add them).

Remove the ones we have in the blessed list and filter them
out of the new list, but also provide a new command line switch
to list them if one so desires.
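
The filtering itself can be as simple as checking the coprocessor class
of each register ID (a sketch against the uapi encoding, not the verbatim
selftest change):

  static bool reg_is_demux(__u64 id)
  {
          /* DEMUX registers depend on the host's CLIDR_EL1, so skip them */
          return (id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX;
  }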

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201126134641.35231-3-drjones@redhat.com
3 years agoKVM: arm64: CSSELR_EL1 max is 13
Andrew Jones [Thu, 26 Nov 2020 13:46:40 +0000 (14:46 +0100)]
KVM: arm64: CSSELR_EL1 max is 13

Not counting TnD, which KVM doesn't currently consider, CSSELR_EL1
can have a maximum value of 0b1101 (13), which corresponds to an
instruction cache at level 7. With CSSELR_MAX set to 12 we can
only select up to cache level 6. Change it to 14.
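
For reference, Level in CSSELR_EL1[3:1] encodes the cache level minus one
and InD is bit [0], so a level 7 instruction cache is selected by:

  ((7 - 1) << 1) | 1 = 0b1101 = 13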

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201126134641.35231-2-drjones@redhat.com
3 years agoKVM: arm64: Remove unused __extended_idmap_trampoline() prototype
Will Deacon [Wed, 18 Nov 2020 19:44:02 +0000 (19:44 +0000)]
KVM: arm64: Remove unused __extended_idmap_trampoline() prototype

__extended_idmap_trampoline() was removed a long time ago by
3421e9d88d7a ("arm64: KVM: Simplify HYP init/teardown") so remove the
unused function prototype.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201118194402.2892-4-will@kernel.org
3 years agoKVM: arm64: Remove kvm_arch_vm_ioctl_check_extension()
Will Deacon [Wed, 18 Nov 2020 19:44:01 +0000 (19:44 +0000)]
KVM: arm64: Remove kvm_arch_vm_ioctl_check_extension()

kvm_arch_vm_ioctl_check_extension() is only called from
kvm_vm_ioctl_check_extension(), so we can inline it and remove the extra
function.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201118194402.2892-3-will@kernel.org
3 years agoKVM: arm64: Move 'struct kvm_arch_memory_slot' out of uapi/
Will Deacon [Wed, 18 Nov 2020 19:44:00 +0000 (19:44 +0000)]
KVM: arm64: Move 'struct kvm_arch_memory_slot' out of uapi/

'struct kvm_arch_memory_slot' isn't part of the user ABI, so move it out
of the uapi/ headers in case we start using it in future and accidentally
back ourselves into a corner.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201118194402.2892-2-will@kernel.org
3 years agoMerge branch 'kvm-arm64/vector-rework' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 11:47:08 +0000 (11:47 +0000)]
Merge branch 'kvm-arm64/vector-rework' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge branch 'kvm-arm64/pmu-undef' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 11:46:47 +0000 (11:46 +0000)]
Merge branch 'kvm-arm64/pmu-undef' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Get rid of the PMU ready state
Marc Zyngier [Fri, 13 Nov 2020 16:42:08 +0000 (16:42 +0000)]
KVM: arm64: Get rid of the PMU ready state

The PMU ready state has no user left. Goodbye.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Gate kvm_pmu_update_state() on the PMU feature
Marc Zyngier [Fri, 13 Nov 2020 16:41:40 +0000 (16:41 +0000)]
KVM: arm64: Gate kvm_pmu_update_state() on the PMU feature

We currently gate the update of the PMU state on the PMU being "ready".
The "ready" state is only set to true when the first vcpu run is
successful, and if it isn't, we never reach the update code.

So the "ready" state is never the right thing to check for, and it
should instead be the presence of the PMU feature, which makes
a bit more sense.
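
In outline (a sketch of the gating only; the body of the update path is
abbreviated and may not match the patch exactly):

  static void kvm_pmu_update_state(struct kvm_vcpu *vcpu)
  {
          struct kvm_pmu *pmu = &vcpu->arch.pmu;
          bool overflow;

          /* Gate on the feature rather than the old "ready" flag */
          if (!kvm_vcpu_has_pmu(vcpu))
                  return;

          overflow = !!kvm_pmu_overflow_status(vcpu);
          if (pmu->irq_level == overflow)
                  return;

          pmu->irq_level = overflow;
          /* ... propagate the new level to the interrupt controller ... */
  }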

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Remove dead PMU sysreg decoding code
Marc Zyngier [Thu, 12 Nov 2020 18:50:06 +0000 (18:50 +0000)]
KVM: arm64: Remove dead PMU sysreg decoding code

The handling of traps in access_pmu_evcntr() has a couple of
omminous "else return false;" statements that don't make any sense:
the decoding tree coverse all the registers that trap to this handler,
and returning false implies that we change PC, which we don't.

Get rid of what is evidently dead code.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Remove PMU RAZ/WI handling
Marc Zyngier [Fri, 13 Nov 2020 14:12:53 +0000 (14:12 +0000)]
KVM: arm64: Remove PMU RAZ/WI handling

There is no RAZ/WI handling allowed for the PMU registers in the
ARMv8 architecture. Nobody can remember how we came to the conclusion
that we could do this, but the ARMv8 ARM is pretty clear that we cannot.

Remove the RAZ/WI handling of the PMU system registers when it is
not configured.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Inject UNDEF on PMU access when no PMU configured
Marc Zyngier [Thu, 12 Nov 2020 18:49:28 +0000 (18:49 +0000)]
KVM: arm64: Inject UNDEF on PMU access when no PMU configured

The ARMv8 architecture says that in the absence of FEAT_PMUv3,
all the PMU-related registers generate an UNDEF. Let's make
sure that all our PMU handlers catch this case by hooking into
check_pmu_access_disabled(), and add checks in a couple of
other places.

Note that we still cannot deliver an exception into the guest
as the offending cases are already caught by the RAZ/WI handling.
But this puts us one step closer to architectural compliance.
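
The shape of the check being added is roughly the following (a hedged
sketch; in the patch it hooks into check_pmu_access_disabled() rather
than a new helper):

  static bool pmu_access_undefined(struct kvm_vcpu *vcpu)
  {
          if (kvm_vcpu_has_pmu(vcpu))
                  return false;

          /* No PMU: accessing any PMU register must UNDEF */
          kvm_inject_undefined(vcpu);
          return true;
  }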

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Refuse illegal KVM_ARM_VCPU_PMU_V3 at reset time
Marc Zyngier [Thu, 12 Nov 2020 18:13:27 +0000 (18:13 +0000)]
KVM: arm64: Refuse illegal KVM_ARM_VCPU_PMU_V3 at reset time

We allow a PMU to be configured when a vcpu is created, even if the
HW (or the host) doesn't support it. This results in failures
when attributes get set, which is a bit odd, as we should have
failed the vcpu creation in the first place.

Move the check to the point where we check the vcpu feature set,
and fail early if we cannot support a PMU. This further simplifies
the attribute handling.
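
Sketched at the feature-set check (the exact location and error path are
assumptions, not the verbatim patch):

  if (test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) &&
      !kvm_arm_support_pmu_v3())
          return -EINVAL;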

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Set ID_AA64DFR0_EL1.PMUVer to 0 when no PMU support
Marc Zyngier [Thu, 12 Nov 2020 18:00:30 +0000 (18:00 +0000)]
KVM: arm64: Set ID_AA64DFR0_EL1.PMUVer to 0 when no PMU support

We always expose the HW view of the PMU in ID_AA64DFR0_EL1.PMUVer,
even when the PMU feature is disabled, while the architecture
says that FEAT_PMUv3 not being implemented should result in this
field being zero.

Let's follow the architecture's guidance.
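
In terms of the ID register emulation, the effect is roughly (a sketch;
the patch may use the existing feature-capping helpers instead):

  if (id == SYS_ID_AA64DFR0_EL1 && !kvm_vcpu_has_pmu(vcpu))
          val &= ~(0xfUL << ID_AA64DFR0_PMUVER_SHIFT);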

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Refuse to run VCPU if PMU is not initialized
Alexandru Elisei [Thu, 26 Nov 2020 14:49:16 +0000 (14:49 +0000)]
KVM: arm64: Refuse to run VCPU if PMU is not initialized

When enabling the PMU in kvm_arm_pmu_v3_enable(), KVM returns early if the
PMU's 'created' flag is false and skips any other checks.
is gated only on the VCPU feature being set, this makes it possible for
userspace to get away with setting the VCPU feature but not doing any
initialization for the PMU. Fix it by returning an error when trying to run
the VCPU if the PMU hasn't been initialized correctly.

The PMU is marked as created only if the interrupt ID has been set when
using an in-kernel irqchip. This means the same check in
kvm_arm_pmu_v3_enable() is redundant, remove it.
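
A sketch of the new check performed on the first vcpu run (field names
are taken from the existing PMU bookkeeping; details may differ):

  if (kvm_vcpu_has_pmu(vcpu) && !vcpu->arch.pmu.created)
          return -EINVAL;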

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201126144916.164075-1-alexandru.elisei@arm.com
3 years agoKVM: arm64: Add kvm_vcpu_has_pmu() helper
Marc Zyngier [Fri, 13 Nov 2020 16:39:44 +0000 (16:39 +0000)]
KVM: arm64: Add kvm_vcpu_has_pmu() helper

There are a number of places where we check for the KVM_ARM_VCPU_PMU_V3
feature. Wrap this check into a new kvm_vcpu_has_pmu(), and use
it at the existing locations.
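
The helper is essentially a feature-bit test (sketch):

  #define kvm_vcpu_has_pmu(vcpu)                                  \
          (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))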

No functional change.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge branch 'kvm-arm64/host-hvc-table' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 11:33:27 +0000 (11:33 +0000)]
Merge branch 'kvm-arm64/host-hvc-table' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge branch 'kvm-arm64/copro-no-more' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 11:33:16 +0000 (11:33 +0000)]
Merge branch 'kvm-arm64/copro-no-more' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoMerge branch 'kvm-arm64/el2-pc' into kvmarm-master/next
Marc Zyngier [Fri, 27 Nov 2020 11:33:10 +0000 (11:33 +0000)]
Merge branch 'kvm-arm64/el2-pc' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception
Marc Zyngier [Sat, 24 Oct 2020 09:56:55 +0000 (10:56 +0100)]
KVM: arm64: Avoid repetitive stack access on host EL1 to EL2 exception

Registers x0/x1 get repeatedly pushed and popped during a host
HVC call. Instead, leave the registers on the stack, trading
a store instruction on the fast path for an add on the slow path.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
3 years agoKVM: arm64: Simplify __kvm_enable_ssbs()
Marc Zyngier [Wed, 21 Oct 2020 20:52:54 +0000 (21:52 +0100)]
KVM: arm64: Simplify __kvm_enable_ssbs()

Move the setting of SSBS directly into the HVC handler, using
the C helpers rather than the inline assembly code.
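
A sketch of what the C version in the HVC handler can look like (assuming
the usual EL2 sysreg accessors; not necessarily the verbatim patch):

  static void handle___kvm_enable_ssbs(struct kvm_cpu_context *host_ctxt)
  {
          u64 tmp;

          tmp = read_sysreg_el2(SYS_SCTLR);
          tmp |= SCTLR_ELx_DSSBS;
          write_sysreg_el2(tmp, SYS_SCTLR);
  }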

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>