linux-2.6-microblaze.git
Catalin Marinas [Thu, 6 Oct 2022 16:33:54 +0000 (17:33 +0100)]
arm64: mte: Avoid setting PG_mte_tagged if no tags cleared or restored

Prior to commit 69e3b846d8a7 ("arm64: mte: Sync tags for pages where PTE
is untagged"), mte_sync_tags() was only called for pte_tagged() entries
(those mapped with PROT_MTE). Therefore mte_sync_tags() could safely use
test_and_set_bit(PG_mte_tagged, &page->flags) without inadvertently
setting PG_mte_tagged on an untagged page.

The above commit was required as guests may enable MTE without any
control at the stage 2 mapping or a PROT_MTE mapping in the VMM.
However, the side-effect was that any page with a PTE that looked like
swap (or migration) was getting PG_mte_tagged set automatically. A
subsequent page copy (e.g. migration) copied the tags to the destination
page even if the tags were owned by KASAN.

This issue was masked by the page_kasan_tag_reset() call introduced in
commit e5b8d9218951 ("arm64: mte: reset the page tag in page->flags").
When this commit was reverted (20794545c146), KASAN started reporting
access faults because the overriding tags in a page did not match the
original page->flags (with CONFIG_KASAN_HW_TAGS=y):

  BUG: KASAN: invalid-access in copy_page+0x10/0xd0 arch/arm64/lib/copy_page.S:26
  Read at addr f5ff000017f2e000 by task syz-executor.1/2218
  Pointer tag: [f5], memory tag: [f2]

Move the PG_mte_tagged bit setting from mte_sync_tags() to the actual
place where tags are cleared (mte_sync_page_tags()) or restored
(mte_restore_tags()).
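
A rough sketch of the resulting shape (simplified; the real helpers take
more context than shown here):

  /* before: mte_sync_tags() set the flag for every page it looked at */
  if (!test_and_set_bit(PG_mte_tagged, &page->flags))
          mte_sync_page_tags(page, old_pte, check_swap, pte_is_tagged);

  /* after (sketch): the flag is only set where tags are actually written */
  mte_clear_page_tags(page_address(page));
  set_bit(PG_mte_tagged, &page->flags);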

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: syzbot+c2c79c6d6eddc5262b77@syzkaller.appspotmail.com
Fixes: 69e3b846d8a7 ("arm64: mte: Sync tags for pages where PTE is untagged")
Cc: <stable@vger.kernel.org> # 5.14.x
Cc: Steven Price <steven.price@arm.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/0000000000004387dc05e5888ae5@google.com/
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20221006163354.3194102-1-catalin.marinas@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Lukas Bulwahn [Thu, 29 Sep 2022 12:29:37 +0000 (14:29 +0200)]
MAINTAINERS: rectify file entry in ALIBABA PMU DRIVER

Commit cf7b61073e45 ("drivers/perf: add DDR Sub-System Driveway PMU driver
for Yitian 710 SoC") adds the DDR Sub-System Driveway PMU driver here:

  drivers/perf/alibaba_uncore_drw_pmu.c

The file entry in MAINTAINERS for the ALIBABA PMU DRIVER, introduced with
commit d813a19e7d2e ("MAINTAINERS: add maintainers for Alibaba' T-Head PMU
driver"), however refers to:

  drivers/perf/alibaba_uncore_dwr_pmu.c

Note the swapping of characters.

Hence, ./scripts/get_maintainer.pl --self-test=patterns complains about a
broken file pattern.

Repair this file entry in ALIBABA PMU DRIVER.
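
The fix amounts to swapping the characters back in the F: entry (per the
two paths quoted above):

  -F:  drivers/perf/alibaba_uncore_dwr_pmu.c
  +F:  drivers/perf/alibaba_uncore_drw_pmu.c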

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220929122937.20132-1-lukas.bulwahn@gmail.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Geert Uytterhoeven [Tue, 27 Sep 2022 13:37:16 +0000 (15:37 +0200)]
drivers/perf: ALIBABA_UNCORE_DRW_PMU should depend on ACPI

The Alibaba T-Head Yitian 710 DDR Sub-system Driveway PMU driver relies
solely on ACPI for matching.  Hence add a dependency on ACPI, to prevent
asking the user about this driver when configuring a kernel without ACPI
support.
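
A sketch of the intended dependency; the exact Kconfig expression in the
tree may differ:

  config ALIBABA_UNCORE_DRW_PMU
          tristate "Alibaba T-Head Yitian 710 DDR Sub-system Driveway PMU driver"
          # only ACPI-based matching is implemented, so require ACPI
          depends on (ARM64 && ACPI) || COMPILE_TEST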

Fixes: cf7b61073e45 ("drivers/perf: add DDR Sub-System Driveway PMU driver for Yitian 710 SoC")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/2a4407bb598285660fa5e604e56823ddb12bb0aa.1664285774.git.geert+renesas@glider.be
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Sun Ke [Sat, 24 Sep 2022 03:21:27 +0000 (11:21 +0800)]
drivers/perf: fix return value check in ali_drw_pmu_probe()

In case of error, devm_ioremap_resource() returns ERR_PTR(),
and never returns NULL. The NULL test in the return value
check should be replaced with IS_ERR().
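
A minimal sketch of the corrected pattern (the local variable name is
illustrative):

  base = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(base))
          return PTR_ERR(base);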

Fixes: cf7b61073e45 ("drivers/perf: add DDR Sub-System Driveway PMU driver for Yitian 710 SoC")
Signed-off-by: Sun Ke <sunke32@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220924032127.313156-1-sunke32@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
James Morse [Fri, 30 Sep 2022 13:19:59 +0000 (14:19 +0100)]
arm64: errata: Add Cortex-A55 to the repeat tlbi list

Cortex-A55 is affected by an erratum where in rare circumstances the
CPUs may not handle a race between a break-before-make sequence on one
CPU, and another CPU accessing the same page. This could allow a store
to a page that has been unmapped.

Work around this by adding the affected CPUs to the list that needs
TLB sequences to be done twice.
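
The change boils down to a new entry in the repeat-TLBI erratum list,
roughly as follows (the erratum/Kconfig numbering is from memory and
should be checked against the tree):

  #ifdef CONFIG_ARM64_ERRATUM_2441007
          {
                  /* Cortex-A55, all revisions */
                  ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
          },
  #endif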

Signed-off-by: James Morse <james.morse@arm.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20220930131959.3082594-1-james.morse@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Wed, 5 Oct 2022 18:16:42 +0000 (19:16 +0100)]
arm64/sysreg: Fix typo in SCTR_EL1.SPINTMASK

SPINTMASK was typoed as SPINMASK, fix it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221005181642.711734-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Nathan Chancellor [Mon, 3 Oct 2022 19:37:59 +0000 (12:37 -0700)]
arm64: alternatives: Use vdso/bits.h instead of linux/bits.h

When building with CONFIG_LTO after commit ba00c2a04fa5 ("arm64: fix the
build with binutils 2.27"), the following build error occurs:

  In file included from arch/arm64/kernel/module-plts.c:6:
  In file included from include/linux/elf.h:6:
  In file included from arch/arm64/include/asm/elf.h:8:
  In file included from arch/arm64/include/asm/hwcap.h:9:
  In file included from arch/arm64/include/asm/cpufeature.h:9:
  In file included from arch/arm64/include/asm/alternative-macros.h:5:
  In file included from include/linux/bits.h:22:
  In file included from include/linux/build_bug.h:5:
  In file included from include/linux/compiler.h:248:
  In file included from arch/arm64/include/asm/rwonce.h:71:
  include/asm-generic/rwonce.h:67:9: error: expected string literal in 'asm'
          return __READ_ONCE(*(unsigned long *)addr);
                ^
  arch/arm64/include/asm/rwonce.h:43:16: note: expanded from macro '__READ_ONCE'
                  asm volatile(__LOAD_RCPC(b, %w0, %1)                    \
                              ^
  arch/arm64/include/asm/rwonce.h:17:2: note: expanded from macro '__LOAD_RCPC'
          ALTERNATIVE(                                                    \
          ^

Similar to the issue resolved by commit 0072dc1b53c3 ("arm64: avoid
BUILD_BUG_ON() in alternative-macros"), there is a circular include
dependency through <linux/bits.h> when CONFIG_LTO is enabled due to
<asm/rwonce.h> appearing in the include chain before the contents of
<asm/alternative-macros.h>, which results in ALTERNATIVE() not getting
expanded properly because it has not been defined yet.

Avoid this issue by including <vdso/bits.h>, which includes the
definition of the BIT() macro, instead of <linux/bits.h>, as BIT() is the
only macro from bits.h that is relevant to this header.
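
The fix is essentially a one-line include change in alternative-macros.h:

  -#include <linux/bits.h>
  +#include <vdso/bits.h>   /* provides BIT() without the circular include */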

Fixes: ba00c2a04fa5 ("arm64: fix the build with binutils 2.27")
Link: https://github.com/ClangBuiltLinux/linux/issues/1728
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20221003193759.1141709-1-nathan@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Catalin Marinas [Fri, 30 Sep 2022 08:18:26 +0000 (09:18 +0100)]
Merge branch 'for-next/misc' into for-next/core

* for-next/misc:
  : Miscellaneous patches
  arm64/kprobe: Optimize the performance of patching single-step slot
  ARM64: reloc_test: add __init/__exit annotations to module init/exit funcs
  arm64/mm: fold check for KFENCE into can_set_direct_map()
  arm64: uaccess: simplify uaccess_mask_ptr()
  arm64: mte: move register initialization to C
  arm64: mm: handle ARM64_KERNEL_USES_PMD_MAPS in vmemmap_populate()
  arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()
  arm64: support huge vmalloc mappings
  arm64: spectre: increase parameters that can be used to turn off bhb mitigation individually
  arm64: run softirqs on the per-CPU IRQ stack
  arm64: compat: Implement misalignment fixups for multiword loads

Catalin Marinas [Fri, 30 Sep 2022 08:18:22 +0000 (09:18 +0100)]
Merge branch 'for-next/alternatives' into for-next/core

* for-next/alternatives:
  : Alternatives (code patching) improvements
  arm64: fix the build with binutils 2.27
  arm64: avoid BUILD_BUG_ON() in alternative-macros
  arm64: alternatives: add shared NOP callback
  arm64: alternatives: add alternative_has_feature_*()
  arm64: alternatives: have callbacks take a cap
  arm64: alternatives: make alt_region const
  arm64: alternatives: hoist print out of __apply_alternatives()
  arm64: alternatives: proton-pack: prepare for cap changes
  arm64: alternatives: kvm: prepare for cap changes
  arm64: cpufeature: make cpus_have_cap() noinstr-safe

Catalin Marinas [Fri, 30 Sep 2022 08:18:11 +0000 (09:18 +0100)]
Merge branch 'for-next/kselftest' into for-next/core

* for-next/kselftest: (28 commits)
  : Kselftest updates for arm64
  kselftest/arm64: Handle EINTR while reading data from children
  kselftest/arm64: Flag fp-stress as exiting when we begin finishing up
  kselftest/arm64: Don't repeat termination handler for fp-stress
  kselftest/arm64: Don't enable v8.5 for MTE selftest builds
  kselftest/arm64: Fix typo in hwcap check
  kselftest/arm64: Add hwcap test for RNG
  kselftest/arm64: Add SVE 2 to the tested hwcaps
  kselftest/arm64: Add missing newline in hwcap output
  kselftest/arm64: Fix spelling misakes of signal names
  kselftest/arm64: Enforce actual ABI for SVE syscalls
  kselftest/arm64: Correct buffer allocation for SVE Z registers
  kselftest/arm64: Include larger SVE and SME VLs in signal tests
  kselftest/arm64: Allow larger buffers in get_signal_context()
  kselftest/arm64: Preserve any EXTRA_CONTEXT in handle_signal_copyctx()
  kselftest/arm64: Validate contents of EXTRA_CONTEXT blocks
  kselftest/arm64: Only validate each signal context once
  kselftest/arm64: Remove unneeded protype for validate_extra_context()
  kselftest/arm64: Fix validation of EXTRA_CONTEXT signal context location
  kselftest/arm64: Fix validatation termination record after EXTRA_CONTEXT
  kselftest/arm64: Validate signal ucontext in place
  ...

Catalin Marinas [Fri, 30 Sep 2022 08:17:57 +0000 (09:17 +0100)]
Merge branches 'for-next/doc', 'for-next/sve', 'for-next/sysreg', 'for-next/gettimeofday', 'for-next/stacktrace', 'for-next/atomics', 'for-next/el1-exceptions', 'for-next/a510-erratum-2658417', 'for-next/defconfig', 'for-next/tpidr2_el0' and 'for-next/ftrace', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* arm64/for-next/perf:
  arm64: asm/perf_regs.h: Avoid C++-style comment in UAPI header
  arm64/sve: Add Perf extensions documentation
  perf: arm64: Add SVE vector granule register to user regs
  MAINTAINERS: add maintainers for Alibaba' T-Head PMU driver
  drivers/perf: add DDR Sub-System Driveway PMU driver for Yitian 710 SoC
  docs: perf: Add description for Alibaba's T-Head PMU driver

* for-next/doc:
  : Documentation/arm64 updates
  arm64/sve: Document our actual ABI for clearing registers on syscall

* for-next/sve:
  : SVE updates
  arm64/sysreg: Add hwcap for SVE EBF16

* for-next/sysreg: (35 commits)
  : arm64 system registers generation (more conversions)
  arm64/sysreg: Fix a few missed conversions
  arm64/sysreg: Convert ID_AA64AFRn_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64DFR1_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64FDR0_EL1 to automatic generation
  arm64/sysreg: Use feature numbering for PMU and SPE revisions
  arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names
  arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture
  arm64/sysreg: Add defintion for ALLINT
  arm64/sysreg: Convert SCXTNUM_EL1 to automatic generation
  arm64/sysreg: Convert TIPDR_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64PFR1_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64PFR0_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64MMFR2_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64MMFR1_EL1 to automatic generation
  arm64/sysreg: Convert ID_AA64MMFR0_EL1 to automatic generation
  arm64/sysreg: Convert HCRX_EL2 to automatic generation
  arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 SME enumeration
  arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 BTI enumeration
  arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 fractional version fields
  arm64/sysreg: Standardise naming for MTE feature enumeration
  ...

* for-next/gettimeofday:
  : Use self-synchronising counter access in gettimeofday() (if FEAT_ECV)
  arm64: vdso: use SYS_CNTVCTSS_EL0 for gettimeofday
  arm64: alternative: patch alternatives in the vDSO
  arm64: module: move find_section to header

* for-next/stacktrace:
  : arm64 stacktrace cleanups and improvements
  arm64: stacktrace: track hyp stacks in unwinder's address space
  arm64: stacktrace: track all stack boundaries explicitly
  arm64: stacktrace: remove stack type from fp translator
  arm64: stacktrace: rework stack boundary discovery
  arm64: stacktrace: add stackinfo_on_stack() helper
  arm64: stacktrace: move SDEI stack helpers to stacktrace code
  arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()
  arm64: stacktrace: simplify unwind_next_common()
  arm64: stacktrace: fix kerneldoc comments

* for-next/atomics:
  : arm64 atomics improvements
  arm64: atomic: always inline the assembly
  arm64: atomics: remove LL/SC trampolines

* for-next/el1-exceptions:
  : Improve the reporting of EL1 exceptions
  arm64: rework BTI exception handling
  arm64: rework FPAC exception handling
  arm64: consistently pass ESR_ELx to die()
  arm64: die(): pass 'err' as long
  arm64: report EL1 UNDEFs better

* for-next/a510-erratum-2658417:
  : Cortex-A510: 2658417: remove BF16 support due to incorrect result
  arm64: errata: remove BF16 HWCAP due to incorrect result on Cortex-A510
  arm64: cpufeature: Expose get_arm64_ftr_reg() outside cpufeature.c
  arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space

* for-next/defconfig:
  : arm64 defconfig updates
  arm64: defconfig: Add Coresight as module
  arm64: Enable docker support in defconfig
  arm64: defconfig: Enable memory hotplug and hotremove config
  arm64: configs: Enable all PMUs provided by Arm

* for-next/tpidr2_el0:
  : arm64 ptrace() support for TPIDR2_EL0
  kselftest/arm64: Add coverage of TPIDR2_EL0 ptrace interface
  arm64/ptrace: Support access to TPIDR2_EL0
  arm64/ptrace: Document extension of NT_ARM_TLS to cover TPIDR2_EL0
  kselftest/arm64: Add test coverage for NT_ARM_TLS

* for-next/ftrace:
  : arm64 ftraces updates/fixes
  arm64: ftrace: fix module PLTs with mcount
  arm64: module: Remove unused plt_entry_is_initialized()
  arm64: module: Make plt_equals_entry() static

Liao Chang [Tue, 27 Sep 2022 02:24:35 +0000 (10:24 +0800)]
arm64/kprobe: Optimize the performance of patching single-step slot

The single-step slot is not used until the kprobe is enabled, which means
no race condition can occur on it under SMP; hence it is safe to patch the
single-step slot without stopping the machine.

Since the I and D caches are already made coherent for the single-step slot
by aarch64_insn_patch_text_nosync(), there is no need to do it again via
flush_icache_range().
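
A sketch of the resulting patching step (simplified; kprobe field names as
in the arm64 probes code):

  /*
   * Write the probed instruction into the single-step slot without
   * stop_machine(); aarch64_insn_patch_text_nosync() performs the
   * cache maintenance for the word it patches.
   */
  aarch64_insn_patch_text_nosync(p->ainsn.api.insn, p->opcode);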

Acked-by: Will Deacon <will@kernel.org>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Liao Chang <liaochang1@huawei.com>
Link: https://lore.kernel.org/r/20220927022435.129965-4-liaochang1@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
James Clark [Thu, 22 Sep 2022 14:24:00 +0000 (15:24 +0100)]
arm64: defconfig: Add Coresight as module

Add Coresight to defconfig so that build errors are caught.
CONFIG_CORESIGHT_SOURCE_ETM4X is excluded because it depends on
CONFIG_PID_IN_CONTEXTIDR which has a performance cost.

Signed-off-by: James Clark <james.clark@arm.com>
Link: https://lore.kernel.org/r/20220922142400.478815-2-james.clark@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Wed, 21 Sep 2022 18:13:45 +0000 (19:13 +0100)]
kselftest/arm64: Handle EINTR while reading data from children

Currently we treat any error when reading from the child as a failure and
don't read any more output from that child as a result. This ignores the
fact that it is valid for read() to return EINTR as the error code if there
is a signal pending, so we could stop handling the output of children,
especially during exit when we will get some SIGCHLD signals delivered to
us. Fix this by pulling the read handling out into a separate function
which returns a flag indicating whether reads should continue, and by
wrapping it in a loop.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220921181345.618085-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Wed, 21 Sep 2022 18:13:44 +0000 (19:13 +0100)]
kselftest/arm64: Flag fp-stress as exiting when we begin finishing up

Once we have started exiting, the termination handler will have the same
effect as what we're already running, so set the termination flag at that
point.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220921181345.618085-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Wed, 21 Sep 2022 18:13:43 +0000 (19:13 +0100)]
kselftest/arm64: Don't repeat termination handler for fp-stress

When fp-stress gets a termination signal it sets a flag telling itself to
exit and sends a termination signal to all the children. If the flag is set
then don't bother repeating this process, it isn't going to accomplish
anything other than consume CPU time which can be an issue when running in
emulation.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220921181345.618085-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Xiu Jianfeng [Sun, 11 Sep 2022 03:47:47 +0000 (11:47 +0800)]
ARM64: reloc_test: add __init/__exit annotations to module init/exit funcs

Add missing __init/__exit annotations to module init/exit funcs.

Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Link: https://lore.kernel.org/r/20220911034747.132098-1-xiujianfeng@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mike Rapoport [Wed, 21 Sep 2022 07:48:41 +0000 (10:48 +0300)]
arm64/mm: fold check for KFENCE into can_set_direct_map()

KFENCE requires linear map to be mapped at page granularity, so that it
is possible to protect/unprotect single pages, just like with
rodata_full and DEBUG_PAGEALLOC.

Instead of repeating

  can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE)

make can_set_direct_map() handle the KFENCE case.

This also prevents potential false positives in kernel_page_present(),
which may return true for a non-present page if CONFIG_KFENCE is enabled.
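
A sketch of the folded check (assuming the existing helper already tests
rodata_full and DEBUG_PAGEALLOC):

  bool can_set_direct_map(void)
  {
          /*
           * rodata_full, DEBUG_PAGEALLOC and KFENCE all require the
           * linear map to use page-granular mappings.
           */
          return rodata_full || debug_pagealloc_enabled() ||
                 IS_ENABLED(CONFIG_KFENCE);
  }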

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20220921074841.382615-1-rppt@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Thu, 29 Sep 2022 13:45:25 +0000 (14:45 +0100)]
arm64: ftrace: fix module PLTs with mcount

Li Huafei reports that mcount-based ftrace with module PLTs was broken
by commit:

  a6253579977e4c6f ("arm64: ftrace: consistently handle PLTs.")

When module PLTs are used and a module is loaded sufficiently far away
from the kernel, we'll create PLTs for any branches which are
out-of-range. These are separate from the special ftrace trampoline
PLTs, which the module PLT code doesn't directly manipulate.

When mcount is in use this is a problem, as each mcount callsite in a
module will be initialized to point to a module PLT, but since commit
a6253579977e4c6f ftrace_make_nop() will assume that the callsite has
been initialized to point to the special ftrace trampoline PLT, and
ftrace_find_callable_addr() rejects other cases.

This means that when ftrace tries to initialize a callsite via
ftrace_make_nop(), the call to ftrace_find_callable_addr() will find
that the `_mcount` stub is out-of-range and is not handled by the ftrace
PLT, resulting in a splat:

| ftrace_test: loading out-of-tree module taints kernel.
| ftrace: no module PLT for _mcount
| ------------[ ftrace bug ]------------
| ftrace failed to modify
| [<ffff800029180014>] 0xffff800029180014
|  actual:   44:00:00:94
| Initializing ftrace call sites
| ftrace record flags: 2000000
|  (0)
|  expected tramp: ffff80000802eb3c
| ------------[ cut here ]------------
| WARNING: CPU: 3 PID: 157 at kernel/trace/ftrace.c:2120 ftrace_bug+0x94/0x270
| Modules linked in:
| CPU: 3 PID: 157 Comm: insmod Tainted: G           O       6.0.0-rc6-00151-gcd722513a189-dirty #22
| Hardware name: linux,dummy-virt (DT)
| pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : ftrace_bug+0x94/0x270
| lr : ftrace_bug+0x21c/0x270
| sp : ffff80000b2bbaf0
| x29: ffff80000b2bbaf0 x28: 0000000000000000 x27: ffff0000c4d38000
| x26: 0000000000000001 x25: ffff800009d7e000 x24: ffff0000c4d86e00
| x23: 0000000002000000 x22: ffff80000a62b000 x21: ffff8000098ebea8
| x20: ffff0000c4d38000 x19: ffff80000aa24158 x18: ffffffffffffffff
| x17: 0000000000000000 x16: 0a0d2d2d2d2d2d2d x15: ffff800009aa9118
| x14: 0000000000000000 x13: 6333626532303830 x12: 3030303866666666
| x11: 203a706d61727420 x10: 6465746365707865 x9 : 3362653230383030
| x8 : c0000000ffffefff x7 : 0000000000017fe8 x6 : 000000000000bff4
| x5 : 0000000000057fa8 x4 : 0000000000000000 x3 : 0000000000000001
| x2 : ad2cb14bb5438900 x1 : 0000000000000000 x0 : 0000000000000022
| Call trace:
|  ftrace_bug+0x94/0x270
|  ftrace_process_locs+0x308/0x430
|  ftrace_module_init+0x44/0x60
|  load_module+0x15b4/0x1ce8
|  __do_sys_init_module+0x1ec/0x238
|  __arm64_sys_init_module+0x24/0x30
|  invoke_syscall+0x54/0x118
|  el0_svc_common.constprop.4+0x84/0x100
|  do_el0_svc+0x3c/0xd0
|  el0_svc+0x1c/0x50
|  el0t_64_sync_handler+0x90/0xb8
|  el0t_64_sync+0x15c/0x160
| ---[ end trace 0000000000000000 ]---
| ---------test_init-----------

Fix this by reverting to the old behaviour of ignoring the old
instruction when initialising an mcount callsite in a module, which was
the behaviour prior to commit a6253579977e4c6f.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: a6253579977e ("arm64: ftrace: consistently handle PLTs.")
Reported-by: Li Huafei <lihuafei1@huawei.com>
Link: https://lore.kernel.org/linux-arm-kernel/20220929094134.99512-1-lihuafei1@huawei.com
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220929134525.798593-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Li Huafei [Thu, 29 Sep 2022 09:41:33 +0000 (17:41 +0800)]
arm64: module: Remove unused plt_entry_is_initialized()

Since commit f1a54ae9af0d ("arm64: module/ftrace: intialize PLT at load
time"), plt_entry_is_initialized() is unused anymore , so remove it.

Signed-off-by: Li Huafei <lihuafei1@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220929094134.99512-3-lihuafei1@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Li Huafei [Thu, 29 Sep 2022 09:41:32 +0000 (17:41 +0800)]
arm64: module: Make plt_equals_entry() static

Since commit 4e69ecf4da1e ("arm64/module: ftrace: deal with place
relative nature of PLTs"), plt_equals_entry() is not used outside of
module-plts.c, so make it static.

Signed-off-by: Li Huafei <lihuafei1@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220929094134.99512-2-lihuafei1@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Thu, 29 Sep 2022 15:02:27 +0000 (16:02 +0100)]
arm64: fix the build with binutils 2.27

Jon Hunter reports that for some toolchains the build has been broken
since commit:

  4c0bd995d73ed889 ("arm64: alternatives: have callbacks take a cap")

... with a stream of build-time splats of the form:

|   CC      arch/arm64/kvm/hyp/vhe/debug-sr.o
| /tmp/ccY3kbki.s: Assembler messages:
| /tmp/ccY3kbki.s:1600: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1600: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1600: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1600: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1600: Error: junk at end of line, first unrecognized character
| is `L'
| /tmp/ccY3kbki.s:1723: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1723: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1723: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1723: Error: found 'L', expected: ')'
| /tmp/ccY3kbki.s:1723: Error: junk at end of line, first unrecognized character
| is `L'
| scripts/Makefile.build:249: recipe for target
| 'arch/arm64/kvm/hyp/vhe/debug-sr.o' failed

The issue here is that older versions of binutils (up to and including
2.27.0) don't like an 'L' suffix on constants. For plain assembly files,
UL() avoids this suffix, but in C files this gets added, and so for
inline assembly we can't directly use a constant defined with `UL()`.

We could avoid this by passing the constant as an input parameter, but
this isn't practical given the way we use the alternative macros.
Instead, just open code the constant without the `UL` suffix, and for
consistency do this for both the inline assembly macro and the regular
assembly macro.
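
For illustration (generic assembler behaviour, not the actual kernel
constant):

  .word (1 << 15)          // accepted by all binutils versions
  .word (1L << 15)         // rejected by binutils <= 2.27, as in the splat above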

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 4c0bd995d73e ("arm64: alternatives: have callbacks take a cap")
Reported-by: Jon Hunter <jonathanh@nvidia.com>
Link: https://lore.kernel.org/linux-arm-kernel/3cecc3a5-30b0-f0bd-c3de-9e09bd21909b@nvidia.com/
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220929150227.1028556-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Wed, 28 Sep 2022 15:45:17 +0000 (16:45 +0100)]
kselftest/arm64: Don't enable v8.5 for MTE selftest builds

Currently we set -march=armv8.5+memtag when building the MTE selftests,
allowing the compiler to emit v8.5 and MTE instructions for anything it
generates. This means that we may get code that will generate SIGILLs when
run on older systems rather than skipping on non-MTE systems as should be
the case. Most toolchains don't select any incompatible instructions, but
I have seen some reports suggesting that a few do. This is also potentially
problematic in that, if the compiler chooses to emit any MTE instructions
for the C code, it may interfere with the MTE usage we are trying to test.

Since the only reason we are specifying this option is to allow us to
assemble MTE instructions in mte_helper.S we can avoid these issues by
moving to using a .arch directive there and adding the -march explicitly to
the toolchain support check instead of the generic CFLAGS.
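
A sketch of the mte_helper.S side (standard gas directive; its scope is the
containing file only):

  /* mte_helper.S: permit MTE instructions in this file alone */
  .arch armv8.5-a+memtag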

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220928154517.173108-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Thu, 22 Sep 2022 15:10:53 +0000 (16:10 +0100)]
arm64: uaccess: simplify uaccess_mask_ptr()

We introduced uaccess pointer masking for arm64 in commit:

  4d8efc2d5ee4c9cc ("arm64: Use pointer masking to limit uaccess speculation")

Which was intended to prevent speculative uaccesses to kernel memory on
CPUs where access permissions were not respected under speculation.

At the time, the uaccess primitives were occasionally used to access
kernel memory, with the maximum permitted address held in
thread_info::addr_limit. Consequently, the address masking needed to
take this dynamic limit into account.

Subsequently the uaccess primitives were reworked such that they are
only used for user memory, and as of commit:

  3d2403fd10a1dbb3 ("arm64: uaccess: remove set_fs()")

... the address limit was made a compile-time constant, but the logic
was otherwise unchanged.

Regardless of the configured VA size or whether TBI is in use, the
address space can be divided into three ranges:

* The TTBR0 VA range, for which any valid pointer has bit 55 *clear*,
  and any non-tag bits [63-56] must match bit 55 (i.e. must be clear).

* The TTBR1 VA range, for which any valid pointer has bit 55 *set*, and
  any non-tag bits [63-56] must match bit 55 (i.e. must be set).

* The gap between the TTBR0 and TTBR1 ranges, where bit 55 may be set or
  clear, but any access will result in a fault.

As the uaccess primitives are now only used for user memory in the TTBR0
VA range, we can prevent generation of TTBR1 addresses by clearing bit
55, which will either result in a TTBR0 address or a faulting address
between the TTBR VA ranges.

This is beneficial for code generation as:

* We no longer clobber the condition codes.

* We no longer burn a register on (TASK_SIZE_MAX - 1).

* We no longer need to consume the untagged pointer.

When building a defconfig v6.0-rc3 with GCC 12.1.0, this change makes
the resulting Image 64KiB smaller.
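
A simplified C-level sketch of the new masking (the kernel uses inline asm,
but the effect is just clearing bit 55):

  static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
  {
          /*
           * Clearing bit 55 turns a TTBR1 (kernel) pointer into a
           * faulting address and leaves TTBR0 (user) pointers unchanged.
           */
          return (void __user *)((unsigned long)ptr & ~BIT(55));
  }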

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20220922151053.3520750-1-mark.rutland@arm.com
[catalin.marinas@arm.com: remove csdb() as the bit clearing is unconditional]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Will Deacon [Thu, 22 Sep 2022 21:11:10 +0000 (22:11 +0100)]
arm64: asm/perf_regs.h: Avoid C++-style comment in UAPI header

An arm64 'allmodconfig' build fails with GCC due to use of a C++-style
comment for the new SVE vector granule 'enum perf_event_arm_regs' entry:

  | /usr/include/asm/perf_regs.h:42:26: error: C++ style comments are not allowed in ISO C90

Use good ol' /* */ comment syntax to keep things rosey.
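
The fix amounts to (sketch; the enum entry is the VG register added by the
SVE vector granule patch later in this log):

  -	PERF_REG_ARM64_VG = 46,		// Vector granule
  +	PERF_REG_ARM64_VG = 46,		/* Vector granule */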

Link: https://lore.kernel.org/r/632cceb2.170a0220.599ec.0a3a@mx.google.com
Fixes: cbb0c02caf4b ("perf: arm64: Add SVE vector granule register to user regs")
Signed-off-by: Will Deacon <will@kernel.org>
Mark Brown [Wed, 7 Sep 2022 11:33:59 +0000 (12:33 +0100)]
kselftest/arm64: Fix typo in hwcap check

We use a local variable hwcap to refer to the element of the hwcaps array
which we are currently checking. When checking for the relevant hwcap bit
being set, the test was dereferencing hwcaps rather than hwcap when
fetching the AT_HWCAP to use. This is perfectly valid C but means we were
always checking that the bit was set in the hwcap for whichever feature is
first in the array. Remove the stray s.
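
A hedged sketch of the pattern being fixed (identifiers are illustrative,
not the test's exact names):

  const struct hwcap_data *hwcap = &hwcaps[i];

  /* bug: always tested the first array entry's AT_HWCAP word */
  if (getauxval(hwcaps->at_hwcap) & hwcap->hwcap_bit)
          have_hwcap = true;

  /* fix: drop the stray 's' */
  if (getauxval(hwcap->at_hwcap) & hwcap->hwcap_bit)
          have_hwcap = true;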

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220907113400.12982-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Peter Collingbourne [Thu, 15 Sep 2022 22:20:53 +0000 (15:20 -0700)]
arm64: mte: move register initialization to C

If FEAT_MTE2 is disabled via the arm64.nomte command line argument on a
CPU that claims to support FEAT_MTE2, the kernel will use Tagged Normal
in the MAIR. If we interpret arm64.nomte to mean that the CPU does not
in fact implement FEAT_MTE2, setting the system register like this may
lead to UNSPECIFIED behavior. Fix it by arranging for MAIR to be set
in the C function cpu_enable_mte which is called based on the sanitized
version of the system register.

There is no need for the rest of the MTE-related system register
initialization to happen from assembly, with the exception of TCR_EL1,
which must be set to include at least TBI1 because the secondary CPUs
access KASan-allocated data structures early. Therefore, make the TCR_EL1
initialization unconditional and move the rest of the initialization to
cpu_enable_mte so that we no longer have a dependency on the unsanitized
ID register value.

Co-developed-by: Evgenii Stepanov <eugenis@google.com>
Signed-off-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Evgenii Stepanov <eugenis@google.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: kernel test robot <lkp@intel.com>
Fixes: 3b714d24ef17 ("arm64: mte: CPU feature detection and initial sysreg configuration")
Cc: <stable@vger.kernel.org> # 5.10.x
Link: https://lore.kernel.org/r/20220915222053.3484231-1-eugenis@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kefeng Wang [Tue, 20 Sep 2022 01:49:51 +0000 (09:49 +0800)]
arm64: mm: handle ARM64_KERNEL_USES_PMD_MAPS in vmemmap_populate()

Directly check ARM64_SWAPPER_USES_SECTION_MAPS to choose between base
page and PMD-level huge page mappings in vmemmap_populate(), to simplify
the code a bit.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220920014951.196191-1-wangkefeng.wang@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Will Deacon [Tue, 23 Aug 2022 12:21:11 +0000 (13:21 +0100)]
arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()

arch_dma_prep_coherent() is called when preparing a non-cacheable region
for a consistent DMA buffer allocation. Since the buffer pages may
previously have been written via a cacheable mapping and consequently
allocated as dirty cachelines, the purpose of this function is to remove
these dirty lines from the cache, writing them back so that the
non-coherent device is able to see them.

On arm64, this operation can be achieved with a clean to the point of
coherency; a subsequent invalidation is not required and serves little
purpose in the presence of a cacheable alias (e.g. the linear map),
since clean lines can be speculatively fetched back into the cache after
the invalidation operation has completed.

Relax the cache maintenance in arch_dma_prep_coherent() so that only a
clean, and not a clean-and-invalidate operation is performed.
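
A sketch of the relaxed maintenance (helper name per the arm64 cache API;
treat the details as illustrative):

  void arch_dma_prep_coherent(struct page *page, size_t size)
  {
          unsigned long start = (unsigned long)page_address(page);

          /* clean to the PoC only; no invalidate (see above) */
          dcache_clean_poc(start, start + size);
  }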

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220823122111.17439-1-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
James Clark [Thu, 1 Sep 2022 13:26:58 +0000 (14:26 +0100)]
arm64/sve: Add Perf extensions documentation

Document that the VG register is available in Perf samples

Signed-off-by: James Clark <james.clark@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220901132658.1024635-3-james.clark@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
James Clark [Thu, 1 Sep 2022 13:26:57 +0000 (14:26 +0100)]
perf: arm64: Add SVE vector granule register to user regs

Dwarf based unwinding in a function that pushes SVE registers onto
the stack requires the unwinder to know the length of the SVE register
to calculate the stack offsets correctly. This was added to the Arm
specific Dwarf spec as the VG pseudo register[1].

Add the vector length at position 46 if it's requested by userspace and
SVE is supported. If it's not supported then fail to open the event.

The vector length must be on each sample because it can be changed
at runtime via a prctl or ptrace call. Also by adding it as a register
rather than a separate attribute, minimal changes will be required in an
unwinder that already indexes into the register list.

[1]: https://github.com/ARM-software/abi-aa/blob/main/aadwarf64/aadwarf64.rst
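
A userspace sketch of requesting the new register in samples (using bit
position 46 as described above):

  #include <linux/perf_event.h>

  struct perf_event_attr attr = {
          .type             = PERF_TYPE_HARDWARE,
          .config           = PERF_COUNT_HW_CPU_CYCLES,
          .sample_type      = PERF_SAMPLE_REGS_USER,
          /* bit 46: the SVE vector granule (VG) pseudo register */
          .sample_regs_user = 1ULL << 46,
  };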

Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: James Clark <james.clark@arm.com>
Link: https://lore.kernel.org/r/20220901132658.1024635-2-james.clark@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Shuai Xue [Thu, 18 Aug 2022 03:18:22 +0000 (11:18 +0800)]
MAINTAINERS: add maintainers for Alibaba' T-Head PMU driver

Add maintainers for the Alibaba PMU documentation and driver.

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220818031822.38415-4-xueshuai@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
Shuai Xue [Thu, 18 Aug 2022 03:18:21 +0000 (11:18 +0800)]
drivers/perf: add DDR Sub-System Driveway PMU driver for Yitian 710 SoC

Add the DDR Sub-System Driveway Performance Monitoring Unit (PMU) driver
support for Alibaba T-Head Yitian 710 SoC chip. Yitian supports DDR5/4
DRAM and targets cloud computing and HPC.

Each PMU is registered as a device in /sys/bus/event_source/devices, and
users can select events to monitor in each sub-channel independently. For
example, ali_drw_21000 and ali_drw_21080 are two PMU devices for two
sub-channels of the same channel in die 0. And the PMU device of die 1 is
prefixed with ali_drw_400XXXXX, e.g. ali_drw_40021000.

Due to a hardware limitation, one of the DDRSS Driveway PMU overflow
interrupts shares the same irq number with the MPAM ERR_IRQ. To register
the DDRSS PMU and MPAM drivers successfully, add the IRQF_SHARED flag.
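
Usage then follows the usual uncore PMU pattern, e.g. (the event name here
is illustrative; see the accompanying documentation for the real event
list):

  perf stat -e ali_drw_21000/cycle/ -a sleep 1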

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Co-developed-by: Hongbo Yao <yaohongbo@linux.alibaba.com>
Signed-off-by: Hongbo Yao <yaohongbo@linux.alibaba.com>
Co-developed-by: Neng Chen <nengchen@linux.alibaba.com>
Signed-off-by: Neng Chen <nengchen@linux.alibaba.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220818031822.38415-3-xueshuai@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
Shuai Xue [Wed, 14 Sep 2022 02:23:24 +0000 (10:23 +0800)]
docs: perf: Add description for Alibaba's T-Head PMU driver

Alibaba's T-Head SoC implements uncore PMU for performance and functional
debugging to facilitate system maintenance. Document it to provide guidance
on how to use it.

Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Link: https://lore.kernel.org/r/20220914022326.88550-2-xueshuai@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Brown [Mon, 29 Aug 2022 15:49:21 +0000 (16:49 +0100)]
kselftest/arm64: Add coverage of TPIDR2_EL0 ptrace interface

Extend the ptrace test support for NT_ARM_TLS to cover TPIDR2_EL0 - on
systems that support SME the NT_ARM_TLS regset can be up to 2 elements
long with the second element containing TPIDR2_EL0. On systems
supporting SME we verify that this value can be read and written while
on systems that do not support SME we verify correct truncation of reads
and writes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220829154921.837871-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Mon, 29 Aug 2022 15:49:20 +0000 (16:49 +0100)]
arm64/ptrace: Support access to TPIDR2_EL0

SME introduces an additional EL0 register, TPIDR2_EL0, intended for use
by userspace as part of SME. Provide ptrace access to it through the
existing NT_ARM_TLS regset used for TPIDR_EL0 by expanding it to two
registers with TPIDR2_EL0 being the second one.

Existing programs that query the size of the register set will be able
to observe the increased size of the register set. Programs that assume
the register set is single register will see no change. On systems that
do not support SME, TPIDR2_EL0 will read as 0 and writes will be ignored;
support for SME should be queried via hwcaps as normal.
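
A sketch of how a debugger would read both registers (standard regset
ptrace usage; error handling omitted):

  #include <elf.h>
  #include <stdint.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/uio.h>

  static void read_tls_regs(pid_t pid)
  {
          uint64_t tls[2] = { 0 };  /* [0] TPIDR_EL0, [1] TPIDR2_EL0 (SME) */
          struct iovec iov = { .iov_base = tls, .iov_len = sizeof(tls) };

          /* the kernel shrinks iov.iov_len if only TPIDR_EL0 is provided */
          ptrace(PTRACE_GETREGSET, pid, NT_ARM_TLS, &iov);
  }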

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220829154921.837871-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Mon, 29 Aug 2022 15:49:19 +0000 (16:49 +0100)]
arm64/ptrace: Document extension of NT_ARM_TLS to cover TPIDR2_EL0

In order to allow debuggers to discover lazily saved SME state we need
to provide access to TPIDR2_EL0, we will extend the existing NT_ARM_TLS
used for TPIDR to also include TPIDR2_EL0 as the second register in the
regset.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220829154921.837871-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Mon, 29 Aug 2022 15:49:18 +0000 (16:49 +0100)]
kselftest/arm64: Add test coverage for NT_ARM_TLS

In preparation for extending support for NT_ARM_TLS to cover additional
TPIDRs add some tests for the existing interface. Do this in a generic
ptrace test program to provide a place to collect additional tests in
the future.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220829154921.837871-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Vincenzo Frascino [Wed, 7 Sep 2022 11:02:35 +0000 (12:02 +0100)]
arm64: Enable docker support in defconfig

The arm64 defconfig does not support the Docker use case.

Enable the missing configuration options.

The resulting .config was validated with [1].

...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_NETFILTER_XT_MARK: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_POSIX_MQUEUE: enabled
- CONFIG_CGROUP_BPF: enabled

...

[1] https://github.com/moby/moby/blob/master/contrib/check-config.sh
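
For reference, the script can be pointed at a freshly generated config
roughly like this (assuming it accepts a config path, as its usage text
suggests; the path is illustrative):

  ./check-config.sh /path/to/.config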

Cc: Will Deacon <will@kernel.org>
Cc: Arnd Bergmann <arnd@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20220907110235.14708-1-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Kefeng Wang [Wed, 29 Jun 2022 09:35:24 +0000 (17:35 +0800)]
arm64: defconfig: Enable memory hotplug and hotremove config

Let's enable ACPI_HMAT, ACPI_HOTPLUG_MEMORY, MEMORY_HOTPLUG
and MEMORY_HOTREMOVE for more test coverage; they are also
useful for heterogeneous memory scenarios.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220629093524.34801-1-wangkefeng.wang@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Brown [Mon, 19 Sep 2022 16:27:53 +0000 (17:27 +0100)]
arm64: configs: Enable all PMUs provided by Arm

The selection of PMUs enabled in the defconfig is currently a bit random
and does not include a number of those provided by Arm and present in a
fairly wide range of SoCs. Improve coverage and defconfig utility by
enabling all the Arm provided PMUs by default.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: James Clark <james.clark@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20220919162753.3079869-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Tue, 20 Sep 2022 14:00:44 +0000 (15:00 +0100)]
arm64: avoid BUILD_BUG_ON() in alternative-macros

Nathan reports that the build fails when using clang and LTO:

|  In file included from kernel/bounds.c:10:
|  In file included from ./include/linux/page-flags.h:10:
|  In file included from ./include/linux/bug.h:5:
|  In file included from ./arch/arm64/include/asm/bug.h:26:
|  In file included from ./include/asm-generic/bug.h:5:
|  In file included from ./include/linux/compiler.h:248:
|  In file included from ./arch/arm64/include/asm/rwonce.h:11:
|  ./arch/arm64/include/asm/alternative-macros.h:224:2: error: call to undeclared function 'BUILD_BUG_ON'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
|          BUILD_BUG_ON(feature >= ARM64_NCAPS);
|          ^
|  ./arch/arm64/include/asm/alternative-macros.h:241:2: error: call to undeclared function 'BUILD_BUG_ON'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
|          BUILD_BUG_ON(feature >= ARM64_NCAPS);
|          ^
|  2 errors generated.

... the problem being that when LTO is enabled, <asm/rwonce.h> includes
<asm/alternative-macros.h>, and causes a circular include dependency
through <linux/bug.h>. This manifests as BUILD_BUG_ON() not being
defined when used within <asm/alternative-macros.h>.

This patch avoids the problem and simplifies the include dependencies by
using compiletime_assert() instead of BUILD_BUG_ON().
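
The substitution is roughly (the message string is illustrative):

  -	BUILD_BUG_ON(feature >= ARM64_NCAPS);
  +	compiletime_assert(feature < ARM64_NCAPS,
  +			   "feature must be < ARM64_NCAPS");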

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 21fb26bfb01f ("arm64: alternatives: add alternative_has_feature_*()")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: http://lore.kernel.org/r/YyigTrxhE3IRPzjs@dev-arch.thelio-3990X
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220920140044.1709073-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Nathan Chancellor [Mon, 19 Sep 2022 16:09:28 +0000 (09:09 -0700)]
arm64/sysreg: Fix a few missed conversions

After the conversion to automatically generating the ID_AA64DFR0_EL1
definition names, the build fails in a few different places because some
of the definitions were not changed to their new names along the way.
Update the names to resolve the build errors.

Fixes: c0357a73fa4a ("arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220919160928.3905780-1-nathan@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Mon, 12 Sep 2022 16:22:10 +0000 (17:22 +0100)]
arm64: alternatives: add shared NOP callback

For each instance of an alternative, the compiler outputs a distinct
copy of the alternative instructions into a subsection. As the compiler
doesn't have special knowledge of alternatives, it cannot coalesce these
to save space.

In a defconfig kernel built with GCC 12.1.0, there are approximately
10,000 instances of alternative_has_feature_likely(), where the
replacement instruction is always a NOP. As NOPs are
position-independent, we don't need a unique copy per alternative
sequence.

This patch adds a callback to patch an alternative sequence with NOPs,
and make use of this in alternative_has_feature_likely(). So that this
can be used for other sites in future, this is written to patch multiple
instructions up to the original sequence length.

For NVHE, an alias is added to image-vars.h.

For modules, the callback is exported. Note that as modules are loaded
within 2GiB of the kernel, an alt_instr entry in a module can always
refer directly to the callback, and no special handling is necessary.

When building with GCC 12.1.0, the vmlinux is ~158KiB smaller, though
the resulting Image size is unchanged due to alignment constraints and
padding:

| % ls -al vmlinux-*
| -rwxr-xr-x 1 mark mark 134644592 Sep  1 14:52 vmlinux-after
| -rwxr-xr-x 1 mark mark 134486232 Sep  1 14:50 vmlinux-before
| % ls -al Image-*
| -rw-r--r-- 1 mark mark 37108224 Sep  1 14:52 Image-after
| -rw-r--r-- 1 mark mark 37108224 Sep  1 14:50 Image-before

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Mon, 12 Sep 2022 16:22:09 +0000 (17:22 +0100)]
arm64: alternatives: add alternative_has_feature_*()

Currently we use a mixture of alternative sequences and static branches
to handle features detected at boot time. For ease of maintenance we
generally prefer to use static branches in C code, but this has a few
downsides:

* Each static branch has metadata in the __jump_table section, which is
  not discarded after features are finalized. This wastes some space,
  and slows down the patching of other static branches.

* The static branches are patched at a different point in time from the
  alternatives, so changes are not atomic. This leaves a transient
  period where there could be a mismatch between the behaviour of
  alternatives and static branches, which could be problematic for some
  features (e.g. pseudo-NMI).

* More (instrumentable) kernel code is executed to patch each static
  branch, which can be risky when patching certain features (e.g.
  irqflags management for pseudo-NMI).

* When CONFIG_JUMP_LABEL=n, static branches are turned into a load of a
  flag and a conditional branch. This means it isn't safe to use such
  static branches in an alternative address space (e.g. the NVHE/PKVM
  hyp code), where the generated address isn't safe to access.

To deal with these issues, this patch introduces new
alternative_has_feature_*() helpers, which work like static branches but
are patched using alternatives. This ensures the patching is performed
at the same time as other alternative patching, allows the metadata to
be freed after patching, and is safe for use in alternative address
spaces.
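
Call sites are then expected to look like this (sketch with a hypothetical
cpucap and helper functions):

  /* patched to a NOP or a branch when alternatives are applied */
  if (alternative_has_feature_likely(ARM64_HAS_SOME_FEATURE))
          fast_path();
  else
          fallback_path();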

Note that all supported toolchains have asm goto support, and since
commit:

  a0a12c3ed057af57 ("asm goto: eradicate CC_HAS_ASM_GOTO")

... the CC_HAS_ASM_GOTO Kconfig symbol has been removed, so no feature
check is necessary, and we can always make use of asm goto.

Additionally, note that:

* This has no impact on cpus_have_cap(), which is a dynamic check.

* This has no functional impact on cpus_have_const_cap(). The branches
  are patched slightly later than before this patch, but these branches
  are not reachable until caps have been finalised.

* It is now invalid to use cpus_have_final_cap() in the window between
  feature detection and patching. All existing uses are only expected
  after patching anyway, so this should not be a problem.

* The LSE atomics will now be enabled during alternatives patching
  rather than immediately before. As the LL/SC and LSE atomics are
  functionally equivalent this should not be problematic.

When building defconfig with GCC 12.1.0, the resulting Image is 64KiB
smaller:

| % ls -al Image-*
| -rw-r--r-- 1 mark mark 37108224 Aug 23 09:56 Image-after
| -rw-r--r-- 1 mark mark 37173760 Aug 23 09:54 Image-before

According to bloat-o-meter.pl:

| add/remove: 44/34 grow/shrink: 602/1294 up/down: 39692/-61108 (-21416)
| Function                                     old     new   delta
| [...]
| Total: Before=16618336, After=16596920, chg -0.13%
| add/remove: 0/2 grow/shrink: 0/0 up/down: 0/-1296 (-1296)
| Data                                         old     new   delta
| arm64_const_caps_ready                        16       -     -16
| cpu_hwcap_keys                              1280       -   -1280
| Total: Before=8987120, After=8985824, chg -0.01%
| add/remove: 0/0 grow/shrink: 0/0 up/down: 0/0 (0)
| RO Data                                      old     new   delta
| Total: Before=18408, After=18408, chg +0.00%

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-8-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland [Mon, 12 Sep 2022 16:22:08 +0000 (17:22 +0100)]
arm64: alternatives: have callbacks take a cap

Today, callback alternatives are special-cased within
__apply_alternatives(), and are applied alongside patching for system
capabilities as ARM64_NCAPS is not part of the boot_capabilities feature
mask.

This special-casing is less than ideal. Giving special meaning to
ARM64_NCAPS for this requires some structures and loops to use
ARM64_NCAPS + 1 (AKA ARM64_NPATCHABLE), while others use ARM64_NCAPS.
It's also not immediately clear that callback alternatives are only applied
when applying alternatives for system-wide features.

To make this a bit clearer, this patch changes the way that callback
alternatives are identified to remove the special-casing of ARM64_NCAPS,
and to allow callback alternatives to be associated with a cpucap as with
all other alternatives.

New cpucaps, ARM64_ALWAYS_BOOT and ARM64_ALWAYS_SYSTEM, are added which
are always detected alongside boot cpu capabilities and system
capabilities respectively. All existing callback alternatives are made
to use ARM64_ALWAYS_SYSTEM, and so will be patched at the same point
during the boot flow as before.

Subsequent patches will make more use of these new cpucaps.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: alternatives: make alt_region const
Mark Rutland [Mon, 12 Sep 2022 16:22:07 +0000 (17:22 +0100)]
arm64: alternatives: make alt_region const

We never alter a struct alt_region after creation, and we open-code the
bounds of the kernel alternatives region in two functions. The
duplication is a bit unfortunate for clarity (and in future we're likely
to have more functions altering alternative regions), and to avoid
accidents it would be good to make the structure const.

This patch adds a shared struct `kernel_alternatives` alt_region for the
main kernel image, and marks the alt_regions as const to prevent
unintentional modification.

There should be no functional change as a result of this patch.
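
For illustration, the shared descriptor is roughly of this shape (a sketch;
struct field and linker symbol names follow the arm64 alternatives code):

  struct alt_region {
          struct alt_instr *begin;
          struct alt_instr *end;
  };

  /* One read-only region covering the kernel image's alternatives */
  static const struct alt_region kernel_alternatives = {
          .begin  = (struct alt_instr *)__alt_instructions,
          .end    = (struct alt_instr *)__alt_instructions_end,
  };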

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: alternatives: hoist print out of __apply_alternatives()
Mark Rutland [Mon, 12 Sep 2022 16:22:06 +0000 (17:22 +0100)]
arm64: alternatives: hoist print out of __apply_alternatives()

Printing in the middle of __apply_alternatives() is potentially unsafe
and not all that helpful given these days we practically always patch
*something*.

Hoist the print out of __apply_alternatives(), and add separate prints
to __apply_alternatives() and apply_alternatives_all(), which will make
it easier to spot if either patching call goes wrong.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: alternatives: proton-pack: prepare for cap changes
Mark Rutland [Mon, 12 Sep 2022 16:22:05 +0000 (17:22 +0100)]
arm64: alternatives: proton-pack: prepare for cap changes

The spectre patching callbacks use cpus_have_final_cap(), and subsequent
patches will make it invalid to call cpus_have_final_cap() before
alternatives patching has completed.

In preparation for said change, this patch modifies the spectre patching
callbacks to use cpus_have_cap(). This is not subject to patching, and will
dynamically check the cpu_hwcaps array, which is functionally equivalent
to the existing behaviour.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: alternatives: kvm: prepare for cap changes
Mark Rutland [Mon, 12 Sep 2022 16:22:04 +0000 (17:22 +0100)]
arm64: alternatives: kvm: prepare for cap changes

The KVM patching callbacks use cpus_have_final_cap() internally within
has_vhe(), and subsequent patches will make it invalid to call
cpus_have_final_cap() before alternatives patching has completed, and
will mean that cpus_have_const_cap() will always fall back to dynamic
checks prior to alternatives patching.

In preparation for said change, this patch modifies the KVM patching
callbacks to use cpus_have_cap() directly. This is not subject to
patching, and will dynamically check the cpu_hwcaps array, which is
functionally equivalent to the existing behaviour.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: cpufeature: make cpus_have_cap() noinstr-safe
Mark Rutland [Mon, 12 Sep 2022 16:22:03 +0000 (17:22 +0100)]
arm64: cpufeature: make cpus_have_cap() noinstr-safe

Currently it isn't safe to use cpus_have_cap() from noinstr code as
test_bit() is explicitly instrumented, and were cpus_have_cap() placed
out-of-line, cpus_have_cap() itself could be instrumented.

Make cpus_have_cap() noinstr safe by marking it __always_inline and
using arch_test_bit().

Aside from the prevention of instrumentation, there should be no
functional change as a result of this patch.
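
The resulting helper is roughly the following (a simplified sketch):

  static __always_inline bool cpus_have_cap(unsigned int num)
  {
          if (num >= ARM64_NCAPS)
                  return false;
          /* arch_test_bit() is the uninstrumented bitop, safe in noinstr code */
          return arch_test_bit(num, cpu_hwcaps);
  }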

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: errata: remove BF16 HWCAP due to incorrect result on Cortex-A510
James Morse [Fri, 9 Sep 2022 16:59:38 +0000 (17:59 +0100)]
arm64: errata: remove BF16 HWCAP due to incorrect result on Cortex-A510

Cortex-A510's erratum #2658417 causes two BF16 instructions to return the
wrong result in rare circumstances when a pair of A510 CPUs are using
shared neon hardware.

The two instructions affected are BFMMLA and VMMLA, support for these is
indicated by the BF16 HWCAP. Remove it on affected platforms.

Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20220909165938.3931307-4-james.morse@arm.com
[catalin.marinas@arm.com: add revision to the Kconfig help; remove .type]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: cpufeature: Expose get_arm64_ftr_reg() outside cpufeature.c
James Morse [Fri, 9 Sep 2022 16:59:37 +0000 (17:59 +0100)]
arm64: cpufeature: Expose get_arm64_ftr_reg() outside cpufeature.c

get_arm64_ftr_reg() returns the properties of a system register based
on its instruction encoding.

This is needed by the erratum workaround code in cpu_errata.c to modify the
user-space visible view of id registers.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20220909165938.3931307-3-james.morse@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space
James Morse [Fri, 9 Sep 2022 16:59:36 +0000 (17:59 +0100)]
arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space

arm64 advertises hardware features to user-space via HWCAPs, and by
emulating access to the CPUs id registers. The cpufeature code has a
sanitised system-wide view of an id register, and a sanitised user-space
view of an id register, where some features use their 'safe' value
instead of the hardware value.

It is currently possible for a HWCAP to be advertised where the user-space
view of the id register does not show the feature as supported.
Erratum workarounds need to remove both the HWCAP and the feature from
the user-space view of the id register. This involves duplicating the
code and spreading it over cpufeature.c and cpu_errata.c.

Make the HWCAP code use the user-space view of id registers. This ensures
the values never diverge, and allows erratum workarounds to remove a HWCAP
by modifying the user-space view of the id register.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20220909165938.3931307-2-james.morse@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agokselftest/arm64: Add hwcap test for RNG
Mark Brown [Tue, 13 Sep 2022 14:11:01 +0000 (15:11 +0100)]
kselftest/arm64: Add hwcap test for RNG

Validate the RNG hwcap and make sure we don't generate a SIGILL reading
RNDR when it is reported.
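
In essence the check boils down to something like the standalone snippet
below (a sketch in the spirit of the test, not the selftest source; it
assumes an arm64 toolchain and kernel headers providing HWCAP2_RNG):

  #include <stdio.h>
  #include <stdint.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>          /* HWCAP2_RNG */

  int main(void)
  {
          uint64_t val;

          if (!(getauxval(AT_HWCAP2) & HWCAP2_RNG)) {
                  printf("RNG hwcap not reported, nothing to check\n");
                  return 0;
          }

          /* RNDR: op0=3, op1=3, CRn=2, CRm=4, op2=0 */
          asm volatile("mrs %0, S3_3_C2_C4_0" : "=r" (val));
          printf("RNDR read OK: 0x%016llx\n", (unsigned long long)val);
          return 0;
  }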

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220913141101.151400-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agokselftest/arm64: Add SVE 2 to the tested hwcaps
Mark Brown [Tue, 13 Sep 2022 14:11:00 +0000 (15:11 +0100)]
kselftest/arm64: Add SVE 2 to the tested hwcaps

Include SVE 2 and the various subfeatures it adds in the set of
hwcaps we check for.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220913141101.151400-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agokselftest/arm64: Add missing newline in hwcap output
Mark Brown [Tue, 13 Sep 2022 14:10:59 +0000 (15:10 +0100)]
kselftest/arm64: Add missing newline in hwcap output

Clean up the output of the test by adding a missing newline; the fix had
been done locally but didn't make it into the applied version.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220913141101.151400-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64AFRn_EL1 to automatic generation
Mark Brown [Sat, 10 Sep 2022 16:33:54 +0000 (17:33 +0100)]
arm64/sysreg: Convert ID_AA64AFRn_EL1 to automatic generation

Convert ID_AA64AFRn_EL1 to automatic generation as per DDI0487I.a, no
functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-7-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64DFR1_EL1 to automatic generation
Mark Brown [Sat, 10 Sep 2022 16:33:53 +0000 (17:33 +0100)]
arm64/sysreg: Convert ID_AA64DFR1_EL1 to automatic generation

Convert ID_AA64DFR1_EL1 to automatic generation as per DDI0487I.a, no
functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-6-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64DFR0_EL1 to automatic generation
Mark Brown [Sat, 10 Sep 2022 16:33:52 +0000 (17:33 +0100)]
arm64/sysreg: Convert ID_AA64DFR0_EL1 to automatic generation

Convert ID_AA64DFR0_EL1 to automatic generation as per DDI0487I.a, no
functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Use feature numbering for PMU and SPE revisions
Mark Brown [Sat, 10 Sep 2022 16:33:51 +0000 (17:33 +0100)]
arm64/sysreg: Use feature numbering for PMU and SPE revisions

Currently the kernel refers to the versions of the PMU and SPE features by
the version of the architecture where those features were updated, but the
ARM refers to them using the FEAT_ names for the features. To improve
consistency, to help with updating for newer features, and since v9 will
make our current naming scheme a bit more confusing, update the macros
identifying features to use the FEAT_ based scheme.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names
Mark Brown [Sat, 10 Sep 2022 16:33:50 +0000 (17:33 +0100)]
arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names

Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64DFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture
Mark Brown [Sat, 10 Sep 2022 16:33:49 +0000 (17:33 +0100)]
arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture

The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1
does not align well with kernel conventions, using as it does a lot of
MixedCase in various arrangements. In preparation for automatically
generating the defines for this register rename the defines used to match
what is in the architecture.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: rework BTI exception handling
Mark Rutland [Tue, 13 Sep 2022 10:17:32 +0000 (11:17 +0100)]
arm64: rework BTI exception handling

If a BTI exception is taken from EL1, the entry code will treat this as
an unhandled exception and will panic() the kernel. This is inconsistent
with the way we handle FPAC exceptions, which have a dedicated handler
and only necessarily kill the thread the exception was taken from, and we
don't log all the information that could be relevant to debugging the
issue.

The code in do_bti() has:

BUG_ON(!user_mode(regs));

... and it seems like the intent was to call this for EL1 BTI
exceptions, as with FPAC, but this was omitted due to an oversight.

This patch adds separate EL0 and EL1 BTI exception handlers, with the
latter calling die() directly to report the original context the BTI
exception was taken from. This matches our handling of FPAC exceptions.
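
The resulting split is roughly of this shape (a simplified sketch):

  static void do_el0_bti(struct pt_regs *regs)
  {
          /* Userspace BTI failure: kill the offending thread with SIGILL */
          force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
  }

  static void do_el1_bti(struct pt_regs *regs, unsigned long esr)
  {
          /* Kernel BTI failure: report the faulting context directly */
          die("Oops - BTI", regs, esr);
  }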

Prior to this patch, a BTI failure is reported as:

| Unhandled 64-bit el1h sync exception on CPU0, ESR 0x0000000034000002 -- BTI
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00131-g7d937ff0221d-dirty #9
| Hardware name: linux,dummy-virt (DT)
| pstate: 20400809 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=-c)
| pc : test_bti_callee+0x4/0x10
| lr : test_bti_caller+0x1c/0x28
| sp : ffff80000800bdf0
| x29: ffff80000800bdf0 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffff80000a2b8000 x22: 0000000000000000 x21: 0000000000000000
| x20: ffff8000099fa5b0 x19: ffff800009ff7000 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000041a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000040000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000f83
| x5 : ffff80000a2b6000 x4 : ffff0000028d0000 x3 : ffff800009f78378
| x2 : 0000000000000000 x1 : 0000000040210000 x0 : ffff8000080257e4
| Kernel panic - not syncing: Unhandled exception
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00131-g7d937ff0221d-dirty #9
| Hardware name: linux,dummy-virt (DT)
| Call trace:
|  dump_backtrace.part.0+0xcc/0xe0
|  show_stack+0x18/0x5c
|  dump_stack_lvl+0x64/0x80
|  dump_stack+0x18/0x34
|  panic+0x170/0x360
|  arm64_exit_nmi.isra.0+0x0/0x80
|  el1h_64_sync_handler+0x64/0xd0
|  el1h_64_sync+0x64/0x68
|  test_bti_callee+0x4/0x10
|  smp_cpus_done+0xb0/0xbc
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20

With this patch applied, a BTI failure is reported as:

| Internal error: Oops - BTI: 0000000034000002 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00132-g0ad98265d582-dirty #8
| Hardware name: linux,dummy-virt (DT)
| pstate: 20400809 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=-c)
| pc : test_bti_callee+0x4/0x10
| lr : test_bti_caller+0x1c/0x28
| sp : ffff80000800bdf0
| x29: ffff80000800bdf0 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffff80000a2b8000 x22: 0000000000000000 x21: 0000000000000000
| x20: ffff8000099fa5b0 x19: ffff800009ff7000 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000041a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000040000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000f83
| x5 : ffff80000a2b6000 x4 : ffff0000028d0000 x3 : ffff800009f78378
| x2 : 0000000000000000 x1 : 0000000040210000 x0 : ffff800008025804
| Call trace:
|  test_bti_callee+0x4/0x10
|  smp_cpus_done+0xb0/0xbc
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20
| Code: d50323bf d53cd040 d65f03c0 d503233f (d50323bf)

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220913101732.3925290-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: rework FPAC exception handling
Mark Rutland [Tue, 13 Sep 2022 10:17:31 +0000 (11:17 +0100)]
arm64: rework FPAC exception handling

If an FPAC exception is taken from EL1, the entry code will call
do_ptrauth_fault(), where due to:

BUG_ON(!user_mode(regs))

... the kernel will report a problem within do_ptrauth_fault() rather
than reporting the original context the FPAC exception was taken from.
The pt_regs and ESR value reported will be from within
do_ptrauth_fault() and the code dump will be for the BRK in BUG_ON(),
which isn't sufficient to debug the cause of the original exception.

This patch makes the reporting better by having separate EL0 and EL1
FPAC exception handlers, with the latter calling die() directly to
report the original context the FPAC exception was taken from.

Note that we only need to prevent kprobes of the EL1 FPAC handler, since
the EL0 FPAC handler cannot be called recursively.

For consistency with do_el0_svc*(), I've named the split functions
do_el{0,1}_fpac() rather than do_el{0,1}_ptrauth_fault(). I've also
clarified the comment to not imply there are causes other than FPAC
exceptions.
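
The resulting handlers are roughly of this shape (a simplified sketch):

  static void do_el0_fpac(struct pt_regs *regs, unsigned long esr)
  {
          /* Userspace pointer authentication failure: SIGILL the thread */
          force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr);
  }

  static void do_el1_fpac(struct pt_regs *regs, unsigned long esr)
  {
          /*
           * An unexpected FPAC exception in the kernel: die() on the
           * original context rather than BUG()ing inside the handler.
           */
          die("Oops - FPAC", regs, esr);
  }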

Prior to this patch FPAC exceptions are reported as:

| kernel BUG at arch/arm64/kernel/traps.c:517!
| Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00130-g9c8a180a1cdf-dirty #12
| Hardware name: FVP Base RevC (DT)
| pstate: 00400009 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : do_ptrauth_fault+0x3c/0x40
| lr : el1_fpac+0x34/0x54
| sp : ffff80000a3bbc80
| x29: ffff80000a3bbc80 x28: ffff0008001d8000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: 0000000020400009 x22: ffff800008f70fa4 x21: ffff80000a3bbe00
| x20: 0000000072000000 x19: ffff80000a3bbcb0 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000081a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000080000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000783
| x5 : ffff80000a3bbcb0 x4 : ffff0008001d8000 x3 : 0000000072000000
| x2 : 0000000000000000 x1 : 0000000020400009 x0 : ffff80000a3bbcb0
| Call trace:
|  do_ptrauth_fault+0x3c/0x40
|  el1h_64_sync_handler+0xc4/0xd0
|  el1h_64_sync+0x64/0x68
|  test_pac+0x8/0x10
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20
| Code: 97fffe5e a8c17bfd d50323bf d65f03c0 (d4210000)

With this patch applied FPAC exceptions are reported as:

| Internal error: Oops - FPAC: 0000000072000000 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-rc3-00132-g78846e1c4757-dirty #11
| Hardware name: FVP Base RevC (DT)
| pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : test_pac+0x8/0x10
| lr : 0x0
| sp : ffff80000a3bbe00
| x29: ffff80000a3bbe00 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: ffff80000a2c8000 x22: 0000000000000000 x21: 0000000000000000
| x20: ffff8000099fa5b0 x19: ffff80000a007000 x18: fffffbfffda37000
| x17: 3120676e696d7573 x16: 7361202c6e6f6974 x15: 0000000081a90000
| x14: 0040000000000041 x13: 0040000000000001 x12: ffff000001a90000
| x11: fffffbfffda37480 x10: 0068000000000703 x9 : 0001000080000000
| x8 : 0000000000090000 x7 : 0068000000000f03 x6 : 0060000000000783
| x5 : ffff80000a2c6000 x4 : ffff0008001d8000 x3 : ffff800009f88378
| x2 : 0000000000000000 x1 : 0000000080210000 x0 : ffff000001a90000
| Call trace:
|  test_pac+0x8/0x10
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0x128/0x28c
|  kernel_init+0x28/0x13c
|  ret_from_fork+0x10/0x20
| Code: d50323bf d65f03c0 d503233f aa1f03fe (d50323bf)

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220913101732.3925290-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: consistently pass ESR_ELx to die()
Mark Rutland [Tue, 13 Sep 2022 10:17:30 +0000 (11:17 +0100)]
arm64: consistently pass ESR_ELx to die()

Currently, bug_handler() and kasan_handler() call die() with '0' as the
'err' value, whereas die_kernel_fault() passes the ESR_ELx value.

For consistency, this patch ensures we always pass the ESR_ELx value to
die(). As this is only called for exceptions taken from kernel mode,
there should be no user-visible change as a result of this patch.

For UNDEFINED exceptions, I've had to modify do_undefinstr() and its
callers to pass the ESR_ELx value. In all cases the ESR_ELx value had
already been read and was available.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220913101732.3925290-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: die(): pass 'err' as long
Mark Rutland [Tue, 13 Sep 2022 10:17:29 +0000 (11:17 +0100)]
arm64: die(): pass 'err' as long

Recently, we reworked a lot of code to consistently pass ESR_ELx as a
64-bit quantity. However, we missed that this can be passed into die()
and __die() as the 'err' parameter where it is truncated to a 32-bit
int.

As notify_die() already takes 'err' as a long, this patch changes die()
and __die() to also take 'err' as a long, ensuring that the full value
of ESR_ELx is retained.

At the same time, die() is updated to consistently log 'err' as a
zero-padded 64-bit quantity.

Subsequent patches will pass the ESR_ELx value to die() for a number of
exceptions.
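
The adjusted prototype is simply (sketch):

  /*
   * 'err' is now a long, wide enough to carry a full 64-bit ESR_ELx value,
   * and is logged zero-padded to 16 hex digits in the oops banner.
   */
  void die(const char *str, struct pt_regs *regs, long err);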

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220913101732.3925290-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: report EL1 UNDEFs better
Mark Rutland [Tue, 13 Sep 2022 10:17:28 +0000 (11:17 +0100)]
arm64: report EL1 UNDEFs better

If an UNDEFINED exception is taken from EL1, and do_undefinstr() doesn't
find any suitable undef_hook, it will call:

BUG_ON(!user_mode(regs))

... and the kernel will report a failure within do_undefinstr() rather
than reporting the original context that the UNDEFINED exception was
taken from. The pt_regs and ESR value reported within the BUG() handler
will be from within do_undefinstr() and the code dump will be for the
BRK in BUG_ON(), which isn't sufficient to debug the cause of the
original exception.

This patch makes the reporting better by having do_undefinstr() call
die() directly in this case to report the original context from which
the UNDEFINED exception was taken.

Prior to this patch, an undefined instruction is reported as:

| kernel BUG at arch/arm64/kernel/traps.c:497!
| Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 0 Comm: swapper Not tainted 5.19.0-rc3-00127-geff044f1b04e-dirty #3
| Hardware name: linux,dummy-virt (DT)
| pstate: 000000c5 (nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : do_undefinstr+0x28c/0x2ac
| lr : do_undefinstr+0x298/0x2ac
| sp : ffff800009f63bc0
| x29: ffff800009f63bc0 x28: ffff800009f73c00 x27: ffff800009644a70
| x26: ffff8000096778a8 x25: 0000000000000040 x24: 0000000000000000
| x23: 00000000800000c5 x22: ffff800009894060 x21: ffff800009f63d90
| x20: 0000000000000000 x19: ffff800009f63c40 x18: 0000000000000006
| x17: 0000000000403000 x16: 00000000bfbfd000 x15: ffff800009f63830
| x14: ffffffffffffffff x13: 0000000000000000 x12: 0000000000000019
| x11: 0101010101010101 x10: 0000000000161b98 x9 : 0000000000000000
| x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
| x5 : ffff800009f761d0 x4 : 0000000000000000 x3 : ffff80000a2b80f8
| x2 : 0000000000000000 x1 : ffff800009f73c00 x0 : 00000000800000c5
| Call trace:
|  do_undefinstr+0x28c/0x2ac
|  el1_undef+0x2c/0x4c
|  el1h_64_sync_handler+0x84/0xd0
|  el1h_64_sync+0x64/0x68
|  setup_arch+0x550/0x598
|  start_kernel+0x88/0x6ac
|  __primary_switched+0xb8/0xc0
| Code: 17ffff95 a9425bf5 17ffffb8 a9025bf5 (d4210000)

With this patch applied, an undefined instruction is reported as:

| Internal error: Oops - Undefined instruction: 0 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 0 Comm: swapper Not tainted 5.19.0-rc3-00128-gf27cfcc80e52-dirty #5
| Hardware name: linux,dummy-virt (DT)
| pstate: 800000c5 (Nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : setup_arch+0x550/0x598
| lr : setup_arch+0x50c/0x598
| sp : ffff800009f63d90
| x29: ffff800009f63d90 x28: 0000000081000200 x27: ffff800009644a70
| x26: ffff8000096778c8 x25: 0000000000000040 x24: 0000000000000000
| x23: 0000000000000100 x22: ffff800009f69a58 x21: ffff80000a2b80b8
| x20: 0000000000000000 x19: 0000000000000000 x18: 0000000000000006
| x17: 0000000000403000 x16: 00000000bfbfd000 x15: ffff800009f63830
| x14: ffffffffffffffff x13: 0000000000000000 x12: 0000000000000019
| x11: 0101010101010101 x10: 0000000000161b98 x9 : 0000000000000000
| x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
| x5 : 0000000000000008 x4 : 0000000000000010 x3 : 0000000000000000
| x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
| Call trace:
|  setup_arch+0x550/0x598
|  start_kernel+0x88/0x6ac
|  __primary_switched+0xb8/0xc0
| Code: b4000080 90ffed80 912ac000 97db745f (00000000)

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20220913101732.3925290-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: support huge vmalloc mappings
Kefeng Wang [Sun, 11 Sep 2022 04:44:23 +0000 (12:44 +0800)]
arm64: support huge vmalloc mappings

As of commit 559089e0a93d ("vmalloc: replace VM_NO_HUGE_VMAP with
VM_ALLOW_HUGE_VMAP"), the use of hugepage mappings for vmalloc
is an opt-in strategy, so it is safe to support huge vmalloc
mappings on arm64. For now, they are used by kvmalloc() and
alloc_large_system_hash().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220911044423.139229-1-wangkefeng.wang@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: spectre: increase parameters that can be used to turn off bhb mitigation individually
Liu Song [Fri, 26 Aug 2022 11:40:50 +0000 (19:40 +0800)]
arm64: spectre: increase parameters that can be used to turn off bhb mitigation individually

In our environment, we found that the BHB mitigation has a significant
impact on benchmark performance. For example, in the lmbench test,
the "process fork && exit" test performance drops by 20%.
So it is necessary to be able to turn off this mitigation individually
via the kernel command line, thus avoiding having to rebuild the kernel
with an adjusted config.

Signed-off-by: Liu Song <liusong@linux.alibaba.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1661514050-22263-1-git-send-email-liusong@linux.alibaba.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: run softirqs on the per-CPU IRQ stack
Qi Zheng [Mon, 15 Aug 2022 12:47:39 +0000 (20:47 +0800)]
arm64: run softirqs on the per-CPU IRQ stack

Currently arm64 supports per-CPU IRQ stack, but softirqs
are still handled in the task context.

Since any call to local_bh_enable() at any level in the task's
call stack may trigger a softirq processing run, which could
potentially cause a task stack overflow if the combined stack
footprints exceed the stack's size, let's run these softirqs
on the IRQ stack as well.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220815124739.15948-1-zhengqi.arch@bytedance.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: atomic: always inline the assembly
Mark Rutland [Wed, 17 Aug 2022 15:59:14 +0000 (16:59 +0100)]
arm64: atomic: always inline the assembly

The __lse_*() and __ll_sc_*() atomic implementations are marked as
inline rather than __always_inline, permitting a compiler to generate
out-of-line versions, which may be instrumented.

We marked the atomic wrappers as __always_inline in commit:

  c35a824c31834d94 ("arm64: make atomic helpers __always_inline")

... but did not think to do the same for the underlying implementations.

If the compiler were to out-of-line an LSE or LL/SC atomic, this could
break noinstr code. Ensure this doesn't happen by marking the underlying
implementations as __always_inline.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220817155914.3975112-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: atomics: remove LL/SC trampolines
Mark Rutland [Wed, 17 Aug 2022 15:59:13 +0000 (16:59 +0100)]
arm64: atomics: remove LL/SC trampolines

When CONFIG_ARM64_LSE_ATOMICS=y, each use of an LL/SC atomic results in
a fragment of code being generated in a subsection without a clear
association with its caller. A trampoline in the caller branches to the
LL/SC atomic with a direct branch, and the atomic directly branches
back into its trampoline.

This breaks backtracing, as any PC within the out-of-line fragment will
be symbolized as an offset from the nearest prior symbol (which may not
be the function using the atomic), and since the atomic returns with a
direct branch, the caller's PC may be missing from the backtrace.

For example, with secondary_start_kernel() hacked to contain
atomic_inc(NULL), the resulting exception can be reported as being taken
from cpus_are_stuck_in_kernel():

| Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
| Mem abort info:
|   ESR = 0x0000000096000004
|   EC = 0x25: DABT (current EL), IL = 32 bits
|   SET = 0, FnV = 0
|   EA = 0, S1PTW = 0
|   FSC = 0x04: level 0 translation fault
| Data abort info:
|   ISV = 0, ISS = 0x00000004
|   CM = 0, WnR = 0
| [0000000000000000] user address but active_mm is swapper
| Internal error: Oops: 96000004 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.19.0-11219-geb555cb5b794-dirty #3
| Hardware name: linux,dummy-virt (DT)
| pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : cpus_are_stuck_in_kernel+0xa4/0x120
| lr : secondary_start_kernel+0x164/0x170
| sp : ffff80000a4cbe90
| x29: ffff80000a4cbe90 x28: 0000000000000000 x27: 0000000000000000
| x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
| x23: 0000000000000000 x22: 0000000000000000 x21: 0000000000000000
| x20: 0000000000000001 x19: 0000000000000001 x18: 0000000000000008
| x17: 3030383832343030 x16: 3030303030307830 x15: ffff80000a4cbab0
| x14: 0000000000000001 x13: 5d31666130663133 x12: 3478305b20313030
| x11: 3030303030303078 x10: 3020726f73736563 x9 : 726f737365636f72
| x8 : ffff800009ff2ef0 x7 : 0000000000000003 x6 : 0000000000000000
| x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000100
| x2 : 0000000000000000 x1 : ffff0000029bd880 x0 : 0000000000000000
| Call trace:
|  cpus_are_stuck_in_kernel+0xa4/0x120
|  __secondary_switched+0xb0/0xb4
| Code: 35ffffa3 17fffc6c d53cd040 f9800011 (885f7c01)
| ---[ end trace 0000000000000000 ]---

This is confusing and hinders debugging, and will be problematic for
CONFIG_LIVEPATCH as these cases cannot be unwound reliably.

This is very similar to recent issues with out-of-line exception fixups,
which were removed in commits:

  35d67794b8828333 ("arm64: lib: __arch_clear_user(): fold fixups into body")
  4012e0e22739eef9 ("arm64: lib: __arch_copy_from_user(): fold fixups into body")
  139f9ab73d60cf76 ("arm64: lib: __arch_copy_to_user(): fold fixups into body")

When the trampolines were introduced in commit:

  addfc38672c73efd ("arm64: atomics: avoid out-of-line ll/sc atomics")

The rationale was to improve icache performance by grouping the LL/SC
atomics together. This has never been measured, and this theoretical
benefit is outweighed by other factors:

* As the subsections are collapsed into sections at object file
  granularity, these are spread out throughout the kernel and can share
  cachelines with unrelated code regardless.

* GCC 12.1.0 has been observed to place the trampoline out-of-line in
  specialised __ll_sc_*() functions, introducing more branching than was
  intended.

* Removing the trampolines has been observed to shrink a defconfig
  kernel Image by 64KiB when building with GCC 12.1.0.

This patch removes the LL/SC trampolines, meaning that the LL/SC atomics
will be inlined into their callers (or placed in out-of line functions
using regular BL/RET pairs). When CONFIG_ARM64_LSE_ATOMICS=y, the LL/SC
atomics are always called in an unlikely branch, and will be placed in a
cold portion of the function, so this should have minimal impact to the
hot paths.

Other than the improved backtracing, there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220817155914.3975112-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: track hyp stacks in unwinder's address space
Mark Rutland [Thu, 1 Sep 2022 13:06:46 +0000 (14:06 +0100)]
arm64: stacktrace: track hyp stacks in unwinder's address space

Currently unwind_next_frame_record() has an optional callback to convert
the address space of the FP. This is necessary for the NVHE unwinder,
which tracks the stacks in the hyp VA space, but accesses the frame
records in the kernel VA space.

This is a bit unfortunate since it clutters unwind_next_frame_record(),
which will get in the way of future rework.

Instead, this patch changes the NVHE unwinder to track the stacks in the
kernel's VA space and translate to FP prior to calling
unwind_next_frame_record(). This removes the need for the translate_fp()
callback, as all unwinders consistently track stacks in the native
address space of the unwinder.

At the same time, this patch consolidates the generation of the stack
addresses behind the stackinfo_get_*() helpers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-10-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: track all stack boundaries explicitly
Mark Rutland [Thu, 1 Sep 2022 13:06:45 +0000 (14:06 +0100)]
arm64: stacktrace: track all stack boundaries explicitly

Currently we call an on_accessible_stack() callback for each step of the
unwinder, requiring redundant work to be performed in the core of the
unwind loop (e.g. disabling preemption around accesses to per-cpu
variables containing stack boundaries). To prevent unwind loops which go
through a stack multiple times, we have to track the set of unwound
stacks, requiring a stack_type enum which needs to cater for all the
stacks of all possible callees. To prevent loops within a stack, we must
track the prior FP values.

This patch reworks the unwinder to minimize the work in the core of the
unwinder, and to remove the need for the stack_type enum. The set of
accessible stacks (and their boundaries) are determined at the start of
the unwind, and the current stack is tracked during the unwind, with
completed stacks removed from the set of accessible stacks. This makes
the boundary checks more accurate (e.g. detecting overlapped frame
records), and removes the need for separate tracking of the prior FP and
visited stacks.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: remove stack type from fp translator
Mark Rutland [Thu, 1 Sep 2022 13:06:44 +0000 (14:06 +0100)]
arm64: stacktrace: remove stack type from fp translator

In subsequent patches we'll remove the stack_type enum, and move the FP
translation logic out of the raw FP unwind code.

In preparation for doing so, this patch removes the type parameter from
the FP translation callback, and modifies kvm_nvhe_stack_kern_va() to
determine the relevant stack directly.

So that kvm_nvhe_stack_kern_va() can use the stackinfo_*() helpers,
these are moved earlier in the file, but are not modified in any way.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-8-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: rework stack boundary discovery
Mark Rutland [Thu, 1 Sep 2022 13:06:43 +0000 (14:06 +0100)]
arm64: stacktrace: rework stack boundary discovery

In subsequent patches we'll want to acquire the stack boundaries
ahead-of-time, and we'll need to be able to acquire the relevant
stack_info regardless of whether we have an object that happens to be on
the stack.

This patch replaces the on_XXX_stack() helpers with stackinfo_get_XXX()
helpers, with the caller being responsible for checking whether an
object is on a relevant stack. For the moment this is moved into the
on_accessible_stack() functions, making these slightly larger;
subsequent patches will remove the on_accessible_stack() functions and
simplify the logic.

The on_irq_stack() and on_task_stack() helpers are kept as these are
used by IRQ entry sequences and stackleak respectively. As they're only
used as predicates, the stack_info pointer parameter is removed in both
cases.

As the on_accessible_stack() functions are always passed a non-NULL info
pointer, these now update info unconditionally. When updating the type
to STACK_TYPE_UNKNOWN, the low/high bounds are also modified, but as
these will not be consumed, this should have no adverse effect.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: add stackinfo_on_stack() helper
Mark Rutland [Thu, 1 Sep 2022 13:06:42 +0000 (14:06 +0100)]
arm64: stacktrace: add stackinfo_on_stack() helper

Factor the core predicate out of on_stack() into a helper which can be
used on a pre-populated stack_info.

There should be no functional change as a result of this patch.
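
The factored-out predicate is roughly (a simplified sketch):

  static inline bool stackinfo_on_stack(const struct stack_info *info,
                                        unsigned long sp, unsigned long size)
  {
          if (!info->low)
                  return false;

          /* Reject wrap-around and ranges outside the stack's bounds */
          if (sp < info->low || sp + size < sp || sp + size > info->high)
                  return false;

          return true;
  }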

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: move SDEI stack helpers to stacktrace code
Mark Rutland [Thu, 1 Sep 2022 13:06:41 +0000 (14:06 +0100)]
arm64: stacktrace: move SDEI stack helpers to stacktrace code

For clarity and ease of maintenance, it would be helpful for all the
stack helpers to be in the same place.

Move the SDEI stack helpers into the stacktrace code where all the other
stack helpers live.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()
Mark Rutland [Thu, 1 Sep 2022 13:06:40 +0000 (14:06 +0100)]
arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()

The unwind_next_common() function unwinds a single frame record. There
are other unwind steps (e.g. unwinding through trampolines) which are
handled in the regular kernel unwinder, and in future there may be other
common unwind helpers.

Clarify the purpose of unwind_next_common() by renaming it to
unwind_next_frame_record(). At the same time, add commentary, and delete
the redundant comment at the top of asm/stacktrace/common.h.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: simplify unwind_next_common()
Mark Rutland [Thu, 1 Sep 2022 13:06:39 +0000 (14:06 +0100)]
arm64: stacktrace: simplify unwind_next_common()

Currently unwind_next_common() takes a pointer to a stack_info which is
only ever used within unwind_next_common().

Make it a local variable and simplify callers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: stacktrace: fix kerneldoc comments
Mark Rutland [Thu, 1 Sep 2022 13:06:38 +0000 (14:06 +0100)]
arm64: stacktrace: fix kerneldoc comments

Many of the comment blocks in the arm64 stacktrace code are *almost*
kerneldoc, but not quite.

Convert them to kerneldoc, as was presumably originally intended.

There should be no functional change as a result of this patch.
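
For reference, the kerneldoc form looks like this (function and parameter
names here are purely illustrative, not taken from the stacktrace code):

  /**
   * example_on_stack() - Check whether a range lies on a given stack.
   * @sp:   the base address of the range to check
   * @size: the size of the range
   * @info: the stack boundaries to check against
   *
   * Return: true if the range lies entirely within the stack, false otherwise.
   */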

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: vdso: use SYS_CNTVCTSS_EL0 for gettimeofday
Joey Gouly [Tue, 30 Aug 2022 10:48:33 +0000 (11:48 +0100)]
arm64: vdso: use SYS_CNTVCTSS_EL0 for gettimeofday

If FEAT_ECV is implemented, the self-synchronized counter CNTVCTSS_EL0 can
be used, removing the need for an ISB.
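
For illustration, the two forms of counter read look roughly like this (a
sketch; the real vDSO code selects between them with an alternative):

  static inline unsigned long long read_cntvct(void)
  {
          unsigned long long res;

          /* Without FEAT_ECV: an ISB is needed to avoid a stale, early read */
          asm volatile("isb\n\tmrs %0, cntvct_el0" : "=r" (res) :: "memory");
          return res;
  }

  static inline unsigned long long read_cntvctss(void)
  {
          unsigned long long res;

          /* FEAT_ECV: CNTVCTSS_EL0 (S3_3_C14_C0_6) is self-synchronising */
          asm volatile("mrs %0, S3_3_C14_C0_6" : "=r" (res) :: "memory");
          return res;
  }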

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220830104833.34636-4-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: alternative: patch alternatives in the vDSO
Joey Gouly [Tue, 30 Aug 2022 10:48:32 +0000 (11:48 +0100)]
arm64: alternative: patch alternatives in the vDSO

Make it possible to use alternatives in the vDSO, so that better
implementations can be used if possible.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220830104833.34636-3-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64: module: move find_section to header
Joey Gouly [Tue, 30 Aug 2022 10:48:31 +0000 (11:48 +0100)]
arm64: module: move find_section to header

Move it to the header so that the implementation can be shared
by the alternatives code.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220830104833.34636-2-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Add definition for ALLINT
Mark Brown [Mon, 5 Sep 2022 22:54:25 +0000 (23:54 +0100)]
arm64/sysreg: Add definition for ALLINT

The FEAT_NMI extension adds a new system register ALLINT for controlling
NMI related interrupt masking; add a definition of this register as per
DDI0487H.a.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-29-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert SCXTNUM_EL1 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:24 +0000 (23:54 +0100)]
arm64/sysreg: Convert SCXTNUM_EL1 to automatic generation

Convert SCXTNUM_EL1 to automatic generation as per DDI0487H.a, no
functional changes.
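
For reference, the generated defines are driven by entries in
arch/arm64/tools/sysreg of roughly this shape (an illustrative sketch; the
numeric fields are the op0, op1, CRn, CRm, op2 encoding):

  Sysreg  SCXTNUM_EL1     3       0       13      0       7
  Field   63:0    SoftwareContextNumber
  EndSysreg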

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-28-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert TPIDR_EL1 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:23 +0000 (23:54 +0100)]
arm64/sysreg: Convert TPIDR_EL1 to automatic generation

Convert TPIDR_EL1 to automatic generation as per DDI0487H.a, no functional
changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-27-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64PFR1_EL1 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:22 +0000 (23:54 +0100)]
arm64/sysreg: Convert ID_AA64PFR1_EL1 to automatic generation

Convert ID_AA64PFR1_EL1 to be automatically generated as per DDI0487H.a,
no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-26-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64PFR0_EL1 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:21 +0000 (23:54 +0100)]
arm64/sysreg: Convert ID_AA64PFR0_EL1 to automatic generation

Automatically generate the constants for ID_AA64PFR0_EL1 as per DDI0487I.a,
no functional changes. The generic defines for the ELx fields are left in
place as they remain useful.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-25-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64MMFR2_EL1 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:20 +0000 (23:54 +0100)]
arm64/sysreg: Convert ID_AA64MMFR2_EL1 to automatic generation

Convert ID_AA64MMFR2_EL1 defines to automatic generation as per DDI0487H.a,
no functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-24-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64MMFR1_EL1 to automatic generation
Kristina Martsenko [Mon, 5 Sep 2022 22:54:19 +0000 (23:54 +0100)]
arm64/sysreg: Convert ID_AA64MMFR1_EL1 to automatic generation

Convert ID_AA64MMFR1_EL1 to be automatically generated as per DDI0487H.a
plus ECBHB which was RES0 in DDI0487H.a but has been subsequently
defined and is already present in mainline. No functional changes.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-23-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert ID_AA64MMFR0_EL1 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:18 +0000 (23:54 +0100)]
arm64/sysreg: Convert ID_AA64MMFR0_EL1 to automatic generation

Automatically generate most of the defines for ID_AA64MMFR0_EL1 as per
DDI0487H.a. Due to the large amount of MixedCase in this register, which
isn't really consistent with either the kernel style or the majority of the
architecture, the use of upper case is preserved. We also leave in place a
number of min/max/default value definitions which don't flow from the
architecture definitions.

No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-22-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Convert HCRX_EL2 to automatic generation
Mark Brown [Mon, 5 Sep 2022 22:54:17 +0000 (23:54 +0100)]
arm64/sysreg: Convert HCRX_EL2 to automatic generation

Convert HCRX_EL2 to be automatically generated as per DDI0487H.a, no
functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-21-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 SME enumeration
Mark Brown [Mon, 5 Sep 2022 22:54:16 +0000 (23:54 +0100)]
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 SME enumeration

In preparation for automatic generation of constants update the define for
SME being implemented to the convention we are using, no functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-20-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 BTI enumeration
Mark Brown [Mon, 5 Sep 2022 22:54:15 +0000 (23:54 +0100)]
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 BTI enumeration

In preparation for automatic generation of constants update the define for
BTI being implemented to the convention we are using, no functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-19-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 fractional version fields
Mark Brown [Mon, 5 Sep 2022 22:54:14 +0000 (23:54 +0100)]
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 fractional version fields

The naming for fractional version fields in ID_AA64PFR1_EL1 does not align
with that in the architecture, lacking underscores and using upper case
where the architecture uses lower case. In preparation for automatic
generation of defines bring the code in sync with the architecture, no
functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-18-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Standardise naming for MTE feature enumeration
Mark Brown [Mon, 5 Sep 2022 22:54:13 +0000 (23:54 +0100)]
arm64/sysreg: Standardise naming for MTE feature enumeration

In preparation for conversion to automatic generation refresh the names
given to the items in the MTE feature enumeration to reflect our standard
pattern for naming, corresponding to the architecture feature names they
reflect. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-17-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Standardise naming for SSBS feature enumeration
Mark Brown [Mon, 5 Sep 2022 22:54:12 +0000 (23:54 +0100)]
arm64/sysreg: Standardise naming for SSBS feature enumeration

In preparation for conversion to automatic generation refresh the names
given to the items in the SSBS feature enumeration to reflect our standard
pattern for naming, corresponding to the architecture feature names they
reflect. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-16-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
20 months agoarm64/sysreg: Standardise naming for ID_AA64PFR0_EL1.AdvSIMD constants
Mark Brown [Mon, 5 Sep 2022 22:54:11 +0000 (23:54 +0100)]
arm64/sysreg: Standardise naming for ID_AA64PFR0_EL1.AdvSIMD constants

The architecture refers to the register field identifying advanced SIMD as
AdvSIMD but the kernel refers to it as ASIMD. Use the architecture's
naming. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-15-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>