linux-2.6-microblaze.git
3 years ago  powerpc/irq: Inline call_do_irq() and call_do_softirq()
Christophe Leroy [Fri, 19 Mar 2021 06:34:43 +0000 (06:34 +0000)]
powerpc/irq: Inline call_do_irq() and call_do_softirq()

call_do_irq() and call_do_softirq() are simple enough to be
worth inlining.

Inlining them avoids an mflr/mtlr pair plus a save/reload on stack.

This is inspired by the s390 arch. Several other arches do more or
less the same. The way the sparc arch does it seems odd, though.
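
For illustration, a minimal sketch of the 32-bit flavour (not the exact
kernel code; the real helper passes the offset and callee as asm operands
and lists the full set of clobbers):

  static __always_inline void call_do_softirq(const void *sp)
  {
      /* Switch r1 to the softirq stack, call __do_softirq(), switch back */
      asm volatile(
          "stwu  %%r1, %0(%1)\n"   /* save old r1 at top of new stack, %1 += %0 */
          "mr    %%r1, %1\n"       /* switch r1 to the new stack */
          "bl    __do_softirq\n"   /* plain call: LR handling is left to GCC */
          "lwz   %%r1, 0(%%r1)\n"  /* reload old r1 from the back chain */
          : : "i" (THREAD_SIZE - 16), "b" (sp)
          : "lr", "ctr", "cr0", "memory", "r0", "r3", "r4", "r5", "r6",
            "r7", "r8", "r9", "r10", "r11", "r12");
  }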

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210320122227.345427-1-mpe@ellerman.id.au
3 years ago  powerpc/setup_64: Fix sparse warnings
He Ying [Tue, 16 Mar 2021 04:11:48 +0000 (00:11 -0400)]
powerpc/setup_64: Fix sparse warnings

Sparse warns:
  warning: symbol 'rfi_flush' was not declared.
  warning: symbol 'entry_flush' was not declared.
  warning: symbol 'uaccess_flush' was not declared.

Define 'entry_flush' and 'uaccess_flush' as static because they are
not referenced outside the file. Include asm/security_features.h in
which 'rfi_flush' is declared.
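
A sketch of the resulting setup_64.c change (exact context assumed):

  #include <asm/security_features.h>  /* declares rfi_flush */

  static bool entry_flush;    /* now static: only used in this file */
  static bool uaccess_flush;  /* likewise */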

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: He Ying <heying24@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210316041148.29694-1-heying24@huawei.com
3 years ago  powerpc/embedded6xx: Remove CONFIG_MV64X60
Christophe Leroy [Thu, 18 Mar 2021 17:25:07 +0000 (17:25 +0000)]
powerpc/embedded6xx: Remove CONFIG_MV64X60

Commit 92c8c16f3457 ("powerpc/embedded6xx: Remove C2K board support")
moved the last selector of CONFIG_MV64X60.

As it is not a user-selectable config, it can be removed.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Wolfram Sang <wsa@kernel.org> # for I2C
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/19e57d16692dcd1ca67ba880d7273a57fab416aa.1616085654.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/iommu/debug: fix ifnullfree.cocci warnings
kernel test robot [Thu, 18 Mar 2021 23:44:41 +0000 (07:44 +0800)]
powerpc/iommu/debug: fix ifnullfree.cocci warnings

arch/powerpc/kernel/iommu.c:76:2-16: WARNING: NULL check before some freeing functions is not needed.

 NULL check before some freeing functions is not needed.

 Based on checkpatch warning
 "kfree(NULL) is safe this check is probably not required"
 and kfreeaddr.cocci by Julia Lawall.

Generated by: scripts/coccinelle/free/ifnullfree.cocci
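
The pattern being fixed looks like this ('ptr' stands in for the actual
variable; kfree() already tolerates NULL):

  -   if (ptr)
  -       kfree(ptr);
  +   kfree(ptr);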

Fixes: 691602aab9c3 ("powerpc/iommu/debug: Add debugfs entries for IOMMU tables")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318234441.GA63469@f8e20a472e81
3 years ago  powerpc: Fix arch_stack_walk() to have running function as first entry
Christophe Leroy [Tue, 16 Mar 2021 07:57:16 +0000 (07:57 +0000)]
powerpc: Fix arch_stack_walk() to have running function as first entry

It seems like other architectures, namely x86, arm64 and riscv
at least, include the running function as the top entry when saving
a stack trace with save_stack_trace_regs().

Functionality like KFENCE expects it.

Do the same on powerpc: it allows KFENCE and other users to
properly identify the faulting function, as depicted below.
Before this patch, KFENCE identified finish_task_switch.isra
as the faulting function.

[   14.937370] ==================================================================
[   14.948692] BUG: KFENCE: invalid read in test_invalid_access+0x54/0x108
[   14.948692]
[   14.956814] Invalid read at 0xdf98800a:
[   14.960664]  test_invalid_access+0x54/0x108
[   14.964876]  finish_task_switch.isra.0+0x54/0x23c
[   14.969606]  kunit_try_run_case+0x5c/0xd0
[   14.973658]  kunit_generic_run_threadfn_adapter+0x24/0x30
[   14.979079]  kthread+0x15c/0x174
[   14.982342]  ret_from_kernel_thread+0x14/0x1c
[   14.986731]
[   14.988236] CPU: 0 PID: 111 Comm: kunit_try_catch Tainted: G    B             5.12.0-rc1-01537-g95f6e2088d7e-dirty #4682
[   14.999795] NIP:  c016ec2c LR: c02f517c CTR: c016ebd8
[   15.004851] REGS: e2449d90 TRAP: 0301   Tainted: G    B              (5.12.0-rc1-01537-g95f6e2088d7e-dirty)
[   15.015274] MSR:  00009032 <EE,ME,IR,DR,RI>  CR: 22000004  XER: 00000000
[   15.022043] DAR: df98800a DSISR: 20000000
[   15.022043] GPR00: c02f517c e2449e50 c1142080 e100dd24 c084b13c 00000008 c084b32b c016ebd8
[   15.022043] GPR08: c0850000 df988000 c0d10000 e2449eb0 22000288
[   15.040581] NIP [c016ec2c] test_invalid_access+0x54/0x108
[   15.046010] LR [c02f517c] kunit_try_run_case+0x5c/0xd0
[   15.051181] Call Trace:
[   15.053637] [e2449e50] [c005a68c] finish_task_switch.isra.0+0x54/0x23c (unreliable)
[   15.061338] [e2449eb0] [c02f517c] kunit_try_run_case+0x5c/0xd0
[   15.067215] [e2449ed0] [c02f648c] kunit_generic_run_threadfn_adapter+0x24/0x30
[   15.074472] [e2449ef0] [c004e7b0] kthread+0x15c/0x174
[   15.079571] [e2449f30] [c001317c] ret_from_kernel_thread+0x14/0x1c
[   15.085798] Instruction dump:
[   15.088784] 8129d608 38e7ebd8 81020280 911f004c 39000000 995f0024 907f0028 90ff001c
[   15.096613] 3949000a 915f0020 3d40c0d1 3d00c085 <8929000a> 3908adb0 812a4b98 3d40c02f
[   15.104612] ==================================================================
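
A hedged sketch of the fix: when regs are supplied, report regs->nip
before walking the stack frames, so the faulting function becomes the
top entry:

  void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                       struct task_struct *task, struct pt_regs *regs)
  {
      if (regs && !consume_entry(cookie, regs->nip))
          return;  /* running function first, like x86/arm64/riscv */

      /* ... then walk the stack frames as before ... */
  }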

Fixes: 35de3b1aa168 ("powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/21324f9e2f21d1640c8397b4d1d857a9355a2283.1615881400.git.christophe.leroy@csgroup.eu
3 years ago  powerpc: Convert stacktrace to generic ARCH_STACKWALK
Christophe Leroy [Tue, 16 Mar 2021 07:57:15 +0000 (07:57 +0000)]
powerpc: Convert stacktrace to generic ARCH_STACKWALK

This patch converts powerpc stacktrace to the generic ARCH_STACKWALK
implemented by commit 214d8ca6ee85 ("stacktrace: Provide common
infrastructure")

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/73b36bbb101299760b95ecd2cd3a46554bea8bf9.1615881400.git.christophe.leroy@csgroup.eu
3 years ago  powerpc: Rename 'tsk' parameter into 'task'
Christophe Leroy [Tue, 16 Mar 2021 07:57:14 +0000 (07:57 +0000)]
powerpc: Rename 'tsk' parameter into 'task'

To better match generic code, rename 'tsk' to 'task' in
some stacktrace functions in preparation of following
patch which converts powerpc to generic ARCH_STACKWALK.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/117f0200e11961af6c0fdf85c98373e5dcf96a47.1615881400.git.christophe.leroy@csgroup.eu
3 years ago  powerpc: Activate HAVE_RELIABLE_STACKTRACE for all
Christophe Leroy [Tue, 16 Mar 2021 07:57:13 +0000 (07:57 +0000)]
powerpc: Activate HAVE_RELIABLE_STACKTRACE for all

CONFIG_HAVE_RELIABLE_STACKTRACE is applicable to all, there is no
reason to limit it to book3s/64le.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/955248c6423cb068c5965923121ba31d4dd2fdde.1615881400.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/book3s64/kuap: Move Kconfig variables to BOOK3S_64
Aneesh Kumar K.V [Thu, 18 Mar 2021 03:48:29 +0000 (09:18 +0530)]
powerpc/book3s64/kuap: Move Kconfig variables to BOOK3S_64

With the below two commits:
commit c91435d95c49 ("powerpc/book3s64/hash/kuep: Enable KUEP on hash")
commit b2ff33a10c8b ("powerpc/book3s64/hash/kuap: Enable kuap on hash")
the kernel now supports kuap/kuep with hash translation. Hence select the
Kconfig options even when radix is disabled.
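
A sketch of the resulting Kconfig change (exact placement assumed):

  config PPC_BOOK3S_64
      ...
      select PPC_HAVE_KUEP
      select PPC_HAVE_KUAP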

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318034829.72255-1-aneesh.kumar@linux.ibm.com
3 years ago  powerpc/kernel: Trivial typo fix in kgdb.c
Bhaskar Chowdhury [Wed, 17 Mar 2021 09:04:13 +0000 (14:34 +0530)]
powerpc/kernel: Trivial typo fix in kgdb.c

s/procesing/processing/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210317090413.120891-1-unixbhaskar@gmail.com
3 years ago  powerpc/64s: Fix hash fault to use TRAP accessor
Nicholas Piggin [Tue, 16 Mar 2021 10:52:05 +0000 (20:52 +1000)]
powerpc/64s: Fix hash fault to use TRAP accessor

Hash faults use the trap vector to decide whether this is an
instruction or data fault. This should use the TRAP accessor
rather than open access regs->trap.

This won't cause a problem at the moment because 64s only uses
trap flags for system call interrupts (the norestart flag), but
that could change if any other trap flags get used in future.

Fixes: a4922f5442e7e ("powerpc/64s: move the hash fault handling logic to C")
Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210316105205.407767-1-npiggin@gmail.com
3 years ago  powerpc/mm: Remove unneeded #ifdef CONFIG_PPC_MEM_KEYS
Christophe Leroy [Mon, 15 Mar 2021 14:52:51 +0000 (14:52 +0000)]
powerpc/mm: Remove unneeded #ifdef CONFIG_PPC_MEM_KEYS

In fault.c, #ifdef CONFIG_PPC_MEM_KEYS is not needed because all
functions are always defined, and arch_vma_access_permitted()
always returns true when CONFIG_PPC_MEM_KEYS is not defined, so
access_pkey_error() returns false and bad_access_pkey()
is never called.

Include linux/pkeys.h to get a definition of vma_pkey() for
bad_access_pkey().
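
For reference, the reasoning hinges on the CONFIG_PPC_MEM_KEYS=n stub,
which looks roughly like:

  static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
                                               bool write, bool execute,
                                               bool foreign)
  {
      return true;  /* pkeys compiled out: access always permitted */
  }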

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8038392f38d81f2ad169347efac29146f553b238.1615819955.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/fsl-pci: Fix section mismatch warning
Michael Ellerman [Sun, 14 Mar 2021 09:33:41 +0000 (20:33 +1100)]
powerpc/fsl-pci: Fix section mismatch warning

Section mismatch in reference from the function .fsl_add_bridge() to
the function .init.text:.setup_pci_cmd()

fsl_add_bridge() is not __init, and can't be, and is the only caller
of setup_pci_cmd(). Fix it by making setup_pci_cmd() non-init.
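
Sketched as a diff (signature assumed):

  -static void __init setup_pci_cmd(struct pci_controller *hose)
  +static void setup_pci_cmd(struct pci_controller *hose)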

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093341.132986-1-mpe@ellerman.id.au
3 years ago  powerpc: Fix section mismatch warning in smp_setup_pacas()
Michael Ellerman [Sun, 14 Mar 2021 09:33:33 +0000 (20:33 +1100)]
powerpc: Fix section mismatch warning in smp_setup_pacas()

Section mismatch in reference from the function .smp_setup_pacas() to
the function .init.text:.allocate_paca()

The only caller of smp_setup_pacas() is setup_arch() which is __init,
so mark smp_setup_pacas() __init.
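
Sketched as a diff, the converse of the fsl-pci fix above:

  -void smp_setup_pacas(void)
  +void __init smp_setup_pacas(void)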

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093333.132657-1-mpe@ellerman.id.au
3 years ago  powerpc/64s: Fold update_current_thread_[i]amr() into their only callers
Michael Ellerman [Sun, 14 Mar 2021 09:33:20 +0000 (20:33 +1100)]
powerpc/64s: Fold update_current_thread_[i]amr() into their only callers

lkp reported warnings in some configurations due to
update_current_thread_amr() being unused:

  arch/powerpc/mm/book3s64/pkeys.c:284:20: error: unused function 'update_current_thread_amr'
  static inline void update_current_thread_amr(u64 value)

This is because its only use is inside an #ifdef. We could move it
inside the #ifdef, but it's a single-line function and only has one
caller, so just fold it in.

Similarly update_current_thread_iamr() is small and only called once,
so fold it in also.
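
Since the helpers are one-liners, the fold presumably amounts to
(hypothetical body, for illustration only):

  -   update_current_thread_amr(value);
  +   current->thread.regs->amr = value;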

Fixes: 48a8ab4eeb82 ("powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093320.132331-1-mpe@ellerman.id.au
3 years ago  powerpc/eeh: Fix build failure with CONFIG_PROC_FS=n
Michael Ellerman [Sun, 14 Mar 2021 09:33:00 +0000 (20:33 +1100)]
powerpc/eeh: Fix build failure with CONFIG_PROC_FS=n

The build fails with CONFIG_PROC_FS=n:

  arch/powerpc/kernel/eeh.c:1571:12: error: ‘proc_eeh_show’ defined but not used
   1571 | static int proc_eeh_show(struct seq_file *m, void *v)

Wrap proc_eeh_show() in an ifdef to avoid it.
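
That is (sketch):

  #ifdef CONFIG_PROC_FS
  static int proc_eeh_show(struct seq_file *m, void *v)
  {
      /* ... */
  }
  #endif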

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314093300.131998-1-mpe@ellerman.id.au
3 years ago  powerpc/pci: fix warning comparing pointer to 0
Jiapeng Chong [Mon, 15 Mar 2021 07:35:24 +0000 (15:35 +0800)]
powerpc/pci: fix warning comparing pointer to 0

Fix the following coccicheck warning:

./arch/powerpc/platforms/maple/pci.c:37:16-17: WARNING comparing pointer
to 0.
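The usual fix for this warning ('np' stands in for the actual pointer):

  -   if (np == 0)
  +   if (!np)
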

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1615793724-97015-1-git-send-email-jiapeng.chong@linux.alibaba.com
3 years ago  powerpc/xive: use true and false for bool variable
Yang Li [Mon, 15 Mar 2021 07:24:56 +0000 (15:24 +0800)]
powerpc/xive: use true and false for bool variable

Fix the following coccicheck warning:
./arch/powerpc/sysdev/xive/spapr.c:552:8-9: WARNING: return of 0/1 in
function 'xive_spapr_match' with return type bool
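
That is, in xive_spapr_match() (sketch):

  -   return 1;
  +   return true;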

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1615793096-83758-1-git-send-email-yang.lee@linux.alibaba.com
3 years ago  powerpc/asm-offsets: GPR14 is not needed either
Christophe Leroy [Mon, 15 Mar 2021 11:01:26 +0000 (11:01 +0000)]
powerpc/asm-offsets: GPR14 is not needed either

Commit aac6a91fea93 ("powerpc/asm: Remove unused symbols in
asm-offsets.c") removed GPR15 to GPR31 but kept GPR14,
probably because it pops up in a couple of comments when doing
a grep.

However, it was never used either, so remove it as well.

Fixes: aac6a91fea93 ("powerpc/asm: Remove unused symbols in asm-offsets.c")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9881c68fbca004f9ea18fc9473f630e11ccd6417.1615806071.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/math: Fix missing __user qualifier for get_user() and other sparse warnings
Christophe Leroy [Mon, 15 Mar 2021 12:00:09 +0000 (12:00 +0000)]
powerpc/math: Fix missing __user qualifier for get_user() and other sparse warnings

Sparse reports the following problems:

arch/powerpc/math-emu/math.c:228:21: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:228:31: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:228:41: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:228:51: warning: Using plain integer as NULL pointer
arch/powerpc/math-emu/math.c:237:13: warning: incorrect type in initializer (different address spaces)
arch/powerpc/math-emu/math.c:237:13:    expected unsigned int [noderef] __user *_gu_addr
arch/powerpc/math-emu/math.c:237:13:    got unsigned int [usertype] *
arch/powerpc/math-emu/math.c:226:1: warning: symbol 'do_mathemu' was not declared. Should it be static?

Add the missing __user qualifier when casting the pointer used in get_user().

Use NULL instead of 0 to initialise the opX local variables.

Add a prototype for do_mathemu() (added in processor.h, like sparc does).
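
Sketches of the three fixes (variable names assumed):

  void __user *op0 = NULL, *op1 = NULL, *op2 = NULL, *op3 = NULL;

  if (get_user(insn, (u32 __user *)regs->nip))  /* __user cast added */
      return -EFAULT;

  int do_mathemu(struct pt_regs *regs);  /* prototype in processor.h */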

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e4d1aae7604d89c98a52dfd8ce8443462e595670.1615809591.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/mm/book3s64: Fix a typo in mmu_context.c
Bhaskar Chowdhury [Fri, 12 Mar 2021 11:25:37 +0000 (16:55 +0530)]
powerpc/mm/book3s64: Fix a typo in mmu_context.c

s/detalis/details/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210312112537.4585-1-unixbhaskar@gmail.com
3 years ago  powerpc/64e: Trivial spelling fixes throughout head_fsl_booke.S
Bhaskar Chowdhury [Sun, 14 Mar 2021 22:04:36 +0000 (03:34 +0530)]
powerpc/64e: Trivial spelling fixes throughout head_fsl_booke.S

Trivial spelling fixes throughout the file.

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210314220436.3417083-1-unixbhaskar@gmail.com
3 years ago  powerpc/Makefile: Remove workaround for gcc versions below 4.9
Christophe Leroy [Wed, 10 Mar 2021 12:43:12 +0000 (12:43 +0000)]
powerpc/Makefile: Remove workaround for gcc versions below 4.9

Commit 6ec4476ac825 ("Raise gcc version requirement to 4.9")
made it impossible to build with gcc 4.8 and under.

Remove related workaround.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a1e552006b8c51f23edd2f6cabdd9a986c631146.1615380184.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Manage KUAP in C
Christophe Leroy [Fri, 12 Mar 2021 12:50:51 +0000 (12:50 +0000)]
powerpc/32: Manage KUAP in C

Move all KUAP management into C.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/199365ddb58d579daf724815f2d0acb91cc49d19.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/8xx: Create C version of kuap save/restore/check helpers
Christophe Leroy [Fri, 12 Mar 2021 12:50:50 +0000 (12:50 +0000)]
powerpc/8xx: Create C version of kuap save/restore/check helpers

In preparation for porting PPC32 to C syscall entry/exit,
create C versions of kuap_save_and_lock(), kuap_user_restore(),
kuap_kernel_restore(), kuap_assert_locked() and
kuap_get_and_assert_locked() on the 8xx.
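
A hedged sketch of the 8xx flavour, which controls KUAP via the MD_AP
SPR (exact register and constant usage assumed):

  static inline void kuap_save_and_lock(struct pt_regs *regs)
  {
      regs->kuap = mfspr(SPRN_MD_AP);
      mtspr(SPRN_MD_AP, MD_APG_KUAP);  /* lock user access */
  }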

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/156a7c4b669d26785391422a5581a1d919544c9a.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32s: Create C version of kuap save/restore/check helpers
Christophe Leroy [Fri, 12 Mar 2021 12:50:49 +0000 (12:50 +0000)]
powerpc/32s: Create C version of kuap save/restore/check helpers

In preparation for porting PPC32 to C syscall entry/exit,
create C versions of kuap_save_and_lock(), kuap_user_restore(),
kuap_kernel_restore(), kuap_assert_locked() and
kuap_get_and_assert_locked() on book3s/32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2be8fb729da4a0f9863b25e1b9d547174fcd5056.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic
Christophe Leroy [Fri, 12 Mar 2021 12:50:48 +0000 (12:50 +0000)]
powerpc/64s: Make kuap_check_amr() and kuap_get_and_check_amr() generic

In preparation for porting powerpc32 to C syscall entry/exit,
rename kuap_check_amr() and kuap_get_and_check_amr() to
kuap_assert_locked() and kuap_get_and_assert_locked(), and move the
stubs for when CONFIG_PPC_KUAP is not selected into the generic asm/kup.h.
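
That is, asm/kup.h grows something like (sketch):

  #ifndef CONFIG_PPC_KUAP
  static inline void kuap_assert_locked(void) { }
  static inline unsigned long kuap_get_and_assert_locked(void) { return 0; }
  #endif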

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f82614d9b17b83abd739aa18fc08811815d0c2e3.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32s: Move KUEP locking/unlocking in C
Christophe Leroy [Fri, 12 Mar 2021 12:50:47 +0000 (12:50 +0000)]
powerpc/32s: Move KUEP locking/unlocking in C

This can be done in C, do it.

Unrolling the loop gains approx. 15% performance.

From now on, prepare_transfer_to_handler() is only for
interrupts from the kernel.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4eadd873927e9a73c3d1dfe2f9497353465514cf.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Only use prepare_transfer_to_handler function on book3s/32 and e500
Christophe Leroy [Fri, 12 Mar 2021 12:50:46 +0000 (12:50 +0000)]
powerpc/32: Only use prepare_transfer_to_handler function on book3s/32 and e500

Only book3s/32 and e500 have significant work to do in
prepare_transfer_to_handler.

The other 32-bit platforms have nothing to do at all.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b5e29ca0e557c11340415a13fe8b107189d315e1.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Return directly from power_save_ppc32_restore()
Christophe Leroy [Fri, 12 Mar 2021 12:50:45 +0000 (12:50 +0000)]
powerpc/32: Return directly from power_save_ppc32_restore()

transfer_to_handler_cont: is now just a blr.

Directly perform blr in power_save_ppc32_restore().

Also remove useless setting of r11 in e500 version of
power_save_ppc32_restore().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e337506e08a4df95b11d2290104b92f0dcdb5548.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Save remaining registers in exception prolog
Christophe Leroy [Fri, 12 Mar 2021 12:50:44 +0000 (12:50 +0000)]
powerpc/32: Save remaining registers in exception prolog

Save the non-volatile registers, XER, CTR, MSR and NIP in the exception prolog.

Also assign proper values to r2 and r3 there.

For now, recalculate the thread pointer in prepare_transfer_to_handler.
This will disappear once KUAP is ported to C.

Also remove the comment, which is now completely wrong.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/56f0cde9dd0362edf2ddba4d887552013eee7329.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Refactor saving of volatile registers in exception prologs
Christophe Leroy [Fri, 12 Mar 2021 12:50:43 +0000 (12:50 +0000)]
powerpc/32: Refactor saving of volatile registers in exception prologs

Exception prologs all do the same at the end:
- Save trapno in stack
- Mark stack with exception marker
- Save r0
- Save r3 to r8

Refactor that into a COMMON_EXCEPTION_PROLOG_END macro.
At the same time use r1 instead of r11.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e1c45d2e895e0693c42d2a6840df1105a148efea.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Remove the xfer parameter in EXCEPTION() macro
Christophe Leroy [Fri, 12 Mar 2021 12:50:42 +0000 (12:50 +0000)]
powerpc/32: Remove the xfer parameter in EXCEPTION() macro

The xfer parameter is not used anymore, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/17c7d68bd18f7d2f1ab24a1a20d9ed33bbcda741.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Dismantle EXC_XFER_STD/LITE/TEMPLATE
Christophe Leroy [Fri, 12 Mar 2021 12:50:41 +0000 (12:50 +0000)]
powerpc/32: Dismantle EXC_XFER_STD/LITE/TEMPLATE

In order to get more control in the exception prolog, dismantle
all non-standard exception macros, finishing with EXC_XFER_STD,
EXC_XFER_LITE and EXC_XFER_TEMPLATE.

Also remove transfer_to_handler_full, ret_from_except and
ret_from_except_full as they are not used anymore.

The last parameter of EXCEPTION() is now ignored; it will be removed
in a later patch to avoid too much churn.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ca5795d04a220586b7037dbbbe6951dfa9e768eb.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Only restore non volatile registers when required
Christophe Leroy [Fri, 12 Mar 2021 12:50:40 +0000 (12:50 +0000)]
powerpc/32: Only restore non volatile registers when required

Until now, non-volatile registers were restored every time they
were saved, i.e. using EXC_XFER_STD meant saving and restoring
them while EXC_XFER_LITE meant neither saving nor restoring them.

Now that they are always saved, EXC_XFER_STD means to restore
them and EXC_XFER_LITE means not to restore them.

Most of the users of EXC_XFER_STD only need to retrieve the
non volatile registers. For them there is no need to restore
the non volatile registers as they have not been modified.

Only very few exceptions require non volatile registers restore.

Opencode the few places which require saving of non volatile
registers.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d1cb12d8023cc6afc1f07150565571373c04945c.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Add a prepare_transfer_to_handler macro for exception prologs
Christophe Leroy [Fri, 12 Mar 2021 12:50:39 +0000 (12:50 +0000)]
powerpc/32: Add a prepare_transfer_to_handler macro for exception prologs

In order to increase flexibility, add a macro that will for now
call transfer_to_handler.

As transfer_to_handler doesn't do the actual transfer anymore,
also name it prepare_transfer_to_handler. The following patches
will progressively remove the use of the transfer_to_handler label.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7f757c52518ab1d7b27ad5113b10f860e803f467.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Save trap number on stack in exception prolog
Christophe Leroy [Fri, 12 Mar 2021 12:50:38 +0000 (12:50 +0000)]
powerpc/32: Save trap number on stack in exception prolog

Saving the trap number on the stack moves into
the exception prolog, as EXC_XFER_xxx will soon disappear.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2ac7a0c9cde2ec2b23cd79e3a54cfedd816a91ae.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Call bad_page_fault() from do_page_fault()
Christophe Leroy [Fri, 12 Mar 2021 12:50:37 +0000 (12:50 +0000)]
powerpc/32: Call bad_page_fault() from do_page_fault()

Now that non-volatile registers are saved at all times, there is no
need to split bad_page_fault() out of do_page_fault().

Remove handle_page_fault() and use do_page_fault() directly.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cfb95be8863204cc2bf45a22ea44dd1d0dc16b7f.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Set regs parameter in r3 in transfer_to_handler
Christophe Leroy [Fri, 12 Mar 2021 12:50:36 +0000 (12:50 +0000)]
powerpc/32: Set regs parameter in r3 in transfer_to_handler

All exception handlers take regs as first parameter.

Instead of setting r3 just before each call to a handler, set
it in transfer_to_handler.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f994a379bb895a2cbd518cb82460ad3f3d3ccdf5.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Don't save thread.regs on interrupt entry
Christophe Leroy [Fri, 12 Mar 2021 12:50:35 +0000 (12:50 +0000)]
powerpc/32: Don't save thread.regs on interrupt entry

Since commit 06d67d54741a ("powerpc: make process.c suitable for both
32-bit and 64-bit"), thread.regs is set on task creation, no need to
set it again and again at each interrupt entry as it never changes.

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20d52c627303d63e461797df13e6890fc04017d0.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Replace ASM exception exit by C exception exit from ppc64
Christophe Leroy [Fri, 12 Mar 2021 12:50:34 +0000 (12:50 +0000)]
powerpc/32: Replace ASM exception exit by C exception exit from ppc64

This patch replaces the PPC32 ASM exception exit by C exception exit.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/48f8bae91da899d8e73fc0d75c9af66cc97b4d5b.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Always save non volatile registers on exception entry
Christophe Leroy [Fri, 12 Mar 2021 12:50:33 +0000 (12:50 +0000)]
powerpc/32: Always save non volatile registers on exception entry

In preparation for handling exception entry and exit in C,
and in order to simplify the handling, always save the non-volatile
registers when entering an exception.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3ce8ced87a4f1467fa36fcc50763d53b45e466c1.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Perform normal function call in exception entry
Christophe Leroy [Fri, 12 Mar 2021 12:50:32 +0000 (12:50 +0000)]
powerpc/32: Perform normal function call in exception entry

Now that the MMU is re-enabled before calling the transfer function,
we no longer need the hack where the addresses of the handler and
the return function sit just after the 'bl' to the transfer
function, which that function retrieves via a read relative to 'lr'.

Do a regular call to the transfer function, then to the handler,
then branch to the return function.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/73c00f3361ca280ef8fd7814c291bd1f5b6e2081.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Refactor booke critical registers saving
Christophe Leroy [Fri, 12 Mar 2021 12:50:31 +0000 (12:50 +0000)]
powerpc/32: Refactor booke critical registers saving

Refactor booke critical registers saving into a few macros
and move it into the exception prolog directly.

Keep the dedicated transfer_to_handler entry points for the
moment although they are empty. They will be removed in a
later patch to reduce churn.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/269171496f1f5f22afa621695bded22976c9d48d.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Provide a name to exception prolog continuation in virtual mode
Christophe Leroy [Fri, 12 Mar 2021 12:50:30 +0000 (12:50 +0000)]
powerpc/32: Provide a name to exception prolog continuation in virtual mode

Now that the prolog continuation is separated in .text, give it a name
and mark it _ASM_NOKPROBE_SYMBOL.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d96374218815a6627e1e922ab2aba994050fb87a.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Move exception prolog code into .text once MMU is back on
Christophe Leroy [Fri, 12 Mar 2021 12:50:29 +0000 (12:50 +0000)]
powerpc/32: Move exception prolog code into .text once MMU is back on

The space in the head section is rather constrained by the fact that
exception vectors are spread every 0x100 bytes and sometimes we
need to have "out of line" code because it doesn't fit.

Now that we are enabling MMU early in the prolog, take that opportunity
to jump somewhere else in the .text section where we don't have any
space constraint.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/38b31ca4bc782a4985bc7952a675404d7ff27c24.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Use START_EXCEPTION() as much as possible
Christophe Leroy [Fri, 12 Mar 2021 12:50:28 +0000 (12:50 +0000)]
powerpc/32: Use START_EXCEPTION() as much as possible

Everywhere where it is possible, use START_EXCEPTION().

This will help for proper exception init in future patches.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d47c1cc242bbbef8658327503726abdaef9b63ef.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Add vmap_stack_overflow label inside the macro
Christophe Leroy [Fri, 12 Mar 2021 12:50:27 +0000 (12:50 +0000)]
powerpc/32: Add vmap_stack_overflow label inside the macro

For consistency, add in the macro the label used by exception prolog
to branch to stack overflow processing.

While at it, enclose the macro in #ifdef CONFIG_VMAP_STACK on the 8xx
as already done on book3s/32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cf80056f5b946572ad98aea9d915dd25b23beda6.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Statically initialise first emergency context
Christophe Leroy [Fri, 12 Mar 2021 12:50:25 +0000 (12:50 +0000)]
powerpc/32: Statically initialise first emergency context

The check of the emergency context initialisation in
vmap_stack_overflow is buggy for the SMP case, as it
compares r1 with 0 while in the SMP case r1 is offset
by the CPU id.

Instead of fixing it, just perform static initialisation
of the first emergency context.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4a67ba422be75713286dca0c86ee0d3df2eb6dfa.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Enable instruction translation at the same time as data translation
Christophe Leroy [Fri, 12 Mar 2021 12:50:24 +0000 (12:50 +0000)]
powerpc/32: Enable instruction translation at the same time as data translation

On 40x and 8xx, kernel text is pinned.
On book3s/32, kernel text is mapped by BATs.

Enable instruction translation at the same time as data translation, it
makes things simpler.

In syscall handler, MSR_RI can also be set at the same time because
srr0/srr1 are already saved and r1 is set properly.

On booke, translation is always on, so in the end all PPC32
variants have translation on early. Just update the MSR.

Also update comment in power_save_ppc32_restore().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5269c7e5f5d2117358af3a89744d75a116be27b0.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Tag DAR in EXCEPTION_PROLOG_2 for the 8xx
Christophe Leroy [Fri, 12 Mar 2021 12:50:23 +0000 (12:50 +0000)]
powerpc/32: Tag DAR in EXCEPTION_PROLOG_2 for the 8xx

The 8xx requires tagging the DAR with a magic value in order to
fix up the DAR on faults generated by 'dcbX', as the 8xx
forgets to update the DAR for those faults.

Do the tagging as early as possible, that is before enabling MMU.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/853a2e28ca7c5fc85617037030f99fe6070c9536.1615552867.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Always enable data translation in exception prolog
Christophe Leroy [Fri, 12 Mar 2021 12:50:22 +0000 (12:50 +0000)]
powerpc/32: Always enable data translation in exception prolog

If the code can use a stack in vm area, it can also use a
stack in linear space.

Simplify code by removing old non VMAP stack code on PPC32.

That means the data translation is now re-enabled early in
exception prolog in all cases, not only when using VMAP stacks.

While we are touching EXCEPTION_PROLOG macros, remove the
unused for_rtas parameter in EXCEPTION_PROLOG_1.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7cd6440c60a7e8f4f035b245c57720f51e225aae.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Remove ksp_limit
Christophe Leroy [Fri, 12 Mar 2021 12:50:21 +0000 (12:50 +0000)]
powerpc/32: Remove ksp_limit

ksp_limit is there to help detect stack overflows.
That is specific to ppc32 as it was removed from ppc64 in
commit cbc9565ee826 ("powerpc: Remove ksp_limit on ppc64").

There are other means for detecting stack overflows.

As ppc64 has proven to not need it, ppc32 should be able to do
without it too.

Let's remove it and simplify exception handling.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d789c3385b22e07bedc997613c0d26074cb513e7.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Use fast instruction to set MSR RI in exception prolog on 8xx
Christophe Leroy [Fri, 12 Mar 2021 12:50:20 +0000 (12:50 +0000)]
powerpc/32: Use fast instruction to set MSR RI in exception prolog on 8xx

8xx has registers SPRN_NRI, SPRN_EID and SPRN_EIE for changing
MSR EE and RI.

Use SPRN_EID in exception prolog to set RI.

On an 8xx, it reduces the null_syscall test by 3 cycles.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/65f6bda827c2a2abce71ea7e07543e791163da33.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Handle bookE debugging in C in exception entry
Christophe Leroy [Fri, 12 Mar 2021 12:50:19 +0000 (12:50 +0000)]
powerpc/32: Handle bookE debugging in C in exception entry

The handling of SPRN_DBCR0 and other registers can easily
be done in C instead of ASM.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6d6b2497115890b90cfa72a2b3ab1da5f78123c2.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Entry cpu time accounting in C
Christophe Leroy [Fri, 12 Mar 2021 12:50:18 +0000 (12:50 +0000)]
powerpc/32: Entry cpu time accounting in C

There is no need for this to be in asm;
use the new interrupt entry wrapper.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/daca4c3e05cdfe54d237162a0718b3aaca897662.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/32: Reconcile interrupts in C
Christophe Leroy [Fri, 12 Mar 2021 12:50:17 +0000 (12:50 +0000)]
powerpc/32: Reconcile interrupts in C

There is no need for this to be in asm anymore;
use the new interrupt entry wrapper.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/602e1ec47e15ca540f7edb9cf6feb6c249911bd6.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/40x: Prepare normal exception handler for enabling MMU early
Christophe Leroy [Fri, 12 Mar 2021 12:50:16 +0000 (12:50 +0000)]
powerpc/40x: Prepare normal exception handler for enabling MMU early

Ensure normal exception handlers are able to manage stuff with
the MMU enabled. For that we use CONFIG_VMAP_STACK related code,
although there is no intention to really activate CONFIG_VMAP_STACK
on powerpc 40x for the moment.

40x uses SPRN_DEAR instead of SPRN_DAR and SPRN_ESR instead of
SPRN_DSISR. Take it into account in common macros.

40x MSR value doesn't fit on 15 bits, use LOAD_REG_IMMEDIATE() in
common macros that will be used also with 40x.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/01963af2b83037bca270d7bf1336ffcf35da8282.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/40x: Prepare for enabling MMU in critical exception prolog
Christophe Leroy [Fri, 12 Mar 2021 12:50:15 +0000 (12:50 +0000)]
powerpc/40x: Prepare for enabling MMU in critical exception prolog

In order to enable the MMU early in the exception prolog, implement
CONFIG_VMAP_STACK principles in the critical exception prolog.

There is no intention to use CONFIG_VMAP_STACK on 40x,
but the related code will be used to enable the MMU early in
exceptions in a later patch.

Also address (critirq_ctx - PAGE_OFFSET) directly instead of
using tophys() in order to save one instruction.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3fd75ee54c48307119acdbf66cfea966c1463bbd.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/40x: Reorder a few instructions in critical exception prolog
Christophe Leroy [Fri, 12 Mar 2021 12:50:14 +0000 (12:50 +0000)]
powerpc/40x: Reorder a few instructions in critical exception prolog

In order to ease preparation for CONFIG_VMAP_STACK, reorder
a few instructions; in particular, save r1 into the stack frame earlier.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c895ecf958c86d1736bdd2ff6f36626b55f35fd2.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/40x: Save SRR0/SRR1 and r10/r11 earlier in critical exception
Christophe Leroy [Fri, 12 Mar 2021 12:50:13 +0000 (12:50 +0000)]
powerpc/40x: Save SRR0/SRR1 and r10/r11 earlier in critical exception

In order to be able to switch the MMU on in the exception prolog, save
SRR0 and SRR1 earlier.

Also save r10 and r11 to the stack earlier to better match the
normal exception prolog.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/79a93f253d72dc97ac968c9c62b5066960b688ed.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/40x: Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro
Christophe Leroy [Fri, 12 Mar 2021 12:50:12 +0000 (12:50 +0000)]
powerpc/40x: Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro

Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro to
remove the ugly ; and \ on each line.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/73291fb9dc9ec58182c27a40dfc3db204e3f4024.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/40x: Don't use SPRN_SPRG_SCRATCH0/1 in TLB miss handlers
Christophe Leroy [Fri, 12 Mar 2021 12:50:11 +0000 (12:50 +0000)]
powerpc/40x: Don't use SPRN_SPRG_SCRATCH0/1 in TLB miss handlers

SPRN_SPRG_SCRATCH5 is used to save SPRN_PID.
SPRN_SPRG_SCRATCH6 is already available.

SPRN_PID is only 8 bits. We have r12 that contains CR.
We only need to preserve CR0, so we have space available in r12
to save PID.

Keep PID in r12 and free up SPRN_SPRG_SCRATCH5.

Then, in the TLB miss handlers, instead of using SPRN_SPRG_SCRATCH0 and
SPRN_SPRG_SCRATCH1, use SPRN_SPRG_SCRATCH5 and SPRN_SPRG_SCRATCH6
to avoid future conflicts with normal exception prologs.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4cdaa85d38e14d594ba902424060ec55babf2c42.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  powerpc/traps: Declare unrecoverable_exception() as __noreturn
Christophe Leroy [Fri, 12 Mar 2021 12:50:10 +0000 (12:50 +0000)]
powerpc/traps: Declare unrecoverable_exception() as __noreturn

unrecoverable_exception() is never expected to return; most callers
have an infinite loop in case it returns.

Ensure it really never returns by terminating it with a BUG(), and
declare it __noreturn.

This allows GCC to really simplify functions calling it. In the example
below, it avoids the stack frame in the likely fast path and avoids
code duplication for the exit.
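
The change itself is tiny; a sketch with the body reconstructed from the
description above:

  void __noreturn unrecoverable_exception(struct pt_regs *regs)
  {
      pr_emerg("Unrecoverable exception %lx at %lx (msr=%lx)\n",
               regs->trap, regs->nip, regs->msr);
      die("Unrecoverable exception", regs, SIGABRT);
      BUG();  /* die() should not return, make sure of it */
  }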

With this patch:

00000348 <interrupt_exit_kernel_prepare>:
 348: 81 43 00 84  lwz     r10,132(r3)
 34c: 71 48 00 02  andi.   r8,r10,2
 350: 41 82 00 2c  beq     37c <interrupt_exit_kernel_prepare+0x34>
 354: 71 4a 40 00  andi.   r10,r10,16384
 358: 40 82 00 20  bne     378 <interrupt_exit_kernel_prepare+0x30>
 35c: 80 62 00 70  lwz     r3,112(r2)
 360: 74 63 00 01  andis.  r3,r3,1
 364: 40 82 00 28  bne     38c <interrupt_exit_kernel_prepare+0x44>
 368: 7d 40 00 a6  mfmsr   r10
 36c: 7c 11 13 a6  mtspr   81,r0
 370: 7c 12 13 a6  mtspr   82,r0
 374: 4e 80 00 20  blr
 378: 48 00 00 00  b       378 <interrupt_exit_kernel_prepare+0x30>
 37c: 94 21 ff f0  stwu    r1,-16(r1)
 380: 7c 08 02 a6  mflr    r0
 384: 90 01 00 14  stw     r0,20(r1)
 388: 48 00 00 01  bl      388 <interrupt_exit_kernel_prepare+0x40>
388: R_PPC_REL24 unrecoverable_exception
 38c: 38 e2 00 70  addi    r7,r2,112
 390: 3d 00 00 01  lis     r8,1
 394: 7c c0 38 28  lwarx   r6,0,r7
 398: 7c c6 40 78  andc    r6,r6,r8
 39c: 7c c0 39 2d  stwcx.  r6,0,r7
 3a0: 40 a2 ff f4  bne     394 <interrupt_exit_kernel_prepare+0x4c>
 3a4: 38 60 00 01  li      r3,1
 3a8: 4b ff ff c0  b       368 <interrupt_exit_kernel_prepare+0x20>

Without this patch:

00000348 <interrupt_exit_kernel_prepare>:
 348: 94 21 ff f0  stwu    r1,-16(r1)
 34c: 93 e1 00 0c  stw     r31,12(r1)
 350: 7c 7f 1b 78  mr      r31,r3
 354: 81 23 00 84  lwz     r9,132(r3)
 358: 71 2a 00 02  andi.   r10,r9,2
 35c: 41 82 00 34  beq     390 <interrupt_exit_kernel_prepare+0x48>
 360: 71 29 40 00  andi.   r9,r9,16384
 364: 40 82 00 28  bne     38c <interrupt_exit_kernel_prepare+0x44>
 368: 80 62 00 70  lwz     r3,112(r2)
 36c: 74 63 00 01  andis.  r3,r3,1
 370: 40 82 00 3c  bne     3ac <interrupt_exit_kernel_prepare+0x64>
 374: 7d 20 00 a6  mfmsr   r9
 378: 7c 11 13 a6  mtspr   81,r0
 37c: 7c 12 13 a6  mtspr   82,r0
 380: 83 e1 00 0c  lwz     r31,12(r1)
 384: 38 21 00 10  addi    r1,r1,16
 388: 4e 80 00 20  blr
 38c: 48 00 00 00  b       38c <interrupt_exit_kernel_prepare+0x44>
 390: 7c 08 02 a6  mflr    r0
 394: 90 01 00 14  stw     r0,20(r1)
 398: 48 00 00 01  bl      398 <interrupt_exit_kernel_prepare+0x50>
398: R_PPC_REL24 unrecoverable_exception
 39c: 80 01 00 14  lwz     r0,20(r1)
 3a0: 81 3f 00 84  lwz     r9,132(r31)
 3a4: 7c 08 03 a6  mtlr    r0
 3a8: 4b ff ff b8  b       360 <interrupt_exit_kernel_prepare+0x18>
 3ac: 39 02 00 70  addi    r8,r2,112
 3b0: 3d 40 00 01  lis     r10,1
 3b4: 7c e0 40 28  lwarx   r7,0,r8
 3b8: 7c e7 50 78  andc    r7,r7,r10
 3bc: 7c e0 41 2d  stwcx.  r7,0,r8
 3c0: 40 a2 ff f4  bne     3b4 <interrupt_exit_kernel_prepare+0x6c>
 3c4: 38 60 00 01  li      r3,1
 3c8: 4b ff ff ac  b       374 <interrupt_exit_kernel_prepare+0x2c>

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1e883e9d93fdb256853d1434c8ad77c257349b2d.1615552866.git.christophe.leroy@csgroup.eu
3 years ago  cxl: don't manipulate the mm.mm_users field directly
Laurent Dufour [Wed, 10 Mar 2021 17:44:05 +0000 (18:44 +0100)]
cxl: don't manipulate the mm.mm_users field directly

It is better to rely on the API provided by the MM layer instead of
directly manipulating the mm_users field.

Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210310174405.51044-1-ldufour@linux.ibm.com
3 years ago  powerpc/uprobes: Validation for prefixed instruction
Ravi Bangoria [Thu, 11 Mar 2021 09:15:38 +0000 (14:45 +0530)]
powerpc/uprobes: Validation for prefixed instruction

As per ISA 3.1, a prefixed instruction should not cross a 64-byte
boundary. So don't allow a Uprobe on such a prefixed instruction.

There are two ways a probed instruction is changed in mapped pages.
First, when a Uprobe is activated, it searches for all the relevant
pages and replaces the instruction in them. In this case, if the probe
is on a 64-byte-unaligned prefixed instruction, error out
directly. Second, when a Uprobe is already active and the user maps a
relevant page via mmap(), the instruction is replaced via the mmap()
code path. But because the Uprobe is invalid, the entire mmap()
operation cannot be stopped. In this case just print an error and continue.
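
The activation-time check amounts to (sketch, helper names assumed): a
prefixed instruction is 8 bytes, so a prefix sitting in the last word of
a 64-byte block would cross the boundary:

  if (ppc_inst_prefixed(auprobe->insn) && (vaddr & 0x3f) == 60)
      return -EINVAL;  /* prefix in last word of a 64B block */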

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210311091538.368590-1-ravi.bangoria@linux.ibm.com
3 years ago  powerpc/signal: Use __get_user() to copy sigset_t
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:59 +0000 (19:12 -0600)]
powerpc/signal: Use __get_user() to copy sigset_t

Usually sigset_t is exactly 8B which is a "trivial" size and does not
warrant using __copy_from_user(). Use __get_user() directly in
anticipation of future work to remove the trivial size optimizations
from __copy_from_user().

The ppc32 implementation of get_sigset_t() previously called
copy_from_user() which, unlike __copy_from_user(), calls access_ok().
Replacing this w/ __get_user() (no access_ok()) is fine here since both
callsites in signal_32.c are preceded by an earlier access_ok().
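
The ppc64 get_sigset_t() becomes roughly (sketch):

  static inline int get_sigset_t(sigset_t *set, const sigset_t __user *uset)
  {
      return __get_user(set->sig[0], (u64 __user *)&uset->sig[0]);
  }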

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-11-cmr@codefail.de
3 years ago  powerpc/signal64: Rewrite rt_sigreturn() to minimise uaccess switches
Daniel Axtens [Sat, 27 Feb 2021 01:12:58 +0000 (19:12 -0600)]
powerpc/signal64: Rewrite rt_sigreturn() to minimise uaccess switches

Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.

Co-developed-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-10-cmr@codefail.de
3 years ago  powerpc/signal64: Rewrite handle_rt_signal64() to minimise uaccess switches
Daniel Axtens [Sat, 27 Feb 2021 01:12:57 +0000 (19:12 -0600)]
powerpc/signal64: Rewrite handle_rt_signal64() to minimise uaccess switches

Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.

There is no 'unsafe' version of copy_siginfo_to_user, so move it
slightly to allow for a "longer" uaccess block.

Co-developed-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-9-cmr@codefail.de
3 years ago  powerpc/signal64: Replace restore_sigcontext() w/ unsafe_restore_sigcontext()
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:56 +0000 (19:12 -0600)]
powerpc/signal64: Replace restore_sigcontext() w/ unsafe_restore_sigcontext()

Previously restore_sigcontext() performed a costly KUAP switch on every
uaccess operation. These repeated uaccess switches cause a significant
drop in signal handling performance.

Rewrite restore_sigcontext() to assume that a userspace read access
window is open by replacing all uaccess functions with their 'unsafe'
versions. Modify the callers to first open, call
unsafe_restore_sigcontext(), and then close the uaccess window.
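
The caller pattern described above, sketched (argument list assumed):

  if (!user_read_access_begin(uc, sizeof(*uc)))
      goto badframe;
  unsafe_restore_sigcontext(tsk, NULL, 1, &uc->uc_mcontext,
                            badframe_block);  /* jumps to label on fault */
  user_read_access_end();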

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-8-cmr@codefail.de
3 years ago  powerpc/signal64: Replace setup_sigcontext() w/ unsafe_setup_sigcontext()
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:55 +0000 (19:12 -0600)]
powerpc/signal64: Replace setup_sigcontext() w/ unsafe_setup_sigcontext()

Previously setup_sigcontext() performed a costly KUAP switch on every
uaccess operation. These repeated uaccess switches cause a significant
drop in signal handling performance.

Rewrite setup_sigcontext() to assume that a userspace write access window
is open by replacing all uaccess functions with their 'unsafe' versions.
Modify the callers to first open, call unsafe_setup_sigcontext() and
then close the uaccess window.

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-7-cmr@codefail.de
3 years ago  powerpc/signal64: Remove TM ifdefery in middle of if/else block
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:54 +0000 (19:12 -0600)]
powerpc/signal64: Remove TM ifdefery in middle of if/else block

Both rt_sigreturn() and handle_rt_signal_64() contain TM-related ifdefs
which break-up an if/else block. Provide stubs for the ifdef-guarded TM
functions and remove the need for an ifdef in rt_sigreturn().

Rework the remaining TM ifdef in handle_rt_signal64() similar to
commit f1cf4f93de2f ("powerpc/signal32: Remove ifdefery in middle of if/else").

Unlike in the commit for ppc32, the ifdef can't be removed entirely
since uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM.

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-6-cmr@codefail.de
3 years ago  powerpc: Reference parameter in MSR_TM_ACTIVE() macro
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:53 +0000 (19:12 -0600)]
powerpc: Reference parameter in MSR_TM_ACTIVE() macro

Unlike the other MSR_TM_* macros, MSR_TM_ACTIVE does not reference or
use its parameter unless CONFIG_PPC_TRANSACTIONAL_MEM is defined. This
causes an 'unused variable' compile warning unless the variable is also
guarded with CONFIG_PPC_TRANSACTIONAL_MEM.

Reference but do nothing with the argument in the macro to avoid a
potential compile warning.
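
That is, the non-TM definition becomes something like (sketch):

  #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
  #define MSR_TM_ACTIVE(x)  (((x) & MSR_TS_MASK) != 0)
  #else
  #define MSR_TM_ACTIVE(x)  ((void)(x), 0)  /* evaluate x, yield 0 */
  #endif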

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-5-cmr@codefail.de
3 years ago  powerpc/signal64: Remove non-inline calls from setup_sigcontext()
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:52 +0000 (19:12 -0600)]
powerpc/signal64: Remove non-inline calls from setup_sigcontext()

The majority of setup_sigcontext() can be refactored to execute in an
"unsafe" context assuming an open uaccess window except for some
non-inline function calls. Move these out into a separate
prepare_setup_sigcontext() function which must be called first and
before opening up a uaccess window. Non-inline function calls should be
avoided during a uaccess window for a few reasons:

- KUAP should be enabled for as much kernel code as possible.
  Opening a uaccess window disables KUAP which means any code
  executed during this time contributes to a potential attack
  surface.

- Non-inline functions default to traceable which means they are
  instrumented for ftrace. This adds more code which could run
  with KUAP disabled.

- Powerpc does not currently support the objtool UACCESS checks.
  All code running with uaccess must be audited manually which
  means: less code -> less work -> fewer problems (in theory).

A follow-up commit converts setup_sigcontext() to be "unsafe".
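
The resulting caller sequence, sketched (argument list elided):

  prepare_setup_sigcontext(tsk);  /* non-inline calls, KUAP still enabled */
  if (!user_write_access_begin(frame, sizeof(*frame)))
      goto badframe;
  unsafe_setup_sigcontext(&frame->uc.uc_mcontext, tsk, /* ... */, failed);
  user_write_access_end();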

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-4-cmr@codefail.de
3 years ago  powerpc/signal: Add unsafe_copy_{vsx, fpr}_from_user()
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:51 +0000 (19:12 -0600)]
powerpc/signal: Add unsafe_copy_{vsx, fpr}_from_user()

Reuse the "safe" implementation from signal.c but call unsafe_get_user()
directly in a loop to avoid the intermediate copy into a local buffer.

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-3-cmr@codefail.de
3 years ago  powerpc/uaccess: Add unsafe_copy_from_user()
Christopher M. Riedl [Sat, 27 Feb 2021 01:12:50 +0000 (19:12 -0600)]
powerpc/uaccess: Add unsafe_copy_from_user()

Use the same approach as unsafe_copy_to_user() but instead call
unsafe_get_user() in a loop.
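
A simplified sketch of the approach (the real macro may handle the
4/2/1-byte tails separately rather than byte-by-byte):

  #define unsafe_copy_from_user(d, s, l, e)                                  \
  do {                                                                       \
          u8 *_dst = (u8 *)(d);                                              \
          const u8 __user *_src = (const u8 __user *)(s);                    \
          size_t _len = (l);                                                 \
          size_t _i;                                                         \
                                                                             \
          /* copy in u64 chunks while we can, then finish the tail */       \
          for (_i = 0; _i < (_len & ~(sizeof(u64) - 1)); _i += sizeof(u64))  \
                  unsafe_get_user(*(u64 *)(_dst + _i),                       \
                                  (u64 __user *)(_src + _i), e);             \
          for (; _i < _len; _i++)                                            \
                  unsafe_get_user(_dst[_i], &_src[_i], e);                   \
  } while (0)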

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210227011259.11992-2-cmr@codefail.de
3 years agopowerpc/qspinlock: Use generic smp_cond_load_relaxed
Davidlohr Bueso [Tue, 9 Mar 2021 01:59:50 +0000 (17:59 -0800)]
powerpc/qspinlock: Use generic smp_cond_load_relaxed

49a7d46a06c3 (powerpc: Implement smp_cond_load_relaxed()) added
busy-wait pausing with a preferred SMT priority pattern, lowering the
thread priority (reducing decode cycles) for the whole loop slowpath.

However, data shows that while this pattern works well with simple
spinlocks, queued spinlocks benefit more from being kept at medium
priority, using a plain cpu_relax() instead, which on powerpc is a
low+medium combination.
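
For reference, the generic fallback in include/asm-generic/barrier.h
is roughly:

  #define smp_cond_load_relaxed(ptr, cond_expr) ({        \
          typeof(ptr) __PTR = (ptr);                      \
          __unqual_scalar_typeof(*ptr) VAL;               \
          for (;;) {                                      \
                  VAL = READ_ONCE(*__PTR);                \
                  if (cond_expr)                          \
                          break;                          \
                  cpu_relax();                            \
          }                                               \
          (typeof(*ptr))VAL;                              \
  })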

Data is from three benchmarks on a Power9 9008-22L: 64 CPUs across
2 sockets with 8 threads per core.

1. locktorture.

This is data for the lowest and most artificial/pathological level,
with increasing thread counts pounding on the lock. Metrics are total
ops/minute. Despite some small hits in the 4-8 thread range, scenarios
are either neutral or favorable to this patch.

+=========+==========+==========+=======+
| # tasks | vanilla  | dirty    | %diff |
+=========+==========+==========+=======+
| 2       | 46718565 | 48751350 | 4.35  |
+---------+----------+----------+-------+
| 4       | 51740198 | 50369082 | -2.65 |
+---------+----------+----------+-------+
| 8       | 63756510 | 62568821 | -1.86 |
+---------+----------+----------+-------+
| 16      | 67824531 | 70966546 | 4.63  |
+---------+----------+----------+-------+
| 32      | 53843519 | 61155508 | 13.58 |
+---------+----------+----------+-------+
| 64      | 53005778 | 53104412 | 0.18  |
+---------+----------+----------+-------+
| 128     | 53331980 | 54606910 | 2.39  |
+=========+==========+==========+=======+

2. sockperf (tcp throughput)

Here a client performs one-way throughput tests against a localhost
server, with increasing message sizes, exercising the sk_lock. This
patch brings qspinlock performance back to par with the simple lock:

                  simple-spinlock          vanilla                  dirty
Hmean     14        73.50 (   0.00%)       54.44 * -25.93%*       73.45 * -0.07%*
Hmean     100      654.47 (   0.00%)      385.61 * -41.08%*      771.43 * 17.87%*
Hmean     300     2719.39 (   0.00%)     2181.67 * -19.77%*     2666.50 * -1.94%*
Hmean     500     4400.59 (   0.00%)     3390.77 * -22.95%*     4322.14 * -1.78%*
Hmean     850     6726.21 (   0.00%)     5264.03 * -21.74%*     6863.12 * 2.04%*

3. dbench (tmpfs)

Configured to run with up to ncpus*8 clients, it reports both latency
and throughput metrics. For latency, with the exception of the
64-client case, there is really nothing to go by:
     vanilla                dirty
Amean     latency-1          1.67 (   0.00%)        1.67 *   0.09%*
Amean     latency-2          2.15 (   0.00%)        2.08 *   3.36%*
Amean     latency-4          2.50 (   0.00%)        2.56 *  -2.27%*
Amean     latency-8          2.49 (   0.00%)        2.48 *   0.31%*
Amean     latency-16         2.69 (   0.00%)        2.72 *  -1.37%*
Amean     latency-32         2.96 (   0.00%)        3.04 *  -2.60%*
Amean     latency-64         7.78 (   0.00%)        8.17 *  -5.07%*
Amean     latency-512      186.91 (   0.00%)      186.41 *   0.27%*

For dbench4 throughput (misleading but traditional) there is a small
but fairly consistent improvement:

     vanilla                dirty
Hmean     1        849.13 (   0.00%)      851.51 *   0.28%*
Hmean     2       1664.03 (   0.00%)     1663.94 *  -0.01%*
Hmean     4       3073.70 (   0.00%)     3104.29 *   1.00%*
Hmean     8       5624.02 (   0.00%)     5694.16 *   1.25%*
Hmean     16      9169.49 (   0.00%)     9324.43 *   1.69%*
Hmean     32     11969.37 (   0.00%)    12127.09 *   1.32%*
Hmean     64     15021.12 (   0.00%)    15243.14 *   1.48%*
Hmean     512    14891.27 (   0.00%)    15162.11 *   1.82%*

Measuring dbench4 per-VFS-operation latency shows only very minor
differences within the noise level, in the 0-1% range.

Fixes: 49a7d46a06c3 ("powerpc: Implement smp_cond_load_relaxed()")
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318204702.71417-1-dave@stgolabs.net
3 years agopowerpc/spinlock: Unserialize spin_is_locked
Davidlohr Bueso [Tue, 9 Mar 2021 01:59:49 +0000 (17:59 -0800)]
powerpc/spinlock: Unserialize spin_is_locked

c6f5d02b6a0f (locking/spinlocks/arm64: Remove smp_mb() from
arch_spin_is_locked()) made it pretty official that the call's
semantics do not imply any sort of barrier, and any user that gets
creative must do any serialization explicitly.

This creativity, however, is nowadays pretty limited:

1. spin_unlock_wait() has been removed from the kernel in favor
of a lock/unlock combo. Furthermore, queued spinlocks have for a
number of years no longer relied on _Q_LOCKED_VAL for this check,
but on any non-zero value to indicate a locked state. There were
cases where the delayed locked store could break mutual exclusion
with crossed locking; sysv ipc and netfilter were the most extreme.

2. The audit Andrea did verified that the remaining spin_is_locked()
callers no longer rely on such semantics. Most callers just use it to
assert that a lock is taken, for debugging purposes. The only user
that gets cute is the NOLOCK qdisc, as of:

   96009c7d500e (sched: replace __QDISC_STATE_RUNNING bit with a spin lock)

... which ironically went in the day after c6f5d02b6a0f. That change
replaced test_bit() with spin_is_locked() to decide whether to take
the busylock heuristic, reducing contention on the main qdisc lock.
So any races against spin_is_locked() on archs that use LL/SC for
spin_lock() will be benign and not break mutual exclusion;
furthermore, both the seqlock and busylock have the same scope.
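
The unserialized check then reduces to something like this (sketch of
the simple-spinlock case):

  static inline int arch_spin_is_locked(arch_spinlock_t *lock)
  {
          /* no smp_mb(): the API implies no ordering, per c6f5d02b6a0f */
          return !arch_spin_value_unlocked(READ_ONCE(*lock));
  }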

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210309015950.27688-3-dave@stgolabs.net
3 years agopowerpc/spinlock: Define smp_mb__after_spinlock only once
Davidlohr Bueso [Tue, 9 Mar 2021 01:59:48 +0000 (17:59 -0800)]
powerpc/spinlock: Define smp_mb__after_spinlock only once

Instead of both the queued and simple spinlock implementations
defining it, move it into the arch's spinlock.h.
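
The shared definition is a one-liner (sketch):

  /* in arch/powerpc/include/asm/spinlock.h, shared by both implementations */
  #define smp_mb__after_spinlock()  smp_mb()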

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210309015950.27688-2-dave@stgolabs.net
3 years agopowerpc/ptrace: Convert gpr32_set_common() to user access block
Christophe Leroy [Wed, 10 Mar 2021 17:57:07 +0000 (17:57 +0000)]
powerpc/ptrace: Convert gpr32_set_common() to user access block

Use a user access block in gpr32_set_common() instead of repetitive
__get_user() calls, which imply repeated KUAP open/close cycles.

To keep it clean, force inlining of the small set of tiny functions
called inside the block.
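
A hypothetical shape of the conversion (not the exact kernel code;
bound and label names assumed):

  if (!user_read_access_begin(u, count))
          return -EFAULT;
  for (; count > 0 && pos < PT_MSR; --count) {
          unsafe_get_user(reg, u++, Efault);      /* no per-access KUAP churn */
          regs[pos++] = reg;
  }
  user_read_access_end();
  return 0;
  Efault:
          user_read_access_end();
          return -EFAULT;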

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bdcb8652c3bb4ab5b8b3bfd08147434be8fc04c9.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/futex: Switch to user_access block
Christophe Leroy [Wed, 10 Mar 2021 17:57:06 +0000 (17:57 +0000)]
powerpc/futex: Switch to user_access block

Use user_access_begin() instead of the access_ok/allow_access sequence.

This brings the missing might_fault() check.
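
Sketched pattern for arch_futex_atomic_op_inuser() (illustrative only):

  if (!user_access_begin(uaddr, sizeof(u32)))  /* access_ok + might_fault + KUAP */
          return -EFAULT;
  /* ... the LL/SC atomic op on *uaddr runs with the uaccess window open ... */
  user_access_end();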

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6cd202cdc4f939d47822e4ddd3c0856210431a58.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/net: Switch csum_and_copy_{to/from}_user to user_access block
Christophe Leroy [Wed, 10 Mar 2021 17:57:05 +0000 (17:57 +0000)]
powerpc/net: Switch csum_and_copy_{to/from}_user to user_access block

Use user_access_begin() instead of the
might_sleep/access_ok/allow_access sequence.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2dee286d2d6dc9a27d99e31ac564bad4fae2cb49.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/lib: Don't use __put_user_asm_goto() outside of uaccess.h
Christophe Leroy [Wed, 10 Mar 2021 17:57:04 +0000 (17:57 +0000)]
powerpc/lib: Don't use __put_user_asm_goto() outside of uaccess.h

__put_user_asm_goto() is internal to uaccess.h and should not be used
outside of it.

Use __put_kernel_nofault() instead. The generated code is identical.
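
In code patching this becomes, roughly (variable names assumed from
context):

  /* was: __put_user_asm_goto(instr, patch_addr, failed, "stw"); */
  __put_kernel_nofault(patch_addr, &instr, u32, failed);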

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3e32c4f0361933909368b68f5ee569e5de661c1b.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/syscalls: Use sys_old_select() in ppc_select()
Christophe Leroy [Wed, 10 Mar 2021 17:57:03 +0000 (17:57 +0000)]
powerpc/syscalls: Use sys_old_select() in ppc_select()

Instead of open-coding the copy of parameters, use the generic
sys_old_select().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4de983ad254739da1fe6e9f273baf387b7043ae0.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/uaccess: Move copy_mc_xxx() functions down
Christophe Leroy [Wed, 10 Mar 2021 17:57:02 +0000 (17:57 +0000)]
powerpc/uaccess: Move copy_mc_xxx() functions down

The copy_mc_xxx() functions sit in the middle of the raw_copy
functions.

For clarity, move them out of the raw_copy functions block.

They use access_ok(), so they need to come after the general functions
in order to eventually allow including asm-generic/uaccess.h at some
point in the future.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2cdecb6e5a2fcee6c158d18dd254b71ec0e0da4d.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/uaccess: Swap clear_user() and __clear_user()
Christophe Leroy [Wed, 10 Mar 2021 17:57:01 +0000 (17:57 +0000)]
powerpc/uaccess: Swap clear_user() and __clear_user()

It is clear_user() which is expected to call __clear_user(),
not the reverse.
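
After the swap, the expected layering looks roughly like:

  static inline unsigned long clear_user(void __user *addr, unsigned long size)
  {
          unsigned long ret = size;

          might_fault();
          if (likely(access_ok(addr, size)))
                  ret = __clear_user(addr, size);  /* raw variant does the work */
          return ret;
  }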

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d8ec01fb22f33d87321451d5e5f01cb56dacaa39.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/uaccess: Also perform 64 bits copies in unsafe_copy_to_user() on ppc32
Christophe Leroy [Wed, 10 Mar 2021 17:57:00 +0000 (17:57 +0000)]
powerpc/uaccess: Also perform 64 bits copies in unsafe_copy_to_user() on ppc32

ppc32 has an efficient 64-bit __put_user(), so use it as well in order
to unroll loops further.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ccc08a16eea682d6fa4acc957ffe34003a8f0844.1615398498.git.christophe.leroy@csgroup.eu
3 years agopowerpc/pseries: export LPAR security flavor in lparcfg
Laurent Dufour [Fri, 5 Mar 2021 12:55:54 +0000 (13:55 +0100)]
powerpc/pseries: export LPAR security flavor in lparcfg

This makes it possible to read the security flavor from inside the
LPAR.

In /sys/kernel/debug/powerpc/security_features it can be seen whether
mitigations are on or off, but not the level set through the ASMI menu.
Furthermore, reporting it through /proc/powerpc/lparcfg allows easy
processing by the lparstat command [1].

Export it like this in /proc/powerpc/lparcfg:

  $ grep security_flavor /proc/powerpc/lparcfg
  security_flavor=1

The value follows what is documented on the IBM support page [2]:

  0 Speculative execution fully enabled
  1 Speculative execution controls to mitigate user-to-kernel attacks
  2 Speculative execution controls to mitigate user-to-kernel and
    user-to-user side-channel attacks

[1] https://groups.google.com/g/powerpc-utils-devel/c/NaKXvdyl_UI/m/wa2stpIDAQAJ
[2] https://www.ibm.com/support/pages/node/715841
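
The export itself is a one-line addition to the lparcfg seq file,
roughly (variable name assumed):

  seq_printf(m, "security_flavor=%u\n", pseries_security_flavor);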

Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210305125554.5165-1-ldufour@linux.ibm.com
3 years agopowerpc: Enable KFENCE for PPC32
Christophe Leroy [Thu, 4 Mar 2021 14:35:09 +0000 (14:35 +0000)]
powerpc: Enable KFENCE for PPC32

Add architecture specific implementation details for KFENCE and enable
KFENCE for the ppc32 architecture. In particular, this implements the
required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can be
set individually. Therefore, force the Read/Write linear map to be
mapped at page granularity.
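
A sketch of the <asm/kfence.h> interface on ppc32 (close to the patch,
but details may differ):

  static inline bool arch_kfence_init_pool(void)
  {
          return true;
  }

  static inline bool kfence_protect_page(unsigned long addr, bool protect)
  {
          pte_t *kpte = virt_to_kpte(addr);

          if (protect) {
                  /* clear _PAGE_PRESENT so the next access faults */
                  pte_update(&init_mm, addr, kpte, _PAGE_PRESENT, 0, 0);
                  flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
          } else {
                  pte_update(&init_mm, addr, kpte, 0, _PAGE_PRESENT, 0);
          }

          return true;
  }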

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8dfe1bd2abde26337c1d8c1ad0acfcc82185e0d5.1614868445.git.christophe.leroy@csgroup.eu
3 years agopowerpc/ptrace: Remove duplicate check from pt_regs_check()
Denis Efremov [Fri, 5 Mar 2021 11:28:07 +0000 (14:28 +0300)]
powerpc/ptrace: Remove duplicate check from pt_regs_check()

"offsetof(struct pt_regs, msr) == offsetof(struct user_pt_regs, msr)"
checked in pt_regs_check() twice in a row. Remove the second check.

Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210305112807.26299-1-efremov@linux.com
3 years agopowerpc/pseries: Move hvc_vio_init_early() prototype to shared location
Lee Jones [Wed, 3 Mar 2021 12:46:03 +0000 (12:46 +0000)]
powerpc/pseries: Move hvc_vio_init_early() prototype to shared location

Fixes the following W=1 kernel build warning(s):

 drivers/tty/hvc/hvc_vio.c:385:13: warning: no previous prototype for ‘hvc_vio_init_early’
 385 | void __init hvc_vio_init_early(void)
 | ^~~~~~~~~~~~~~~~~~

Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210303124603.3150175-1-lee.jones@linaro.org
3 years agopowerpc: Fix misspellings in tlbflush.h
Zhang Yunkai [Thu, 4 Mar 2021 03:13:18 +0000 (19:13 -0800)]
powerpc: Fix misspellings in tlbflush.h

The comment marking the end of the include guard is wrong, fix it up.

Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn>
[mpe: Rewrite commit message]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210304031318.188447-1-zhang.yunkai@zte.com.cn
3 years agopowerpc: Remove duplicate includes
Zhang Yunkai [Thu, 4 Mar 2021 04:49:43 +0000 (20:49 -0800)]
powerpc: Remove duplicate includes

asm/tm.h included in traps.c is duplicated. It is also included on
the 62nd line.

asm/udbg.h included in setup-common.c is duplicated. It is also
included on the 61st line.

asm/bug.h included in arch/powerpc/include/asm/book3s/64/mmu-hash.h
is duplicated. It is also included on the 12th line.

asm/tlbflush.h included in arch/powerpc/include/asm/pgtable.h is
duplicated. It is also included on the 11th line.

asm/page.h included in arch/powerpc/include/asm/thread_info.h is
duplicated. It is also included on the 13th line.

Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn>
[mpe: Squash together from multiple commits]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
3 years agopowerpc/prom: Mark identical_pvr_fixup as __init
Nathan Chancellor [Tue, 2 Mar 2021 20:08:29 +0000 (13:08 -0700)]
powerpc/prom: Mark identical_pvr_fixup as __init

If identical_pvr_fixup() is not inlined, there are two modpost warnings:

WARNING: modpost: vmlinux.o(.text+0x54e8): Section mismatch in reference
from the function identical_pvr_fixup() to the function
.init.text:of_get_flat_dt_prop()
The function identical_pvr_fixup() references
the function __init of_get_flat_dt_prop().
This is often because identical_pvr_fixup lacks a __init
annotation or the annotation of of_get_flat_dt_prop is wrong.

WARNING: modpost: vmlinux.o(.text+0x551c): Section mismatch in reference
from the function identical_pvr_fixup() to the function
.init.text:identify_cpu()
The function identical_pvr_fixup() references
the function __init identify_cpu().
This is often because identical_pvr_fixup lacks a __init
annotation or the annotation of identify_cpu is wrong.

identical_pvr_fixup() calls two functions marked as __init and is only
called by a function marked as __init so it should be marked as __init
as well. At the same time, remove the inline keyword as it is not
necessary to inline this function. The compiler is still free to do so
if it deems it worthwhile, since commit 889b3c1245de ("compiler:
remove CONFIG_OPTIMIZE_INLINING entirely").

Fixes: 14b3d926a22b ("[POWERPC] 4xx: update 440EP(x)/440GR(x) identical PVR issue workaround")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://github.com/ClangBuiltLinux/linux/issues/1316
Link: https://lore.kernel.org/r/20210302200829.2680663-1-nathan@kernel.org
3 years agopowerpc/fadump: Mark fadump_calculate_reserve_size as __init
Nathan Chancellor [Tue, 2 Mar 2021 19:50:14 +0000 (12:50 -0700)]
powerpc/fadump: Mark fadump_calculate_reserve_size as __init

If fadump_calculate_reserve_size() is not inlined, there is a modpost
warning:

WARNING: modpost: vmlinux.o(.text+0x5196c): Section mismatch in
reference from the function fadump_calculate_reserve_size() to the
function .init.text:parse_crashkernel()
The function fadump_calculate_reserve_size() references
the function __init parse_crashkernel().
This is often because fadump_calculate_reserve_size lacks a __init
annotation or the annotation of parse_crashkernel is wrong.

fadump_calculate_reserve_size() calls parse_crashkernel(), which is
marked as __init and fadump_calculate_reserve_size() is called from
within fadump_reserve_mem(), which is also marked as __init.

Mark fadump_calculate_reserve_size() as __init to fix the section
mismatch. Additionally, remove the inline keyword as it is not necessary
to inline this function; the compiler is still free to do so if it feels
it is worthwhile since commit 889b3c1245de ("compiler: remove
CONFIG_OPTIMIZE_INLINING entirely").

Fixes: 11550dc0a00b ("powerpc/fadump: reuse crashkernel parameter for fadump memory reservation")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://github.com/ClangBuiltLinux/linux/issues/1300
Link: https://lore.kernel.org/r/20210302195013.2626335-1-nathan@kernel.org
3 years agoselftests/powerpc: Fix L1D flushing tests for Power10
Russell Currey [Tue, 23 Feb 2021 07:02:27 +0000 (17:02 +1000)]
selftests/powerpc: Fix L1D flushing tests for Power10

The rfi_flush and entry_flush selftests work by using the PM_LD_MISS_L1
perf event to count L1D misses.  The value of this event has changed
over time:

- Power7 uses 0x400f0
- Power8 and Power9 use both 0x400f0 and 0x3e054
- Power10 uses only 0x3e054

Rather than relying on raw values, configure perf to count L1D read
misses in the most explicit way available.

This fixes the selftests to work on systems without 0x400f0 as
PM_LD_MISS_L1, and should change no behaviour for systems that the tests
already worked on.

The only potential downside is that referring to a specific perf event
requires PMU support implemented in the kernel for that platform.
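
One explicit way to request L1D read misses is the generic cache event
encoding (sketch):

  struct perf_event_attr attr = {
          .type   = PERF_TYPE_HW_CACHE,
          /* cache id | (op id << 8) | (result id << 16) */
          .config = PERF_COUNT_HW_CACHE_L1D |
                    (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                    (PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
  };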

Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210223070227.2916871-1-ruscur@russell.cc
3 years agopowerpc: Fix spelling of "droping" to "dropping" in traps.c
Bhaskar Chowdhury [Wed, 24 Feb 2021 07:55:47 +0000 (13:25 +0530)]
powerpc: Fix spelling of "droping" to "dropping" in traps.c

s/droping/dropping/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210224075547.763063-1-unixbhaskar@gmail.com
3 years agopowerpc: remove unneeded semicolon
Jiapeng Chong [Wed, 24 Feb 2021 07:29:21 +0000 (15:29 +0800)]
powerpc: remove unneeded semicolon

Fix the following coccicheck warnings:

./arch/powerpc/kernel/prom_init.c:2986:2-3: Unneeded semicolon.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1614151761-53721-1-git-send-email-jiapeng.chong@linux.alibaba.com
3 years agopowerpc/chrp: Make hydra_init() static
Geert Uytterhoeven [Tue, 23 Feb 2021 09:53:45 +0000 (10:53 +0100)]
powerpc/chrp: Make hydra_init() static

Commit 407d418f2fd4c20a ("powerpc/chrp: Move PHB discovery") moved the
sole call to hydra_init() to the source file where it is defined, so it
can be made static.

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210223095345.2139416-1-geert@linux-m68k.org
3 years agopowerpc/mm: Move the linear_mapping_mutex to the ifdef where it is used
Sebastian Andrzej Siewior [Fri, 19 Feb 2021 16:56:48 +0000 (17:56 +0100)]
powerpc/mm: Move the linear_mapping_mutex to the ifdef where it is used

The mutex linear_mapping_mutex is defined at the top of the file,
while its only two users are within the CONFIG_MEMORY_HOTPLUG block.
A compile without CONFIG_MEMORY_HOTPLUG set fails on PREEMPT_RT
because its mutex implementation is smart enough to notice that the
mutex is unused.

Move the definition of linear_mapping_mutex into the ifdef block where
it is used.

Fixes: 1f73ad3e8d755 ("powerpc/mm: print warning in arch_remove_linear_mapping()")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210219165648.2505482-1-bigeasy@linutronix.de