linux-2.6-microblaze.git
3 years ago powerpc/64s: syscall real mode entry use mtmsrd rather than rfid
Nicholas Piggin [Mon, 8 Feb 2021 06:33:26 +0000 (16:33 +1000)]
powerpc/64s: syscall real mode entry use mtmsrd rather than rfid

Have the real mode system call entry handler branch to the kernel
0xc000... address and then use mtmsrd to enable the MMU, rather than use
SRRs and rfid.

Commit 8729c26e675c ("powerpc/64s/exception: Move real to virt switch
into the common handler") implemented this style of real mode entry for
other interrupt handlers, so this brings system calls into line with
them, which is the main motivation for the change.

This tends to be slightly faster due to avoiding the mtsprs, and it also
does not clobber the SRR registers, which becomes important in a
subsequent change. The real mode entry points don't tend to be too
important for performance these days, but it is possible for a
hypervisor to run guests in AIL=0 mode for certain reasons.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208063326.331502-1-npiggin@gmail.com
3 years ago powerpc/kuap: Restore AMR after replaying soft interrupts
Alexey Kardashevskiy [Tue, 2 Feb 2021 09:15:41 +0000 (20:15 +1100)]
powerpc/kuap: Restore AMR after replaying soft interrupts

Since de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace
Access Protection"), user access helpers call user_{read|write}_access_{begin|end}
when user space access is allowed.

Commit 890274c2dc4c ("powerpc/64s: Implement KUAP for Radix MMU") made
the mentioned helpers program the AMR special register to allow such
access for a short period of time; most of the time the AMR is expected
to block user memory access by the kernel.

Since the code accesses the user space memory, unsafe_get_user() calls
might_fault() which calls arch_local_irq_restore() if either
CONFIG_PROVE_LOCKING or CONFIG_DEBUG_ATOMIC_SLEEP is enabled.
arch_local_irq_restore() then attempts to replay pending soft
interrupts as KUAP regions have hardware interrupts enabled.

If a pending interrupt happens to do user access (performance
interrupts do that), it enables access for a short period of time and
then blocks it again, so after returning from the replay the user
access state remains blocked. If a user page fault then happens,
"Bug: Read fault blocked by AMR!" appears and SIGSEGV is sent.

An example trace:
  Bug: Read fault blocked by AMR!
  WARNING: CPU: 0 PID: 1603 at /home/aik/p/kernel/arch/powerpc/include/asm/book3s/64/kup-radix.h:145
  CPU: 0 PID: 1603 Comm: amr Not tainted 5.10.0-rc6_v5.10-rc6_a+fstn1 #24
  NIP:  c00000000009ece8 LR: c00000000009ece4 CTR: 0000000000000000
  REGS: c00000000dc63560 TRAP: 0700   Not tainted  (5.10.0-rc6_v5.10-rc6_a+fstn1)
  MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28002888  XER: 20040000
  CFAR: c0000000001fa928 IRQMASK: 1
  GPR00: c00000000009ece4 c00000000dc637f0 c000000002397600 000000000000001f
  GPR04: c0000000020eb318 0000000000000000 c00000000dc63494 0000000000000027
  GPR08: c00000007fe4de68 c00000000dfe9180 0000000000000000 0000000000000001
  GPR12: 0000000000002000 c0000000030a0000 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 bfffffffffffffff
  GPR20: 0000000000000000 c0000000134a4020 c0000000019c2218 0000000000000fe0
  GPR24: 0000000000000000 0000000000000000 c00000000d106200 0000000040000000
  GPR28: 0000000000000000 0000000000000300 c00000000dc63910 c000000001946730
  NIP __do_page_fault+0xb38/0xde0
  LR  __do_page_fault+0xb34/0xde0
  Call Trace:
    __do_page_fault+0xb34/0xde0 (unreliable)
    handle_page_fault+0x10/0x2c
  --- interrupt: 300 at strncpy_from_user+0x290/0x440
      LR = strncpy_from_user+0x284/0x440
    strncpy_from_user+0x2f0/0x440 (unreliable)
    getname_flags+0x88/0x2c0
    do_sys_openat2+0x2d4/0x5f0
    do_sys_open+0xcc/0x140
    system_call_exception+0x160/0x240
    system_call_common+0xf0/0x27c

To fix it, save/restore the AMR when replaying interrupts, and also
add a check that the AMR was blocked prior to replaying interrupts.
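
A minimal sketch of that shape (function and helper names assumed
from the radix KUAP implementation, not guaranteed verbatim):

  static inline void replay_soft_interrupts_irqrestore(void)
  {
          unsigned long kuap_state = get_kuap();

          /* Force user access to be blocked while replaying. */
          if (kuap_state != AMR_KUAP_BLOCKED)
                  set_kuap(AMR_KUAP_BLOCKED);

          replay_soft_interrupts();

          /* Put back whatever the interrupted context had. */
          if (kuap_state != AMR_KUAP_BLOCKED)
                  set_kuap(kuap_state);
  }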

Originally found by syzkaller.

Fixes: 890274c2dc4c ("powerpc/64s: Implement KUAP for Radix MMU")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Use normal commit citation format and add full oops log to
      change log, move kuap_check_amr() into the restore routine to
      avoid warnings about unreconciled IRQ state]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210202091541.36499-1-aik@ozlabs.ru
3 years ago powerpc/uaccess: Avoid might_fault() when user access is enabled
Alexey Kardashevskiy [Mon, 8 Feb 2021 05:57:40 +0000 (16:57 +1100)]
powerpc/uaccess: Avoid might_fault() when user access is enabled

The amount of code executed with enabled user space access (unlocked
KUAP) should be minimal. However with CONFIG_PROVE_LOCKING or
CONFIG_DEBUG_ATOMIC_SLEEP enabled, might_fault() calls into various
parts of the kernel, and may even end up replaying interrupts which in
turn may access user space and forget to restore the KUAP state.

The problem places are:
  1. strncpy_from_user (and similar) which unlock KUAP and call
     unsafe_get_user -> __get_user_allowed -> __get_user_nocheck()
     with do_allow=false to skip KUAP as the caller took care of it.
  2. __unsafe_put_user_goto() which is called with unlocked KUAP.

eg:
  WARNING: CPU: 30 PID: 1 at arch/powerpc/include/asm/book3s/64/kup.h:324 arch_local_irq_restore+0x160/0x190
  NIP arch_local_irq_restore+0x160/0x190
  LR  lock_is_held_type+0x140/0x200
  Call Trace:
    0xc00000007f392ff8 (unreliable)
    ___might_sleep+0x180/0x320
    __might_fault+0x50/0xe0
    filldir64+0x2d0/0x5d0
    call_filldir+0xc8/0x180
    ext4_readdir+0x948/0xb40
    iterate_dir+0x1ec/0x240
    sys_getdents64+0x80/0x290
    system_call_exception+0x160/0x280
    system_call_common+0xf0/0x27c

Change __get_user_nocheck() to look at `do_allow` to decide whether to
skip might_fault(). Since strncpy_from_user/etc call might_fault()
anyway before unlocking KUAP, there should be no visible change.
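
A hedged sketch of the resulting shape; the real macro does the access
inline, and __do_get_user() below is a hypothetical stand-in for it:

  #define __get_user_nocheck(x, ptr, size, do_allow)            \
  ({                                                            \
          long __gu_err;                                        \
          if (do_allow)                                         \
                  might_fault();  /* KUAP still locked here */  \
          __gu_err = __do_get_user(x, ptr, size, do_allow);     \
          __gu_err;                                             \
  })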

Drop might_fault() in __unsafe_put_user_goto() as it is only called
from unsafe_put_user(), which already has KUAP unlocked.

Since keeping might_fault() is still desirable for debugging, add
calls to it in user_[read|write]_access_begin(). That also allows us
to drop the is_kernel_addr() test, because there should be no code
using user_[read|write]_access_begin() in order to access a kernel
address.

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[mpe: Combine with related patch from myself, merge change logs]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210204121612.32721-1-aik@ozlabs.ru
3 years ago powerpc/uaccess: Simplify unsafe_put_user() implementation
Michael Ellerman [Mon, 8 Feb 2021 05:57:01 +0000 (16:57 +1100)]
powerpc/uaccess: Simplify unsafe_put_user() implementation

Currently unsafe_put_user() expands to __put_user_goto(), which
expands to __put_user_nocheck_goto().

There are no other uses of __put_user_nocheck_goto(), and although
there are some other uses of __put_user_goto() those could just use
unsafe_put_user().

Every layer of indirection introduces the possibility that some code
is calling that layer, and makes keeping track of the required
semantics at each point more complicated.

So drop __put_user_goto(), and rename __put_user_nocheck_goto() to
__unsafe_put_user_goto(). The "nocheck" is implied by "unsafe".

Replace the few uses of __put_user_goto() with unsafe_put_user().

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210208135717.2618798-1-mpe@ellerman.id.au
3 years ago powerpc/amigaone: Make amigaone_discover_phbs() static
Michael Ellerman [Wed, 10 Feb 2021 13:08:04 +0000 (00:08 +1100)]
powerpc/amigaone: Make amigaone_discover_phbs() static

It's only used in setup.c, so make it static.

Fixes: 053d58c87029 ("powerpc/amigaone: Move PHB discovery")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210210130804.3190952-3-mpe@ellerman.id.au
3 years ago powerpc/mm/64s: Fix no previous prototype warning
Michael Ellerman [Wed, 10 Feb 2021 13:08:03 +0000 (00:08 +1100)]
powerpc/mm/64s: Fix no previous prototype warning

As reported by lkp:

  arch/powerpc/mm/book3s64/radix_tlb.c:646:6: warning: no previous
  prototype for function 'exit_lazy_flush_tlb'

Fix it by moving the prototype into the existing header.

Fixes: 032b7f08932c ("powerpc/64s/radix: serialize_against_pte_lookup IPIs trim mm_cpumask")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210210130804.3190952-2-mpe@ellerman.id.au
3 years ago powerpc/83xx: Fix build error when CONFIG_PCI=n
Michael Ellerman [Wed, 10 Feb 2021 13:08:02 +0000 (00:08 +1100)]
powerpc/83xx: Fix build error when CONFIG_PCI=n

As reported by lkp:

  arch/powerpc/platforms/83xx/km83xx.c:183:19: error: 'mpc83xx_setup_pci' undeclared here (not in a function)
     183 |  .discover_phbs = mpc83xx_setup_pci,
         |                   ^~~~~~~~~~~~~~~~~
         |                   mpc83xx_setup_arch

There is a stub defined for the CONFIG_PCI=n case, but now that
mpc83xx_setup_pci() is being assigned to discover_phbs the correct
empty value is NULL.
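
The fix boils down to making the =n stub a NULL pointer rather than an
empty function, roughly:

  #ifdef CONFIG_PCI
  extern void mpc83xx_setup_pci(void);
  #else
  #define mpc83xx_setup_pci NULL  /* .discover_phbs wants a pointer */
  #endif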

Fixes: 83f84041ff1c ("powerpc/83xx: Move PHB discovery")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210210130804.3190952-1-mpe@ellerman.id.au
3 years ago powerpc: remove interrupt handler functions from the noinstr section
Nicholas Piggin [Thu, 11 Feb 2021 06:36:36 +0000 (16:36 +1000)]
powerpc: remove interrupt handler functions from the noinstr section

The allyesconfig ppc64 kernel fails to link with relocations unable to
fit after commit 3a96570ffceb ("powerpc: convert interrupt handlers to
use wrappers"), which is due to the interrupt handler functions being
put into the .noinstr.text section, which the linker script places on
the opposite side of the main .text section from the interrupt entry
asm code which calls the handlers.

This results in a lot of linker stubs that overwhelm the 252-byte sized
space we allow for them, or in the case of BE a .opd relocation link
error for some reason.

It's not required to put interrupt handlers in the .noinstr section,
previously they used NOKPROBE_SYMBOL, so take them out and replace
with a NOKPROBE_SYMBOL in the wrapper macro. Remove the explicit
NOKPROBE_SYMBOL macros in the interrupt handler functions. This makes
a number of interrupt handlers nokprobe that were not prior to the
interrupt wrappers commit, but since that commit they were made
nokprobe due to being in .noinstr.text, so this fix does not change
that.
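
Sketched, the wrapper macro simply gains the annotation (heavily
simplified here -- the real macro also emits the entry/exit wrapping):

  #define DEFINE_INTERRUPT_HANDLER(func)          \
          void func(struct pt_regs *regs);        \
          NOKPROBE_SYMBOL(func)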

The fixes tag is different to the commit that first exposes the problem
because it is where the wrapper macros were introduced.

Fixes: 8d41fc618ab8 ("powerpc: interrupt handler wrapper functions")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Slightly fix up comment wording]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210211063636.236420-1-npiggin@gmail.com
3 years ago powerpc/powernv/pci: Use kzalloc() for phb related allocations
Michael Ellerman [Thu, 11 Feb 2021 11:23:57 +0000 (22:23 +1100)]
powerpc/powernv/pci: Use kzalloc() for phb related allocations

As part of commit fbbefb320214 ("powerpc/pci: Move PHB discovery for
PCI_DN using platforms"), I switched some allocations from
memblock_alloc() to kmalloc(), otherwise memblock would warn that it
was being called after slab init.

However I missed that the code relied on the allocations being zeroed,
without which we could end up crashing:

  pci_bus 0000:00: busn_res: [bus 00-ff] end is updated to ff
  BUG: Unable to handle kernel data access on read at 0x6b6b6b6b6b6b6af7
  Faulting instruction address: 0xc0000000000dbc90
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
  ...
  NIP  pnv_ioda_get_pe_state+0xe0/0x1d0
  LR   pnv_ioda_get_pe_state+0xb4/0x1d0
  Call Trace:
    pnv_ioda_get_pe_state+0xb4/0x1d0 (unreliable)
    pnv_pci_config_check_eeh.isra.9+0x78/0x270
    pnv_pci_read_config+0xf8/0x160
    pci_bus_read_config_dword+0xa4/0x120
    pci_bus_generic_read_dev_vendor_id+0x54/0x270
    pci_scan_single_device+0xb8/0x140
    pci_scan_slot+0x80/0x1b0
    pci_scan_child_bus_extend+0x94/0x490
    pcibios_scan_phb+0x1f8/0x3c0
    pcibios_init+0x8c/0x12c
    do_one_initcall+0x94/0x510
    kernel_init_freeable+0x35c/0x3fc
    kernel_init+0x2c/0x168
    ret_from_kernel_thread+0x5c/0x70

Switch them to kzalloc().
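
In essence (PowerNV PHB setup path, call sites abbreviated):

  /* was kmalloc(); later code relies on the zeroed fields */
  phb = kzalloc(sizeof(*phb), GFP_KERNEL);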

Fixes: fbbefb320214 ("powerpc/pci: Move PHB discovery for PCI_DN using platforms")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210211112749.3410771-1-mpe@ellerman.id.au
3 years ago powerpc/64s: Handle program checks in wrong endian during early boot
Michael Ellerman [Tue, 2 Feb 2021 13:02:07 +0000 (00:02 +1100)]
powerpc/64s: Handle program checks in wrong endian during early boot

There's a short window during boot where although the kernel is
running little endian, any exceptions will cause the CPU to switch
back to big endian. This situation persists until we call
configure_exceptions(), which calls either the hypervisor or OPAL to
configure the CPU so that exceptions will be taken in little
endian (via HID0[HILE]).

We don't intend to take exceptions during early boot, but one way we
sometimes do is via a WARN/BUG etc. Those all boil down to a trap
instruction, which will cause a program check exception.

The first instruction of the program check handler is an mtsprg, which
when executed in the wrong endian is an lhzu with a ~3GB displacement
from r3. The content of r3 is random, so that becomes a load from some
random location, and depending on the system (installed RAM etc.) can
easily lead to a checkstop, or an infinitely recursive page fault.
That prevents whatever the WARN/BUG was complaining about being
printed to the console, and the user just sees a dead system.

We can fix it by having a trampoline at the beginning of the program
check handler that detects we are in the wrong endian, and flips us
back to the correct endian.

We can't flip MSR[LE] using mtmsr (alas), so we have to use rfid. That
requires backing up SRR0/1 as well as a GPR. To do that we use
SPRG0/2/3 (SPRG1 is already used for the paca). SPRG3 is user
readable, but this trampoline is only active very early in boot, and
SPRG3 will be reinitialised in vdso_getcpu_init() before userspace
starts.

With this trampoline in place we can survive a WARN early in boot and
print a stack trace, which is eventually printed to the console once
the console is up, eg:

  [83565.758545] kexec_core: Starting new kernel
  [    0.000000] ------------[ cut here ]------------
  [    0.000000] static_key_enable_cpuslocked(): static key '0xc000000000ea6160' used before call to jump_label_init()
  [    0.000000] WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:166 static_key_enable_cpuslocked+0xfc/0x120
  [    0.000000] Modules linked in:
  [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.10.0-gcc-8.2.0-dirty #618
  [    0.000000] NIP:  c0000000002fd46c LR: c0000000002fd468 CTR: c000000000170660
  [    0.000000] REGS: c000000001227940 TRAP: 0700   Not tainted  (5.10.0-gcc-8.2.0-dirty)
  [    0.000000] MSR:  9000000002823003 <SF,HV,VEC,VSX,FP,ME,RI,LE>  CR: 24882422  XER: 20040000
  [    0.000000] CFAR: 0000000000000730 IRQMASK: 1
  [    0.000000] GPR00: c0000000002fd468 c000000001227bd0 c000000001228300 0000000000000065
  [    0.000000] GPR04: 0000000000000001 0000000000000065 c0000000010cf970 000000000000000d
  [    0.000000] GPR08: 0000000000000000 0000000000000000 0000000000000000 c00000000122763f
  [    0.000000] GPR12: 0000000000002000 c000000000f8a980 0000000000000000 0000000000000000
  [    0.000000] GPR16: 0000000000000000 0000000000000000 c000000000f88c8e c000000000f88c9a
  [    0.000000] GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  [    0.000000] GPR24: 0000000000000000 c000000000dea3a8 0000000000000000 c000000000f35114
  [    0.000000] GPR28: 0000002800000000 c000000000f88c9a c000000000f88c8e c000000000ea6160
  [    0.000000] NIP [c0000000002fd46c] static_key_enable_cpuslocked+0xfc/0x120
  [    0.000000] LR [c0000000002fd468] static_key_enable_cpuslocked+0xf8/0x120
  [    0.000000] Call Trace:
  [    0.000000] [c000000001227bd0] [c0000000002fd468] static_key_enable_cpuslocked+0xf8/0x120 (unreliable)
  [    0.000000] [c000000001227c40] [c0000000002fd4c0] static_key_enable+0x30/0x50
  [    0.000000] [c000000001227c70] [c000000000f6629c] early_page_poison_param+0x58/0x9c
  [    0.000000] [c000000001227cb0] [c000000000f351b8] do_early_param+0xa4/0x10c
  [    0.000000] [c000000001227d30] [c00000000011e020] parse_args+0x270/0x5e0
  [    0.000000] [c000000001227e20] [c000000000f35864] parse_early_options+0x48/0x5c
  [    0.000000] [c000000001227e40] [c000000000f358d0] parse_early_param+0x58/0x84
  [    0.000000] [c000000001227e70] [c000000000f3a368] early_init_devtree+0xc4/0x490
  [    0.000000] [c000000001227f10] [c000000000f3bca0] early_setup+0xc8/0x1c8
  [    0.000000] [c000000001227f90] [000000000000c320] 0xc320
  [    0.000000] Instruction dump:
  [    0.000000] 4bfffddd 7c2004ac 39200001 913f0000 4bffffb8 7c651b78 3c82ffac 3c62ffc0
  [    0.000000] 38841b00 3863f310 4bdf03a5 60000000 <0fe00000> 4bffff38 60000000 60000000
  [    0.000000] random: get_random_bytes called from print_oops_end_marker+0x40/0x80 with crng_init=0
  [    0.000000] ---[ end trace 0000000000000000 ]---
  [    0.000000] dt-cpu-ftrs: setup for ISA 3000

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210202130207.1303975-2-mpe@ellerman.id.au
3 years ago powerpc/64: Make stack tracing work during very early boot
Michael Ellerman [Tue, 2 Feb 2021 13:02:06 +0000 (00:02 +1100)]
powerpc/64: Make stack tracing work during very early boot

If we try to stack trace very early during boot, either due to a
WARN/BUG or manual dump_stack(), we will oops in
valid_emergency_stack() when we try to dereference the paca_ptrs
array.

The fix is simple, we just return false if paca_ptrs isn't allocated
yet. The stack pointer definitely isn't part of any emergency stack
because we haven't allocated any yet.
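
A sketch of the added guard at the top of valid_emergency_stack()
(exact placement assumed):

  if (!paca_ptrs)         /* too early in boot: no emergency stacks yet */
          return false;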

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210202130207.1303975-1-mpe@ellerman.id.au
3 years ago powerpc64/idle: Fix SP offsets when saving GPRs
Christopher M. Riedl [Sat, 6 Feb 2021 07:23:42 +0000 (01:23 -0600)]
powerpc64/idle: Fix SP offsets when saving GPRs

The idle entry/exit code saves/restores GPRs in the stack "red zone"
(Protected Zone according to PowerPC64 ELF ABI v2). However, the offset
used for the first GPR is incorrect and overwrites the back chain - the
Protected Zone actually starts below the current SP. In practice this is
probably not an issue, but it's still incorrect so fix it.

Also expand the comments to explain why using the stack "red zone"
instead of creating a new stackframe is appropriate here.

Signed-off-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210206072342.5067-1-cmr@codefail.de
3 years ago powerpc/32s: Allow constant folding in mtsr()/mfsr()
Christophe Leroy [Sat, 6 Feb 2021 11:47:28 +0000 (11:47 +0000)]
powerpc/32s: Allow constant folding in mtsr()/mfsr()

In the same way as we did for wrtee(), add an alternative
using the mtsr/mfsr instructions instead of mtsrin/mfsrin
when the segment register can be determined at compile time.
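
A hedged sketch of the idea (operand encoding details are
assumptions): mtsr takes the segment number as an immediate, so the
compiler can fold a compile-time constant into the instruction.

  static inline void mtsr(u32 val, u32 idx)
  {
          if (__builtin_constant_p(idx))
                  asm volatile("mtsr %1, %0" : : "r" (val), "i" (idx >> 28));
          else
                  asm volatile("mtsrin %0, %1" : : "r" (val), "r" (idx));
  }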

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9baed0ff9d76723ec90f1b567ddd4ac1ecc7a190.1612612022.git.christophe.leroy@csgroup.eu
3 years ago powerpc/32s: mfsrin()/mtsrin() become mfsr()/mtsr()
Christophe Leroy [Sat, 6 Feb 2021 11:47:27 +0000 (11:47 +0000)]
powerpc/32s: mfsrin()/mtsrin() become mfsr()/mtsr()

Function names should tell what the function does, not how.

mfsrin() and mtsrin() are read/writing segment registers.

They are called that way because they are using mfsrin and mtsrin
instructions, but it doesn't matter for the caller.

In preparation for the following patch, rename them to mfsr() and mtsr()
in order to make it obvious they manipulate segment registers, without
tying the name to how they do it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f92d99f4349391b77766745900231aa880a0efb5.1612612022.git.christophe.leroy@csgroup.eu
3 years ago powerpc/32s: Change mfsrin() into a static inline function
Christophe Leroy [Sat, 6 Feb 2021 11:47:26 +0000 (11:47 +0000)]
powerpc/32s: Change mfsrin() into a static inline function

mfsrin() is a macro.

Change it into an inline function to avoid conflicts in KVM
and make it easier to evolve.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/72c7b9879e2e2e6f5c27dadda6486386c2b50f23.1612612022.git.christophe.leroy@csgroup.eu
3 years ago powerpc/uaccess: Perform barrier_nospec() in KUAP allowance helpers
Christophe Leroy [Sun, 7 Feb 2021 10:08:11 +0000 (10:08 +0000)]
powerpc/uaccess: Perform barrier_nospec() in KUAP allowance helpers

barrier_nospec() in uaccess helpers is there to protect against
speculative accesses around access_ok().

When using user_access_begin() sequences together with
unsafe_get_user()-like macros, barrier_nospec() is called for
every single read although we know the access_ok() check is done
only once.

Since all user accesses must be granted by a call to either
allow_read_from_user() or allow_read_write_user() which will
always happen after the access_ok() check, move the barrier_nospec()
there.
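
A sketch of where the barrier ends up (helper signature assumed from
the KUAP code of that era):

  static inline void allow_read_from_user(const void __user *from,
                                          unsigned long size)
  {
          barrier_nospec();       /* once per access_ok()-guarded window */
          allow_user_access(NULL, from, size, KUAP_READ);
  }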

Reported-by: Christopher M. Riedl <cmr@codefail.de>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c72f014730823b413528e90ab6c4d3bcb79f8497.1612692067.git.christophe.leroy@csgroup.eu
3 years ago powerpc/sstep: Fix darn emulation
Sandipan Das [Thu, 4 Feb 2021 08:07:44 +0000 (13:37 +0530)]
powerpc/sstep: Fix darn emulation

Commit 8813ff49607e ("powerpc/sstep: Check instruction validity
against ISA version before emulation") introduced a proper way to skip
unknown instructions. This makes sure that the same is used for the
darn instruction when the range selection bits have a reserved value.

Fixes: a23987ef267a ("powerpc: sstep: Add support for darn instruction")
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210204080744.135785-2-sandipan@linux.ibm.com
3 years ago powerpc/sstep: Fix load-store and update emulation
Sandipan Das [Thu, 4 Feb 2021 08:07:43 +0000 (13:37 +0530)]
powerpc/sstep: Fix load-store and update emulation

The Power ISA says that the fixed-point load and update instructions
must neither use R0 for the base address (RA) nor have the
destination (RT) and the base address (RA) as the same register.
Similarly, for fixed-point stores and floating-point loads and stores,
the instruction is invalid when R0 is used as the base address (RA).

This is applicable to the following instructions.
  * Load Byte and Zero with Update (lbzu)
  * Load Byte and Zero with Update Indexed (lbzux)
  * Load Halfword and Zero with Update (lhzu)
  * Load Halfword and Zero with Update Indexed (lhzux)
  * Load Halfword Algebraic with Update (lhau)
  * Load Halfword Algebraic with Update Indexed (lhaux)
  * Load Word and Zero with Update (lwzu)
  * Load Word and Zero with Update Indexed (lwzux)
  * Load Word Algebraic with Update Indexed (lwaux)
  * Load Doubleword with Update (ldu)
  * Load Doubleword with Update Indexed (ldux)
  * Load Floating Single with Update (lfsu)
  * Load Floating Single with Update Indexed (lfsux)
  * Load Floating Double with Update (lfdu)
  * Load Floating Double with Update Indexed (lfdux)
  * Store Byte with Update (stbu)
  * Store Byte with Update Indexed (stbux)
  * Store Halfword with Update (sthu)
  * Store Halfword with Update Indexed (sthux)
  * Store Word with Update (stwu)
  * Store Word with Update Indexed (stwux)
  * Store Doubleword with Update (stdu)
  * Store Doubleword with Update Indexed (stdux)
  * Store Floating Single with Update (stfsu)
  * Store Floating Single with Update Indexed (stfsux)
  * Store Floating Double with Update (stfdu)
  * Store Floating Double with Update Indexed (stfdux)

E.g. the following behaviour is observed for an invalid load and
update instruction having RA = RT.

While a userspace program having an instruction word like 0xe9ce0001,
i.e. ldu r14, 0(r14), runs without receiving a SIGILL on a
Power system (observed on P8 and P9), the outcome of executing that
instruction word varies and its behaviour can be considered to be
undefined.

Attaching an uprobe at that instruction's address results in emulation
which currently performs the load as well as writes the effective
address back to the base register. This might not match the outcome
from hardware.

To remove any inconsistencies, this adds additional checks for the
aforementioned instructions to make sure that the emulation
infrastructure treats them as unknown. The kernel can then fallback to
executing such instructions on hardware.
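
Sketched, the added validity check for a load-with-update looks like
this (field extraction and label name are assumptions based on
sstep.c conventions):

  ra = (word >> 16) & 0x1f;       /* base address register */
  rd = (word >> 21) & 0x1f;       /* target register */
  if (ra == 0 || ra == rd)        /* invalid form: RA=0 or RA=RT */
          goto unknown_opcode;    /* don't emulate; run it on hardware */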

Fixes: 0016a4cf5582 ("powerpc: Emulate most Book I instructions in emulate_step()")
Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210204080744.135785-1-sandipan@linux.ibm.com
3 years ago powerpc/8xx: Fix software emulation interrupt
Christophe Leroy [Fri, 5 Feb 2021 08:56:13 +0000 (08:56 +0000)]
powerpc/8xx: Fix software emulation interrupt

For unimplemented instructions or unimplemented SPRs, the 8xx triggers
a "Software Emulation Exception" (0x1000). That interrupt doesn't set
reason bits in SRR1 as the "Program Check Exception" does.

Go through emulation_assist_interrupt() to set REASON_ILLEGAL.

Fixes: fbbcc3bb139e ("powerpc/8xx: Remove SoftwareEmulation()")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ad782af87a222efc79cfb06079b0fd23d4224eaf.1612515180.git.christophe.leroy@csgroup.eu
3 years ago powerpc/perf: Record counter overflow always if SAMPLE_IP is unset
Athira Rajeev [Fri, 5 Feb 2021 09:14:52 +0000 (04:14 -0500)]
powerpc/perf: Record counter overflow always if SAMPLE_IP is unset

While sampling for marked events, currently we record the sample only
if the SIAR valid bit of Sampled Instruction Event Register (SIER) is
set. The SIAR_VALID bit is used for fetching the instruction address
from the Sampled Instruction Address Register (SIAR). But there are some
usecases, where the user is interested only in the PMU stats at each
counter overflow and the exact IP of the overflow event is not
required. Dropping SIAR invalid samples will fail to record some of
the counter overflows in such cases.

Example of such usecase is dumping the PMU stats (event counts) after
some regular amount of instructions/events from the userspace (ex: via
ptrace). Here counter overflow is indicated to userspace via signal
handler, and captured by monitoring and enabling I/O signaling on the
event file descriptor. In these cases, we expect to get
sample/overflow indication after each specified sample_period.

The perf event attribute will not have PERF_SAMPLE_IP set in the
sample_type if the exact IP of the overflow event is not requested. So
while profiling, if SAMPLE_IP is not set, just record the counter
overflow irrespective of the SIAR_VALID check.
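
Sketch of the condition as it would look in record_and_restart()
(exact variable names assumed):

  if (event->attr.sample_type & PERF_SAMPLE_IP)
          record = siar_valid(regs);      /* need a trustworthy IP */
  else
          record = 1;                     /* the overflow itself is enough */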

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Reflow comment and if formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1612516492-1428-1-git-send-email-atrajeev@linux.vnet.ibm.com
3 years ago powerpc/pseries/dlpar: handle ibm,configure-connector delay status
Nathan Lynch [Thu, 7 Jan 2021 02:59:00 +0000 (20:59 -0600)]
powerpc/pseries/dlpar: handle ibm,configure-connector delay status

dlpar_configure_connector() has two problems in its handling of
ibm,configure-connector's return status:

1. When the status is -2 (busy, call again), we call
   ibm,configure-connector again immediately without checking whether
   to schedule, which can result in monopolizing the CPU.
2. Extended delay status (9900..9905) goes completely unhandled,
   causing the configuration to unnecessarily terminate.

Fix both of these issues by using rtas_busy_delay().
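
The corrected call pattern, sketched (token and work-area variables
are placeholders):

  do {
          rc = rtas_call(token, 2, 1, NULL, wa_addr, 0);
  } while (rtas_busy_delay(rc));  /* retries -2, sleeps for 9900..9905 */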

Fixes: ab519a011caa ("powerpc/pseries: Kernel DLPAR Infrastructure")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210107025900.410369-1-nathanl@linux.ibm.com
3 years ago powerpc/64s: Implement ptep_clear_flush_young that does not flush TLBs
Nicholas Piggin [Thu, 17 Dec 2020 13:47:31 +0000 (23:47 +1000)]
powerpc/64s: Implement ptep_clear_flush_young that does not flush TLBs

Similarly to the x86 commit b13b1d2d8692 ("x86/mm: In the PTE swapout
page reclaim case clear the accessed bit instead of flushing the TLB"),
implement ptep_clear_flush_young that does not actually flush the TLB
in the case the referenced bit is cleared.
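
On powerpc this can reduce, along these lines (exact form hedged), to
the non-flushing primitive:

  /* Clearing the accessed bit without flushing only risks the page
   * being marked young again by a stale TLB entry -- harmless. */
  #define ptep_clear_flush_young ptep_test_and_clear_young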

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-8-npiggin@gmail.com
3 years ago powerpc/64s/radix: serialize_against_pte_lookup IPIs trim mm_cpumask
Nicholas Piggin [Thu, 17 Dec 2020 13:47:30 +0000 (23:47 +1000)]
powerpc/64s/radix: serialize_against_pte_lookup IPIs trim mm_cpumask

serialize_against_pte_lookup() performs IPIs to all CPUs in mm_cpumask.
Take this opportunity to try to trim the CPU out of mm_cpumask. This can
reduce the cost of future serialize_against_pte_lookup() and/or the
cost of future TLB flushes.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-7-npiggin@gmail.com
3 years ago powerpc/64s/radix: occasionally attempt to trim mm_cpumask
Nicholas Piggin [Thu, 17 Dec 2020 13:47:29 +0000 (23:47 +1000)]
powerpc/64s/radix: occasionally attempt to trim mm_cpumask

A single-threaded process that is flushing its own address space is
so far the only case where the mm_cpumask is attempted to be trimmed.
This patch expands that to flush in other situations, multi-threaded
processes and external sources. For now it's a relatively simple
occasional trim attempt. The main aim is to add the mechanism,
tweaking and tuning can come with more data.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-6-npiggin@gmail.com
3 years ago powerpc/64s/radix: Allow mm_cpumask trimming from external sources
Nicholas Piggin [Thu, 17 Dec 2020 13:47:28 +0000 (23:47 +1000)]
powerpc/64s/radix: Allow mm_cpumask trimming from external sources

mm_cpumask trimming is currently restricted to be issued by the current
thread of a single-threaded mm. This patch relaxes that and allows the
mask to be trimmed from any context.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-5-npiggin@gmail.com
3 years ago powerpc/64s/radix: Check for no TLB flush required
Nicholas Piggin [Thu, 17 Dec 2020 13:47:27 +0000 (23:47 +1000)]
powerpc/64s/radix: Check for no TLB flush required

If there are no CPUs in mm_cpumask, no TLB flush is required at all.
This patch adds a check for this case.

Currently it's not tested for, in fact mm_is_thread_local() returns
false if the current CPU is not in mm_cpumask, so it's treated as a
global flush.

This can come up in some cases like exec failure before the new mm has
ever been switched to. This patch reduces TLBIE instructions required
to build a kernel from about 120,000 to 45,000. Another situation it
could help is page reclaim, KSM, THP, etc. (i.e., async operations
external to the process), where the process is sleeping and has all
TLBs flushed out of all CPUs.
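
Sketch of the check (the FLUSH_TYPE_* naming comes from the
flush-type refactor earlier in this series and is assumed here):

  if (cpumask_empty(mm_cpumask(mm)))
          return FLUSH_TYPE_NONE;         /* no CPU ever ran this mm */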

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-4-npiggin@gmail.com
3 years ago powerpc/64s/radix: refactor TLB flush type selection
Nicholas Piggin [Thu, 17 Dec 2020 13:47:26 +0000 (23:47 +1000)]
powerpc/64s/radix: refactor TLB flush type selection

The logic to decide what kind of TLB flush is required (local, global,
or IPI) is spread multiple times over the several kinds of TLB flushes.

Move it all into a single function which may issue IPIs if necessary,
and also returns a flush type that is to be used.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-3-npiggin@gmail.com
3 years ago powerpc/64s/radix: add warning and comments in mm_cpumask trim
Nicholas Piggin [Thu, 17 Dec 2020 13:47:25 +0000 (23:47 +1000)]
powerpc/64s/radix: add warning and comments in mm_cpumask trim

Add a comment explaining part of the logic for mm_cpumask trimming, and
add a (hopefully graceful) check and warning in case something gets it
wrong.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201217134731.488135-2-npiggin@gmail.com
3 years ago powerpc/perf: Expose Performance Monitor Counter SPR's as part of extended regs
Athira Rajeev [Wed, 3 Feb 2021 06:55:36 +0000 (01:55 -0500)]
powerpc/perf: Expose Performance Monitor Counter SPR's as part of extended regs

Currently Monitor Mode Control Registers and Sampling registers are
part of extended regs. Patch adds support to include Performance Monitor
Counter Registers (PMC1 to PMC6) as part of extended registers.

PMCs are saved in the perf interrupt handler as part of
per-cpu array 'pmcs' in struct cpu_hw_events. While capturing
the register values for extended regs, fetch these saved PMC values.

Simplified the PERF_REG_PMU_MASK_300/31 definition to include PMU
SPRs MMCR0 to PMC6. Exclude the unsupported SPRs (MMCR3, SIER2, SIER3)
from extended mask value for CPU_FTR_ARCH_300 in the new definition.

PERF_REG_EXTENDED_MAX is used to check if any index beyond the extended
registers is requested in the sample. Have one PERF_REG_EXTENDED_MAX
for CPU_FTR_ARCH_300/CPU_FTR_ARCH_31 since perf_reg_validate function
already checks the extended mask for the presence of any unsupported
register.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1612335337-1888-3-git-send-email-atrajeev@linux.vnet.ibm.com
3 years ago powerpc/perf: Include PMCs as part of per-cpu cpuhw_events struct
Athira Rajeev [Wed, 3 Feb 2021 06:55:35 +0000 (01:55 -0500)]
powerpc/perf: Include PMCs as part of per-cpu cpuhw_events struct

To support capturing of PMCs as part of extended registers, the
values of SPRs PMC1 to PMC6 have to be saved at the start of the PMI
interrupt handler. This is needed since we are resetting the
overflown PMC before creating the sample, and hence directly reading
SPRN_PMCx in 'perf_reg_value' would capture the modified value.

To solve this, add a per-cpu array as part of structure cpu_hw_events
and use this array to capture PMC values in the perf interrupt handler.
The patch also refactors the interrupt handler code to use this
per-cpu array instead of the current local array.
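
Sketch of the save step at the top of the PMI handler (the 'pmcs'
array name and the read_pmc() usage are assumptions):

  struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
  int i;

  for (i = 0; i < ppmu->n_counter; i++)
          cpuhw->pmcs[i] = read_pmc(i + 1);  /* PMC1..PMCn, before reset */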

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1612335337-1888-2-git-send-email-atrajeev@linux.vnet.ibm.com
3 years ago powerpc/pkeys: Remove unused code
Sandipan Das [Tue, 2 Feb 2021 15:00:50 +0000 (20:30 +0530)]
powerpc/pkeys: Remove unused code

This removes arch_supports_pkeys(), arch_usable_pkeys() and
thread_pkey_regs_*() which are remnants from the following:

commit 06bb53b33804 ("powerpc: store and restore the pkey state across context switches")
commit 2cd4bd192ee9 ("powerpc/pkeys: Fix handling of pkey state across fork()")
commit cf43d3b26452 ("powerpc: Enable pkey subsystem")

arch_supports_pkeys() and arch_usable_pkeys() were unused
since their introduction while thread_pkey_regs_*() became
unused after the introduction of the following:

commit d5fa30e6993f ("powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec")
commit 48a8ab4eeb82 ("powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode")

Signed-off-by: Sandipan Das <sandipan@linux.ibm.com>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210202150050.75335-1-sandipan@linux.ibm.com
3 years ago powerpc/44x: Fix a spelling mismach to mismatch in head_44x.S
Bhaskar Chowdhury [Tue, 2 Feb 2021 09:37:46 +0000 (15:07 +0530)]
powerpc/44x: Fix a spelling mismach to mismatch in head_44x.S

s/mismach/mismatch/

Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210202093746.5198-1-unixbhaskar@gmail.com
3 years ago powerpc: remove unneeded semicolons
Chengyang Fan [Mon, 25 Jan 2021 09:53:38 +0000 (17:53 +0800)]
powerpc: remove unneeded semicolons

Remove superfluous semicolons after function definitions.

Signed-off-by: Chengyang Fan <cy.fan@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210125095338.1719405-1-cy.fan@huawei.com
3 years ago powerpc/akebono: Fix unmet dependency errors
Michael Ellerman [Mon, 1 Feb 2021 01:25:03 +0000 (12:25 +1100)]
powerpc/akebono: Fix unmet dependency errors

The AKEBONO config has various selects under it, including some with
user-selectable dependencies, which means those dependencies can be
disabled. This leads to warnings from Kconfig.

This can be seen with eg:

  $ make allnoconfig
  $ ./scripts/config --file build~/.config -k -e CONFIG_44x -k -e CONFIG_PPC_47x -e CONFIG_AKEBONO
  $ make olddefconfig

  WARNING: unmet direct dependencies detected for ATA
    Depends on [n]: HAS_IOMEM [=y] && BLOCK [=n]
    Selected by [y]:
    - AKEBONO [=y] && PPC_47x [=y]

  WARNING: unmet direct dependencies detected for NETDEVICES
    Depends on [n]: NET [=n]
    Selected by [y]:
    - AKEBONO [=y] && PPC_47x [=y]

  WARNING: unmet direct dependencies detected for ETHERNET
    Depends on [n]: NETDEVICES [=y] && NET [=n]
    Selected by [y]:
    - AKEBONO [=y] && PPC_47x [=y]

  WARNING: unmet direct dependencies detected for MMC_SDHCI
    Depends on [n]: MMC [=n] && HAS_DMA [=y]
    Selected by [y]:
    - AKEBONO [=y] && PPC_47x [=y]

  WARNING: unmet direct dependencies detected for MMC_SDHCI_PLTFM
    Depends on [n]: MMC [=n] && MMC_SDHCI [=y]
    Selected by [y]:
    - AKEBONO [=y] && PPC_47x [=y]

The problem is that AKEBONO is using select to enable things that are
not true dependencies, but rather things you probably want enabled in
an AKEBONO kernel. That is what a defconfig is for.

So drop those selects and instead move those symbols into the
defconfig. This fixes all the kconfig warnings, and the result of make
44x/akebono_defconfig is the same before and after the patch.

Reported-by: Yury Norov <yury.norov@gmail.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Link: https://lore.kernel.org/r/20210201012503.940145-1-mpe@ellerman.id.au
3 years ago powerpc/64s: runlatch interrupt handling in C
Nicholas Piggin [Sat, 30 Jan 2021 13:08:51 +0000 (23:08 +1000)]
powerpc/64s: runlatch interrupt handling in C

There is no need for this to be in asm, use the new interrupt entry wrapper.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-42-npiggin@gmail.com
3 years ago powerpc/64s: move NMI soft-mask handling to C
Nicholas Piggin [Sat, 30 Jan 2021 13:08:50 +0000 (23:08 +1000)]
powerpc/64s: move NMI soft-mask handling to C

Saving and restoring soft-mask state can now be done in C using the
interrupt handler wrapper functions.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-41-npiggin@gmail.com
3 years ago powerpc: move NMI entry/exit code into wrapper
Nicholas Piggin [Sat, 30 Jan 2021 13:08:49 +0000 (23:08 +1000)]
powerpc: move NMI entry/exit code into wrapper

This moves the common NMI entry and exit code into the interrupt handler
wrappers.

This changes the behaviour of soft-NMI (watchdog) and HMI interrupts, and
also MCE interrupts on 64e, by adding missing parts of the NMI entry to
them.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-40-npiggin@gmail.com
3 years ago powerpc/pseries/mce: restore msr before returning from handler
Nicholas Piggin [Sun, 7 Feb 2021 12:54:12 +0000 (22:54 +1000)]
powerpc/pseries/mce: restore msr before returning from handler

The pseries real-mode machine check handler can enable the MMU, and
return from the handler with the MMU still enabled.

This works, but real-mode handler wrapper exit handlers want to rely
on the MMU being in real-mode. So change the pseries handler to
restore the MSR after it has finished virtual mode tasks.
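
Sketched (register bookkeeping assumed, details hedged):

  unsigned long msr = mfmsr();    /* entered in real mode */

  /* ... MMU-on work: log and queue the machine check event ... */

  mtmsr(msr);                     /* back to real mode before returning */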

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1612702361.lm7fqo56re.astroid@bobo.none
3 years ago powerpc/64: entry cpu time accounting in C
Nicholas Piggin [Sat, 30 Jan 2021 13:08:48 +0000 (23:08 +1000)]
powerpc/64: entry cpu time accounting in C

There is no need for this to be in asm, use the new interrupt entry wrapper.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-39-npiggin@gmail.com
3 years ago powerpc/64: move account_stolen_time into its own function
Nicholas Piggin [Sat, 30 Jan 2021 13:08:47 +0000 (23:08 +1000)]
powerpc/64: move account_stolen_time into its own function

This will be used by interrupt entry as well.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-38-npiggin@gmail.com
3 years ago powerpc/64s: reconcile interrupts in C
Nicholas Piggin [Sat, 30 Jan 2021 13:08:46 +0000 (23:08 +1000)]
powerpc/64s: reconcile interrupts in C

There is no need for this to be in asm, use the new interrupt entry wrapper.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-37-npiggin@gmail.com
3 years ago powerpc/64s: move context tracking exit to interrupt exit path
Nicholas Piggin [Sat, 30 Jan 2021 13:08:45 +0000 (23:08 +1000)]
powerpc/64s: move context tracking exit to interrupt exit path

The interrupt handler wrapper functions are not the ideal place to
maintain context tracking because after they return, the low level exit
code must then determine if there are interrupts to replay, or if the
task should be preempted, etc. Those paths (e.g., schedule_user) include
their own exception_enter/exit pairs to fix this up but it's a bit hacky
(see schedule_user() comments).

Ideally context tracking will go to user mode only when there are no
more interrupts or context switches or other exit processing work to
handle.

64e cannot do this because it does not use the C interrupt exit code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-36-npiggin@gmail.com
3 years ago powerpc: handle irq_enter/irq_exit in interrupt handler wrappers
Nicholas Piggin [Sat, 30 Jan 2021 13:08:44 +0000 (23:08 +1000)]
powerpc: handle irq_enter/irq_exit in interrupt handler wrappers

Move irq_enter/irq_exit into asynchronous interrupt handler wrappers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-35-npiggin@gmail.com
3 years ago powerpc/64: add context tracking to asynchronous interrupts
Nicholas Piggin [Sat, 30 Jan 2021 13:08:43 +0000 (23:08 +1000)]
powerpc/64: add context tracking to asynchronous interrupts

Previously context tracking was not done for asynchronous interrupts,
(those that run in interrupt context), and if those would cause a
reschedule when they exit, then scheduling functions (schedule_user,
preempt_schedule_irq) call exception_enter/exit to fix this up and
exit user context.

This is a hack we would like to get away from, so do context tracking
for asynchronous interrupts too.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-34-npiggin@gmail.com
3 years ago powerpc/64: context tracking move to interrupt wrappers
Nicholas Piggin [Sat, 30 Jan 2021 13:08:42 +0000 (23:08 +1000)]
powerpc/64: context tracking move to interrupt wrappers

This moves exception_enter/exit calls to wrapper functions for
synchronous interrupts. More interrupt handlers are covered by
this than previously.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-33-npiggin@gmail.com
3 years ago powerpc/64s/hash: improve context tracking of hash faults
Nicholas Piggin [Sat, 30 Jan 2021 13:08:41 +0000 (23:08 +1000)]
powerpc/64s/hash: improve context tracking of hash faults

This moves the 64s/hash context tracking from hash_page_mm() to
__do_hash_fault(), so it's no longer called by OCXL / SPU
accelerators, which was certainly the wrong thing to be doing,
because those callers are not low level interrupt handlers, so they
should have entered kernel context tracking already.

Then remain in kernel context for the duration of the fault,
rather than enter/exit for the hash fault then enter/exit for
the page fault, which is pointless.

Even still, calling exception_enter/exit in __do_hash_fault seems
questionable because that's touching per-cpu variables, tracing,
etc., which might have been interrupted by this hash fault or
themselves cause hash faults. But maybe I'm missing something, because
hash_page_mm() very deliberately calls trace_hash_fault too, for
example. So for now go with it; it's no worse than before in this
regard.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-32-npiggin@gmail.com
3 years ago powerpc/64: context tracking remove _TIF_NOHZ
Nicholas Piggin [Sat, 30 Jan 2021 13:08:40 +0000 (23:08 +1000)]
powerpc/64: context tracking remove _TIF_NOHZ

Add context tracking to the system call handler explicitly, and remove
_TIF_NOHZ.

This improves system call performance when nohz_full is enabled. On a
POWER9, gettid scv system call cost on a nohz_full CPU improves from
1129 cycles to 1004 cycles and on a housekeeping CPU from 550 cycles
to 430 cycles.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-31-npiggin@gmail.com
3 years ago powerpc: add interrupt_cond_local_irq_enable helper
Nicholas Piggin [Sat, 30 Jan 2021 13:08:39 +0000 (23:08 +1000)]
powerpc: add interrupt_cond_local_irq_enable helper

Simple helper for synchronous interrupt handlers (i.e., process-context)
to enable interrupts if it was taken in an interrupts-enabled context.
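
The helper boils down to roughly:

  static inline void interrupt_cond_local_irq_enable(struct pt_regs *regs)
  {
          if (!arch_irq_disabled_regs(regs))
                  local_irq_enable();
  }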

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-30-npiggin@gmail.com
3 years ago powerpc: convert interrupt handlers to use wrappers
Nicholas Piggin [Sat, 30 Jan 2021 13:08:38 +0000 (23:08 +1000)]
powerpc: convert interrupt handlers to use wrappers

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-29-npiggin@gmail.com
3 years ago powerpc/traps: factor common code from program check and emulation assist
Nicholas Piggin [Sun, 7 Feb 2021 12:56:43 +0000 (22:56 +1000)]
powerpc/traps: factor common code from program check and emulation assist

Move the program check handling into a function called by both, rather
than have the emulation assist handler call the program check handler.

This allows each of these handlers to be implemented with "interrupt
wrappers" in a later change.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1612702475.d6qyt6qtfy.astroid@bobo.none
3 years ago powerpc: add interrupt wrapper entry / exit stub functions
Nicholas Piggin [Sat, 30 Jan 2021 13:08:37 +0000 (23:08 +1000)]
powerpc: add interrupt wrapper entry / exit stub functions

These will be used by subsequent patches.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-28-npiggin@gmail.com
3 years ago powerpc: interrupt handler wrapper functions
Nicholas Piggin [Sat, 30 Jan 2021 13:08:36 +0000 (23:08 +1000)]
powerpc: interrupt handler wrapper functions

Add wrapper functions (derived from x86 macros) for interrupt handler
functions. This allows interrupt entry code to be written in C.
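
A simplified sketch of the macro shape (the real wrappers also thread
an interrupt-state struct through the enter/exit calls):

  #define DEFINE_INTERRUPT_HANDLER(func)                          \
  static __always_inline void ____##func(struct pt_regs *regs);  \
                                                                  \
  void func(struct pt_regs *regs)                                 \
  {                                                               \
          interrupt_enter_prepare(regs);                          \
          ____##func(regs);                                       \
          interrupt_exit_prepare(regs);                           \
  }                                                               \
                                                                  \
  static __always_inline void ____##func(struct pt_regs *regs)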

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-27-npiggin@gmail.com
3 years ago powerpc: improve handling of unrecoverable system reset
Nicholas Piggin [Sat, 30 Jan 2021 13:08:35 +0000 (23:08 +1000)]
powerpc: improve handling of unrecoverable system reset

If an unrecoverable system reset hits in process context, the system
does not have to panic. Similar to machine check, call nmi_exit()
before die().

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-26-npiggin@gmail.com
3 years ago powerpc/mce: ensure machine check handler always tests RI
Nicholas Piggin [Sat, 30 Jan 2021 13:08:34 +0000 (23:08 +1000)]
powerpc/mce: ensure machine check handler always tests RI

A machine check that is handled must still check MSR[RI] for
recoverability of the interrupted context. Without this patch
it's possible for a handled machine check to return to a
context where it has clobbered live registers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-25-npiggin@gmail.com
3 years ago powerpc: introduce die_mce
Nicholas Piggin [Sat, 30 Jan 2021 13:08:33 +0000 (23:08 +1000)]
powerpc: introduce die_mce

As explained by commit daf00ae71dad ("powerpc/traps: restore
recoverability of machine_check interrupts"), die() can't be called from
within nmi_enter to nicely kill a process context that was interrupted.
nmi_exit must be called first.

This adds a function die_mce which takes care of this for machine check
handlers.
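
Sketched (the real function also handles the 64s MCE accounting path):

  void die_mce(const char *str, struct pt_regs *regs, long err)
  {
          nmi_exit();             /* leave NMI context so die() may kill */
          die(str, regs, err);
  }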

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-24-npiggin@gmail.com
3 years ago powerpc/cell: tidy up pervasive declarations
Nicholas Piggin [Sat, 30 Jan 2021 13:08:32 +0000 (23:08 +1000)]
powerpc/cell: tidy up pervasive declarations

These are declared in ras.h and defined in ras.c so remove them from
pervasive.h

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-23-npiggin@gmail.com
3 years ago powerpc: add and use unknown_async_exception
Nicholas Piggin [Sat, 30 Jan 2021 13:08:31 +0000 (23:08 +1000)]
powerpc: add and use unknown_async_exception

This is currently the same as unknown_exception, but it will diverge
after interrupt wrappers are added and code moved out of asm into the
wrappers (e.g., async handlers will check FINISH_NAP).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-22-npiggin@gmail.com
3 years ago powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h
Nicholas Piggin [Sat, 30 Jan 2021 13:08:30 +0000 (23:08 +1000)]
powerpc/time: move timer_broadcast_interrupt prototype to asm/time.h

Interrupt handler prototypes are going to be rearranged in a
future patch, so tidy this out of the way first.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-21-npiggin@gmail.com
3 years ago powerpc/perf: move perf irq/nmi handling details into traps.c
Nicholas Piggin [Sat, 30 Jan 2021 13:08:29 +0000 (23:08 +1000)]
powerpc/perf: move perf irq/nmi handling details into traps.c

This is required in order to allow more significant differences between
NMI type interrupt handlers and regular asynchronous handlers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-20-npiggin@gmail.com
3 years ago powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce
Nicholas Piggin [Sat, 30 Jan 2021 13:08:28 +0000 (23:08 +1000)]
powerpc/traps: add NOKPROBE_SYMBOL for sreset and mce

These NMIs could fire any time including inside kprobe code, so
exclude them from kprobes.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-19-npiggin@gmail.com
3 years ago powerpc/64s: slb comment update
Nicholas Piggin [Sat, 30 Jan 2021 13:08:27 +0000 (23:08 +1000)]
powerpc/64s: slb comment update

This makes a small improvement to the description of the SLB interrupt
environment. Move the memory access restrictions into one paragraph,
and the interrupt restrictions into the next rather than mix them.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-18-npiggin@gmail.com
3 years ago powerpc/mm: Remove stale do_page_fault comment referring to SLB faults
Nicholas Piggin [Sat, 30 Jan 2021 13:08:26 +0000 (23:08 +1000)]
powerpc/mm: Remove stale do_page_fault comment referring to SLB faults

SLB faults no longer call do_page_fault, this was removed somewhere
between 2.6.0 and 2.6.12.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-17-npiggin@gmail.com
3 years ago powerpc/64s: split do_hash_fault
Nicholas Piggin [Sat, 30 Jan 2021 13:08:25 +0000 (23:08 +1000)]
powerpc/64s: split do_hash_fault

This is required for subsequent interrupt wrapper implementation.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-16-npiggin@gmail.com
3 years ago powerpc/64s: move bad_page_fault handling to C
Nicholas Piggin [Sat, 30 Jan 2021 13:08:24 +0000 (23:08 +1000)]
powerpc/64s: move bad_page_fault handling to C

This simplifies the code. It is also useful when introducing interrupt
handler wrappers, because some wrapper functionality doesn't cope with
asm entry code calling into more than one handler function.

32-bit and 64e still have some such cases, which limits some of the
ways they can use interrupt wrappers.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-15-npiggin@gmail.com
3 years agopowerpc: rearrange do_page_fault error case to be inside exception_enter
Nicholas Piggin [Sat, 30 Jan 2021 13:08:23 +0000 (23:08 +1000)]
powerpc: rearrange do_page_fault error case to be inside exception_enter

This keeps the context tracking over the entire interrupt handler,
which helps later with moving context tracking into interrupt wrappers.
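
A minimal sketch of the resulting shape, assuming the pre-wrapper
do_page_fault() signature (details illustrative):

  int do_page_fault(struct pt_regs *regs, unsigned long address,
                    unsigned long error_code)
  {
          enum ctx_state prev_state = exception_enter();
          int rc = __do_page_fault(regs, address, error_code);

          /* the error path now also runs before exception_exit() */
          exception_exit(prev_state);
          return rc;
  }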

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-14-npiggin@gmail.com
3 years agopowerpc/64s: add do_bad_page_fault_segv handler
Nicholas Piggin [Sat, 30 Jan 2021 13:08:22 +0000 (23:08 +1000)]
powerpc/64s: add do_bad_page_fault_segv handler

This function acts like an interrupt handler, so it needs to follow
the standard interrupt handler function signature, which will be
introduced in a future change.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-13-npiggin@gmail.com
3 years agopowerpc: bad_page_fault get registers from regs
Nicholas Piggin [Sat, 30 Jan 2021 13:08:21 +0000 (23:08 +1000)]
powerpc: bad_page_fault get registers from regs

Similar to the previous patch, this makes interrupt handler function
types more regular so they can be wrapped with the next patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-12-npiggin@gmail.com
3 years agopowerpc/32: transfer can avoid saving r4/r5 over trace call
Nicholas Piggin [Sat, 30 Jan 2021 13:08:20 +0000 (23:08 +1000)]
powerpc/32: transfer can avoid saving r4/r5 over trace call

Now that handlers get all registers from pt_regs, r4 and r5 are no
longer live here and may be clobbered.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-11-npiggin@gmail.com
3 years agopowerpc: DebugException remove args
Nicholas Piggin [Sat, 30 Jan 2021 13:08:19 +0000 (23:08 +1000)]
powerpc: DebugException remove args

Like other interrupt handler conversions, switch to getting registers
from the pt_regs argument.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-10-npiggin@gmail.com
3 years agopowerpc: do_break get registers from regs
Nicholas Piggin [Sat, 30 Jan 2021 13:08:18 +0000 (23:08 +1000)]
powerpc: do_break get registers from regs

Similar to the previous patch, this makes interrupt handler function
types more regular so they can be wrapped with the next patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-9-npiggin@gmail.com
3 years agopowerpc/fsl_booke/32: CacheLockingException remove args
Nicholas Piggin [Sat, 30 Jan 2021 13:08:17 +0000 (23:08 +1000)]
powerpc/fsl_booke/32: CacheLockingException remove args

Like other interrupt handler conversions, switch to getting registers
from the pt_regs argument.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-8-npiggin@gmail.com
3 years agopowerpc: remove arguments from fault handler functions
Nicholas Piggin [Sat, 30 Jan 2021 13:08:16 +0000 (23:08 +1000)]
powerpc: remove arguments from fault handler functions

Make mm fault handlers all just take the pt_regs * argument and load
DAR/DSISR from that. Make those that return a value return long.

This is done to make the function signatures match other handlers, which
will help with a future patch to add wrappers. Explicit arguments could
be added for performance but that would require more wrapper macro
variants.
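
A hedged before/after sketch of the signature change (not the exact
hunks; powerpc's pt_regs really does carry DAR/DSISR):

  /* before (illustrative): asm passed the fault details explicitly */
  int do_page_fault(struct pt_regs *regs, unsigned long address,
                    unsigned long error_code);

  /* after: pt_regs is the whole interface */
  long do_page_fault(struct pt_regs *regs)
  {
          unsigned long address = regs->dar;
          unsigned long error_code = regs->dsisr;

          /* ... fault handling as before ... */
          return 0;
  }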

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-7-npiggin@gmail.com
3 years agopowerpc/64s: move the hash fault handling logic to C
Nicholas Piggin [Sat, 30 Jan 2021 13:08:15 +0000 (23:08 +1000)]
powerpc/64s: move the hash fault handling logic to C

The fault handling still has some complex logic, particularly around
hash table handling, in asm. Implement most of this in C.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-6-npiggin@gmail.com
3 years agopowerpc/64s: move DABR match out of handle_page_fault
Nicholas Piggin [Sat, 30 Jan 2021 13:08:14 +0000 (23:08 +1000)]
powerpc/64s: move DABR match out of handle_page_fault

Similar to the 32s change, move the test and the call to the do_break
handler into the DSI.

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-5-npiggin@gmail.com
3 years agopowerpc/32s: move DABR match out of handle_page_fault
Christophe Leroy [Sat, 30 Jan 2021 13:08:13 +0000 (23:08 +1000)]
powerpc/32s: move DABR match out of handle_page_fault

handle_page_fault() has some code dedicated to book3s/32 to
call do_break() when the DSI is a DABR match.

On other platforms, do_break() is handled separately.

Do the same for book3s/32, doing it earlier in the DSI processing.

This change also avoids doing the test on ISI.
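
Conceptually, the new DSI flow looks like this (a sketch only; the
actual change is in 32-bit asm):

  /* conceptual sketch of the reworked DSI dispatch */
  if (regs->dsisr & DSISR_DABRMATCH)
          do_break(regs);         /* hardware breakpoint match */
  else
          do_page_fault(regs);    /* ordinary data storage fault */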

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-4-npiggin@gmail.com
3 years agoKVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs
Nicholas Piggin [Sat, 30 Jan 2021 13:08:12 +0000 (23:08 +1000)]
KVM: PPC: Book3S HV: Context tracking exit guest context before enabling irqs

Interrupts that occur in kernel mode expect that context tracking
is set to kernel. Enabling local irqs before context tracking
switches from guest to host means interrupts can come in and trigger
warnings about wrong context, and possibly worse.
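
A sketch of the required ordering on the guest exit path (call names
from the generic context tracking API; exact placement illustrative):

  guest_exit();           /* context tracking: guest -> kernel, first */
  local_irq_enable();     /* only now may interrupts come in */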

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-3-npiggin@gmail.com
3 years agopowerpc/64s: interrupt exit improve bounding of interrupt recursion
Nicholas Piggin [Sat, 30 Jan 2021 13:08:11 +0000 (23:08 +1000)]
powerpc/64s: interrupt exit improve bounding of interrupt recursion

When replaying pending soft-masked interrupts when an interrupt returns
to an irqs-enabled context, there is a special case required if this was
an asynchronous interrupt to avoid unbounded interrupt recursion.

This case was not tested for when the asynchronous interrupt hit in
user context, because a subsequent nested interrupt would by definition
hit in kernel mode and then exit via the kernel path, which does test
this case.

There is no reason to allow this for such interrupts. While recursion is
bounded at the next level, it's simpler and uses less stack to apply the
replay logic consistently.

This also expands the comment which was really pretty poor and didn't
explain the problem (I can say that because I wrote it).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210130130852.2952424-2-npiggin@gmail.com
3 years agopowerpc/pasemi: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:22 +0000 (15:35 +1100)]
powerpc/pasemi: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-17-oohall@gmail.com
3 years agopowerpc/embedded6xx/mve5100: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:21 +0000 (15:35 +1100)]
powerpc/embedded6xx/mve5100: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-16-oohall@gmail.com
3 years agopowerpc/embedded6xx/mpc7448: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:20 +0000 (15:35 +1100)]
powerpc/embedded6xx/mpc7448: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-15-oohall@gmail.com
3 years agopowerpc/embedded6xx/linkstation: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:19 +0000 (15:35 +1100)]
powerpc/embedded6xx/linkstation: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-14-oohall@gmail.com
3 years agopowerpc/embedded6xx/holly: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:18 +0000 (15:35 +1100)]
powerpc/embedded6xx/holly: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-13-oohall@gmail.com
3 years agopowerpc/chrp: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:17 +0000 (15:35 +1100)]
powerpc/chrp: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-12-oohall@gmail.com
3 years agopowerpc/amigaone: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:16 +0000 (15:35 +1100)]
powerpc/amigaone: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-11-oohall@gmail.com
3 years agopowerpc/83xx: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:15 +0000 (15:35 +1100)]
powerpc/83xx: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-10-oohall@gmail.com
3 years agopowerpc/82xx/*: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:14 +0000 (15:35 +1100)]
powerpc/82xx/*: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-9-oohall@gmail.com
3 years agopowerpc/52xx/mpc5200_simple: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:13 +0000 (15:35 +1100)]
powerpc/52xx/mpc5200_simple: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-8-oohall@gmail.com
3 years agopowerpc/52xx/media5200: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:12 +0000 (15:35 +1100)]
powerpc/52xx/media5200: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-7-oohall@gmail.com
3 years agopowerpc/52xx/lite5200: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:11 +0000 (15:35 +1100)]
powerpc/52xx/lite5200: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-6-oohall@gmail.com
3 years agopowerpc/52xx/efika: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:10 +0000 (15:35 +1100)]
powerpc/52xx/efika: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-5-oohall@gmail.com
3 years agopowerpc/512x: Move PHB discovery
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:09 +0000 (15:35 +1100)]
powerpc/512x: Move PHB discovery

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-4-oohall@gmail.com
3 years agopowerpc/pci: Move PHB discovery for PCI_DN using platforms
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:07 +0000 (15:35 +1100)]
powerpc/pci: Move PHB discovery for PCI_DN using platforms

Make powernv, pseries, powermac and maple use ppc_md.discover_phbs.
These platforms need to be done together because they all depend on
pci_dn's being created from the DT. The pci_dn contains a pointer to
the relevant pci_controller so they need to be created after the
pci_controller structures are available, but before PCI devices are
scanned. Currently this ordering is provided by initcalls and the
sequence is:

  1. PHBs are discovered (setup_arch) (early boot, pre-initcalls)
  2. pci_dn are created from the unflattened DT (core initcall)
  3. PHBs are scanned pcibios_init() (subsys initcall)

The new ppc_md.discover_phbs() function is also a core_initcall so we
can't guarantee ordering between the creation of pci_controllers and
the creation of pci_dn's which require a pci_controller. We could use
the postcore or core_sync initcall levels, but it's cleaner to just
move the pci_dn setup into the per-PHB inits which occur inside of
.discover_phb() for these platforms. This brings the boot-time path in
line with the PHB hotplug path that is used for pseries DLPAR
operations too.
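
A hedged sketch of what a per-platform hook ends up doing (both helper
names below are hypothetical; the real per-platform code differs):

  static void __init pseries_discover_phbs(void)  /* name illustrative */
  {
          /* 1. create the pci_controller structures from the DT */
          setup_pci_controllers_from_dt();        /* hypothetical helper */

          /* 2. controllers exist, so pci_dn's can now hang off them */
          create_pci_dns_for_phbs();              /* hypothetical helper */
  }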

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Squash powermac & maple in to avoid breakage those platforms,
      convert memblock allocs to use kmalloc to avoid warnings]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-2-oohall@gmail.com
3 years agopowerpc/pci: Add ppc_md.discover_phbs()
Oliver O'Halloran [Tue, 3 Nov 2020 04:35:06 +0000 (15:35 +1100)]
powerpc/pci: Add ppc_md.discover_phbs()

On many powerpc platforms the discovery and initialisation of
pci_controllers (PHBs) happens inside setup_arch(). This is very early
in boot (pre-initcalls) and means that we're initialising the PHB long
before many basic kernel services (slab allocator, debugfs, a real
ioremap) are available.

On PowerNV this causes an additional problem since we map the PHB
registers with ioremap(). As of commit d538aadc2718 ("powerpc/ioremap:
warn on early use of ioremap()") a warning is printed because we're
using the "incorrect" API to set up an MMIO mapping in early boot. The
kernel does provide early_ioremap(), but that is not intended to create
long-lived MMIO mappings, and a separate warning is printed by generic
code if early_ioremap() mappings are "leaked."

This is all fixable with dumb hacks like using early_ioremap() to setup
the initial mapping then replacing it with a real ioremap later on in
boot, but it does raise the question: Why the hell are we setting up the
PHB's this early in boot?

The old and wise claim it's due to "hysterical raisins." Aside from amused
grapes there doesn't appear to be any real reason to maintain the current
behaviour. Already most of the newer embedded platforms perform PHB
discovery in an arch_initcall and between the end of setup_arch() and the
start of initcalls none of the generic kernel code does anything PCI
related. On powerpc scanning PHBs occurs in a subsys_initcall so it should
be possible to move the PHB discovery to a core, postcore or arch initcall.

This patch adds the ppc_md.discover_phbs hook and a core_initcall stub
that calls it. The core_initcalls are the earliest to be called, so this
avoids any possible issues with dependencies between initcalls. This
isn't just an academic issue either, since on pseries and PowerNV EEH
init occurs in an arch_initcall and depends on the pci_controllers being
available; similarly, the creation of pci_dns occurs at
core_initcall_sync (i.e. between core and postcore initcalls). These
problems need to be addressed separately.
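
The stub is small; a sketch of the wiring described above (close to the
shape described, though not guaranteed to match the committed code):

  static int __init discover_phbs(void)
  {
          if (ppc_md.discover_phbs)
                  ppc_md.discover_phbs();
          return 0;
  }
  core_initcall(discover_phbs);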

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Make discover_phbs() static]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201103043523.916109-1-oohall@gmail.com
3 years agopowerpc/powernv/pci: Drop pnv_phb->initialized
Oliver O'Halloran [Wed, 2 Sep 2020 01:36:57 +0000 (11:36 +1000)]
powerpc/powernv/pci: Drop pnv_phb->initialized

The pnv_phb->initialized flag is an odd beast. It was added back in 2012 in
commit db1266c85261 ("powerpc/powernv: Skip check on PE if necessary") to
allow devices to be enabled even if the device had not yet been assigned to
a PE. Allowing the device to be enabled before the PE is configured may
cause spurious EEH events since none of the IOMMU context has been setup.

I'm not entirely sure why this was ever necessary. My best guess is that it
was an workaround for a bug or some other undesireable behaviour from the
PCI core. Either way, it's unnecessary now since as of commit dc3d8f85bb57
("powerpc/powernv/pci: Re-work bus PE configuration") we can guarantee that
the PE will be configured before the PCI core will allow drivers to bind to
the device.

It's also worth pointing out that the ->initialized flag is only set in
pnv_pci_ioda_create_dbgfs(). That function has its entire body wrapped
in #ifdef CONFIG_DEBUG_FS. As a result, for kernels built without debugfs
(i.e. petitboot) the other checks in pnv_pci_enable_device_hook() are
bypassed entirely.
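
A simplified sketch of why the flag was unreliable (structure
illustrative, not the exact code):

  static void pnv_pci_ioda_create_dbgfs(struct pnv_phb *phb) /* sketch */
  {
  #ifdef CONFIG_DEBUG_FS
          /* debugfs entries created here, then: */
          phb->initialized = 1;   /* the only assignment in the tree */
  #endif
          /* !CONFIG_DEBUG_FS builds never set the flag at all */
  }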

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200902013657.1753830-1-oohall@gmail.com
3 years agopowerpc/xmon: Select CONSOLE_POLL for the 8xx
Christophe Leroy [Wed, 23 Dec 2020 09:38:47 +0000 (09:38 +0000)]
powerpc/xmon: Select CONSOLE_POLL for the 8xx

Powerpc 8xx requires CONSOLE_POLL to get udbg_putc() and
udbg_getc() in the CPM UART driver.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3d10a274516e9be8c4b0dc679a2840cdc1588872.1608716197.git.christophe.leroy@csgroup.eu
3 years agopowerpc/xmon: Enable breakpoints on 8xx
Christophe Leroy [Wed, 23 Dec 2020 09:38:48 +0000 (09:38 +0000)]
powerpc/xmon: Enable breakpoints on 8xx

Since commit 4ad8622dc548 ("powerpc/8xx: Implement hw_breakpoint"),
8xx has breakpoints so there is no reason to opt breakpoint logic
out of xmon for the 8xx.

Fixes: 4ad8622dc548 ("powerpc/8xx: Implement hw_breakpoint")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b0607f1113d1558e73476bb06db0ee16d31a6e5b.1608716197.git.christophe.leroy@csgroup.eu
3 years agopowerpc/32s: Only build hash code when CONFIG_PPC_BOOK3S_604 is selected
Christophe Leroy [Fri, 18 Dec 2020 06:56:05 +0000 (06:56 +0000)]
powerpc/32s: Only build hash code when CONFIG_PPC_BOOK3S_604 is selected

It is now possible to build a book3s/32 kernel only for CPUs without a
hash table.

Opt the hash-related code out when CONFIG_PPC_BOOK3S_604 is not
selected.
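
An illustrative example of the pattern (MMU_init_hw() is the book3s/32
hash setup entry point; the exact guarded sites may differ):

  #ifdef CONFIG_PPC_BOOK3S_604
          MMU_init_hw();  /* hash table initialisation, 604-class CPUs */
  #endif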

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/62df436454ef06e104cc334a0859a2878d7888d5.1608274548.git.christophe.leroy@csgroup.eu
3 years agopowerpc/setup: Adjust six seq_printf() calls in show_cpuinfo()
Markus Elfring [Tue, 2 Jul 2019 12:41:42 +0000 (14:41 +0200)]
powerpc/setup: Adjust six seq_printf() calls in show_cpuinfo()

Some of this information can be emitted with simpler sequence output
functions. Improve the execution speed of this data output by using
the better-suited seq_*() helpers instead of seq_printf().
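
An illustrative before/after pair for this class of cleanup (not the
exact hunks from the patch):

  seq_printf(m, "machine\t\t: ");   /* before: full format parsing */
  seq_puts(m, "machine\t\t: ");     /* after: plain string output */

  seq_printf(m, "\n");              /* before: printf machinery for one char */
  seq_putc(m, '\n');                /* after: single-character output */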

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5b62379e-a35f-4f56-f1b5-6350f76007e7@web.de
3 years agopowerpc/82xx: Use common error handling code in pq2ads_pci_init_irq()
Markus Elfring [Tue, 27 Aug 2019 07:19:32 +0000 (09:19 +0200)]
powerpc/82xx: Use common error handling code in pq2ads_pci_init_irq()

Adjust jump targets so that the error handling code at the end of this
function can be shared by several failure paths.
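
An illustrative sketch of the pattern (identifiers hypothetical, not
the function's real body):

  static int __init example_pci_init_irq(void)
  {
          struct device_node *np;
          int ret = 0;

          np = of_find_compatible_node(NULL, NULL, "example,pci-pic");
          if (!np) {
                  ret = -ENODEV;
                  goto out;       /* shared exit, not duplicated cleanup */
          }
          /* further setup; later failure paths also jump to "out" */
  out:
          of_node_put(np);        /* of_node_put(NULL) is a safe no-op */
          return ret;
  }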

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1a4bafee-562f-5eb4-d2bd-34704f8c5ab3@web.de
3 years agopowerpc/82xx: Delete an unnecessary of_node_put() call in pq2ads_pci_init_irq()
Markus Elfring [Tue, 27 Aug 2019 06:44:20 +0000 (08:44 +0200)]
powerpc/82xx: Delete an unnecessary of_node_put() call in pq2ads_pci_init_irq()

A null pointer would be passed to of_node_put() at one place,
immediately after a failed of_find_compatible_node() call.
Remove this superfluous function call.
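
The removed pattern, illustratively (of_node_put() tolerates NULL, so
the call did nothing):

  np = of_find_compatible_node(NULL, NULL, "example,pci-pic");
  if (!np) {
          of_node_put(np);        /* np is NULL here: a no-op, deleted */
          return;
  }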

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9c060a41-438b-6fb8-d549-37c72fae4898@web.de