linux-2.6-microblaze.git
6 years agopowerpc/embedded6xx: Remove C2K board support
Mark Greer [Fri, 6 Apr 2018 01:17:16 +0000 (18:17 -0700)]
powerpc/embedded6xx: Remove C2K board support

The C2K platform appears to be orphaned so remove code supporting it.

CC: Remi Machet <rmachet@nvidia.com>
Signed-off-by: Mark Greer <mgreer@animalcreek.com>
Acked-by: Remi Machet <remi@machet.us>
Signed-off-by: Mark Greer <mgreer@animalcreek.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/lib: optimise PPC32 memcmp
Christophe Leroy [Wed, 30 May 2018 07:06:15 +0000 (07:06 +0000)]
powerpc/lib: optimise PPC32 memcmp

Currently, memcmp() compares two chunks of memory
byte by byte.

This patch optimises the comparison by comparing word by word.

In the same way as commit 15c2d45d17418 ("powerpc: Add 64bit
optimised memcmp"), this patch moves memcmp() into a dedicated
file named memcmp_32.S.

A small benchmark performed on an 8xx, comparing two chunks
of 512 bytes 100000 times, gives:

Before : 5852274 TB ticks
After:   1488638 TB ticks

This is almost 4 times faster.
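
The idea, as a rough C sketch (the actual change is PPC32 assembly in
memcmp_32.S; the helper name here is illustrative and word loads assume
the buffers can be accessed a word at a time):

    static int memcmp_wordwise(const void *s1, const void *s2, unsigned long n)
    {
        const unsigned int *w1 = s1, *w2 = s2;
        const unsigned char *p1, *p2;

        /* compare a word at a time while the words are equal */
        while (n >= sizeof(unsigned int) && *w1 == *w2) {
            w1++;
            w2++;
            n -= sizeof(unsigned int);
        }

        /* resolve the differing word (or the tail) byte by byte */
        p1 = (const unsigned char *)w1;
        p2 = (const unsigned char *)w2;
        while (n--) {
            if (*p1 != *p2)
                return *p1 - *p2;
            p1++;
            p2++;
        }
        return 0;
    }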

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/lib: optimise 32 bits __clear_user()
Christophe Leroy [Wed, 30 May 2018 07:06:13 +0000 (07:06 +0000)]
powerpc/lib: optimise 32 bits __clear_user()

Rewrite clear_user() on the same principle as memset(0), making use
of dcbz to clear complete cache lines.

This code is a copy/paste of memset(), with some modifications
to keep track of the remaining number of bytes to be cleared,
as that count needs to be returned in case of error.

In the same way as was done on PPC64 in commit 17968fbbd19f1
("powerpc: 64bit optimised __clear_user"), the patch moves
__clear_user() into a dedicated file, string_32.S.
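
The core of the idea, as a hedged C sketch (the real code is assembly in
string_32.S so it can use exception fixups to return the number of
uncleared bytes on a fault; the helper name and loop bounds here are
illustrative):

    static void clear_cachelines(char *p, unsigned long nlines,
                                 unsigned long line_size)
    {
        unsigned long i;

        /* dcbz zeroes a whole data cache line in one go */
        for (i = 0; i < nlines; i++, p += line_size)
            asm volatile("dcbz 0, %0" : : "r" (p) : "memory");
    }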

On a MPC885, throughput is almost doubled:

Before:
~# dd if=/dev/zero of=/dev/null bs=1M count=1000
1048576000 bytes (1000.0MB) copied, 18.990779 seconds, 52.7MB/s

After:
~# dd if=/dev/zero of=/dev/null bs=1M count=1000
1048576000 bytes (1000.0MB) copied, 9.611468 seconds, 104.0MB/s

On a MPC8321, throughput is multiplied by 2.12:

Before:
root@vgoippro:~# dd if=/dev/zero of=/dev/null bs=1M count=1000
1048576000 bytes (1000.0MB) copied, 6.844352 seconds, 146.1MB/s

After:
root@vgoippro:~# dd if=/dev/zero of=/dev/null bs=1M count=1000
1048576000 bytes (1000.0MB) copied, 3.218854 seconds, 310.7MB/s

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/time: inline arch_vtime_task_switch()
Christophe Leroy [Tue, 29 May 2018 16:19:14 +0000 (16:19 +0000)]
powerpc/time: inline arch_vtime_task_switch()

arch_vtime_task_switch() is a small function which is called
only from vtime_common_task_switch(), so it is worth inlining

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/Makefile: set -mcpu=860 flag for the 8xx
Christophe Leroy [Mon, 28 May 2018 06:08:34 +0000 (06:08 +0000)]
powerpc/Makefile: set -mcpu=860 flag for the 8xx

When compiled with GCC 8.1, vmlinux is significantly bigger than
with GCC 4.8.

When looking at the generated code with objdump, we notice that
all functions and loops are aligned to 16 bytes. This significantly
increases the size of the kernel. It is pointless and even
counterproductive, as on the 8xx a 'nop' also consumes one clock cycle.

Size of vmlinux with GCC 4.8:
   text    data     bss     dec     hex filename
5801948 1626076  457796 7885820  7853fc vmlinux

Size of vmlinux with GCC 8.1:
   text    data     bss     dec     hex filename
6764592 1630652  456476 8851720  871108 vmlinux

Size of vmlinux with GCC 8.1 and this patch:
   text    data     bss     dec     hex filename
6331544 1631756  456476 8419776  8079c0 vmlinux

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Implement csum_ipv6_magic in assembly
Christophe Leroy [Thu, 24 May 2018 11:33:18 +0000 (11:33 +0000)]
powerpc: Implement csum_ipv6_magic in assembly

The generic csum_ipv6_magic() generates a pretty bad result

00000000 <csum_ipv6_magic>: (PPC32)
   0: 81 23 00 00  lwz     r9,0(r3)
   4: 81 03 00 04  lwz     r8,4(r3)
   8: 7c e7 4a 14  add     r7,r7,r9
   c: 7d 29 38 10  subfc   r9,r9,r7
  10: 7d 4a 51 10  subfe   r10,r10,r10
  14: 7d 27 42 14  add     r9,r7,r8
  18: 7d 2a 48 50  subf    r9,r10,r9
  1c: 80 e3 00 08  lwz     r7,8(r3)
  20: 7d 08 48 10  subfc   r8,r8,r9
  24: 7d 4a 51 10  subfe   r10,r10,r10
  28: 7d 29 3a 14  add     r9,r9,r7
  2c: 81 03 00 0c  lwz     r8,12(r3)
  30: 7d 2a 48 50  subf    r9,r10,r9
  34: 7c e7 48 10  subfc   r7,r7,r9
  38: 7d 4a 51 10  subfe   r10,r10,r10
  3c: 7d 29 42 14  add     r9,r9,r8
  40: 7d 2a 48 50  subf    r9,r10,r9
  44: 80 e4 00 00  lwz     r7,0(r4)
  48: 7d 08 48 10  subfc   r8,r8,r9
  4c: 7d 4a 51 10  subfe   r10,r10,r10
  50: 7d 29 3a 14  add     r9,r9,r7
  54: 7d 2a 48 50  subf    r9,r10,r9
  58: 81 04 00 04  lwz     r8,4(r4)
  5c: 7c e7 48 10  subfc   r7,r7,r9
  60: 7d 4a 51 10  subfe   r10,r10,r10
  64: 7d 29 42 14  add     r9,r9,r8
  68: 7d 2a 48 50  subf    r9,r10,r9
  6c: 80 e4 00 08  lwz     r7,8(r4)
  70: 7d 08 48 10  subfc   r8,r8,r9
  74: 7d 4a 51 10  subfe   r10,r10,r10
  78: 7d 29 3a 14  add     r9,r9,r7
  7c: 7d 2a 48 50  subf    r9,r10,r9
  80: 81 04 00 0c  lwz     r8,12(r4)
  84: 7c e7 48 10  subfc   r7,r7,r9
  88: 7d 4a 51 10  subfe   r10,r10,r10
  8c: 7d 29 42 14  add     r9,r9,r8
  90: 7d 2a 48 50  subf    r9,r10,r9
  94: 7d 08 48 10  subfc   r8,r8,r9
  98: 7d 4a 51 10  subfe   r10,r10,r10
  9c: 7d 29 2a 14  add     r9,r9,r5
  a0: 7d 2a 48 50  subf    r9,r10,r9
  a4: 7c a5 48 10  subfc   r5,r5,r9
  a8: 7c 63 19 10  subfe   r3,r3,r3
  ac: 7d 29 32 14  add     r9,r9,r6
  b0: 7d 23 48 50  subf    r9,r3,r9
  b4: 7c c6 48 10  subfc   r6,r6,r9
  b8: 7c 63 19 10  subfe   r3,r3,r3
  bc: 7c 63 48 50  subf    r3,r3,r9
  c0: 54 6a 80 3e  rotlwi  r10,r3,16
  c4: 7c 63 52 14  add     r3,r3,r10
  c8: 7c 63 18 f8  not     r3,r3
  cc: 54 63 84 3e  rlwinm  r3,r3,16,16,31
  d0: 4e 80 00 20  blr

0000000000000000 <.csum_ipv6_magic>: (PPC64)
   0: 81 23 00 00  lwz     r9,0(r3)
   4: 80 03 00 04  lwz     r0,4(r3)
   8: 81 63 00 08  lwz     r11,8(r3)
   c: 7c e7 4a 14  add     r7,r7,r9
  10: 7f 89 38 40  cmplw   cr7,r9,r7
  14: 7d 47 02 14  add     r10,r7,r0
  18: 7d 30 10 26  mfocrf  r9,1
  1c: 55 29 f7 fe  rlwinm  r9,r9,30,31,31
  20: 7d 4a 4a 14  add     r10,r10,r9
  24: 7f 80 50 40  cmplw   cr7,r0,r10
  28: 7d 2a 5a 14  add     r9,r10,r11
  2c: 80 03 00 0c  lwz     r0,12(r3)
  30: 81 44 00 00  lwz     r10,0(r4)
  34: 7d 10 10 26  mfocrf  r8,1
  38: 55 08 f7 fe  rlwinm  r8,r8,30,31,31
  3c: 7d 29 42 14  add     r9,r9,r8
  40: 81 04 00 04  lwz     r8,4(r4)
  44: 7f 8b 48 40  cmplw   cr7,r11,r9
  48: 7d 29 02 14  add     r9,r9,r0
  4c: 7d 70 10 26  mfocrf  r11,1
  50: 55 6b f7 fe  rlwinm  r11,r11,30,31,31
  54: 7d 29 5a 14  add     r9,r9,r11
  58: 7f 80 48 40  cmplw   cr7,r0,r9
  5c: 7d 29 52 14  add     r9,r9,r10
  60: 7c 10 10 26  mfocrf  r0,1
  64: 54 00 f7 fe  rlwinm  r0,r0,30,31,31
  68: 7d 69 02 14  add     r11,r9,r0
  6c: 7f 8a 58 40  cmplw   cr7,r10,r11
  70: 7c 0b 42 14  add     r0,r11,r8
  74: 81 44 00 08  lwz     r10,8(r4)
  78: 7c f0 10 26  mfocrf  r7,1
  7c: 54 e7 f7 fe  rlwinm  r7,r7,30,31,31
  80: 7c 00 3a 14  add     r0,r0,r7
  84: 7f 88 00 40  cmplw   cr7,r8,r0
  88: 7d 20 52 14  add     r9,r0,r10
  8c: 80 04 00 0c  lwz     r0,12(r4)
  90: 7d 70 10 26  mfocrf  r11,1
  94: 55 6b f7 fe  rlwinm  r11,r11,30,31,31
  98: 7d 29 5a 14  add     r9,r9,r11
  9c: 7f 8a 48 40  cmplw   cr7,r10,r9
  a0: 7d 29 02 14  add     r9,r9,r0
  a4: 7d 70 10 26  mfocrf  r11,1
  a8: 55 6b f7 fe  rlwinm  r11,r11,30,31,31
  ac: 7d 29 5a 14  add     r9,r9,r11
  b0: 7f 80 48 40  cmplw   cr7,r0,r9
  b4: 7d 29 2a 14  add     r9,r9,r5
  b8: 7c 10 10 26  mfocrf  r0,1
  bc: 54 00 f7 fe  rlwinm  r0,r0,30,31,31
  c0: 7d 29 02 14  add     r9,r9,r0
  c4: 7f 85 48 40  cmplw   cr7,r5,r9
  c8: 7c 09 32 14  add     r0,r9,r6
  cc: 7d 50 10 26  mfocrf  r10,1
  d0: 55 4a f7 fe  rlwinm  r10,r10,30,31,31
  d4: 7c 00 52 14  add     r0,r0,r10
  d8: 7f 80 30 40  cmplw   cr7,r0,r6
  dc: 7d 30 10 26  mfocrf  r9,1
  e0: 55 29 ef fe  rlwinm  r9,r9,29,31,31
  e4: 7c 09 02 14  add     r0,r9,r0
  e8: 54 03 80 3e  rotlwi  r3,r0,16
  ec: 7c 03 02 14  add     r0,r3,r0
  f0: 7c 03 00 f8  not     r3,r0
  f4: 78 63 84 22  rldicl  r3,r3,48,48
  f8: 4e 80 00 20  blr

This patch implements it in assembly for both PPC32 and PPC64

Link: https://github.com/linuxppc/linux/issues/9
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/32: Optimise __csum_partial()
Christophe Leroy [Thu, 24 May 2018 11:22:27 +0000 (11:22 +0000)]
powerpc/32: Optimise __csum_partial()

Improve __csum_partial by interleaving loads and adds.

On an 8xx, it brings neither improvement nor degradation.
On a 83xx, it brings a 25% improvement.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/lib: Adjust .balign inside string functions for PPC32
Christophe Leroy [Fri, 18 May 2018 13:01:16 +0000 (15:01 +0200)]
powerpc/lib: Adjust .balign inside string functions for PPC32

commit 87a156fb18fe1 ("Align hot loops of some string functions")
degraded the performance of string functions by adding useless
nops.

A simple benchmark on an 8xx calling 100000x a memchr() that
matches the first byte runs in 41668 TB ticks before this patch
and in 35986 TB ticks after this patch. So this gives an
improvement of approximately 10%.

Another benchmark doing the same with a memchr() matching the 128th
byte runs in 1011365 TB ticks before this patch and 1005682 TB ticks
after this patch, so regardless of the number of loops, removing
those useless nops improves the test by 5683 TB ticks.

Fixes: 87a156fb18fe1 ("Align hot loops of some string functions")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/signal32: Use fault_in_pages_readable() to prefault user context
Christophe Leroy [Tue, 24 Apr 2018 16:04:25 +0000 (18:04 +0200)]
powerpc/signal32: Use fault_in_pages_readable() to prefault user context

Use fault_in_pages_readable() to prefault the user context
instead of open coding it.
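
Usage is of the form (a sketch; the frame pointer name is illustrative):

    /* prefault the user context so the later accesses are unlikely to fault */
    if (fault_in_pages_readable((char __user *)frame, sizeof(*frame)))
        goto badframe;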

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/8xx: Remove RTC clock on 88x
Christophe Leroy [Tue, 17 Apr 2018 12:47:35 +0000 (14:47 +0200)]
powerpc/8xx: Remove RTC clock on 88x

The 885 family of processors doesn't have the Real Time Clock.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/boot: remove unused variable in mpc8xx
Christophe Leroy [Tue, 17 Apr 2018 12:36:45 +0000 (14:36 +0200)]
powerpc/boot: remove unused variable in mpc8xx

Variable div is set but never used. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/misc: merge reloc_offset() and add_reloc_offset()
Christophe Leroy [Tue, 17 Apr 2018 11:23:10 +0000 (13:23 +0200)]
powerpc/misc: merge reloc_offset() and add_reloc_offset()

reloc_offset() is the same as add_reloc_offset(0)

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: optimises from64to32()
Christophe Leroy [Tue, 10 Apr 2018 06:34:35 +0000 (08:34 +0200)]
powerpc/64: optimises from64to32()

The current implementation of from64to32() gives a poor result:

0000000000000270 <.from64to32>:
 270: 38 00 ff ff  li      r0,-1
 274: 78 69 00 22  rldicl  r9,r3,32,32
 278: 78 00 00 20  clrldi  r0,r0,32
 27c: 7c 60 00 38  and     r0,r3,r0
 280: 7c 09 02 14  add     r0,r9,r0
 284: 78 09 00 22  rldicl  r9,r0,32,32
 288: 7c 00 4a 14  add     r0,r0,r9
 28c: 78 03 00 20  clrldi  r3,r0,32
 290: 4e 80 00 20  blr

This patch modifies from64to32() to operate in the same
spirit as csum_fold()

It swaps the two 32-bit halves of sum then it adds it with the
unswapped sum. If there is a carry from adding the two 32-bit halves,
it will carry from the lower half into the upper half, giving us the
correct sum in the upper half.
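
In C, the whole helper then reduces to roughly (ror64() is the 64-bit
rotate from <linux/bitops.h>):

    static inline __u32 from64to32(__u64 x)
    {
        /* add the two halves with end-around carry; result is in the top half */
        return (x + ror64(x, 32)) >> 32;
    }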

The resulting code is:

0000000000000260 <.from64to32>:
 260: 78 60 00 02  rotldi  r0,r3,32
 264: 7c 60 1a 14  add     r3,r0,r3
 268: 78 63 00 22  rldicl  r3,r3,32,32
 26c: 4e 80 00 20  blr

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Remove stale_map[] handling on non SMP processors
Christophe Leroy [Wed, 21 Mar 2018 14:07:53 +0000 (15:07 +0100)]
powerpc/mm: Remove stale_map[] handling on non SMP processors

stale_map[] bits are only set in steal_context_smp() so
on UP processors this map is useless. Only manage it for SMP
processors.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: constify LAST_CONTEXT in mmu_context_nohash
Christophe Leroy [Wed, 21 Mar 2018 14:07:51 +0000 (15:07 +0100)]
powerpc/mm: constify LAST_CONTEXT in mmu_context_nohash

last_context is 16 on the 8xx, 65535 on the 47x and 255 on other ones.

The kernel is exclusively built for the 8xx, for the 47x or for
another processor so the last context can be defined as a constant
depending on the processor.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Reformat old comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Avoid unnecessary test and reduce code size
Christophe Leroy [Wed, 21 Mar 2018 14:07:49 +0000 (15:07 +0100)]
powerpc/mm: Avoid unnecessary test and reduce code size

no_selective_tlbil, and hence the choice between steal_all_contexts()
and steal_context_up(), depends on the subarch and won't change at
runtime. Only the 8xx uses steal_all_contexts(), and CONFIG_PPC_8xx
is exclusive of other processors.

This patch replaces the test of the no_selective_tlbil global variable
with a compile-time test of CONFIG_PPC_8xx, as sketched below. It
avoids the runtime test and removes unnecessary code.
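
A rough sketch of the resulting test (surrounding code elided):

    /* was: if (no_selective_tlbil) */
    if (IS_ENABLED(CONFIG_PPC_8xx))
        id = steal_all_contexts();
    else
        id = steal_context_up(id);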

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: constify FIRST_CONTEXT in mmu_context_nohash
Christophe Leroy [Wed, 21 Mar 2018 14:07:47 +0000 (15:07 +0100)]
powerpc/mm: constify FIRST_CONTEXT in mmu_context_nohash

First context is now 1 for all supported platforms, so it
can be made a constant.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/dma: remove unnecessary BUG()
Christophe Leroy [Wed, 28 Feb 2018 18:21:45 +0000 (19:21 +0100)]
powerpc/dma: remove unnecessary BUG()

Direction is already checked in all calling functions in
include/linux/dma-mapping.h and also in the called function __dma_sync().

So there is really no need to check it once more here.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/sstep: Fix emulate_step test if VSX not present
Ravi Bangoria [Mon, 21 May 2018 04:21:08 +0000 (09:51 +0530)]
powerpc/sstep: Fix emulate_step test if VSX not present

emulate_step() tests are failing if VSX is not supported or disabled.

  emulate_step_test: lxvd2x         : FAIL
  emulate_step_test: stxvd2x        : FAIL

If !CPU_FTR_VSX, emulate_step() failure is expected and testcase should
PASS with a valid justification. After patch:

  emulate_step_test: lxvd2x         : PASS (!CPU_FTR_VSX)
  emulate_step_test: stxvd2x        : PASS (!CPU_FTR_VSX)

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/sstep: Fix kernel crash if VSX is not present
Ravi Bangoria [Mon, 21 May 2018 04:21:07 +0000 (09:51 +0530)]
powerpc/sstep: Fix kernel crash if VSX is not present

emulate_step() does not check the runtime VSX feature flag before
emulating an instruction. This causes a kernel crash when the kernel
is compiled with CONFIG_VSX=y but is running on a machine where VSX
is not supported or is disabled. For example, while running the
emulate_step tests on a P6 machine:

  Oops: Exception in kernel mode, sig: 4 [#1]
  NIP [c000000000095c24] .load_vsrn+0x28/0x54
  LR [c000000000094bdc] .emulate_loadstore+0x167c/0x17b0
  Call Trace:
    0x40fe240c7ae147ae (unreliable)
    .emulate_loadstore+0x167c/0x17b0
    .emulate_step+0x25c/0x5bc
    .test_lxvd2x_stxvd2x+0x64/0x154
    .test_emulate_step+0x38/0x4c
    .do_one_initcall+0x5c/0x2c0
    .kernel_init_freeable+0x314/0x4cc
    .kernel_init+0x24/0x160
    .ret_from_kernel_thread+0x58/0xb4

With fix:
  emulate_step_test: lxvd2x         : FAIL
  emulate_step_test: stxvd2x        : FAIL
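
The fix amounts to a runtime feature check before emulating VSX
loads/stores, along these lines (a sketch, not the exact hunk):

    if (!cpu_has_feature(CPU_FTR_VSX))
        return -1;    /* can't emulate, let the caller handle it */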

Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/sstep: Introduce GETTYPE macro
Ravi Bangoria [Mon, 21 May 2018 04:21:06 +0000 (09:51 +0530)]
powerpc/sstep: Introduce GETTYPE macro

Replace 'op->type & INSTR_TYPE_MASK' expression with GETTYPE(op->type)
macro.
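
The macro simply keeps the masking in one place, along the lines of:

    #define GETTYPE(flags)    ((flags) & INSTR_TYPE_MASK)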

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoselftests/powerpc: Add perf breakpoint test
Michael Neuling [Mon, 28 May 2018 23:22:38 +0000 (09:22 +1000)]
selftests/powerpc: Add perf breakpoint test

This tests perf hardware breakpoints (ie PERF_TYPE_BREAKPOINT) on
powerpc.
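
Setting up such an event looks roughly like this (a minimal sketch, not
the selftest itself; the watched variable is illustrative and the usual
perf_event_open headers are assumed):

    struct perf_event_attr attr = {
        .type           = PERF_TYPE_BREAKPOINT,
        .size           = sizeof(attr),
        .bp_type        = HW_BREAKPOINT_RW,
        .bp_addr        = (__u64)(unsigned long)&watched_var,
        .bp_len         = HW_BREAKPOINT_LEN_4,
        .exclude_kernel = 1,
    };
    int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);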

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Enhance the information in cpu_show_spectre_v1()
Michal Suchanek [Mon, 28 May 2018 13:19:14 +0000 (15:19 +0200)]
powerpc/64s: Enhance the information in cpu_show_spectre_v1()

We now have barrier_nospec as mitigation so print it in
cpu_show_spectre_v1() when enabled.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Use barrier_nospec in syscall entry
Michael Ellerman [Tue, 24 Apr 2018 04:15:59 +0000 (14:15 +1000)]
powerpc/64: Use barrier_nospec in syscall entry

Our syscall entry is done in assembly so patch in an explicit
barrier_nospec.

Based on a patch by Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Use barrier_nospec in copy_from_user()
Michael Ellerman [Tue, 24 Apr 2018 04:15:58 +0000 (14:15 +1000)]
powerpc: Use barrier_nospec in copy_from_user()

Based on the x86 commit doing the same.

See commit 304ec1b05031 ("x86/uaccess: Use __uaccess_begin_nospec()
and uaccess_try_nospec") and b3bbfb3fb5d2 ("x86: Introduce
__uaccess_begin_nospec() and uaccess_try_nospec") for more detail.

In all cases we are ordering the load from the potentially
user-controlled pointer vs a previous branch based on an access_ok()
check or similar.
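
The resulting pattern is roughly (a sketch):

    if (!access_ok(VERIFY_READ, from, n))
        return n;
    barrier_nospec();    /* don't speculate the load past the access_ok() branch */
    /* ... copy from the user pointer ... */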

Based on a patch from Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Enable barrier_nospec based on firmware settings
Michal Suchanek [Tue, 24 Apr 2018 04:15:57 +0000 (14:15 +1000)]
powerpc/64s: Enable barrier_nospec based on firmware settings

Check what firmware told us and enable/disable the barrier_nospec as
appropriate.

We err on the side of enabling the barrier, as it's no-op on older
systems, see the comment for more detail.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Patch barrier_nospec in modules
Michal Suchanek [Tue, 24 Apr 2018 04:15:56 +0000 (14:15 +1000)]
powerpc/64s: Patch barrier_nospec in modules

Note that unlike RFI, which is patched only in the kernel, the nospec
state reflects the settings at the time the module was loaded.

Iterating all modules and re-patching every time the settings change
is not implemented.

Based on lwsync patching.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Add support for ori barrier_nospec patching
Michal Suchanek [Tue, 24 Apr 2018 04:15:55 +0000 (14:15 +1000)]
powerpc/64s: Add support for ori barrier_nospec patching

Based on the RFI patching. This is required to be able to disable the
speculation barrier.

Only one barrier type is supported, and it does nothing when the
firmware does not enable it. Also, re-patching modules is not supported,
so the only meaningful thing that can be done is patching out the
speculation barrier at boot when the user says it is not wanted.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Add barrier_nospec
Michal Suchanek [Tue, 24 Apr 2018 04:15:54 +0000 (14:15 +1000)]
powerpc/64s: Add barrier_nospec

A no-op form of ori (or immediate of 0 into r31 and the result stored
in r31) has been re-tasked as a speculation barrier. The instruction
only acts as a barrier on newer machines with appropriate firmware
support. On older CPUs it remains a harmless no-op.

Implement barrier_nospec using this instruction.

mpe: The semantics of the instruction are believed to be that it
prevents execution of subsequent instructions until preceding branches
have been fully resolved and are no longer executing speculatively.
There is no further documentation available at this time.
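
A minimal sketch of the barrier itself (in reality the kernel patches
the instruction in or out at boot and module load depending on firmware
settings, see the related patching patches in this series):

    /* "ori 31,31,0" is an architected no-op reused as a speculation barrier */
    #define barrier_nospec()    asm volatile("ori 31,31,0" ::: "memory")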

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/stacktrace: Update copyright
Michael Ellerman [Wed, 2 May 2018 13:07:29 +0000 (23:07 +1000)]
powerpc/stacktrace: Update copyright

This now has new code in it written by Nick and me, and switches to
an SPDX tag.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years agopowerpc/64s: Wire up arch_trigger_cpumask_backtrace()
Michael Ellerman [Wed, 2 May 2018 13:07:28 +0000 (23:07 +1000)]
powerpc/64s: Wire up arch_trigger_cpumask_backtrace()

This allows eg. the RCU stall detector, or the soft/hardlockup
detectors to trigger a backtrace on all CPUs.

We implement this by sending a "safe" NMI, which will actually only
send an IPI. Unfortunately the generic code prints "NMI", so that's a
little confusing but we can probably live with it.

If one of the CPUs doesn't respond to the IPI, we then print some info
from its paca and do a backtrace based on its saved_r1.

Example output:

  INFO: rcu_sched detected stalls on CPUs/tasks:
   2-...0: (0 ticks this GP) idle=1be/1/4611686018427387904 softirq=1055/1055 fqs=25735
   (detected by 4, t=58847 jiffies, g=58, c=57, q=1258)
  Sending NMI from CPU 4 to CPUs 2:
  CPU 2 didn't respond to backtrace IPI, inspecting paca.
  irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 3623 (bash)
  Back trace of paca->saved_r1 (0xc0000000e1c83ba0) (possibly stale):
  Call Trace:
  [c0000000e1c83ba0] [0000000000000014] 0x14 (unreliable)
  [c0000000e1c83bc0] [c000000000765798] lkdtm_do_action+0x48/0x80
  [c0000000e1c83bf0] [c000000000765a40] direct_entry+0x110/0x1b0
  [c0000000e1c83c90] [c00000000058e650] full_proxy_write+0x90/0xe0
  [c0000000e1c83ce0] [c0000000003aae3c] __vfs_write+0x6c/0x1f0
  [c0000000e1c83d80] [c0000000003ab214] vfs_write+0xd4/0x240
  [c0000000e1c83dd0] [c0000000003ab5cc] ksys_write+0x6c/0x110
  [c0000000e1c83e30] [c00000000000b860] system_call+0x58/0x6c

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years agopowerpc/nmi: Add an API for sending "safe" NMIs
Michael Ellerman [Wed, 2 May 2018 13:07:27 +0000 (23:07 +1000)]
powerpc/nmi: Add an API for sending "safe" NMIs

Currently the options we have for sending NMIs are not necessarily
safe, that is they can potentially interrupt a CPU in a
non-recoverable region of code, meaning the kernel must then panic().

But we'd like to use smp_send_nmi_ipi() to do cross-CPU calls in
situations where we don't want to risk a panic(), because it doesn't
have the requirement that interrupts must be enabled like
smp_call_function().

So add an API for the caller to indicate that it wants to use the NMI
infrastructure, but doesn't want to do anything "unsafe".

Currently that is implemented by not actually calling cause_nmi_ipi(),
instead falling back to an IPI. In future we can pass the safe
parameter down to cause_nmi_ipi() and the individual backends can
potentially take it into account before deciding what to do.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years agopowerpc/64: Save stack pointer when we hard disable interrupts
Michael Ellerman [Wed, 2 May 2018 13:07:26 +0000 (23:07 +1000)]
powerpc/64: Save stack pointer when we hard disable interrupts

A CPU that gets stuck with interrupts hard disabled can be difficult to
debug, as on some platforms we have no way to interrupt the CPU to
find out what it's doing.

A stop-gap is to have the CPU save its stack pointer (r1) in its paca
when it hard disables interrupts. That way if we can't interrupt it,
we can at least trace the stack based on where it last disabled
interrupts.

In some cases that will be total junk, but the stack trace code should
handle that. In the simple case of a CPU that disables interrupts and
then gets stuck in a loop, the stack trace should be informative.

We could clear the saved stack pointer when we enable interrupts, but
that loses information which could be useful if we have nothing else
to go on.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years agopowerpc: Check address limit on user-mode return (TIF_FSCHECK)
Michael Ellerman [Mon, 14 May 2018 13:03:16 +0000 (23:03 +1000)]
powerpc: Check address limit on user-mode return (TIF_FSCHECK)

set_fs() sets the addr_limit, which is used in access_ok() to
determine if an address is a user or kernel address.

Some code paths use set_fs() to temporarily elevate the addr_limit so
that kernel code can read/write kernel memory as if it were user
memory. That is fine as long as the code can't ever return to
userspace with the addr_limit still elevated.

If that did happen, then userspace can read/write kernel memory as if
it were user memory, eg. just with write(2). In case it's not clear,
that is very bad. It has also happened in the past due to bugs.

Commit 5ea0727b163c ("x86/syscalls: Check address limit on user-mode
return") added a mechanism to check the addr_limit value before
returning to userspace. Any call to set_fs() sets a thread flag,
TIF_FSCHECK, and if we see that on the return to userspace we go out
of line to check that the addr_limit value is not elevated.
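
In other words, set_fs() becomes roughly (a sketch of the mechanism):

    static inline void set_fs(mm_segment_t fs)
    {
        current->thread.addr_limit = fs;
        /* flag is checked (and warned on) on the next return to userspace */
        set_thread_flag(TIF_FSCHECK);
    }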

For further info see the above commit, as well as:
  https://lwn.net/Articles/722267/
  https://bugs.chromium.org/p/project-zero/issues/detail?id=990

Verified to work on 64-bit Book3S using a POC that objdumps the system
call handler, and a modified lkdtm_CORRUPT_USER_DS() that doesn't kill
the caller.

Before:
  $ sudo ./test-tif-fscheck
  ...
  0000000000000000 <.data>:
         0:       e1 f7 8a 79     rldicl. r10,r12,30,63
         4:       80 03 82 40     bne     0x384
         8:       00 40 8a 71     andi.   r10,r12,16384
         c:       78 0b 2a 7c     mr      r10,r1
        10:       10 fd 21 38     addi    r1,r1,-752
        14:       08 00 c2 41     beq-    0x1c
        18:       58 09 2d e8     ld      r1,2392(r13)
        1c:       00 00 41 f9     std     r10,0(r1)
        20:       70 01 61 f9     std     r11,368(r1)
        24:       78 01 81 f9     std     r12,376(r1)
        28:       70 00 01 f8     std     r0,112(r1)
        2c:       78 00 41 f9     std     r10,120(r1)
        30:       20 00 82 41     beq     0x50
        34:       a6 42 4c 7d     mftb    r10

After:

  $ sudo ./test-tif-fscheck
  Killed

And in dmesg:
  Invalid address limit on user-mode return
  WARNING: CPU: 1 PID: 3689 at ../include/linux/syscalls.h:260 do_notify_resume+0x140/0x170
  ...
  NIP [c00000000001ee50] do_notify_resume+0x140/0x170
  LR [c00000000001ee4c] do_notify_resume+0x13c/0x170
  Call Trace:
    do_notify_resume+0x13c/0x170 (unreliable)
    ret_from_except_lite+0x70/0x74

Performance overhead is essentially zero in the usual case, because
the bit is checked as part of the existing _TIF_USER_WORK_MASK check.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Rename thread_struct.fs to addr_limit
Michael Ellerman [Mon, 14 May 2018 13:03:15 +0000 (23:03 +1000)]
powerpc: Rename thread_struct.fs to addr_limit

It's called 'fs' for historical reasons, it's named after the x86 'FS'
register. But we don't have to use that name for the member of
thread_struct, and in fact arch/x86 doesn't even call it 'fs' anymore.

So rename it to 'addr_limit', which better reflects what it's used
for, and is also the name used on other arches.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/ptrace: Use copy_{from, to}_user() rather than open-coding
Al Viro [Tue, 29 May 2018 12:57:38 +0000 (22:57 +1000)]
powerpc/ptrace: Use copy_{from, to}_user() rather than open-coding

In PPC_PTRACE_GETHWDBGINFO and PPC_PTRACE_SETHWDEBUG we do an
access_ok() check and then __copy_{from,to}_user().

Instead we should just use copy_{from,to}_user() which does all that
for us and is less error prone.
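
The change follows the usual pattern (a sketch with illustrative names,
not the exact hunk):

    /* before: open-coded check + __copy_from_user() */
    if (!access_ok(VERIFY_READ, uarg, sizeof(info)))
        return -EIO;
    __copy_from_user(&info, uarg, sizeof(info));

    /* after: copy_from_user() does the checking for us */
    if (copy_from_user(&info, uarg, sizeof(info)))
        return -EFAULT;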

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Refactor report functions
Sam Bobroff [Fri, 25 May 2018 03:11:40 +0000 (13:11 +1000)]
powerpc/eeh: Refactor report functions

The EEH report functions now share a fair bit of code around the start
and end of each function.

So factor out as much as possible, and move the traversal into a
custom function. This also allows accurate debug output to be
generated more easily.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
[mpe: Format with clang-format]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Cleaner handling of EEH_DEV_NO_HANDLER
Sam Bobroff [Fri, 25 May 2018 03:11:39 +0000 (13:11 +1000)]
powerpc/eeh: Cleaner handling of EEH_DEV_NO_HANDLER

If a device without a driver is recovered via EEH, the flag
EEH_DEV_NO_HANDLER is incorrectly left set on the device after
recovery, because the test in eeh_report_resume() for the existence of
a bound driver is done before the flag is cleared. If a driver is
later bound, and EEH is experienced again, some of the driver's EEH
handlers are not called.

To correct this, clear the flag unconditionally after EEH processing
is complete.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Introduce eeh_set_irq_state()
Sam Bobroff [Fri, 25 May 2018 03:11:38 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_set_irq_state()

To ease future refactoring, extract calls to eeh_enable_irq() and
eeh_disable_irq() from the various report functions. This makes
the report functions initial sequences more similar, as well as making
the IRQ changes visible when reading eeh_handle_normal_event().

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Introduce eeh_set_channel_state()
Sam Bobroff [Fri, 25 May 2018 03:11:37 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_set_channel_state()

To ease future refactoring, extract setting of the channel state
from the report functions out into their own functions. This increases
the amount of code that is identical across all of the report
functions.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Introduce eeh_edev_actionable()
Sam Bobroff [Fri, 25 May 2018 03:11:36 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_edev_actionable()

The same test is done in every EEH report function, so factor it out.

Since eeh_dev_removed() needs to be moved higher up in the file,
simplify it a little while we're at it.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Introduce eeh_for_each_pe()
Sam Bobroff [Fri, 25 May 2018 03:11:35 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_for_each_pe()

Add a for_each-style macro for iterating through PEs without the
boilerplate required by a traversal function. eeh_pe_next() is now
exported, as it is now used directly in place.
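
The macro is the usual for_each style, roughly:

    #define eeh_for_each_pe(root, pe) \
        for ((pe) = (root); (pe); (pe) = eeh_pe_next((pe), (root)))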

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Clean up pci_ers_result handling
Sam Bobroff [Fri, 25 May 2018 03:11:34 +0000 (13:11 +1000)]
powerpc/eeh: Clean up pci_ers_result handling

As EEH event handling progresses, a cumulative result of type
pci_ers_result is built up by (some of) the eeh_report_*() functions
using either:
if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
if (*res == PCI_ERS_RESULT_NONE) *res = rc;
or:
if ((*res == PCI_ERS_RESULT_NONE) ||
    (*res == PCI_ERS_RESULT_RECOVERED)) *res = rc;
if (*res == PCI_ERS_RESULT_DISCONNECT &&
    rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
(Where *res is the accumulator.)

However, the intent is not immediately clear and the result in some
situations is order dependent.

Address this by assigning a priority to each result value, and always
merging to the highest priority. This renders the intent clear, and
provides a stable value for all orderings.
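
A sketch of the priority-merge approach (the exact priority ordering
shown here is illustrative, not the one used by the patch):

    static int result_priority(enum pci_ers_result r)
    {
        /* higher value wins; illustrative ordering only */
        switch (r) {
        case PCI_ERS_RESULT_NONE:       return 1;
        case PCI_ERS_RESULT_RECOVERED:  return 2;
        case PCI_ERS_RESULT_NEED_RESET: return 3;
        case PCI_ERS_RESULT_DISCONNECT: return 4;
        default:                        return 0;
        }
    }

    /* always merge to the highest-priority result seen so far */
    if (result_priority(rc) > result_priority(*res))
        *res = rc;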

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
[mpe: Minor formatting (clang-format)]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Add message when PE processing at parent
Sam Bobroff [Fri, 25 May 2018 03:11:33 +0000 (13:11 +1000)]
powerpc/eeh: Add message when PE processing at parent

To aid debugging, add a message to show when EEH processing for a PE
will be done at the device's parent, rather than directly at the
device.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Strengthen types of eeh traversal functions
Sam Bobroff [Fri, 25 May 2018 03:11:32 +0000 (13:11 +1000)]
powerpc/eeh: Strengthen types of eeh traversal functions

The traversal functions eeh_pe_traverse() and eeh_pe_dev_traverse()
both provide their first argument as void * but every single user casts
it to the expected type.

Change the type of the first parameter from void * to the appropriate
type, and clean up all uses.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Remove unused eeh_pcid_name()
Sam Bobroff [Fri, 25 May 2018 03:11:31 +0000 (13:11 +1000)]
powerpc/eeh: Remove unused eeh_pcid_name()

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Fix use-after-release of EEH driver
Sam Bobroff [Fri, 25 May 2018 03:11:30 +0000 (13:11 +1000)]
powerpc/eeh: Fix use-after-release of EEH driver

Correct two cases where eeh_pcid_get() is used to reference the driver's
module but the reference is dropped before the driver pointer is used.

In eeh_rmv_device() also refactor a little so that only two calls to
eeh_pcid_put() are needed, rather than three, and the reference isn't
taken at all when it isn't needed.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Add final message for successful recovery
Sam Bobroff [Fri, 25 May 2018 03:11:28 +0000 (13:11 +1000)]
powerpc/eeh: Add final message for successful recovery

Add a single log line at the end of successful EEH recovery, so that
it's clear that event processing has finished.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Unregister thread-imc if core-imc not supported
Anju T Sudhakar [Tue, 22 May 2018 09:12:37 +0000 (14:42 +0530)]
powerpc/perf: Unregister thread-imc if core-imc not supported

Since thread-imc internally uses the core-imc hardware infrastructure
and depends on it, having thread-imc in the kernel in the absence of
core-imc is of no use. This patch disables thread-imc if core-imc is
not registered.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Return appropriate value for unknown domain
Anju T Sudhakar [Tue, 22 May 2018 09:12:36 +0000 (14:42 +0530)]
powerpc/perf: Return appropriate value for unknown domain

Return proper error code for unknown domain during IMC initialization.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Replace the direct return with goto statement
Anju T Sudhakar [Tue, 22 May 2018 09:12:35 +0000 (14:42 +0530)]
powerpc/perf: Replace the direct return with goto statement

Replace the direct return statement in imc_mem_init() with goto, to adhere
to the kernel coding style.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Rearrange memory freeing in imc init
Anju T Sudhakar [Tue, 22 May 2018 09:12:34 +0000 (14:42 +0530)]
powerpc/perf: Rearrange memory freeing in imc init

When any of the IMC (In-Memory Collection counter) devices fails
to initialize, imc_common_mem_free() frees a set of memory allocations.
In doing so, the pmu_ptr pointer is also freed. But the pmu_ptr pointer
is used in a subsequent function (imc_common_cpuhp_mem_free()), which is
wrong. This patch reorders the code to avoid such access.

Also free the memory which is dynamically allocated during imc
initialization, wherever required.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xics: Add missing of_node_put() in error path
YueHaibing [Wed, 25 Apr 2018 11:27:07 +0000 (19:27 +0800)]
powerpc/xics: Add missing of_node_put() in error path

The device node obtained with of_find_compatible_node() should be
released by calling of_node_put().  But it was not released when
of_get_property() failed.
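
The fixed error path looks roughly like this (the compatible string and
property name are illustrative placeholders):

    np = of_find_compatible_node(NULL, NULL, "ibm,example");       /* illustrative */
    if (!np)
        return;
    prop = of_get_property(np, "example-property", NULL);          /* illustrative */
    if (prop) {
        /* ... use prop ... */
    }
    of_node_put(np);    /* drop the reference on success and failure alike */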

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
[mpe: Invert the sense of the if so we only need one return path]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: cpm_gpio: Remove owner assignment from platform_driver
Fabio Estevam [Sat, 5 May 2018 03:01:25 +0000 (00:01 -0300)]
powerpc: cpm_gpio: Remove owner assignment from platform_driver

Structure platform_driver does not need to set the owner field, as this
will be populated by the driver core.

Generated by scripts/coccinelle/api/platform_no_drv_owner.cocci.

Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xive: Remove (almost) unused macros
Russell Currey [Fri, 11 May 2018 08:03:13 +0000 (18:03 +1000)]
powerpc/xive: Remove (almost) unused macros

The GETFIELD and SETFIELD macros in xive-regs.h aren't used except for
a single instance of GETFIELD, so replace that and remove them.

These macros are also defined in vas.h, so either those should be
eventually replaced or the macros moved into bitops.h.

Signed-off-by: Russell Currey <ruscur@russell.cc>
[mpe: Rewrite the assignment to 'he' to avoid ffs() etc.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agohvc_opal: don't set tb_ticks_per_usec in udbg_init_opal_common()
Stewart Smith [Thu, 29 Mar 2018 06:02:46 +0000 (17:02 +1100)]
hvc_opal: don't set tb_ticks_per_usec in udbg_init_opal_common()

time_init() will set up tb_ticks_per_usec based on reality.
time_init() is called *after* udbg_init_opal_common() during boot.

from arch/powerpc/kernel/time.c:
  unsigned long tb_ticks_per_usec = 100; /* sane default */

Currently, all powernv systems have a timebase frequency of 512MHz
(512000000/1000000 == 0x200), although there's nothing written
down anywhere that I can find saying that we couldn't make that
different based on the requirements in the ISA.

So, we've been (accidentally) thwacking the (currently) correct
(for powernv at least) value for tb_ticks_per_usec earlier than
we otherwise would have.

The "sane default" seems to be adequate for our purposes between
udbg_init_opal_common() and time_init() being called, and if it isn't,
then we should probably be setting it somewhere that isn't hvc_opal.c!

Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: remove unused to_tm() helper
Arnd Bergmann [Mon, 23 Apr 2018 08:36:42 +0000 (10:36 +0200)]
powerpc: remove unused to_tm() helper

to_tm() is now completely unused, the only reference being in the
_dump_time() helper that is also unused. This removes both, leaving
the rest of the powerpc RTC code y2038 safe as far as the hardware
supports.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: use time64_t in update_persistent_clock
Arnd Bergmann [Mon, 23 Apr 2018 08:36:41 +0000 (10:36 +0200)]
powerpc: use time64_t in update_persistent_clock

update_persistent_clock() is deprecated because it suffers from overflow
in 2038 on 32-bit architectures. This changes powerpc to use the
update_persistent_clock64() replacement, and to pass down 64-bit
timestamps consistently.

This is now simpler, as we no longer have to worry about the offset
numbers in tm_year and tm_mon that are different between the Linux
conventions and RTAS.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: use time64_t in read_persistent_clock
Arnd Bergmann [Mon, 23 Apr 2018 08:36:40 +0000 (10:36 +0200)]
powerpc: use time64_t in read_persistent_clock

Looking through the remaining users of the deprecated mktime()
function, I found the powerpc rtc handlers, which use it in
place of rtc_tm_to_time64().

To clean this up, I'm changing over the read_persistent_clock()
function to the read_persistent_clock64() variant, and change
all the platform specific handlers along with it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: rtas: clean up time handling
Arnd Bergmann [Mon, 23 Apr 2018 08:36:39 +0000 (10:36 +0200)]
powerpc: rtas: clean up time handling

The to_tm() helper function operates on a signed integer for the time,
so it will suffer from overflow in 2038, even on 64-bit kernels.

Rather than fix that function, this replaces its use in the rtas
procfs implementation with the standard rtc_time64_to_tm() helper
that is very similar but is not affected by the overflow.

In order to actually support long times, the parser function gets
changed to 64-bit user input and output as well. Note that the tm_mon
and tm_year representation is slightly different, so we have to manually
add an offset here.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: always enable RTC_LIB
Arnd Bergmann [Mon, 23 Apr 2018 08:36:38 +0000 (10:36 +0200)]
powerpc: always enable RTC_LIB

In order to use the rtc_tm_to_time64() and rtc_time64_to_tm()
helper functions in later patches, we have to ensure that
CONFIG_RTC_LIB is always built-in.

Note that this symbol only controls a couple of helper functions,
not the actual RTC subsystem, which remains optional and is
enabled with CONFIG_RTC_CLASS.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pasemi: Set PCI_SCAN_ALL_PCI_DEVS
Olof Johansson [Wed, 6 Dec 2017 11:03:52 +0000 (12:03 +0100)]
powerpc/pasemi: Set PCI_SCAN_ALL_PCI_DEVS

Needed on Amiga X1000 with SB600.

Reported-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/hash: hard disable irq in the SLB insert path
Aneesh Kumar K.V [Fri, 1 Jun 2018 08:24:02 +0000 (13:54 +0530)]
powerpc/mm/hash: hard disable irq in the SLB insert path

When inserting SLB entries for an EA above 512TB, we need to hard disable
irqs. This makes sure we don't take a PMU interrupt that can possibly
touch a user space address via a stack dump.

Also add a comment explaining why we don't need a context synchronizing
isync with slbmte.

Fixes: f384796c4 ("powerpc/mm: Add support for handling > 512TB address in SLB miss")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/hugetlb: Update hugetlb related locks
Aneesh Kumar K.V [Fri, 1 Jun 2018 08:24:24 +0000 (13:54 +0530)]
powerpc/mm/hugetlb: Update hugetlb related locks

With the split pmd page table lock enabled, we don't use mm->page_table_lock
when updating pmd entries. This patch updates the hugetlb path to use the
right lock when inserting huge page directory entries into the page table.

For example, if we are using hugepd and inserting a hugepd entry at the pmd
level, we use pmd_lockptr, which depending on config can be the split pmd lock.

For updating the huge page directory entries themselves we use
mm->page_table_lock. We do have a helper huge_pte_lockptr() for that.

Fixes: 675d99529 ("powerpc/book3s64: Enable split pmd ptlock")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/hash: Add missing isync prior to kernel stack SLB switch
Aneesh Kumar K.V [Wed, 30 May 2018 13:18:04 +0000 (18:48 +0530)]
powerpc/mm/hash: Add missing isync prior to kernel stack SLB switch

Currently we do not have an isync, or any other context synchronizing
instruction prior to the slbie/slbmte in _switch() that updates the
SLB entry for the kernel stack.

However that is not correct as outlined in the ISA.

From Power ISA Version 3.0B, Book III, Chapter 11, page 1133:

  "Changing the contents of ... the contents of SLB entries ... can
   have the side effect of altering the context in which data
   addresses and instruction addresses are interpreted, and in which
   instructions are executed and data accesses are performed.
   ...
   These side effects need not occur in program order, and therefore
   may require explicit synchronization by software.
   ...
   The synchronizing instruction before the context-altering
   instruction ensures that all instructions up to and including that
   synchronizing instruction are fetched and executed in the context
   that existed before the alteration."

And page 1136:

  "For data accesses, the context synchronizing instruction before the
   slbie, slbieg, slbia, slbmte, tlbie, or tlbiel instruction ensures
   that all preceding instructions that access data storage have
   completed to a point at which they have reported all exceptions
   they will cause."

We're not aware of any bugs caused by this, but it should be fixed
regardless.

Add the missing isync when updating kernel stack SLB entry.

Cc: stable@vger.kernel.org
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Flesh out change log with more ISA text & explanation]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Fix compiler store ordering to SLB shadow area
Nicholas Piggin [Wed, 30 May 2018 10:31:22 +0000 (20:31 +1000)]
powerpc/64s: Fix compiler store ordering to SLB shadow area

The stores to update the SLB shadow area must be made as they appear
in the C code, so that the hypervisor does not see an entry with
mismatched vsid and esid. Use WRITE_ONCE for this.

GCC has been observed to elide the first store to esid in the update,
which means that if the hypervisor interrupts the guest after storing
to vsid, it could see an entry with old esid and new vsid, which may
possibly result in memory corruption.
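
A sketch of the ordering-sensitive update (field names are illustrative):

    /* Clear the ESID first so the entry is never valid with a stale VSID */
    WRITE_ONCE(entry->esid, 0);
    WRITE_ONCE(entry->vsid, new_vsid);
    WRITE_ONCE(entry->esid, new_esid);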

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask
Nicholas Piggin [Fri, 1 Jun 2018 10:01:21 +0000 (20:01 +1000)]
powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask

When a single-threaded process has a non-local mm_cpumask, try to use
that point to flush the TLBs out of other CPUs in the cpumask.

An IPI is used for clearing remote CPUs for a few reasons:
- An IPI can end lazy TLB use of the mm, which is required to prevent
  TLB entries being created on the remote CPU. The alternative is to
  drop lazy TLB switching completely, which costs 7.5% in a context
  switch ping-pong test between a process and a kernel idle thread.
- An IPI can have remote CPUs flush the entire PID, but the local CPU
  can flush a specific VA. tlbie would require over-flushing of the
  local CPU (where the process is running).
- A single threaded process that is migrated to a different CPU is
  likely to have a relatively small mm_cpumask, so IPI is reasonable.

No other thread can concurrently switch to this mm, because it must
have been given a reference to mm_users by the current thread before it
can use_mm. mm_users can be asynchronously incremented (by
mm_activate or mmget_not_zero), but those users must use remote mm
access and can't use_mm or access user address space. Existing code
makes this assumption already; for example sparc64 has reset
mm_cpumask using this condition since the start of history, see
arch/sparc/kernel/smp_64.c.

This reduces tlbies for a kernel compile workload from 0.90M to 0.12M,
tlbiels are increased significantly due to the PID flushing for the
cleaning up remote CPUs, and increased local flushes (PID flushes take
128 tlbiels vs 1 tlbie).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: optimise pte_update
Nicholas Piggin [Fri, 1 Jun 2018 10:01:20 +0000 (20:01 +1000)]
powerpc/64s/radix: optimise pte_update

Implementing pte_update with pte_xchg (which uses cmpxchg) is
inefficient. A single larx/stcx. works fine, no need for the less
efficient cmpxchg sequence.

Then remove the memory barriers from the operation. There is a
requirement for TLB flushing to load mm_cpumask after the store
that reduces pte permissions, which is moved into the TLB flush
code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags
Nicholas Piggin [Fri, 1 Jun 2018 10:01:19 +0000 (20:01 +1000)]
powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags

The ISA suggests ptesync after setting a pte, to prevent a table walk
initiated by a subsequent access from missing that store and causing a
spurious fault. This is an architectural allowance that allows an
implementation's page table walker to be incoherent with the store
queue.

However there is no correctness problem in taking a spurious fault in
userspace -- the kernel copes with these at any time, so the updated
pte will be found eventually. Spurious kernel faults on vmap memory
must be avoided, so a ptesync is put into flush_cache_vmap.

On POWER9 so far I have not found a measurable window where this can
result in more minor faults, so as an optimisation, remove the costly
ptesync from pte updates. If an implementation benefits from ptesync,
it would be better to add it back in update_mmu_cache, so it's not
done for things like fork(2).

fork --fork --exec benchmark improved 5.2% (12400->13100).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: prefetch user address in update_mmu_cache
Nicholas Piggin [Fri, 1 Jun 2018 10:01:18 +0000 (20:01 +1000)]
powerpc/64s/radix: prefetch user address in update_mmu_cache

Prefetch the faulting address in update_mmu_cache to give the page
table walker perhaps 100 cycles head start as locks are dropped and
the interrupt completed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: make ptep_get_and_clear_full non-atomic for the full case
Nicholas Piggin [Fri, 1 Jun 2018 10:01:17 +0000 (20:01 +1000)]
powerpc/64s/radix: make ptep_get_and_clear_full non-atomic for the full case

This matches other architectures, when we know there will be no
further accesses to the address (e.g., for teardown), page table
entries can be cleared non-atomically.

The comments about NMMU are bogus: all MMU notifiers (including NMMU)
are released at this point, with their TLBs flushed. An NMMU access at
this point would be a bug.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: do not flush TLB on spurious fault
Nicholas Piggin [Fri, 1 Jun 2018 10:01:16 +0000 (20:01 +1000)]
powerpc/64s/radix: do not flush TLB on spurious fault

In the case of a spurious fault (which can happen due to a race with
another thread that changes the page table), the default Linux mm code
calls flush_tlb_page for that address. This is not required because
the pte will be re-fetched. Hash does not wire this up to a hardware
TLB flush for this reason. This patch avoids the flush for radix.

From Power ISA v3.0B, p.1090:

    Setting a Reference or Change Bit or Upgrading Access Authority
    (PTE Subject to Atomic Hardware Updates)

    If the only change being made to a valid PTE that is subject to
    atomic hardware updates is to set the Reference or Change bit to
    1 or to add access authorities, a simpler sequence suffices
    because the translation hardware will refetch the PTE if an access
    is attempted for which the only problems were reference and/or
    change bits needing to be set or insufficient access authority.

The nest MMU on POWER9 does not re-fetch the PTE after such an access
attempt before faulting, so address spaces with a coprocessor
attached will continue to flush in these cases.

This reduces tlbies for a kernel compile workload from 0.95M to 0.90M.

fork --fork --exec benchmark improved 0.5% (12300->12400).

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: do not flush TLB when relaxing access
Nicholas Piggin [Fri, 1 Jun 2018 10:01:15 +0000 (20:01 +1000)]
powerpc/64s/radix: do not flush TLB when relaxing access

Radix flushes the TLB when updating ptes to increase permissiveness
of protection (increase access authority). Book3S does not require
TLB flushing in this case, and it is not done on hash. This patch
avoids the flush for radix.

From Power ISA v3.0B, p.1090:

    Setting a Reference or Change Bit or Upgrading Access Authority
    (PTE Subject to Atomic Hardware Updates)

    If the only change being made to a valid PTE that is subject to
    atomic hardware updates is to set the Reference or Change bit to 1
    or to add access authorities, a simpler sequence suffices because
    the translation hardware will refetch the PTE if an access is
    attempted for which the only problems were reference and/or change
    bits needing to be set or insufficient access authority.

The nest MMU on POWER9 does not re-fetch the PTE after such an access
attempt before faulting, so address spaces with a coprocessor
attached will continue to flush in these cases.

This reduces tlbies for a kernel compile workload from 1.28M to 0.95M,
and tlbiels from 20.17M to 19.68M.

fork --fork --exec benchmark improved 2.77% (12000->12300).

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/radix: Change pte relax sequence to handle nest MMU hang
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:41 +0000 (19:58 +0530)]
powerpc/mm/radix: Change pte relax sequence to handle nest MMU hang

When relaxing access (read -> read_write update), the pte needs to be marked
invalid to work around a nest MMU bug. We also need to do a tlb flush after
the pte is marked invalid, before updating the pte with the new access bits.

We also move the tlb flush into the platform-specific __ptep_set_access_flags.
This will help us get rid of unnecessary tlb flushes on BOOK3S 64 later; we
don't do that in this patch. This also helps in avoiding multiple tlbies with
a coprocessor attached.
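
A hedged sketch of the ordering described above (helper names are
illustrative, based on the radix helpers referenced elsewhere in this series):

    /* 1. mark the pte invalid so the nest MMU stops using it */
    radix__pte_update(mm, addr, ptep, _PAGE_PRESENT, _PAGE_INVALID, 0);
    /* 2. flush the TLB while the pte is invalid */
    radix__flush_tlb_page_psize(mm, addr, psize);
    /* 3. make the pte valid again with the new access bits */
    radix__pte_update(mm, addr, ptep, _PAGE_INVALID,
                      pte_val(entry) | _PAGE_PRESENT, 0);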

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Change function prototype
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:40 +0000 (19:58 +0530)]
powerpc/mm: Change function prototype

In a later patch, we use the vma and psize to do the tlb flush. Do the
prototype update in a separate patch to make the review easy.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/radix: Move function from radix.h to pgtable-radix.c
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:39 +0000 (19:58 +0530)]
powerpc/mm/radix: Move function from radix.h to pgtable-radix.c

In a later patch we will update this function, which requires it to be moved
to pgtable-radix.c. Keeping the function in radix.h results in the compile
errors below.

./arch/powerpc/include/asm/book3s/64/radix.h: In function ‘radix__ptep_set_access_flags’:
./arch/powerpc/include/asm/book3s/64/radix.h:196:28: error: dereferencing pointer to incomplete type ‘struct vm_area_struct’
  struct mm_struct *mm = vma->vm_mm;
                            ^~
./arch/powerpc/include/asm/book3s/64/radix.h:204:6: error: implicit declaration of function ‘atomic_read’; did you mean ‘__atomic_load’? [-Werror=implicit-function-declaration]
      atomic_read(&mm->context.copros) > 0) {
      ^~~~~~~~~~~
      __atomic_load
./arch/powerpc/include/asm/book3s/64/radix.h:204:21: error: dereferencing pointer to incomplete type ‘struct mm_struct’
      atomic_read(&mm->context.copros) > 0) {

Instead of fixing the header dependencies, we move the function to
pgtable-radix.c. Also, the function is now too large to be a static inline.
Doing the move in a separate patch helps in review.

No functional change in this patch. Only code movement.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/hugetlb: Update huge_ptep_set_access_flags to call __ptep_set_access_flags...
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:38 +0000 (19:58 +0530)]
powerpc/mm/hugetlb: Update huge_ptep_set_access_flags to call __ptep_set_access_flags directly

In a later patch, we want to update __ptep_set_access_flags to take a page
size arg. This makes ptep_set_access_flags only work with mmu_virtual_psize.
To simplify the code, make huge_ptep_set_access_flags call
__ptep_set_access_flags directly, so that we can compute the hugetlb page
size in the hugetlb function.

Now that ptep_set_access_flags won't be called for hugetlb, remove the
is_vm_hugetlb_page() check and add the pte lock assert unconditionally.
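
A hedged sketch of the resulting hugetlb path (the vma/psize prototype of
__ptep_set_access_flags is the one introduced later in this series; the
helper names here are illustrative):

    int huge_ptep_set_access_flags(struct vm_area_struct *vma, unsigned long addr,
                                   pte_t *ptep, pte_t pte, int dirty)
    {
            int changed = !pte_same(*ptep, pte);

            if (changed) {
                    /* compute the huge page size here, in the hugetlb helper */
                    int psize = hstate_get_psize(hstate_vma(vma));

                    __ptep_set_access_flags(vma, ptep, pte, addr, psize);
                    flush_hugetlb_page(vma, addr);
            }
            return changed;
    }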

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoocxl: Document new OCXL IOCTLs
Alastair D'Silva [Fri, 11 May 2018 06:13:03 +0000 (16:13 +1000)]
ocxl: Document new OCXL IOCTLs

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoocxl: Add an IOCTL so userspace knows what OCXL features are available
Alastair D'Silva [Fri, 11 May 2018 06:13:02 +0000 (16:13 +1000)]
ocxl: Add an IOCTL so userspace knows what OCXL features are available

In order for a userspace AFU driver to call the POWER9 specific
OCXL_IOCTL_ENABLE_P9_WAIT, it needs to verify that it can actually
make that call.
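
A hedged userspace sketch of the intended usage; the ioctl name, struct and
flag below are illustrative assumptions, not quoted from the patch:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <misc/ocxl.h>          /* assumed uapi header location */

    /* return non-zero if the P9 wait/as_notify support can be used */
    static int afu_has_p9_wait(int afu_fd)
    {
            struct ocxl_ioctl_features f = { 0 };               /* hypothetical */

            if (ioctl(afu_fd, OCXL_IOCTL_GET_FEATURES, &f) < 0) /* hypothetical */
                    return 0;
            return !!(f.flags[0] & OCXL_IOCTL_FEATURES_FLAGS0_P9_WAIT); /* hypothetical */
    }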

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoocxl: Expose the thread_id needed for wait on POWER9
Alastair D'Silva [Fri, 11 May 2018 06:13:01 +0000 (16:13 +1000)]
ocxl: Expose the thread_id needed for wait on POWER9

In order to successfully issue as_notify, an AFU needs to know the TID
to notify, which in turn means that this information should be
available in userspace so it can be communicated to the AFU.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoocxl: Rename pnv_ocxl_spa_remove_pe to clarify its action
Alastair D'Silva [Fri, 11 May 2018 06:13:00 +0000 (16:13 +1000)]
ocxl: Rename pnv_ocxl_spa_remove_pe to clarify its action

The function removes the process element from the NPU cache.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: use task_pid_nr() for TID allocation
Alastair D'Silva [Fri, 11 May 2018 06:12:59 +0000 (16:12 +1000)]
powerpc: use task_pid_nr() for TID allocation

The current implementation of TID allocation, using a global IDR, may
result in an errant process starving the system of available TIDs.
Instead, use task_pid_nr(), as mentioned by the original author. The
scenario described which prevented its use is not applicable, as
set_thread_tidr can only be called after the task struct has been
populated.

In the unlikely event that 2 threads share the TID and are waiting,
all potential outcomes have been determined safe.
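
A hedged sketch of the allocation described above (the feature check is the
TIDR CPU feature added earlier in this series; its name here is an
assumption):

    /* sketch: derive the TIDR from the task's pid instead of a global IDR */
    int set_thread_tidr(struct task_struct *t)
    {
            if (!cpu_has_feature(CPU_FTR_P9_TIDR))  /* assumed feature bit name */
                    return -EINVAL;
            if (t != current)
                    return -EINVAL;
            if (t->thread.tidr)
                    return 0;

            t->thread.tidr = (u16)task_pid_nr(t);
            mtspr(SPRN_TIDR, t->thread.tidr);
            return 0;
    }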

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Use TIDR CPU feature to control TIDR allocation
Alastair D'Silva [Fri, 11 May 2018 06:12:58 +0000 (16:12 +1000)]
powerpc: Use TIDR CPU feature to control TIDR allocation

Switch the use of TIDR to be based on its CPU feature, rather than assuming
it is available based on the architecture.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add TIDR CPU feature for POWER9
Alastair D'Silva [Fri, 11 May 2018 06:12:57 +0000 (16:12 +1000)]
powerpc: Add TIDR CPU feature for POWER9

This patch adds a CPU feature bit to show whether the CPU has
the TIDR register available, enabling as_notify/wait in userspace.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: process all OPAL event interrupts with kopald
Nicholas Piggin [Thu, 10 May 2018 17:20:05 +0000 (03:20 +1000)]
powerpc/powernv: process all OPAL event interrupts with kopald

Using irq_work for processing OPAL event interrupts is not necessary.
irq_work is typically used to schedule work from NMI context; a softirq may
be more appropriate here. However, OPAL events are not
particularly performance or latency critical, so they can all be
invoked by kopald.

This patch removes the irq_work queueing, and instead wakes up
kopald when there is an event to be processed. kopald processes
interrupts individually, enabling irqs and calling cond_resched
between each one to minimise latencies.

Event handlers themselves should still use threaded handlers,
workqueues, etc. as necessary to avoid high interrupts-off latencies
within any single interrupt.
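
A hedged sketch of the interrupt side (function names are illustrative): the
OPAL event interrupt no longer queues irq_work, it simply wakes kopald, which
then handles events one at a time with interrupts enabled and cond_resched()
in between:

    static irqreturn_t opal_interrupt(int irq, void *data)
    {
            opal_wake_poller();     /* wake the kopald kthread to process events */
            return IRQ_HANDLED;
    }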

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: call OPAL_QUIESCE before OPAL_SIGNAL_SYSTEM_RESET
Nicholas Piggin [Thu, 10 May 2018 12:21:48 +0000 (22:21 +1000)]
powerpc/powernv: call OPAL_QUIESCE before OPAL_SIGNAL_SYSTEM_RESET

Although it is often possible to recover a CPU that was interrupted
from OPAL with a system reset NMI, it's undesirable to interrupt them
for a few reasons. Firstly because dump/debug code itself needs to
call firmware, so it could hang on a lock or possibly corrupt a
per-cpu data structure if it or another CPU was interrupted from
OPAL. Secondly, the kexec crash dump code will not return from
interrupt to unwind the OPAL call.

Call OPAL_QUIESCE with QUIESCE_HOLD before sending an NMI IPI to another CPU,
which waits for it to leave firmware (or times out), to avoid this problem in
normal conditions. Firmware bugs may still
result in a timeout and interrupting OPAL, but that is the best
option (stops the CPU, and possibly allows firmware to be debugged).
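
A hedged sketch of the ordering (the OPAL call wrappers and token values as
assumed here):

    int64_t rc;

    /* ask OPAL to hold: waits for other CPUs to leave firmware, or times out */
    rc = opal_quiesce(QUIESCE_HOLD, -1);        /* -1: all CPUs */

    /* now it is (usually) safe to NMI the target CPU */
    opal_signal_system_reset(get_hard_smp_processor_id(cpu));

    if (rc == OPAL_SUCCESS)
            opal_quiesce(QUIESCE_RESUME, -1);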

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: change softe to irqmask in show_regs and xmon
Nicholas Piggin [Thu, 10 May 2018 01:04:24 +0000 (11:04 +1000)]
powerpc/64: change softe to irqmask in show_regs and xmon

When the soft enabled flag was changed to a soft disable mask, xmon and the
register dump code were not updated to reflect that, which is confusing
('SOFTE: 1' previously meant interrupts were soft enabled; now it means the
opposite: the general interrupt type has been disabled).

Fix this by using the name irqmask, and printing it in hex.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pmu/fsl: fix is_nmi test for irq mask change
Nicholas Piggin [Thu, 10 May 2018 01:04:23 +0000 (11:04 +1000)]
powerpc/pmu/fsl: fix is_nmi test for irq mask change

When the soft enabled flag was changed to an irq disabled mask, this test
missed being converted (although the equivalent book3s test was converted).

The PMU drivers consider it an NMI when they take a PMI while general
interrupts are disabled. This change restores that behaviour.

Fixes: 01417c6cc7 ("powerpc/64: Change soft_enabled from flag to bitmask")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/time: account broadcast timer event interrupts separately
Nicholas Piggin [Fri, 4 May 2018 17:19:35 +0000 (03:19 +1000)]
powerpc/time: account broadcast timer event interrupts separately

These are not local timer interrupts but IPIs. It's good to be able
to see how timer offloading is behaving, so split these out into
their own category.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: move a stray NMI IPI case under NMI_IPI ifdef
Nicholas Piggin [Fri, 4 May 2018 17:19:34 +0000 (03:19 +1000)]
powerpc: move a stray NMI IPI case under NMI_IPI ifdef

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: move timer broadcast code under GENERIC_CLOCKEVENTS_BROADCAST ifdef
Nicholas Piggin [Fri, 4 May 2018 17:19:33 +0000 (03:19 +1000)]
powerpc: move timer broadcast code under GENERIC_CLOCKEVENTS_BROADCAST ifdef

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: allow soft-NMI watchdog to cover timer interrupts with large decrementers
Nicholas Piggin [Fri, 4 May 2018 17:19:32 +0000 (03:19 +1000)]
powerpc: allow soft-NMI watchdog to cover timer interrupts with large decrementers

Large decrementers (e.g., POWER9) can take a very long time to wrap,
so when the timer interrupt handler sets the decrementer to max so as
to avoid taking another decrementer interrupt when hard enabling
interrupts before running timers, it effectively disables the soft
NMI coverage for timer interrupts.

Fix this by using the traditional 31-bit value instead, which wraps after a
few seconds. The masked interrupt code does the same thing, and in normal
operation neither of these paths would ever wrap even the 31-bit value.

Note: the SMP watchdog should catch timer interrupt lockups, but it
is preferable for the local soft-NMI to catch them, mainly to avoid
the IPI.
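
A hedged sketch of the fix described above: re-arm with the traditional
31-bit maximum rather than the full large-decrementer maximum, so a stuck
timer handler still produces a decrementer exception (and soft-NMI coverage)
within a few seconds:

    /* in timer_interrupt(), before hard-enabling interrupts (sketch) */
    set_dec(0x7fffffff);    /* ~4s at a 512MHz timebase, not decrementer_max */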

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: generic clockevents broadcast receiver call tick_receive_broadcast
Nicholas Piggin [Fri, 4 May 2018 17:19:31 +0000 (03:19 +1000)]
powerpc: generic clockevents broadcast receiver call tick_receive_broadcast

The broadcast tick recipient can call tick_receive_broadcast rather
than re-running the full timer interrupt.

It does not have to check for the next event time, because the sender
already determined the timer has expired. It does not have to test
irq_work_pending, because that's a direct decrementer interrupt and
does not go through the clock events subsystem. And it does not have
to read PURR because that was removed with the previous patch.

This results in no code size change, but both the decrementer and
broadcast path lengths are reduced.
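
A hedged sketch of the receiving side (the function name and the per-cpu
counter tie in with the separate broadcast accounting patch elsewhere in this
series and are illustrative):

    /* broadcast IPI recipient forwards straight to the clockevents core */
    void timer_broadcast_interrupt(void)
    {
            __this_cpu_inc(irq_stat.broadcast_irqs_event); /* illustrative counter */
            tick_receive_broadcast();
    }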

Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: lparcfg calculate PURR on demand
Nicholas Piggin [Fri, 4 May 2018 17:19:30 +0000 (03:19 +1000)]
powerpc/pseries: lparcfg calculate PURR on demand

For SPLPAR, lparcfg provides a sum of PURR registers for all CPUs.
Currently this is done by reading PURR in context switch and timer
interrupt, and storing that into a per-CPU variable. These are summed
to provide the value.

This does not work with all timer schemes (e.g., NO_HZ_FULL), and it
is sub-optimal for performance because it reads the PURR register on
every context switch, although that's been difficult to distinguish
from noise in the context_switch microbenchmark.

This patch implements the sum by calling a function on each CPU, to
read and add PURR values of each CPU.
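
A hedged sketch of the on-demand sum (function names are illustrative):

    static void cpu_get_purr(void *arg)
    {
            atomic64_t *sum = arg;

            atomic64_add(mfspr(SPRN_PURR), sum);
    }

    /* called from the lparcfg read path instead of summing per-CPU snapshots */
    static unsigned long get_purr_sum(void)
    {
            atomic64_t purr = ATOMIC64_INIT(0);

            on_each_cpu(cpu_get_purr, &purr, 1);
            return atomic64_read(&purr);
    }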

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: remove start_tb and accum_tb from thread_struct
Nicholas Piggin [Fri, 4 May 2018 17:19:29 +0000 (03:19 +1000)]
powerpc/64: remove start_tb and accum_tb from thread_struct

These fields are only written to.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: micro-optimise __hard_irq_enable() for mtmsrd L=1 support
Nicholas Piggin [Fri, 4 May 2018 17:19:28 +0000 (03:19 +1000)]
powerpc/64s: micro-optimise __hard_irq_enable() for mtmsrd L=1 support

The Book3S minimum supported ISA version now requires mtmsrd L=1. This
instruction does not require bits other than RI and EE to be supplied,
so __hard_irq_enable() and __hard_irq_disable() do not have to read
the kernel_msr from the paca.

Interrupt entry code already relies on L=1 support.
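
A hedged sketch of the micro-optimisation (assuming the existing __mtmsrd()
helper and the usual MSR bit names):

    /* mtmsrd L=1 only alters EE and RI, so no load of paca->kernel_msr needed */
    #define __hard_irq_enable()     __mtmsrd(MSR_EE | MSR_RI, 1)
    #define __hard_irq_disable()    __mtmsrd(MSR_RI, 1)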

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: put cede MSR[EE] check under IRQ_SOFT_MASK_DEBUG
Nicholas Piggin [Fri, 4 May 2018 17:19:26 +0000 (03:19 +1000)]
powerpc/pseries: put cede MSR[EE] check under IRQ_SOFT_MASK_DEBUG

This check does not catch IRQ soft mask bugs, but this option is
slightly more suitable than TRACE_IRQFLAGS.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: irq_work avoid interrupt when called with hardware irqs enabled
Nicholas Piggin [Fri, 4 May 2018 17:19:25 +0000 (03:19 +1000)]
powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled

irq_work_raise should not cause a decrementer exception unless it is
called from NMI context. Doing so often just results in an immediate
masked decrementer interrupt:

   <...>-550    90d...    4us : update_curr_rt <-dequeue_task_rt
   <...>-550    90d...    5us : dbs_update_util_handler <-update_curr_rt
   <...>-550    90d...    6us : arch_irq_work_raise <-irq_work_queue
   <...>-550    90d...    7us : soft_nmi_interrupt <-soft_nmi_common
   <...>-550    90d...    7us : printk_nmi_enter <-soft_nmi_interrupt
   <...>-550    90d.Z.    8us : rcu_nmi_enter <-soft_nmi_interrupt
   <...>-550    90d.Z.    9us : rcu_nmi_exit <-soft_nmi_interrupt
   <...>-550    90d...    9us : printk_nmi_exit <-soft_nmi_interrupt
   <...>-550    90d...   10us : cpuacct_charge <-update_curr_rt

The soft_nmi_interrupt here is the call into the watchdog, due to the
decrementer interrupt firing with irqs soft-disabled. This is
harmless, but sub-optimal.

When it's not called from NMI context or with interrupts enabled, mark
the decrementer pending in the irq_happened mask directly, rather than
having the masked decrementer interrupt handler do it. This will be
replayed at the next local_irq_enable. See the comment for details.
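
A hedged sketch of the decision (condition and helper names simplified):

    if (in_nmi()) {
            /* NMI context: keep the old behaviour and force a decrementer
             * exception */
            set_dec(1);
    } else {
            /* mark the decrementer as "happened"; it will be replayed at the
             * next local_irq_enable() without taking a masked exception */
            local_paca->irq_happened |= PACA_IRQ_DEC;
    }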

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv/ioda2: Remove redundant free of TCE pages
Alexey Kardashevskiy [Wed, 30 May 2018 09:22:50 +0000 (19:22 +1000)]
powerpc/powernv/ioda2: Remove redundant free of TCE pages

When IODA2 creates a PE, it creates an IOMMU table with it_ops::free
set to pnv_ioda2_table_free() which calls pnv_pci_ioda2_table_free_pages().

Since iommu_tce_table_put() calls it_ops::free when the last reference
to the table is released, an explicit call to pnv_pci_ioda2_table_free_pages()
is not needed, so let's remove it.

This should fix a double free in the case of PCI hotplug, as
pnv_pci_ioda2_table_free_pages() resets neither
iommu_table::it_base nor ::it_size.

This was not exposed by SRIOV as it uses a different code path via
pnv_pcibios_sriov_disable().

IODA1 does not initialize it_ops::free so it does not have this issue.

Fixes: c5f7700bbd2e ("powerpc/powernv: Dynamically release PE")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xmon: use match_string() helper
Yisheng Xie [Thu, 31 May 2018 11:11:25 +0000 (19:11 +0800)]
powerpc/xmon: use match_string() helper

match_string() returns the index of a matching string in an array, and can be
used instead of the open coded variant.
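
For reference, a small hedged example of the helper (the array here is made
up for illustration, not taken from xmon):

    static const char * const cmds[] = { "read", "write", "dump" };
    int idx = match_string(cmds, ARRAY_SIZE(cmds), arg);

    if (idx < 0)
            return -EINVAL;     /* no match */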

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>