static_call,x86: Robustify trampoline patching
author	Peter Zijlstra <peterz@infradead.org>
	Sat, 30 Oct 2021 07:47:58 +0000 (09:47 +0200)
committer	Peter Zijlstra <peterz@infradead.org>
	Thu, 11 Nov 2021 12:09:31 +0000 (13:09 +0100)
commit	2105a92748e83e2e3ee6be539da959706bbb3898
tree	04dd3c3a9a55be81189d30054b015b2587dc5968
parent	debe436e77c72fcee804fb867f275e6d31aa999c

Add a few signature bytes after the static call trampoline and verify
those bytes match before patching the trampoline. This avoids
accidentally patching other, unrelated JMPs (such as CFI jump-table
entries) that happen to be at the expected address.

These bytes decode as:

   d:   53                      push   %rbx
   e:   43 54                   rex.XB push %r12

And happen to spell "SCT".

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20211030074758.GT174703@worktop.programming.kicks-ass.net
arch/x86/include/asm/static_call.h
arch/x86/kernel/static_call.c
tools/objtool/check.c