x86/mm: Align TLB invalidation info
author		Nadav Amit <namit@vmware.com>
		Wed, 31 Jan 2018 21:19:12 +0000 (13:19 -0800)
committer	Ingo Molnar <mingo@kernel.org>
		Tue, 13 Feb 2018 14:05:49 +0000 (15:05 +0100)
commit		515ab7c41306aad1f80a980e1936ef635c61570c
tree		92f0918bf6984eb2b1c7bfc014926a6c88edae7d
parent		178e834c47b0d01352c48730235aae69898fbc02
x86/mm: Align TLB invalidation info

The TLB invalidation info (struct flush_tlb_info) is allocated on the
stack, so it might not be cache-line aligned. Since this information is
read by other cores during a remote TLB shootdown, a structure that
straddles a cache line boundary causes an additional cache line to
become shared between them. While the overhead is likely to be small,
the fix is simple.
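
Concretely, the fix amounts to a single annotation on the on-stack
variable; a minimal sketch, assuming the flush_tlb_mm_range() call site
in arch/x86/mm/tlb.c (initializer abbreviated):

-	struct flush_tlb_info info = {
+	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
 		.mm = mm,
 	};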

We do not use __cacheline_aligned, since that macro also places the
variable in a dedicated data section, which is inappropriate for stack
variables.
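
For comparison, __cacheline_aligned adds a section attribute on top of
the alignment; this is roughly its definition in include/linux/cache.h,
and a stack variable cannot be placed in .data..cacheline_aligned:

#define __cacheline_aligned				\
	__attribute__((__aligned__(SMP_CACHE_BYTES),	\
		       __section__(".data..cacheline_aligned")))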

Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180131211912.52064-1-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/mm/tlb.c