x86/mm/cpa: Optimize __cpa_flush_range()
author	Peter Zijlstra <peterz@infradead.org>
Wed, 19 Sep 2018 08:50:24 +0000 (10:50 +0200)
committer	Thomas Gleixner <tglx@linutronix.de>
Thu, 27 Sep 2018 18:39:42 +0000 (20:39 +0200)
If we IPI for WBINVD, then we might as well kill the entire TLB too.
But if we don't have to invalidate the cache, there is no reason not to
use a range TLB flush.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Cc: Bin Yang <bin.yang@intel.com>
Cc: Mark Gross <mark.gross@intel.com>
Link: https://lkml.kernel.org/r/20180919085948.195633798@infradead.org
arch/x86/mm/pageattr.c

index dc55282..62bb30b 100644
@@ -291,7 +291,7 @@ static bool __cpa_flush_range(unsigned long start, int numpages, int cache)
 
        WARN_ON(PAGE_ALIGN(start) != start);
 
-       if (!static_cpu_has(X86_FEATURE_CLFLUSH)) {
+       if (cache && !static_cpu_has(X86_FEATURE_CLFLUSH)) {
                cpa_flush_all(cache);
                return true;
        }
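
For context, a minimal sketch of how the flush path reads with this change
applied. Only the condition shown in the hunk above is quoted from the patch;
the surrounding body (the sanity checks, the flush_tlb_kernel_range() call and
the return-value convention) is an assumption for illustration, not the
verbatim kernel function:

static bool __cpa_flush_range(unsigned long start, int numpages, int cache)
{
	/* Illustrative sketch, not the verbatim kernel code. */
	BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);

	WARN_ON(PAGE_ALIGN(start) != start);

	/*
	 * Only when the cache must be flushed (cache != 0) and CLFLUSH is
	 * unavailable do we fall back to cpa_flush_all(), which IPIs all
	 * CPUs for WBINVD; since that is already a global operation, a
	 * full TLB flush costs nothing extra.
	 */
	if (cache && !static_cpu_has(X86_FEATURE_CLFLUSH)) {
		cpa_flush_all(cache);
		return true;
	}

	/* Otherwise a targeted TLB flush of just this range suffices. */
	flush_tlb_kernel_range(start, start + PAGE_SIZE * numpages);

	/*
	 * Assumed convention: return true when the caller has nothing left
	 * to do, false when it still has to CLFLUSH the range itself.
	 */
	return !cache;
}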