um: Optimize Flush TLB for force/fork case
authorAnton Ivanov <anton.ivanov@cambridgegreys.com>
Fri, 7 Dec 2018 09:05:53 +0000 (09:05 +0000)
committerRichard Weinberger <richard@nod.at>
Thu, 27 Dec 2018 21:48:34 +0000 (22:48 +0100)
When UML handles a fork, the page tables need to be brought up
to date. That was done by brute force: a full TLB flush.

This is unnecessary, because the mapped-in mappings are
already correct; the only entries which need to be updated
after a flush are any unmaps (so that paging works) and
any pending protection changes.

This optimization shaves up to 3% off full kernel rebuild
time under memory pressure.

Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
arch/um/kernel/tlb.c

index 9ca902d..8347161 100644
@@ -242,10 +242,11 @@ static inline int update_pte_range(pmd_t *pmd, unsigned long addr,
                prot = ((r ? UM_PROT_READ : 0) | (w ? UM_PROT_WRITE : 0) |
                        (x ? UM_PROT_EXEC : 0));
                if (hvc->force || pte_newpage(*pte)) {
-                       if (pte_present(*pte))
-                               ret = add_mmap(addr, pte_val(*pte) & PAGE_MASK,
-                                              PAGE_SIZE, prot, hvc);
-                       else
+                       if (pte_present(*pte)) {
+                               if (pte_newpage(*pte))
+                                       ret = add_mmap(addr, pte_val(*pte) & PAGE_MASK,
+                                                      PAGE_SIZE, prot, hvc);
+                       } else
                                ret = add_munmap(addr, PAGE_SIZE, hvc);
                } else if (pte_newprot(*pte))
                        ret = add_mprotect(addr, PAGE_SIZE, prot, hvc);