sched/fair: Comment some nohz_balancer_kick() kick conditions
author Valentin Schneider <valentin.schneider@arm.com>
Mon, 11 Feb 2019 17:59:44 +0000 (17:59 +0000)
committer Ingo Molnar <mingo@kernel.org>
Tue, 19 Mar 2019 11:06:15 +0000 (12:06 +0100)
We now have a comment explaining the first sched_domain-based NOHZ kick,
so we might as well comment them all.

While at it, unwrap a line that fits under 80 characters.
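
As a rough illustration of the two conditions the new comments document (not
part of the patch; struct toy_rq, cpu_has_reduced_capacity() and
more_preferred_cpu_is_idle() below are simplified stand-ins for the kernel's
rq, check_cpu_capacity() and the ASYM_PACKING scan over sched_asym_prefer()),
the decision flow looks roughly like this:

	/*
	 * Illustrative sketch only: a simplified model of the two NOHZ kick
	 * conditions. All names here are made-up stand-ins, not kernel APIs.
	 */
	#include <stdbool.h>

	struct toy_rq {
		int cfs_h_nr_running;	/* runnable CFS tasks on this runqueue */
	};

	/* Stand-in for check_cpu_capacity(): capacity eaten by RT/IRQ pressure? */
	static bool cpu_has_reduced_capacity(const struct toy_rq *rq)
	{
		(void)rq;
		return false;	/* placeholder */
	}

	/* Stand-in for the ASYM_PACKING scan for an idle, more-preferred CPU. */
	static bool more_preferred_cpu_is_idle(int cpu)
	{
		(void)cpu;
		return false;	/* placeholder */
	}

	/* Should we kick the idle load balancer (ILB)? */
	static bool should_kick_ilb(const struct toy_rq *rq, int cpu)
	{
		/* A CFS task is running on a CPU whose capacity is reduced. */
		if (rq->cfs_h_nr_running >= 1 && cpu_has_reduced_capacity(rq))
			return true;

		/* ASYM_PACKING: a more preferred CPU is sitting idle. */
		if (more_preferred_cpu_is_idle(cpu))
			return true;

		return false;
	}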

Co-authored-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dietmar.Eggemann@arm.com
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: vincent.guittot@linaro.org
Link: https://lkml.kernel.org/r/20190211175946.4961-2-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8213ff6..e6f7d39 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9612,8 +9612,12 @@ static void nohz_balancer_kick(struct rq *rq)
 
        sd = rcu_dereference(rq->sd);
        if (sd) {
-               if ((rq->cfs.h_nr_running >= 1) &&
-                   check_cpu_capacity(rq, sd)) {
+               /*
+                * If there's a CFS task and the current CPU has reduced
+                * capacity; kick the ILB to see if there's a better CPU to run
+                * on.
+                */
+               if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
                        flags = NOHZ_KICK_MASK;
                        goto unlock;
                }
@@ -9621,6 +9625,11 @@ static void nohz_balancer_kick(struct rq *rq)
 
        sd = rcu_dereference(per_cpu(sd_asym_packing, cpu));
        if (sd) {
+               /*
+                * When ASYM_PACKING; see if there's a more preferred CPU
+                * currently idle; in which case, kick the ILB to move tasks
+                * around.
+                */
                for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
                        if (sched_asym_prefer(i, cpu)) {
                                flags = NOHZ_KICK_MASK;