Merge branch 'sched/urgent' into sched/core, to resolve conflicts
Author:     Ingo Molnar <mingo@kernel.org>
AuthorDate: Fri, 18 Jun 2021 09:31:25 +0000 (11:31 +0200)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 18 Jun 2021 09:31:25 +0000 (11:31 +0200)
This commit in sched/urgent moved the cfs_rq_is_decayed() function:

  a7b359fc6a37: ("sched/fair: Correctly insert cfs_rq's to list on unthrottle")

and this fresh commit in sched/core modified it in the old location:

  9e077b52d86a: ("sched/pelt: Check that *_avg are null when *_sum are")

Merge the two variants.
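
For clarity, here is the resolved cfs_rq_is_decayed() exactly as the combined
hunk below adds it to kernel/sched/fair.c (in diff --cc notation, the '+ '
lines already exist in sched/urgent, which moved the function here; the '++'
lines are new against both parents, carrying over the SCHED_WARN_ON() change
that sched/core made at the old location):

	static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
	{
		if (cfs_rq->load.weight)
			return false;
		if (cfs_rq->avg.load_sum)
			return false;
		if (cfs_rq->avg.util_sum)
			return false;
		if (cfs_rq->avg.runnable_sum)
			return false;
		/*
		 * _avg must be null when _sum are null because _avg = _sum / divider
		 * Make sure that rounding and/or propagation of PELT values never
		 * break this.
		 */
		SCHED_WARN_ON(cfs_rq->avg.load_avg ||
			      cfs_rq->avg.util_avg ||
			      cfs_rq->avg.runnable_avg);

		return true;
	}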

Conflicts:
	kernel/sched/fair.c

Signed-off-by: Ingo Molnar <mingo@kernel.org>
include/linux/sched.h
init/main.c
kernel/sched/debug.c
kernel/sched/fair.c
kernel/sched/pelt.h

diff --cc include/linux/sched.h
Simple merge
diff --cc init/main.c
Simple merge
diff --cc kernel/sched/debug.c
Simple merge
diff --cc kernel/sched/fair.c
@@@ -3252,6 -3298,24 +3252,33 @@@ static inline void cfs_rq_util_change(s
  
  #ifdef CONFIG_SMP
  #ifdef CONFIG_FAIR_GROUP_SCHED
+ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
+ {
+       if (cfs_rq->load.weight)
+               return false;
+       if (cfs_rq->avg.load_sum)
+               return false;
+       if (cfs_rq->avg.util_sum)
+               return false;
+       if (cfs_rq->avg.runnable_sum)
+               return false;
++      /*
++       * _avg must be null when _sum are null because _avg = _sum / divider
++       * Make sure that rounding and/or propagation of PELT values never
++       * break this.
++       */
++      SCHED_WARN_ON(cfs_rq->avg.load_avg ||
++                    cfs_rq->avg.util_avg ||
++                    cfs_rq->avg.runnable_avg);
++
+       return true;
+ }
  /**
   * update_tg_load_avg - update the tg's load avg
   * @cfs_rq: the cfs_rq whose avg changed
diff --cc kernel/sched/pelt.h
Simple merge
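
As a side note on the invariant the merged SCHED_WARN_ON() guards: PELT
derives each _avg as _sum / divider, so once every _sum term has decayed to
zero the matching _avg terms must be zero as well. A minimal userspace sketch
of that check follows (illustrative only; pelt_avg and pelt_is_decayed() are
made-up names, not kernel code):

	/*
	 * Userspace sketch of the PELT "_sum null implies _avg null" invariant.
	 * Illustrative only; not kernel code.
	 */
	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct pelt_avg {
		unsigned long load_sum, util_sum, runnable_sum;
		unsigned long load_avg, util_avg, runnable_avg;
	};

	static bool pelt_is_decayed(const struct pelt_avg *a)
	{
		if (a->load_sum || a->util_sum || a->runnable_sum)
			return false;	/* still carries decaying history */

		/* _sum are all null, so the derived _avg must be null too */
		assert(!(a->load_avg || a->util_avg || a->runnable_avg));
		return true;
	}

	int main(void)
	{
		struct pelt_avg a = { 0 };

		printf("decayed: %d\n", pelt_is_decayed(&a));	/* decayed: 1 */
		return 0;
	}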