sched/numa: add statistics of numa balance task
author: Chen Yu <yu.c.chen@intel.com>
	Fri, 23 May 2025 12:51:15 +0000 (20:51 +0800)
committer: Andrew Morton <akpm@linux-foundation.org>
	Sun, 1 Jun 2025 05:46:15 +0000 (22:46 -0700)
On systems with NUMA balancing enabled, it has been found that tracking
task activities resulting from NUMA balancing is beneficial.  NUMA
balancing employs two mechanisms for task migration: one is to migrate
a task to an idle CPU within its preferred node, and the other is to
swap tasks located on different nodes when they are on each other's
preferred nodes.

The kernel already provides NUMA page migration statistics in
/sys/fs/cgroup/{GROUP}/memory.stat and /proc/{PID}/sched.  However, it
lacks statistics for task migration and task swapping.  Therefore,
relevant counts for task migration and swapping should be added.

The following two new fields:

numa_task_migrated
numa_task_swapped

will be shown in /sys/fs/cgroup/{GROUP}/memory.stat, /proc/{PID}/sched
and /proc/vmstat.

Introducing both per-task and per-memory cgroup (memcg) NUMA balancing
statistics facilitates a rapid evaluation of the performance and
resource utilization of the target workload.  For instance, users can
first identify the container with high NUMA balancing activity and then
further pinpoint a specific task within that group, and subsequently
adjust the memory policy for that task.  In short, although it is
possible to iterate through /proc/$pid/sched to locate the problematic
task, the introduction of aggregated NUMA balancing activity for tasks
within each memcg can assist users in identifying the task more
efficiently through a divide-and-conquer approach.

As Libo Chen pointed out, the memcg events rely on the text names in
vmstat_text, and /proc/vmstat generates its items from vmstat_text.
Thus, the task migration and swapping events added to vmstat_text must
also be counted via count_vm_numa_event(); otherwise their values read
as zero in /proc/vmstat.

In theory, task migration and swap events are part of the scheduler's
activities.  The reason for exposing them through the
memory.stat/vmstat interface is that we already have NUMA balancing
statistics in memory.stat/vmstat, and these events are closely related
to each other.  Following Shakeel's suggestion, we describe the
end-to-end flow/story of all these events occurring on a timeline for
future reference:

The goal of NUMA balancing is to co-locate a task and its memory pages
on the same NUMA node.  There are two strategies: migrate the pages to
the task's node, or migrate the task to the node where its pages
reside.

Suppose a task p1 is running on Node 0, but its pages are located on
Node 1.  NUMA page fault statistics for p1 reveal its "page footprint"
across nodes.  If NUMA balancing detects that most of p1's pages are on
Node 1:

1. Page Migration Attempt:
NUMA balancing first tries to migrate p1's pages to Node 0.
The numa_pages_migrated counter increments.

2. Task Migration Strategies:
After the page migration finishes, NUMA balancing checks roughly
once per second whether p1 can be migrated to Node 1.

Case 2.1: Idle CPU Available

  If Node 1 has an idle CPU, p1 is directly scheduled there.  This
  event is logged as numa_task_migrated.

Case 2.2: No Idle CPU (Task Swap)

  If all CPUs on Node 1 are busy, a direct migration could cause CPU
  contention or load imbalance.  Instead, NUMA balancing selects a
  candidate task p2 on Node 1 that prefers Node 0 (e.g., due to its own
  page footprint).  p1 and p2 are swapped.  This cross-node swap is
  recorded as numa_task_swapped.

Link: https://lkml.kernel.org/r/d00edb12ba0f0de3c5222f61487e65f2ac58f5b1.1748493462.git.yu.c.chen@intel.com
Link: https://lkml.kernel.org/r/7ef90a88602ed536be46eba7152ed0d33bad5790.1748002400.git.yu.c.chen@intel.com
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Cc: Aubrey Li <aubrey.li@intel.com>
Cc: Ayush Jain <Ayush.jain3@amd.com>
Cc: "Chen, Tim C" <tim.c.chen@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Libo Chen <libo.chen@oracle.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Koutný <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Documentation/admin-guide/cgroup-v2.rst
include/linux/sched.h
include/linux/vm_event_item.h
kernel/sched/core.c
kernel/sched/debug.c
mm/memcontrol.c
mm/vmstat.c

index acf8558..cb279c6 100644 (file)
@@ -1697,6 +1697,12 @@ The following nested keys are defined.
          numa_hint_faults (npn)
                Number of NUMA hinting faults.
 
+         numa_task_migrated (npn)
+               Number of task migrations by NUMA balancing.
+
+         numa_task_swapped (npn)
+               Number of task swaps by NUMA balancing.
+
          pgdemote_kswapd
                Number of pages demoted by kswapd.
 
index f96ac19..1c50e30 100644 (file)
@@ -549,6 +549,10 @@ struct sched_statistics {
        u64                             nr_failed_migrations_running;
        u64                             nr_failed_migrations_hot;
        u64                             nr_forced_migrations;
+#ifdef CONFIG_NUMA_BALANCING
+       u64                             numa_task_migrated;
+       u64                             numa_task_swapped;
+#endif
 
        u64                             nr_wakeups;
        u64                             nr_wakeups_sync;
index 9e15a08..91a3ce9 100644 (file)
@@ -66,6 +66,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
                NUMA_HINT_FAULTS,
                NUMA_HINT_FAULTS_LOCAL,
                NUMA_PAGE_MIGRATE,
+               NUMA_TASK_MIGRATE,
+               NUMA_TASK_SWAP,
 #endif
 #ifdef CONFIG_MIGRATION
                PGMIGRATE_SUCCESS, PGMIGRATE_FAIL,
index c81cf64..62b0331 100644 (file)
@@ -3352,6 +3352,10 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 #ifdef CONFIG_NUMA_BALANCING
 static void __migrate_swap_task(struct task_struct *p, int cpu)
 {
+       __schedstat_inc(p->stats.numa_task_swapped);
+       count_vm_numa_event(NUMA_TASK_SWAP);
+       count_memcg_event_mm(p->mm, NUMA_TASK_SWAP);
+
        if (task_on_rq_queued(p)) {
                struct rq *src_rq, *dst_rq;
                struct rq_flags srf, drf;
@@ -7953,8 +7957,9 @@ int migrate_task_to(struct task_struct *p, int target_cpu)
        if (!cpumask_test_cpu(target_cpu, p->cpus_ptr))
                return -EINVAL;
 
-       /* TODO: This is not properly updating schedstats */
-
+       __schedstat_inc(p->stats.numa_task_migrated);
+       count_vm_numa_event(NUMA_TASK_MIGRATE);
+       count_memcg_event_mm(p->mm, NUMA_TASK_MIGRATE);
        trace_sched_move_numa(p, curr_cpu, target_cpu);
        return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
 }
index 56ae54e..f971c2a 100644 (file)
@@ -1206,6 +1206,10 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
                P_SCHEDSTAT(nr_failed_migrations_running);
                P_SCHEDSTAT(nr_failed_migrations_hot);
                P_SCHEDSTAT(nr_forced_migrations);
+#ifdef CONFIG_NUMA_BALANCING
+               P_SCHEDSTAT(numa_task_migrated);
+               P_SCHEDSTAT(numa_task_swapped);
+#endif
                P_SCHEDSTAT(nr_wakeups);
                P_SCHEDSTAT(nr_wakeups_sync);
                P_SCHEDSTAT(nr_wakeups_migrate);
index 7e64dbf..4e9771e 100644 (file)
@@ -474,6 +474,8 @@ static const unsigned int memcg_vm_event_stat[] = {
        NUMA_PAGE_MIGRATE,
        NUMA_PTE_UPDATES,
        NUMA_HINT_FAULTS,
+       NUMA_TASK_MIGRATE,
+       NUMA_TASK_SWAP,
 #endif
 };
 
index d888c24..6f740f0 100644 (file)
@@ -1347,6 +1347,8 @@ const char * const vmstat_text[] = {
        "numa_hint_faults",
        "numa_hint_faults_local",
        "numa_pages_migrated",
+       "numa_task_migrated",
+       "numa_task_swapped",
 #endif
 #ifdef CONFIG_MIGRATION
        "pgmigrate_success",