mm/slab_common.c: use list_for_each_entry in dump_unreclaimable_slab()
author    Hui Su <sh_def@163.com>
          Tue, 15 Dec 2020 03:03:47 +0000 (19:03 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Tue, 15 Dec 2020 20:13:37 +0000 (12:13 -0800)
dump_unreclaimable_slab() acquires the slab_mutex first, and it never
removes any entry from the slab_caches list while iterating over it.

Thus we do not need list_for_each_entry_safe here, which exists only to
guard against removal of the current list entry during iteration.

Link: https://lkml.kernel.org/r/20200926043440.GA180545@rlk
Signed-off-by: Hui Su <sh_def@163.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slab_common.c

index f9ccd5d..0cd2821 100644
@@ -978,7 +978,7 @@ static int slab_show(struct seq_file *m, void *p)
 
 void dump_unreclaimable_slab(void)
 {
-       struct kmem_cache *s, *s2;
+       struct kmem_cache *s;
        struct slabinfo sinfo;
 
        /*
@@ -996,7 +996,7 @@ void dump_unreclaimable_slab(void)
        pr_info("Unreclaimable slab info:\n");
        pr_info("Name                      Used          Total\n");
 
-       list_for_each_entry_safe(s, s2, &slab_caches, list) {
+       list_for_each_entry(s, &slab_caches, list) {
                if (s->flags & SLAB_RECLAIM_ACCOUNT)
                        continue;