locking/rtmutex: Prevent lockdep false positive with PI futexes
author Thomas Gleixner <tglx@linutronix.de>
Sun, 15 Aug 2021 21:29:20 +0000 (23:29 +0200)
committer Ingo Molnar <mingo@kernel.org>
Tue, 17 Aug 2021 17:06:02 +0000 (19:06 +0200)
On PREEMPT_RT the futex hashbucket spinlock becomes 'sleeping' and rtmutex
based. That causes a lockdep false positive because some of the futex
functions invoke spin_unlock(&hb->lock) with the wait_lock of the rtmutex
associated with the pi_futex held.  spin_unlock() in turn takes the
wait_lock of the rtmutex on which the spinlock is based, which makes
lockdep notice a lock recursion.

Give the futex/rtmutex wait_lock a separate key.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211305.750701219@linutronix.de
kernel/locking/rtmutex_api.c

index 92b7d28..5c9299a 100644
@@ -214,7 +214,19 @@ EXPORT_SYMBOL_GPL(__rt_mutex_init);
 void __sched rt_mutex_init_proxy_locked(struct rt_mutex_base *lock,
                                        struct task_struct *proxy_owner)
 {
+       static struct lock_class_key pi_futex_key;
+
        __rt_mutex_base_init(lock);
+       /*
+        * On PREEMPT_RT the futex hashbucket spinlock becomes 'sleeping'
+        * and rtmutex based. That causes a lockdep false positive, because
+        * some of the futex functions invoke spin_unlock(&hb->lock) with
+        * the wait_lock of the rtmutex associated with the pi_futex held.
+        * spin_unlock() in turn takes the wait_lock of the rtmutex on which
+        * the spinlock is based, which makes lockdep notice a lock
+        * recursion. Give the futex/rtmutex wait_lock a separate key.
+        */
+       lockdep_set_class(&lock->wait_lock, &pi_futex_key);
        rt_mutex_set_owner(lock, proxy_owner);
 }