locking/rtmutex: Don't dereference waiter lockless
author    Thomas Gleixner <tglx@linutronix.de>
          Wed, 25 Aug 2021 10:33:12 +0000 (12:33 +0200)
committer Peter Zijlstra <peterz@infradead.org>
          Wed, 25 Aug 2021 13:42:32 +0000 (15:42 +0200)
commit    c3123c431447da99db160264506de9897c003513
tree      f938cc1749db6a0be0de60672c744a0009e48fbb
parent    99409b935c9ac5ea36ab5218954115c52449234d
locking/rtmutex: Don't dereference waiter lockless

The new rtmutex_spin_on_owner() loop checks whether the spinning waiter is
still the top waiter on the lock by utilizing rt_mutex_top_waiter(), which
is broken because that function contains a sanity check that dereferences
the top waiter pointer to check whether the waiter belongs to the
lock. That's wrong in the lockless spinwait case:

 CPU 0                                            CPU 1
 rt_mutex_lock(lock)                              rt_mutex_lock(lock);
   queue(waiter0)
   waiter0 == rt_mutex_top_waiter(lock)
   rtmutex_spin_on_owner(lock, waiter0) {           queue(waiter1)
                                                    waiter1 == rt_mutex_top_waiter(lock)
                                                    ...
     top_waiter = rt_mutex_top_waiter(lock)
       leftmost = rb_first_cached(&lock->waiters);
                                                  -> signal
                                                    dequeue(waiter1)
                                                    destroy(waiter1)
       w = rb_entry(leftmost, ....)
       BUG_ON(w->lock != lock)                    <- UAF

The BUG_ON() is correct for the case where the caller holds lock->wait_lock
which guarantees that the leftmost waiter entry cannot vanish. For the
lockless spinwait case it's broken.
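
For reference, a minimal sketch of the shape of rt_mutex_top_waiter(),
assuming the rb-tree node in struct rt_mutex_waiter is named tree_entry
as in rtmutex_common.h of that era:

	static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock)
	{
		struct rb_node *leftmost = rb_first_cached(&lock->waiters);
		struct rt_mutex_waiter *w = NULL;

		if (leftmost) {
			/* Dereferences the leftmost entry - only safe under wait_lock */
			w = rb_entry(leftmost, struct rt_mutex_waiter, tree_entry);
			BUG_ON(w->lock != lock);
		}
		return w;
	}

Without wait_lock held, the entry behind leftmost can be freed between
rb_first_cached() and the BUG_ON() read of w->lock, as the race above shows.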

Create a new helper function which avoids the pointer dereference and just
compares the leftmost entry pointer with current's waiter pointer to
validate that current is still eligible for spinning.
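
A minimal sketch of such a helper, under the same tree_entry field-name
assumption; the name rt_mutex_waiter_is_top_waiter and the exact signature
are illustrative rather than quoted from the diff:

	/*
	 * Lockless speculative check whether @waiter is still the top
	 * waiter on @lock: compare pointers only, never dereference the
	 * leftmost entry, which might be about to vanish.
	 */
	static __always_inline bool rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock,
								  struct rt_mutex_waiter *waiter)
	{
		struct rb_node *leftmost = rb_first_cached(&lock->waiters);

		return rb_entry(leftmost, struct rt_mutex_waiter, tree_entry) == waiter;
	}

The spinwait loop then stops spinning as soon as the comparison fails,
along the lines of:

	if (!rt_mutex_waiter_is_top_waiter(lock, waiter)) {
		res = false;
		break;
	}

This is safe because rb_entry() is pure pointer arithmetic (container_of)
and never touches the entry's memory, while current's own waiter cannot be
freed while current is spinning on it.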

Fixes: 992caf7f1724 ("locking/rtmutex: Add adaptive spinwait mechanism")
Reported-by: Sebastian Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210825102453.981720644@linutronix.de
kernel/locking/rtmutex.c
kernel/locking/rtmutex_common.h