locking/qspinlock: Use atomic_cond_read_acquire()
author    Will Deacon <will.deacon@arm.com>
          Thu, 26 Apr 2018 10:34:21 +0000 (11:34 +0100)
committer Ingo Molnar <mingo@kernel.org>
          Fri, 27 Apr 2018 07:48:49 +0000 (09:48 +0200)
Rather than dig into the counter field of the atomic_t inside the
qspinlock structure so that we can call smp_cond_load_acquire(), use
atomic_cond_read_acquire() instead, which operates on the atomic_t
directly.
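
For reference, atomic_cond_read_acquire() is at this point a thin wrapper
around the same primitive; a paraphrase of the definition in
include/linux/atomic.h (the exact form may differ between trees):

	/*
	 * atomic_cond_read_acquire() forwards to smp_cond_load_acquire()
	 * on the embedded counter, so callers no longer have to reach
	 * into the atomic_t themselves.
	 */
	#define atomic_cond_read_acquire(v, c) \
		smp_cond_load_acquire(&(v)->counter, (c))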

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-8-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/locking/qspinlock.c

index b51494a..56af1fa 100644
@@ -337,8 +337,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
                 * barriers.
                 */
                if (val & _Q_LOCKED_MASK) {
-                       smp_cond_load_acquire(&lock->val.counter,
-                                             !(VAL & _Q_LOCKED_MASK));
+                       atomic_cond_read_acquire(&lock->val,
+                                                !(VAL & _Q_LOCKED_MASK));
                }
 
                /*
@@ -441,8 +441,8 @@ queue:
         *
         * The PV pv_wait_head_or_lock function, if active, will acquire
         * the lock and return a non-zero value. So we have to skip the
-        * smp_cond_load_acquire() call. As the next PV queue head hasn't been
-        * designated yet, there is no way for the locked value to become
+        * atomic_cond_read_acquire() call. As the next PV queue head hasn't
+        * been designated yet, there is no way for the locked value to become
         * _Q_SLOW_VAL. So both the set_locked() and the
         * atomic_cmpxchg_relaxed() calls will be safe.
         *
@@ -452,7 +452,7 @@ queue:
        if ((val = pv_wait_head_or_lock(lock, node)))
                goto locked;
 
-       val = smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_PENDING_MASK));
+       val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));
 
 locked:
        /*
@@ -469,7 +469,7 @@ locked:
        /* In the PV case we might already have _Q_LOCKED_VAL set */
        if ((val & _Q_TAIL_MASK) == tail) {
                /*
-                * The smp_cond_load_acquire() call above has provided the
+                * The atomic_cond_read_acquire() call above has provided the
                 * necessary acquire semantics required for locking.
                 */
                old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
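
As a reading aid, the atomic_cond_read_acquire() calls above behave like
the following open-coded loop. This is a simplified sketch of the generic
fallback in asm-generic/barrier.h, not the actual implementation;
architectures may wait on an event instead of spinning (e.g. WFE on
arm64), and VAL is a token the macro binds to the freshly loaded value:

	u32 val;

	for (;;) {
		val = atomic_read(&lock->val);		/* relaxed load of the atomic_t */
		if (!(val & _Q_LOCKED_PENDING_MASK))	/* caller-supplied condition on VAL */
			break;
		cpu_relax();				/* low-power/polite spin hint */
	}
	smp_acquire__after_ctrl_dep();			/* upgrade the final load to ACQUIRE */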