rcu/tree: segcblist: Remove redundant smp_mb()s
author     Joel Fernandes (Google) <joel@joelfernandes.org>
Wed, 14 Oct 2020 22:21:53 +0000 (18:21 -0400)
committer  Paul E. McKenney <paulmck@kernel.org>
Thu, 7 Jan 2021 00:24:19 +0000 (16:24 -0800)
The full memory barriers in rcu_segcblist_enqueue() and in rcu_do_batch()
are not needed because rcu_segcblist_add_len(), and thus also
rcu_segcblist_inc_len(), already executes a full memory barrier both
*before* and *after* the length of the list is updated.

This commit therefore removes these redundant smp_mb() invocations.
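
For reference, a minimal sketch of the length-update helpers as they
looked around this series, showing the barriers that make the removed
smp_mb() calls redundant. The exact CONFIG_RCU_NOCB_CPU split and
comments may differ slightly from tree to tree:

	/* Adjust ->len; barriers on both sides order it against list updates. */
	static void rcu_segcblist_add_len(struct rcu_segcblist *rsclp, long v)
	{
	#ifdef CONFIG_RCU_NOCB_CPU
		smp_mb__before_atomic(); /* Up to the caller! */
		atomic_long_add(v, &rsclp->len);
		smp_mb__after_atomic();  /* Up to the caller! */
	#else
		smp_mb(); /* Up to the caller! */
		WRITE_ONCE(rsclp->len, rsclp->len + v);
		smp_mb(); /* Up to the caller! */
	#endif
	}

	/* Increment ->len, inheriting the barriers above. */
	void rcu_segcblist_inc_len(struct rcu_segcblist *rsclp)
	{
		rcu_segcblist_add_len(rsclp, 1);
	}

Because every caller of these helpers gets a barrier on each side of the
length update, a separate smp_mb() at the call sites adds no ordering.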

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
kernel/rcu/rcu_segcblist.c
kernel/rcu/tree.c

diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 7777804..1e80a0a 100644
@@ -327,7 +327,6 @@ void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
                           struct rcu_head *rhp)
 {
        rcu_segcblist_inc_len(rsclp);
-       smp_mb(); /* Ensure counts are updated before callback is enqueued. */
        rcu_segcblist_inc_seglen(rsclp, RCU_NEXT_TAIL);
        rhp->next = NULL;
        WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index cc6f379..b0fb654 100644
@@ -2523,7 +2523,6 @@ static void rcu_do_batch(struct rcu_data *rdp)
 
        /* Update counts and requeue any remaining callbacks. */
        rcu_segcblist_insert_done_cbs(&rdp->cblist, &rcl);
-       smp_mb(); /* List handling before counting for rcu_barrier(). */
        rcu_segcblist_add_len(&rdp->cblist, -count);
 
        /* Reinstate batch limit if we have worked down the excess. */