ti: Remove rcu_read_lock() around XDP program invocation
author Toke Høiland-Jørgensen <toke@redhat.com>
Thu, 24 Jun 2021 16:06:09 +0000 (18:06 +0200)
committer Daniel Borkmann <daniel@iogearbox.net>
Thu, 24 Jun 2021 17:46:39 +0000 (19:46 +0200)
The cpsw driver has rcu_read_lock()/rcu_read_unlock() pairs around XDP
program invocations. However, the actual lifetime of the objects referred
to by the XDP program invocation is longer: it extends all the way through
to the call to xdp_do_flush(), so the scope of the rcu_read_lock() is too
small. This turns out to be harmless because everything happens in a single
NAPI poll cycle (and thus under local_bh_disable()), but it makes the
rcu_read_lock() misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: linux-omap@vger.kernel.org
Link: https://lore.kernel.org/bpf/20210624160609.292325-20-toke@redhat.com
drivers/net/ethernet/ti/cpsw_priv.c

index 5862f0a..ecc2a6b 100644
@@ -1328,13 +1328,9 @@ int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
        struct bpf_prog *prog;
        u32 act;
 
-       rcu_read_lock();
-
        prog = READ_ONCE(priv->xdp_prog);
-       if (!prog) {
-               ret = CPSW_XDP_PASS;
-               goto out;
-       }
+       if (!prog)
+               return CPSW_XDP_PASS;
 
        act = bpf_prog_run_xdp(prog, xdp);
        /* XDP prog might have changed packet data and boundaries */
@@ -1378,10 +1374,8 @@ int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
        ndev->stats.rx_bytes += *len;
        ndev->stats.rx_packets++;
 out:
-       rcu_read_unlock();
        return ret;
 drop:
-       rcu_read_unlock();
        page_pool_recycle_direct(cpsw->page_pool[ch], page);
        return ret;
 }